WorldWideScience

Sample records for input space regularization

  1. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  2. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs

  3. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
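Computationally, both Laplacian and tangent-space regularizers reduce to a quadratic smoothness penalty built from a neighborhood graph over labeled and unlabeled examples. Below is a minimal NumPy sketch of the plain graph Laplacian penalty f^T L f on a toy data set; the kNN graph construction and the data are illustrative, not the TiSVM formulation of the paper.

```python
import numpy as np

def knn_graph(X, k=2):
    """Symmetric binary k-nearest-neighbour adjacency matrix."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-edges
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(W, W.T)              # symmetrize

def laplacian_penalty(f, W):
    """f^T L f = (1/2) * sum_ij W_ij (f_i - f_j)^2: small iff f varies
    slowly across graph edges."""
    L = np.diag(W.sum(axis=1)) - W         # graph Laplacian
    return float(f @ L @ f)

# Two well-separated clusters on a line; k=2 keeps all edges within clusters.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
W = knn_graph(X)
f_smooth = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])  # constant per cluster
f_rough = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])   # oscillates in-cluster
```

A semisupervised learner adds this penalty to its usual loss, so the label function f_smooth (constant on each cluster, zero penalty) is preferred over f_rough, which cuts many edges.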

  4. Regularization methods in Banach spaces

    CERN Document Server

    Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S

    2012-01-01

Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based rather on conventions than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B

  5. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
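The core computation, kernel ridge regression with kernels at two scales, can be sketched in a few lines. Note this is a simplification: the paper regularizes each RKHS component separately, whereas the sketch below works with a single combined kernel K = K_wide + K_narrow and one regularization weight.

```python
import numpy as np

def gauss_kernel(x, y, sigma):
    """Gaussian kernel matrix for 1-D inputs."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))

def sum_space_krr(x, y, sigmas=(0.3, 0.02), lam=1e-3):
    """Least-squares regression regularized in the sum of two Gaussian RKHSs
    (equal per-space weights, a simplification of the paper's setting)."""
    K = sum(gauss_kernel(x, x, s) for s in sigmas)
    alpha = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)
    return lambda t: sum(gauss_kernel(t, x, s) for s in sigmas) @ alpha

# "Nonflat" target: a smooth low-frequency trend plus a narrow bump that a
# single wide kernel would oversmooth.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y_true = np.sin(2 * np.pi * x) + 2.0 * np.exp(-((x - 0.5) / 0.02) ** 2)
f = sum_space_krr(x, y_true + 0.01 * rng.standard_normal(x.size))
rmse = np.sqrt(np.mean((f(x) - y_true) ** 2))
```

The wide kernel captures the sine trend while the narrow one resolves the bump, which is the intuition behind working in the sum space rather than a single RKHS.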

  6. q-Space Upsampling Using x-q Space Regularization.

    Science.gov (United States)

    Chen, Geng; Dong, Bin; Zhang, Yong; Shen, Dinggang; Yap, Pew-Thian

    2017-09-01

Acquisition time in diffusion MRI increases with the number of diffusion-weighted images that need to be acquired. Particularly in clinical settings, scan time is limited and only a sparse coverage of the vast q-space is possible. In this paper, we show how non-local self-similar information in the x-q space of diffusion MRI data can be harnessed for q-space upsampling. More specifically, we establish the relationships between signal measurements in x-q space using a patch matching mechanism that caters to unstructured data. We then encode these relationships in a graph and use it to regularize an inverse problem associated with recovering a high q-space resolution dataset from its low-resolution counterpart. Experimental results indicate that the high-resolution datasets reconstructed using the proposed method exhibit greater quality, both quantitatively and qualitatively, than those obtained using conventional methods, such as interpolation using spherical radial basis functions (SRBFs).

  7. Discretizing LTI Descriptor (Regular) Differential Input Systems with Consistent Initial Conditions

    Directory of Open Access Journals (Sweden)

    Athanasios D. Karageorgos

    2010-01-01

A technique for efficiently discretizing the solution of a linear descriptor (regular) differential input system with consistent initial conditions and time-invariant coefficients (LTI) is introduced and fully discussed. Additionally, an upper bound for the error ‖x¯(kT)−x¯k‖ that derives from the discretization procedure is also provided. Practically speaking, we are interested in such systems, since they are inherent in many physical, economic and engineering phenomena.

  8. Regular perturbations in a vector space with indefinite metric

    International Nuclear Information System (INIS)

    Chiang, C.C.

    1975-08-01

    The Klein space is discussed in connection with practical applications. Some lemmas are presented which are to be used for the discussion of regular self-adjoint operators. The criteria for the regularity of perturbed operators are given. (U.S.)

  9. Regular Generalized Star Star closed sets in Bitopological Spaces

    OpenAIRE

    K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar

    2011-01-01

The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets, τ1τ2-regular generalized star star open sets and study their basic properties in bitopological spaces.

  10. Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces

    NARCIS (Netherlands)

    F. Vallentin (Frank)

    2008-01-01

In this paper we give a lower bound for the least distortion embedding of a distance regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite

  11. Graph Regularized Auto-Encoders for Image Representation.

    Science.gov (United States)

    Yiyi Liao; Yue Wang; Yong Liu

    2017-06-01

Image representation has been intensively explored in computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image that preserves the inherent information of the original image space. From the perspective of manifold learning, this is implemented with the local-invariance idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called the graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property in the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, the experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and they also suggest that GAE is a superior solution among current deep representation learning techniques compared with variants of auto-encoders and existing local invariant methods.
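Operationally, the graph regularizer adds tr(H^T L H) — with H the matrix of hidden codes and L the Laplacian of an affinity graph over the inputs — to the auto-encoder's reconstruction loss. The following toy NumPy sketch shows that combined loss with random, untrained weights and a hypothetical fully connected affinity graph; the stacked architecture and training loop of the actual GAE are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny one-layer auto-encoder (random weights; training loop omitted).
X = rng.standard_normal((6, 4))            # 6 samples, 4 features
W_enc = rng.standard_normal((4, 2)) * 0.1  # encoder weights
W_dec = rng.standard_normal((2, 4)) * 0.1  # decoder weights

def encode(X):
    return np.tanh(X @ W_enc)

def gae_loss(X, W_adj, lam=0.1):
    """Reconstruction error plus the graph regularizer tr(H^T L H), which
    penalizes hidden codes that differ across connected samples."""
    H = encode(X)
    X_hat = H @ W_dec
    recon = np.mean((X - X_hat) ** 2)
    L = np.diag(W_adj.sum(axis=1)) - W_adj     # graph Laplacian
    graph_pen = np.trace(H.T @ L @ H)
    return recon + lam * graph_pen

# Toy affinity graph: every pair of samples connected with weight 1.
W_adj = np.ones((6, 6)) - np.eye(6)
loss = gae_loss(X, W_adj)
```

With lam > 0 the penalty pulls the codes of connected samples together, which is how the GAE preserves local connectivity in the representation space.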

  12. Characterization of Input Current Interharmonics in Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Zare, Firuz

    2017-01-01

This paper investigates the interharmonic generation process in the input current of double-stage Adjustable Speed Drives (ASDs) based on voltage source inverters and front-end diode rectifiers. The effects of the inverter output-side low order harmonics, caused by implementing the double-edge symmetrical regularly sampled Space Vector Modulation (SVM) technique, on the input current interharmonic components are presented and discussed. Particular attention is also given to the influence of the asymmetrical regularly sampled modulation technique on the drive input current interharmonics. The developed theoretical analysis predicts the drive interharmonic frequency locations with respect to the selected sampling strategies. Simulation and experimental results on a 2.5 kW ASD system verify the effectiveness of the theoretical analysis.

  13. Representations of space based on haptic input

    NARCIS (Netherlands)

    Zuidhoek, S.

    2005-01-01

    The present thesis focused on the representations of grasping space based on haptic input. We aimed at identifying their characteristics, and the underlying neurocognitive processes and mechanisms. To this end, we studied the systematic distortions in performance on several orientation perception

  14. Restrictive metric regularity and generalized differential calculus in Banach spaces

    Directory of Open Access Journals (Sweden)

    Bingwu Wang

    2004-10-01

We consider nonlinear mappings f:X→Y between Banach spaces and study the notion of restrictive metric regularity of f around some point x¯, that is, metric regularity of f from X into the metric space E=f(X). Some sufficient as well as necessary and sufficient conditions for restrictive metric regularity are obtained, which particularly include an extension of the classical Lyusternik-Graves theorem in the case when f is strictly differentiable at x¯ but its strict derivative ∇f(x¯) is not surjective. We develop applications of the results obtained and some other techniques in variational analysis to generalized differential calculus involving normal cones to nonsmooth and nonconvex sets, coderivatives of set-valued mappings, as well as first-order and second-order subdifferentials of extended real-valued functions.

  15. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.
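For intuition on what a TV penalty does, the following sketch denoises a 1-D signal. It deliberately simplifies the paper's setting: quadratic data fidelity instead of the Poisson likelihood of PET, one space instead of joint image-and-sinogram regularization, and plain gradient descent on a smoothed TV term instead of split Bregman.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-3, iters=2000, step=0.02):
    """Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + eps)
    by gradient descent (a toy stand-in for a split Bregman solver)."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)    # derivative of the smoothed |du|
        # Discrete divergence of w; its negative is the TV-term gradient.
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        u -= step * ((u - f) - lam * div)
    return u

def total_variation(u):
    return float(np.abs(np.diff(u)).sum())

# Noisy piecewise-constant signal: the jump should survive denoising.
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy)
```

The penalty is small for piecewise-constant signals, so noise is flattened while the single jump is largely preserved — the property that makes TV attractive for thin structures in reconstruction.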

  16. Regularity and predictability of human mobility in personal space.

    Directory of Open Access Journals (Sweden)

    Daniel Austin

Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out-of-home activity during travel or social interactions, with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.

  17. Manifold-splitting regularization, self-linking, twisting, writhing numbers of space-time ribbons

    International Nuclear Information System (INIS)

    Tze, C.H.

    1988-01-01

The authors present an alternative formulation of Polyakov's regularization of Gauss' integral formula for a single closed Feynman path. A key element in his proof of the D = 3 fermi-bose transmutations induced by topological gauge fields, this regularization is linked here with the existence and properties of a nontrivial topological invariant for a closed space ribbon. This self-linking coefficient, an integer, is the sum of two differential characteristics of the ribbon, its twisting and writhing numbers. These invariants form the basis for a physical interpretation of our regularization. Their connection to Polyakov's spinorization is discussed. The authors further generalize their construction to the self-linking, twisting and writhing of higher dimensional d = η (odd) submanifolds in D = (2η + 1) space-time

  18. Fast regularizing sequential subspace optimization in Banach spaces

    International Nuclear Information System (INIS)

    Schöpfer, F; Schuster, T

    2009-01-01

We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in case only noisy data are available. To this end we modify recently developed sequential subspace optimization methods in such a way that the therein employed Bregman projections onto hyperplanes are replaced by Bregman projections onto stripes whose width is on the order of the noise level.

  19. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  20. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; Huang, Jianhua Z.; Sun, Yijun; Gao, Xin

    2014-01-01

    by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant

  1. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Kai Johannes

    2010-04-15

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  2. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    International Nuclear Information System (INIS)

    Keller, Kai Johannes

    2010-04-01

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  3. Muscle synergies in neuroscience and robotics: from input-space to task-space perspectives

    Directory of Open Access Journals (Sweden)

Cristiano Alessandro

    2013-04-01

In this paper we review the works related to muscle synergies that have been carried out in neuroscience and control engineering. In particular, we refer to the hypothesis that the central nervous system (CNS) generates desired muscle contractions by combining a small number of predefined modules, called muscle synergies. We provide an overview of the methods that have been employed to test the validity of this scheme, and we show how the concept of muscle synergy has been generalized for the control of artificial agents. The comparison between these two lines of research, in particular their different goals and approaches, is instrumental in explaining the computational implications of the hypothesized modular organization. Moreover, it clarifies the importance of assessing the functional role of muscle synergies: although these basic modules are defined at the level of muscle activations (input-space), they should result in the effective accomplishment of the desired task. This requirement is not always explicitly considered in experimental neuroscience, as muscle synergies are often estimated solely by analyzing recorded muscle activities. We suggest that synergy extraction methods should explicitly take into account task execution variables, thus moving from a perspective purely based on input-space to one grounded on task-space as well.

  4. On rarely generalized regular fuzzy continuous functions in fuzzy topological spaces

    Directory of Open Access Journals (Sweden)

    Appachi Vadivel

    2016-11-01

In this paper, we introduce the concept of rarely generalized regular fuzzy continuous functions in the sense of A.P. Sostak and Ramadan. Some interesting properties and characterizations of them are investigated. Also, some applications to fuzzy compact spaces are established.

  5. On the necessary conditions of the regular minimum of the scale factor of the co-moving space

    International Nuclear Information System (INIS)

    Agakov, V.G.

    1980-01-01

In the framework of a homogeneous cosmological model, the behaviour of a volume element of the co-moving space filled with a barotropic medium, deprived of energy fluxes, is studied. The necessary conditions under which a regular finite minimum of the scale factor of the co-moving space may take place are presented. It is found that to realize the above minimum at values of the cosmological constant Λ <= 0, the presence of two of the three anisotropy factors is necessary, and the anisotropy of space deformation should be one of them. In the case of Λ > 0 the regular minimum is also possible if all three factors of anisotropy are equal to zero. However, if none of the anisotropy factors F_i, A_ik is equal to zero, the presence of space deformation anisotropy is necessary for a finite regular minimum to appear

  6. L1-norm locally linear representation regularization multi-source adaptation learning.

    Science.gov (United States)

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime needs the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. Firstly, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Secondly, considering robust graph regularization, we replace traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets of faces, visual video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Neutrino stress tensor regularization in two-dimensional space-time

    International Nuclear Information System (INIS)

    Davies, P.C.W.; Unruh, W.G.

    1977-01-01

    The method of covariant point-splitting is used to regularize the stress tensor for a massless spin 1/2 (neutrino) quantum field in an arbitrary two-dimensional space-time. A thermodynamic argument is used as a consistency check. The result shows that the physical part of the stress tensor is identical with that of the massless scalar field (in the absence of Casimir-type terms) even though the formally divergent expression is equal to the negative of the scalar case. (author)

  8. Weighted regularized statistical shape space projection for breast 3D model reconstruction.

    Science.gov (United States)

    Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González

    2018-05-02

The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. For the purpose of obtaining the specific 3D model of the breast of a patient, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point in the 3DMM shape is weighted, allowing us to assign more relevance to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. Firstly, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points or regions of points can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings: scans and 2D pictures, assessing both reconstruction frameworks with very positive results. Copyright © 2018 Elsevier B.V. All rights reserved.
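A weighted regularized projection of this kind has a simple closed form: with mean shape μ, mode matrix Φ and per-point weights w, the coefficients minimize ||diag(w)(Φc + μ − x)||² + λ||c||². The NumPy sketch below uses a random synthetic shape space and hypothetical weights, not the authors' breast model, just to show the mechanics of upweighting trusted landmarks.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy statistical shape space: mean shape plus orthonormal modes
# (these would come from PCA of training shapes in a real pipeline).
n_pts, n_modes = 30, 3
mu = rng.standard_normal(n_pts)
Phi = np.linalg.qr(rng.standard_normal((n_pts, n_modes)))[0]

def wr_projection(x, w, lam=1e-2):
    """Weighted regularized projection into the shape space:
    argmin_c ||diag(w) @ (Phi @ c + mu - x)||^2 + lam * ||c||^2."""
    A = Phi * w[:, None]
    c = np.linalg.solve(A.T @ A + lam * np.eye(n_modes), A.T @ (w * (x - mu)))
    return Phi @ c + mu

# A shape inside the model, observed with noise everywhere except at five
# exactly known landmark points.
c_true = np.array([1.0, -0.5, 0.25])
x_true = Phi @ c_true + mu
landmarks = np.arange(5)
x_obs = x_true + 0.3 * rng.standard_normal(n_pts)
x_obs[landmarks] = x_true[landmarks]

w_uniform = np.ones(n_pts)
w_landmark = np.ones(n_pts)
w_landmark[landmarks] = 100.0          # trust the known landmarks far more

err_uniform = np.linalg.norm(wr_projection(x_obs, w_uniform) - x_true)
err_weighted = np.linalg.norm(wr_projection(x_obs, w_landmark) - x_true)
```

Upweighting the reliable landmarks makes the recovered shape coefficients nearly pin down the true shape, while the uniform projection spreads the influence of the noisy points over all modes.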

  9. Time-Homogeneous Parabolic Wick-Anderson Model in One Space Dimension: Regularity of Solution

    OpenAIRE

    Kim, Hyun-Jung; Lototsky, Sergey V

    2017-01-01

Even though the heat equation with random potential is a well-studied object, the particular case of time-independent Gaussian white noise in one space dimension has yet to receive the attention it deserves. The paper investigates the stochastic heat equation with space-only Gaussian white noise on a bounded interval. The main result is that the space-time regularity of the solution is the same for additive noise and for multiplicative noise in the Wick-Itô-Skorokhod interpretation.

  10. Space Vector Modulation for an Indirect Matrix Converter with Improved Input Power Factor

    Directory of Open Access Journals (Sweden)

    Nguyen Dinh Tuyen

    2017-04-01

Pulse width modulation strategies have been developed for indirect matrix converters (IMCs) in order to improve their performance. In indirect matrix converters, the LC input filter is used to remove input current harmonics and electromagnetic interference problems. Unfortunately, due to the existence of the input filter, the input power factor is diminished, especially during operation at low voltage outputs. In this paper, a new space vector modulation (SVM) is proposed to compensate for the input power factor of the indirect matrix converter. Both computer simulation and experimental studies through hardware implementation were performed to verify the effectiveness of the proposed modulation strategy.

  11. Regularization in Hilbert space under unbounded operators and general source conditions

    International Nuclear Information System (INIS)

    Hofmann, Bernd; Mathé, Peter; Von Weizsäcker, Heinrich

    2009-01-01

The authors study ill-posed equations with unbounded operators in Hilbert space. This setup has important applications, but only a few theoretical studies are available. First, the question is addressed and answered whether every element satisfies some general source condition with respect to a given self-adjoint unbounded operator. This generalizes a previous result from Mathé and Hofmann (2008 Inverse Problems 24 015009). The analysis then proceeds to error bounds for regularization, emphasizing some specific points for regularization under unbounded operators. The study finally reviews two examples in the light of the present analysis, namely fractional differentiation and some Cauchy problems for the Helmholtz equation, both studied previously and in more detail by U Tautenhahn and co-authors.

  12. A remark on partial linear spaces of girth 5 with an application to strongly regular graphs

    NARCIS (Netherlands)

    Brouwer, A.E.; Neumaier, A.

    1988-01-01

We derive a lower bound on the number of points of a partial linear space of girth 5. As an application, certain strongly regular graphs with μ = 2 are ruled out by observing that the first subconstituents are partial linear spaces.

  13. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activities of motor neurons are modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal
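The generative part of such a model is easy to simulate: each neuron's bin counts are Poisson with a rate driven by a shared latent trajectory. The sketch below simulates a pool of neurons and recovers the latent drive by a deliberately crude smoothing of the pooled counts — a stand-in for the paper's probabilistic state-space inference, just to show that a common input is recoverable from discharge activity alone.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pool of N motor neurons driven by one latent common input z_t:
# counts[t, i] ~ Poisson(exp(b_i + z_t)) (inhomogeneous Poisson in bins).
T, N = 400, 10
z = np.sin(np.linspace(0.0, 2.0 * np.pi, T))    # latent common drive
b = rng.uniform(0.0, 0.5, size=N)               # per-neuron baselines
counts = rng.poisson(np.exp(b[None, :] + z[:, None]))

# Crude recovery: the log of the smoothed pooled population count tracks z
# up to a constant offset (full inference would fit the state-space model).
pooled = counts.sum(axis=1).astype(float)
kernel = np.ones(21) / 21.0
norm = np.convolve(np.ones(T), kernel, mode="same")   # edge correction
smooth = np.convolve(pooled, kernel, mode="same") / norm
z_hat = np.log(np.maximum(smooth, 1e-9))
z_hat -= z_hat.mean()

corr = np.corrcoef(z_hat, z - z.mean())[0, 1]
```

Because the pooled rate is proportional to exp(z_t), its smoothed logarithm follows the latent trajectory closely; the probabilistic model in the paper additionally quantifies the uncertainty (synaptic noise) around that trajectory.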

  14. Effects of modulation techniques on the input current interharmonics of Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Zare, Firuz

    2018-01-01

    operation of the grid. This paper presents the effect of the symmetrical regularly sampled Space Vector Modulation (SVM) and Discontinuous Pulse Width Modulation with a 30° lag (DPWM2) techniques, as the most popular modulation methods in ASD applications, on the drive's input current interharmonic magnitudes. Further investigations are also devoted to the cases where the Random Modulation (RM) technique is applied on the selected modulation strategies. The comparative results show how different modulation techniques can influence the ASD's input current interharmonics and consequently may…

  15. A function space framework for structural total variation regularization with applications in inverse problems

    Science.gov (United States)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  16. Transferring Instantly the State of Higher-Order Linear Descriptor (Regular) Differential Systems Using Impulsive Inputs

    Directory of Open Access Journals (Sweden)

    Athanasios D. Karageorgos

    2009-01-01

    Full Text Available In many applications, and generally speaking in many dynamical differential systems, the problem of transferring the initial state of the system to a desired state in (almost) zero time is desirable but difficult to achieve. Theoretically, this can be achieved by using a linear combination of the Dirac δ-function and its derivatives. Obviously, such an input is physically unrealizable. However, we can think of it approximately as a combination of small pulses of very high magnitude and infinitely small duration. In this paper, the approximation process of the distributional behaviour of higher-order linear descriptor (regular) differential systems is presented. Thus, new analytical formulae based on linear algebra methods and generalized inverses theory are provided. Our approach is quite general and some significant conditions are derived. Finally, a numerical example is presented and discussed.

  17. On Landweber–Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces

    International Nuclear Information System (INIS)

    Leitão, A; Alves, M Marques

    2012-01-01

    In this paper, iterative regularization methods of Landweber–Kaczmarz type are considered for solving systems of ill-posed equations modeled by (finitely many) operators acting between Banach spaces. Using assumptions of uniform convexity and smoothness on the parameter space, we are able to prove a monotonicity result for the proposed method, as well as to establish convergence (for exact data) and stability results (in the noisy data case). (paper)
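For intuition, in the Hilbert-space special case the Landweber–Kaczmarz iteration simply cycles through the subsystems, applying one gradient (Landweber) step per equation. A minimal sketch on a toy consistent linear system follows; the operators, data, and step-size rule are illustrative and not taken from the paper.

```python
import numpy as np

# Hypothetical toy system: two linear operators A_i with consistent data y_i = A_i x_true
rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0, 0.5])
A = [rng.normal(size=(4, 3)) for _ in range(2)]
y = [Ai @ x_true for Ai in A]

x = np.zeros(3)
for _ in range(500):                          # cyclic (Kaczmarz) sweeps
    for Ai, yi in zip(A, y):                  # one Landweber step per subsystem
        step = 1.0 / np.linalg.norm(Ai, 2) ** 2   # nonexpansive step size
        x = x - step * Ai.T @ (Ai @ x - yi)

err = np.linalg.norm(x - x_true)
```

In the Banach-space setting of the paper, the plain transpose and step rule are replaced by duality mappings and Bregman-distance arguments; the cyclic structure is the same.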

  18. Salt-body Inversion with Minimum Gradient Support and Sobolev Space Norm Regularizations

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Full-waveform inversion (FWI) is a technique which solves the ill-posed seismic inversion problem of fitting our model data to the measured ones from the field. FWI is capable of providing high-resolution estimates of the model, and of handling wave propagation of arbitrary complexity (visco-elastic, anisotropic); yet, it often fails to retrieve high-contrast geological structures, such as salt. One of the reasons for the FWI failure is that the updates at earlier iterations are too smooth to capture the sharp edges of the salt boundary. We compare several regularization approaches, which promote sharpness of the edges. Minimum gradient support (MGS) regularization focuses the inversion on blocky models, even more than the total variation (TV) does. However, both approaches try to invert undesirable high wavenumbers in the model too early for a model of complex structure. Therefore, we apply the Sobolev space norm as a regularizing term in order to maintain a balance between sharp and smooth updates in FWI. We demonstrate the application of these regularizations on a Marmousi model, enriched by a chunk of salt. The model turns out to be too complex in some parts to retrieve its full velocity distribution, yet the salt shape and contrast are retrieved.

  19. Critical phenomena of regular black holes in anti-de Sitter space-time

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Zhong-Ying [Peking University, Center for High Energy Physics, Beijing (China)

    2017-04-15

    In General Relativity, addressing coupling to a non-linear electromagnetic field, together with a negative cosmological constant, we obtain the general static spherical symmetric black hole solution with magnetic charges, which is asymptotic to anti-de Sitter (AdS) space-times. In particular, for a degenerate case the solution becomes a Hayward-AdS black hole, which is regular everywhere in the full space-time. The existence of such a regular black hole solution preserves the weak energy condition, while the strong energy condition is violated. We then derive the first law and the Smarr formula of the black hole solution. We further discuss its thermodynamic properties and study the critical phenomena in the extended phase space where the cosmological constant is treated as a thermodynamic variable as well as the parameter associated with the non-linear electrodynamics. We obtain many interesting results such as: the Maxwell equal area law in the P-V (or S-T) diagram is violated and consequently the critical point (T{sub *},P{sub *}) of the first order small-large black hole transition does not coincide with the inflection point (T{sub c},P{sub c}) of the isotherms; the Clapeyron equation describing the coexistence curve of the Van der Waals (vdW) fluid is no longer valid; the heat capacity at constant pressure is finite at the critical point; the various exponents near the critical point are also different from those of the vdW fluid. (orig.)

  20. Parameter choice in Banach space regularization under variational inequalities

    International Nuclear Information System (INIS)

    Hofmann, Bernd; Mathé, Peter

    2012-01-01

    The authors study parameter choice strategies for the Tikhonov regularization of nonlinear ill-posed problems in Banach spaces. The effectiveness of any parameter choice for obtaining convergence rates depends on the interplay of the solution smoothness and the nonlinearity structure, and it can be expressed concisely in terms of variational inequalities. Such inequalities are link conditions between the penalty term, the norm misfit and the corresponding error measure. The parameter choices under consideration include an a priori choice, the discrepancy principle as well as the Lepskii principle. For the convenience of the reader, the authors review in an appendix a few instances where the validity of a variational inequality can be established. (paper)

  1. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed

  2. UNIVERSAL REGULAR AUTONOMOUS ASYNCHRONOUS SYSTEMS: ω-LIMIT SETS, INVARIANCE AND BASINS OF ATTRACTION

    Directory of Open Access Journals (Sweden)

    Serban Vlad

    2011-07-01

    Full Text Available The asynchronous systems are the non-deterministic real-time binary models of the asynchronous circuits from electrical engineering. Autonomy means that the circuits and their models have no input. Regularity means analogies with the dynamical systems; thus such systems may be considered to be real-time dynamical systems with a 'vector field'. Universality refers to the case when the state space of the system is the greatest possible in the sense of inclusion. The purpose of this paper is that of defining, by analogy with dynamical systems theory, the omega-limit sets, the invariance and the basins of attraction of the universal regular autonomous asynchronous systems.

  3. Zeta-function regularization approach to finite temperature effects in Kaluza-Klein space-times

    International Nuclear Information System (INIS)

    Bytsenko, A.A.; Vanzo, L.; Zerbini, S.

    1992-01-01

    In the framework of the heat-kernel approach to zeta-function regularization, in this paper the one-loop effective potential at finite temperature for scalar and spinor fields on Kaluza-Klein space-times of the form M_p × M_c^n, where M_p is p-dimensional Minkowski space-time, is evaluated. In particular, when the compact manifold is M_c^n = H^n/Γ, the Selberg trace formula associated with the discrete torsion-free group Γ of the n-dimensional Lobachevsky space H^n is used. An explicit representation for the thermodynamic potential valid for arbitrary temperature is found. As a result a complete high-temperature expansion is presented and the roles of zero modes and topological contributions are discussed

  4. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-20

    Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMFFS and AGNMFMK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
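For reference, the fixed-graph GNMF baseline that both proposed methods extend can be sketched with the widely used multiplicative update rules and a binary k-NN graph. The dimensions, regularization weight λ, and iteration count below are arbitrary; this is the standard GNMF scheme, not the adaptive-graph variants of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.abs(rng.normal(size=(30, 20))) + 1e-3   # nonnegative data: 30 features x 20 samples

# Binary k-NN graph on the samples (columns), a common GNMF choice
k, n = 3, X.shape[1]
d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
W = np.zeros((n, n))
for i in range(n):
    for j in np.argsort(d2[i])[1:k + 1]:       # skip self (distance 0)
        W[i, j] = W[j, i] = 1.0
D = np.diag(W.sum(axis=1))                     # graph Laplacian L = D - W

r, lam, eps = 5, 0.1, 1e-9
U = np.abs(rng.normal(size=(30, r)))
V = np.abs(rng.normal(size=(n, r)))
for _ in range(200):                           # multiplicative updates preserve nonnegativity
    U *= (X @ V) / (U @ (V.T @ V) + eps)
    V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)

err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

The paper's AGNMFFS and AGNMFMK replace the fixed W above with a graph re-learned from selected features or learned kernel combinations inside the same alternating optimization.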

  5. A Novel Coupled State/Input/Parameter Identification Method for Linear Structural Systems

    Directory of Open Access Journals (Sweden)

    Zhimin Wan

    2018-01-01

    Full Text Available In many engineering applications, unknown states, inputs, and parameters exist in the structures. However, most methods require one or two of these variables to be known in order to identify the other(s). Recently, the authors have proposed a method called EGDF for coupled state/input/parameter identification for nonlinear systems in state space. However, the EGDF method based solely on acceleration measurements is found to be unstable, which can cause drift of the identified inputs and displacements. Although some regularization methods can be adopted for solving the problem, they are not suitable for joint input-state identification in real time. In this paper, a strategy of data fusion of displacement and acceleration measurements is used to avoid the low-frequency drift in the identified inputs and structural displacements for linear structural systems. Two numerical examples about a plane truss and a single-stage isolation system are conducted to verify the effectiveness of the proposed modified EGDF algorithm.

  6. A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data

    KAUST Repository

    Babuška, Ivo; Nobile, Fabio; Tempone, Raul

    2010-01-01

    This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
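The core mechanism, uncoupled deterministic solves at Gauss points weighted by the quadrature rule, fits in a few lines. Below is a hypothetical 1D model problem with a single uniform random variable entering the diffusivity; the grid size, number of Gauss points, and coefficient are illustrative choices, not from the paper.

```python
import numpy as np

# Model problem: -a(y) u''(x) = 1 on (0,1), u(0) = u(1) = 0,
# with one uniform random variable y in [-1,1] in the diffusivity.
a = lambda y: 1.0 + 0.5 * y

m = 199                                    # interior grid points
h = 1.0 / (m + 1)
# 1D finite-difference Laplacian (discretizes -d^2/dx^2)
L = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2

nodes, weights = np.polynomial.legendre.leggauss(8)   # Gauss-Legendre rule

# Collocation: one uncoupled deterministic solve per Gauss point
mean_mid = 0.0
for y, w in zip(nodes, weights):
    u = np.linalg.solve(a(y) * L, np.ones(m))
    mean_mid += 0.5 * w * u[m // 2]        # factor 1/2 = uniform density on [-1,1]

exact = np.log(3.0) / 8                    # E[u(1/2)] = E[1/(8 a(y))] in closed form
```

Because the exact solution u(x) = x(1-x)/(2a(y)) is analytic in y, the quadrature error decays exponentially in the number of Gauss points, which is the convergence behavior the paper proves under regularity assumptions on the random input data.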

  7. Preventing Out-of-Sequence for Multicast Input-Queued Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    This paper proposes an out-of-sequence (OOS) preventative cell dispatching algorithm, the multicast flow-based round robin (MFRR), for multicast input-queued space-memory-memory (IQ-SMM) Clos-network architecture. Independently treating each incoming cell, such as the desynchronized static round...

  8. Regularization and renormalization of quantum field theory in curved space-time

    International Nuclear Information System (INIS)

    Bernard, C.; Duncan, A.

    1977-01-01

    It is proposed that field theories quantized in a curved space-time manifold can be conveniently regularized and renormalized with the aid of Pauli-Villars regulator fields. The method avoids the conceptual difficulties of covariant point-separation approaches, by starting always from a manifestly generally covariant action, and the technical limitations of the dimensional regularization approach, which requires solution of the theory in arbitrary dimension in order to go beyond a weak-field expansion. An action is constructed which renormalizes the weak-field perturbation theory of a massive scalar field in two space-time dimensions--it is shown that the trace anomaly previously found in dimensional regularization and some point-separation calculations also arises in perturbation theory when the theory is Pauli-Villars regulated. One then studies a specific solvable two-dimensional model of a massive scalar field in a Robertson-Walker asymptotically flat universe. It is shown that the action previously considered leads, in this model, to a well defined finite expectation value for the stress-energy tensor. The particle production (⟨0 in|θ^{μν}(x,t)|0 in⟩ for t → +∞) is computed explicitly. Finally, the validity of weak-field perturbation theory (in the appropriate range of parameters) is checked directly in the solvable model, and the trace anomaly computed in the asymptotic regions t → ±∞ independently of any weak field approximation. The extension of the model to higher dimensions and the renormalization of interacting (scalar) field theories are briefly discussed

  9. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
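The alignment part of such a pipeline can be illustrated in miniature: detect which edges are nearly equal, then enforce equality with the least-squares fit that perturbs the input minimally. For pure alignment constraints that fit reduces to averaging within each detected cluster; the coordinates and tolerance below are made up, and the full method additionally handles size and distance constraints inside one quadratic program.

```python
import numpy as np

# Hypothetical noisy layout: left edges of four boxes, two nearly aligned groups
lefts = np.array([10.2, 9.8, 50.1, 49.7])

# Detect alignment constraints: greedily cluster edges closer than a tolerance
tol = 1.0
order = np.argsort(lefts)
clusters, current = [], [order[0]]
for i in order[1:]:
    if lefts[i] - lefts[current[-1]] <= tol:
        current.append(i)
    else:
        clusters.append(current)
        current = [i]
clusters.append(current)

# Enforce each constraint: minimizing ||x - lefts||^2 subject to equal edges
# within a cluster snaps every member to the cluster mean
reg = lefts.copy()
for c in clusters:
    reg[c] = lefts[c].mean()
```

The point of the automatic-detection step in the paper is precisely to decide which of these candidate constraints to keep before the quadratic program is solved.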

  10. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates that our method has superior performance to the state of the art.

  11. Regularizing Unpredictable Variation: Evidence from a Natural Language Setting

    Science.gov (United States)

    Hendricks, Alison Eisel; Miller, Karen; Jackson, Carrie N.

    2018-01-01

    While previous sociolinguistic research has demonstrated that children faithfully acquire probabilistic input constrained by sociolinguistic and linguistic factors (e.g., gender and socioeconomic status), research suggests children regularize inconsistent input-probabilistic input that is not sociolinguistically constrained (e.g., Hudson Kam &…

  12. Out-of-Sequence Prevention for Multicast Input-Queuing Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah; Berger, Michael Stübert

    2011-01-01

    This paper proposes two cell dispatching algorithms for the input-queuing space-memory-memory (IQ-SMM) Closnetwork to reduce out-of-sequence (OOS) for multicast traffic. The frequent connection pattern change of DSRR results in a severe OOS problem. Based on the principle of DSRR, MFDSRR is able ...

  13. The Accuracy of Remapping Irregularly Spaced Velocity Data onto a Regular Grid and the Computation of Vorticity

    National Research Council Canada - National Science Library

    Cohn, R

    1998-01-01

    .... This technique may be viewed as the molecular counterpart of PIV. To take advantage of standard data processing techniques, the MTV data need to be remapped onto a regular grid with a uniform spacing...

  14. The Accuracy of Remapping Irregularly Spaced Velocity Data onto a Regular Grid and the Computation of Vorticity

    National Research Council Canada - National Science Library

    Cohn, Richard

    1999-01-01

    .... This technique may be viewed as the molecular counterpart of PIV. To take advantage of standard data processing techniques, the MTV data need to be remapped onto a regular grid with a uniform spacing...

  15. Regularity of difference equations on Banach spaces

    CERN Document Server

    Agarwal, Ravi P; Lizama, Carlos

    2014-01-01

    This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advance knowledge of and interest in functional analysis.

  16. On RC-spaces

    OpenAIRE

    Bielas, Wojciech; Plewik, Szymon

    2018-01-01

    Following Frink's characterization of completely regular spaces, we say that a regular T_1-space is an RC-space whenever the family of all regular open sets constitutes a regular normal base. Normal spaces are RC-spaces and there exist completely regular spaces which are not RC-spaces. So the question arises, which of the known examples of completely regular and not normal spaces are RC-spaces. We show that the Niemytzki plane and the Sorgenfrey plane are RC-spaces.

  17. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  18. Summary of astronaut inputs on automation and robotics for Space Station Freedom

    Science.gov (United States)

    Weeks, David J.

    1990-01-01

    Astronauts and payload specialists present specific recommendations in the form of an overview that relate to the use of automation and robotics on the Space Station Freedom. The inputs are based on on-orbit operations experience, time requirements for crews, and similar crew-specific knowledge that address the impacts of automation and robotics on productivity. Interview techniques and specific questionnaire results are listed, and the majority of the responses indicate that incorporating automation and robotics to some extent and with human backup can improve productivity. Specific support is found for the use of advanced automation and EVA robotics on the Space Station Freedom and for the use of advanced automation on ground-based stations. Ground-based control of in-flight robotics is required, and Space Station activities and crew tasks should be analyzed to assess the systems engineering approach for incorporating automation and robotics.

  19. Regularity criterion for solutions to the Navier Stokes equations in the whole 3D space based on two vorticity components

    Czech Academy of Sciences Publication Activity Database

    Guo, Z.; Kučera, P.; Skalák, Zdeněk

    2018-01-01

    Roč. 458, č. 1 (2018), s. 755-766 ISSN 0022-247X R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985874 Keywords : Navier Stokes equations * conditional regularity * regularity criteria * vorticity * Besov spaces * bony decomposition Subject RIV: BA - General Mathematics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 1.064, year: 2016

  20. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  1. Out-of-Sequence Preventative Cell Dispatching for Multicast Input-Queued Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    This paper proposes two out-of-sequence (OOS) preventative cell dispatching algorithms for the multicast input-queued space-memory-memory (IQ-SMM) Clos-network switch architecture, i.e. the multicast flow-based DSRR (MF-DSRR) and the multicast flow-based round-robin (MFRR). Treating each cell...

  2. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shi, E-mail: sjin@wisc.edu [Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706 (United States); Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240 (China); Lu, Hanqing, E-mail: hanqing@math.wisc.edu [Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706 (United States)

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle efficiently the diffusive regime. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  3. The Involvement of Endogenous Neural Oscillations in the Processing of Rhythmic Input: More Than a Regular Repetition of Evoked Neural Responses

    Science.gov (United States)

    Zoefel, Benedikt; ten Oever, Sanne; Sack, Alexander T.

    2018-01-01

    It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature. PMID:29563860

  4. Self-calibration for lab-μCT using space-time regularized projection-based DVC and model reduction

    Science.gov (United States)

    Jailin, C.; Buljac, A.; Bouterf, A.; Poncelet, M.; Hild, F.; Roux, S.

    2018-02-01

    An online calibration procedure for x-ray lab-CT is developed using projection-based digital volume correlation. An initial reconstruction of the sample is positioned in the 3D space for every angle so that its projection matches the initial one. This procedure allows a space-time displacement field to be estimated for the scanned sample, which is regularized with (i) rigid body motions in space and (ii) modal time shape functions computed using model reduction techniques (i.e. proper generalized decomposition). The result is an accurate identification of the position of the sample adapted for each angle, which may deviate from the desired perfect rotation required for standard reconstructions. An application of this procedure to a 4D in situ mechanical test is shown. The proposed correction leads to a much improved tomographic reconstruction quality.

  5. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  6. Regularization dependence on phase diagram in Nambu–Jona-Lasinio model

    International Nuclear Information System (INIS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2015-01-01

We study the regularization dependence of meson properties and the phase diagram of quark matter using the two-flavor Nambu–Jona-Lasinio model. Since the model also depends on its parameters within each regularization, we explicitly give the model parameters for several sets of input observables and then investigate their effect on the phase diagram. We find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. The regularization and parameters must therefore be chosen carefully when investigating the QCD critical end point in effective model studies

  7. On the regularity of mild solutions to complete higher order differential equations on Banach spaces

    Directory of Open Access Journals (Sweden)

    Nezam Iraniparast

    2015-09-01

Full Text Available For the complete higher order differential equation u^(n)(t) = Σ_{k=0}^{n-1} A_k u^(k)(t) + f(t), t ∈ R (*) on a Banach space E, we give a new definition of mild solutions of (*). We then characterize the regular admissibility of a translation invariant subspace M of BUC(R, E) with respect to (*) in terms of the solvability of the operator equation Σ_{j=0}^{n-1} A_j X D^j - X D^n = C. As an application, almost periodicity of mild solutions of (*) is proved.

  8. Analysis of Logic Programs Using Regular Tree Languages

    DEFF Research Database (Denmark)

    Gallagher, John Patrick

    2012-01-01

The field of finite tree automata provides fundamental notations and tools for reasoning about sets of terms called regular or recognizable tree languages. We consider two kinds of analysis using regular tree languages, applied to logic programs. The first approach is to try to discover automatically...... a tree automaton from a logic program, approximating its minimal Herbrand model. In this case the input for the analysis is a program, and the output is a tree automaton. The second approach is to expose or check properties of the program that can be expressed by a given tree automaton. The input...... to the analysis is a program and a tree automaton, and the output is an abstract model of the program. These two contrasting abstract interpretations can be used in a wide range of analysis and verification problems....
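The second kind of analysis (checking properties expressible by a given tree automaton) rests on ordinary bottom-up tree automaton evaluation, which can be sketched in a few lines. The signature and states below are invented for illustration:

```python
# A deterministic bottom-up tree automaton over the signature
# {nil/0, cons/2, 0/0, s/1} accepting terms denoting lists of naturals.
# States: 'nat' (natural numbers) and 'list' (lists of naturals).
DELTA = {
    ('0',):                  'nat',
    ('s', 'nat'):            'nat',
    ('nil',):                'list',
    ('cons', 'nat', 'list'): 'list',
}

def run(term):
    """Evaluate a term (a symbol or (symbol, subterm...) tuple) bottom-up;
    return the reached state, or None if the automaton gets stuck."""
    if isinstance(term, str):
        term = (term,)
    sym, args = term[0], term[1:]
    states = tuple(run(a) for a in args)
    if any(s is None for s in states):
        return None
    return DELTA.get((sym,) + states)

# s(s(0)) is a nat; cons(s(0), nil) is a list; cons(nil, nil) is rejected.
ok = run(('cons', ('s', '0'), 'nil'))   # -> 'list'
bad = run(('cons', 'nil', 'nil'))       # -> None
```

A program property such as "every answer is a well-typed list" then amounts to checking that derivable terms reach an accepting state.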

  9. The existence and regularity of time-periodic solutions to the three-dimensional Navier–Stokes equations in the whole space

    International Nuclear Information System (INIS)

    Kyed, Mads

    2014-01-01

    The existence, uniqueness and regularity of time-periodic solutions to the Navier–Stokes equations in the three-dimensional whole space are investigated. We consider the Navier–Stokes equations with a non-zero drift term corresponding to the physical model of a fluid flow around a body that moves with a non-zero constant velocity. The existence of a strong time-periodic solution is shown for small time-periodic data. It is further shown that this solution is unique in a large class of weak solutions that can be considered physically reasonable. Finally, we establish regularity properties for any strong solution regardless of its size. (paper)

  10. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  11. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong; Nan, Liangliang; Yan, Dongming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2015-01-01

    plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing

  12. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  13. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log_2 N^2), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  14. The effects of shiftwork on human performance and its implications for regulating crew rest and duty restrictions during commercial space flight

    Science.gov (United States)

    2008-11-01

    Although the current crew rest and duty restrictions for commercial space transportation remain in place, the Federal Aviation Administration (FAA) continues to review the regulation on a regular basis for validity and efficacy based on input from sc...

  15. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  16. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
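The idea of a multiscale kernel can be illustrated, independently of the ranking setting, with a regularized kernel method whose kernel is a sum of Gaussians at several bandwidths. The sketch below performs kernel ridge regression with numpy; it is not the authors' algorithm, and the scales, regularization strength, and toy data are all assumptions:

```python
import numpy as np

def multiscale_rbf(X, Z, scales=(0.1, 1.0, 10.0)):
    """Sum of Gaussian kernels with several bandwidths (a multiscale kernel)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2 * s ** 2)) for s in scales)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 60)

lam = 1e-2
K = multiscale_rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(60), y)   # regularized fit

Xt = np.linspace(-3, 3, 50)[:, None]
pred = multiscale_rbf(Xt, X) @ alpha
err = np.max(np.abs(pred - np.sin(Xt[:, 0])))
```

Combining narrow and wide bandwidths lets one kernel capture both sharp and smooth structure without tuning a single width.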

  17. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

New x-ray phase contrast imaging techniques that do not use synchrotron radiation confront a common problem: the negative effects of finite source size and limited spatial resolution. These effects swamp the fine phase contrast fringes and make them almost undetectable. To alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
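Of the three methods compared, Tikhonov regularization is the simplest to write down: in the Fourier domain the restored spectrum is conj(H)·Y/(|H|² + λ). A hedged 1-D numpy sketch follows; the phantom, PSF width, and λ are illustrative, and circular boundary conditions are assumed:

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    """Fourier-domain Tikhonov deconvolution:
    X = conj(H) * Y / (|H|^2 + lam), with circular boundary conditions."""
    H = np.fft.fft(psf, n=len(blurred))
    Y = np.fft.fft(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(2)
x = np.zeros(128); x[40:60] = 1.0          # a simple "phantom" edge profile
psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2); psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf, 128)))
noisy = blurred + rng.normal(0, 0.01, 128)

restored = tikhonov_deconvolve(noisy, psf)
```

The λ term suppresses the noise that plain inverse filtering would amplify at frequencies where |H| is small.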

  18. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

    We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...

  19. Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2009-01-01

    Solution of the pre-image problem is key to efficient nonlinear de-noising using kernel Principal Component Analysis. Pre-image estimation is inherently ill-posed for typical kernels used in applications and consequently the most widely used estimation schemes lack stability. For de...
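Although the abstract is truncated, the underlying construction (a fixed-point pre-image iteration for an RBF kernel, with an extra term pulling the solution toward the noisy input) can be sketched. The sketch below omits feature-space centering for brevity and invents all data and parameter values; it illustrates input-space regularization, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
# Training data on a circle (the clean manifold) in 2-D.
t = rng.uniform(0, 2 * np.pi, 100)
X = np.column_stack([np.cos(t), np.sin(t)])

c = 0.5                                     # RBF kernel width parameter
k = lambda A, B: np.exp(-((A[:, None] - B[None]) ** 2).sum(-1) / c)

# Kernel PCA (uncentered, for brevity) on the training set.
K = k(X, X)
w, V = np.linalg.eigh(K)
idx = np.argsort(w)[::-1][:8]               # keep 8 leading components
alpha = V[:, idx] / np.sqrt(w[idx])         # normalized eigenvectors

def denoise(z0, lam=0.1, iters=50):
    """Regularized pre-image: fixed-point iteration pulled toward the input z0."""
    z = z0.copy()
    for _ in range(iters):
        beta = alpha.T @ k(X, z[None])[:, 0]    # projections of z
        gamma = alpha @ beta                    # reconstruction coefficients
        wts = gamma * k(X, z[None])[:, 0]
        # Input-space regularization: lam pulls the pre-image toward z0,
        # stabilizing the otherwise ill-posed fixed-point update.
        z = (wts @ X + lam * z0) / (wts.sum() + lam)
    return z

noisy = np.array([1.3, 0.2])                # off-manifold test point
clean = denoise(noisy)
```

Without the λ term, the bare fixed-point update can oscillate or collapse for points far from the training data; the input-space penalty keeps the iterate anchored.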

  20. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    Science.gov (United States)

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a
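The cross-correlation underlying the perceptual echo can be reproduced in a toy simulation: a random luminance sequence is convolved with a hypothetical damped 10 Hz impulse response plus noise, and cross-correlating stimulus with "response" recovers the reverberation. The sampling rate, decay constant, and noise level are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 160                                   # sample rate (Hz), an assumption
n = fs * 20                                # 20 s of stimulation
stim = rng.standard_normal(n)              # random luminance sequence

# Hypothetical impulse response: a ~10 Hz reverberation decaying over ~0.5 s.
t = np.arange(fs) / fs
ir = np.cos(2 * np.pi * 10 * t) * np.exp(-t / 0.2)

# Synthetic "EEG": stimulus filtered by the impulse response, plus noise.
eeg = np.convolve(stim, ir)[:n] + 0.5 * rng.standard_normal(n)

# Cross-correlate stimulus and response at lags 0..fs-1 samples.
lags = np.arange(fs)
xcorr = np.array([np.dot(stim[:n - L], eeg[L:]) / (n - L) for L in lags])
```

Because the stimulus is white, the cross-correlation is an unbiased estimate of the impulse response, so the periodic reverberation appears directly in `xcorr`.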

  1. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  2. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    Science.gov (United States)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces make it possible to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in an impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space so as to bring the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach in which the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This
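The nested subrange search can be mocked up on synthetic data: a toy impact criterion with two parameters, an impact indicator index computed per subrange combination, and AUROC as the performance indicator (implemented here via the rank-sum statistic). Everything about the model below is invented for illustration and has no relation to the actual GRASS GIS modules:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    return (ranks[pos].sum() - pos.sum() * (pos.sum() + 1) / 2) \
        / (pos.sum() * (~pos).sum())

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 400)                 # pixel "distance from release"
observed = (x < 4.0).astype(int)            # documented impact area

def impact(x, a, b):
    return x < a * b                        # toy two-parameter runout criterion

# Nested approach: split each parameter's total range into subranges and
# score every subrange combination by the AUROC of its III pattern.
a_edges = np.linspace(0.5, 2.5, 5)
b_edges = np.linspace(1.0, 5.0, 5)
best = (-1.0, None)
for i in range(4):
    for j in range(4):
        a_s = np.linspace(a_edges[i], a_edges[i + 1], 4)
        b_s = np.linspace(b_edges[j], b_edges[j + 1], 4)
        # III: fraction of parameter combinations predicting impact per pixel.
        iii = np.mean([impact(x, a, b) for a in a_s for b in b_s], axis=0)
        score = auroc(iii, observed)
        if score > best[0]:
            best = (score, (i, j))
best_auroc, best_cell = best
```

The best-scoring cell is the guiding subrange combination; in a real back-calculation this loop would run the mass-flow model instead of the toy criterion.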

  3. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
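Manifold regularization itself can be shown in miniature: noisy value estimates over sampled states are smoothed by penalizing differences along a k-nearest-neighbor graph. This is a generic Laplacian-regularized least-squares sketch, not the paper's scheme; the dynamics-free setup, the neighborhood size, and λ are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
states = np.sort(rng.uniform(0, 2 * np.pi, 80))
noisy_values = np.sin(states) + rng.normal(0, 0.3, 80)   # noisy value estimates

# Build a k-nearest-neighbor similarity graph over the sampled states.
k = 5
d = np.abs(states[:, None] - states[None, :])
W = np.zeros((80, 80))
for i in range(80):
    for j in np.argsort(d[i])[1:k + 1]:
        W[i, j] = W[j, i] = np.exp(-d[i, j] ** 2)
L = np.diag(W.sum(1)) - W                  # graph Laplacian

# Manifold-regularized estimate: argmin ||v - y||^2 + lam * v' L v.
lam = 2.0
v = np.linalg.solve(np.eye(80) + lam * L, noisy_values)
```

The penalty v' L v equals the sum of weighted squared differences across graph edges, so the solution respects the geometry of the sampled state space rather than any fixed grid.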

  4. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
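The basic mechanism (truncating functions on a manifold to N x N matrices) can be checked concretely with the spin-s generators of su(2), the standard fuzzy-sphere regularization of S^2, with Kronecker products supplying the product-manifold structure. A small numerical sketch:

```python
import numpy as np

def spin_matrices(N):
    """su(2) generators in the N-dim irrep (spin s = (N-1)/2): [Jx, Jy] = i Jz."""
    s = (N - 1) / 2
    m = s - np.arange(N)
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((N, N), dtype=complex)          # raising operator J+
    for a in range(N - 1):
        Jp[a, a + 1] = np.sqrt(s * (s + 1) - m[a + 1] * (m[a + 1] + 1))
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / (2 * 1j)
    return Jx, Jy, Jz

N = 6
Jx, Jy, Jz = spin_matrices(N)
comm = Jx @ Jy - Jy @ Jx                          # should equal i*Jz
err = np.max(np.abs(comm - 1j * Jz))

# Product manifold S^2 x S^2: coordinates of the two factors act as
# Kronecker products X (x) 1 and 1 (x) X of two SU(N) blocks.
X1 = np.kron(Jx, np.eye(N))
X2 = np.kron(np.eye(N), Jx)
commute_err = np.max(np.abs(X1 @ X2 - X2 @ X1))   # the two factors commute
```

Functions on each 2-sphere are approximated within one SU(N) factor, and the tensor product realizes the N^2 x N^2 truncation of the 4-manifold described in the abstract.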

  5. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (P_R) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
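The participation ratio used here to separate regular from chaotic regions is simple to compute for any state expanded in a basis: P_R = 1/Σ_k |c_k|⁴ for normalized components, ranging from 1 (a single basis state) up to the basis size (uniform spreading). A minimal sketch:

```python
import numpy as np

def participation_ratio(c):
    """P_R = 1 / sum_k |c_k|^4 for a state with components c_k in some basis."""
    p = np.abs(c) ** 2
    p = p / p.sum()                        # enforce normalization
    return 1.0 / np.sum(p ** 2)

# A state spread evenly over 10 basis states explores all of them ...
uniform = participation_ratio(np.ones(10))     # -> 10.0
# ... while a single basis state explores exactly one.
localized = participation_ratio(np.eye(10)[0])  # -> 1.0
```

For a coherent state expanded in the eigenenergy basis, large P_R signals spreading over many eigenstates, the quantum counterpart of a positive Lyapunov exponent.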

  6. On Some General Regularities of Formation of the Planetary Systems

    Directory of Open Access Journals (Sweden)

    Belyakov A. V.

    2014-01-01

Full Text Available J. Wheeler's geometrodynamic concept has been used, in which the space continuum is considered as a topologically non-unitary coherent surface admitting the existence of transitions of the input-output kind between distant regions of the space in an additional dimension. This model assumes the existence of closed structures (micro- and macro-contours) formed due to the balance between the main interactions: gravitational, electric, magnetic, and inertial forces. It is such macrocontours that have been demonstrated to form, independently of their material basis, the essential structure of objects at various levels of organization of matter. On the basis of this concept, the basic regularities acting during the formation of planetary systems have been obtained in this paper. The existence of two sharply different types of planetary systems has been determined. The dependencies linking the masses of the planets, the diameters of the planets, the orbital radii of the planets, and the mass of the central body have been deduced. The possibility of formation of Earth-like planets near brown dwarfs has been substantiated. The minimum mass of a planet that may arise in a planetary system has been defined.

  7. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    Science.gov (United States)

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR), with the combination of synaptic input correlation and noise intensity, is finally attained after processing layer by layer in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.
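Spiking regularity in such studies is typically quantified by the coefficient of variation (CV) of interspike intervals, near 0 for clock-like trains and near 1 for Poisson-like trains. A minimal sketch on synthetic spike trains:

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of interspike intervals: CV = std/mean.
    CV ~ 0 for a regular (clock-like) train, ~1 for a Poisson train."""
    isi = np.diff(np.sort(np.asarray(spike_times, float)))
    return isi.std() / isi.mean()

rng = np.random.default_rng(7)
regular = np.arange(0, 10, 0.1) + rng.normal(0, 0.001, 100)   # jittered clock
poisson = np.cumsum(rng.exponential(0.1, 100))                # Poisson-like
cv_reg, cv_poi = cv_isi(regular), cv_isi(poisson)
```

Tracking this statistic layer by layer is one way to measure how regularity propagates (or degrades) through a feedforward network.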

  8. Ergodic channel capacity of spatial correlated multiple-input multiple-output free space optical links using multipulse pulse-position modulation

    Science.gov (United States)

    Wang, Huiqin; Wang, Xue; Cao, Minghua

    2017-02-01

    The spatial correlation extensively exists in the multiple-input multiple-output (MIMO) free space optical (FSO) communication systems due to the channel fading and the antenna space limitation. Wilkinson's method was utilized to investigate the impact of spatial correlation on the MIMO FSO communication system employing multipulse pulse-position modulation. Simulation results show that the existence of spatial correlation reduces the ergodic channel capacity, and the reception diversity is more competent to resist this kind of performance degradation.
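Wilkinson's method approximates a sum of correlated lognormal terms by a single lognormal whose first two moments match exactly, which is what makes the correlated-channel capacity analysis tractable. A numpy sketch with a Monte-Carlo sanity check (the correlation and variance values are illustrative):

```python
import numpy as np

def wilkinson(mu, Sigma):
    """Match the first two moments of S = sum_i exp(Z_i), Z ~ N(mu, Sigma),
    to a single lognormal exp(N(mu_S, sig2_S)) (Wilkinson's approximation)."""
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    v = np.diag(Sigma)
    m1 = np.exp(mu + v / 2).sum()                       # E[S]
    m2 = np.exp(mu[:, None] + mu[None, :]               # E[S^2]
                + (v[:, None] + v[None, :]) / 2 + Sigma).sum()
    sig2 = np.log(m2 / m1 ** 2)
    return np.log(m1) - sig2 / 2, sig2

# Two correlated branches (rho models the spatial correlation of the links).
mu = np.array([0.0, 0.0]); rho = 0.6; s2 = 0.25
Sigma = s2 * np.array([[1.0, rho], [rho, 1.0]])
mu_S, sig2_S = wilkinson(mu, Sigma)

# Monte-Carlo check that the matched lognormal reproduces the mean of S.
rng = np.random.default_rng(8)
Z = rng.multivariate_normal(mu, Sigma, 200_000)
S = np.exp(Z).sum(1)
```

Increasing rho inflates sig2_S, which is the mechanism by which spatial correlation degrades the ergodic capacity in the abstract's setting.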

  9. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...... to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way....

  10. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are examined for these regular black holes. It is noted that the three-velocity depends on the critical points and the equation of state parameter on the phase space. (orig.)

  11. Regularization based on steering parameterized Gaussian filters and a Bhattacharyya distance functional

    Science.gov (United States)

    Lopes, Emerson P.

    2001-08-01

Template regularization embeds the problem of class separability. From the machine vision perspective, this problem is critical when a textural classification procedure is applied to non-stationary pattern mosaic images. These applications often show low accuracy due to disturbances of the classifiers produced by exogenous or endogenous signal regularity perturbations. Natural scene imaging, where images present a certain degree of homogeneity in terms of texture element size or shape (primitives), shows a variety of behaviors, especially varying the preferential spatial directionality. The space-time image pattern characterization is only solved if the classification procedures are designed considering the most robust tools within a parallel and hardware perspective. The results compared in this paper are obtained using a framework based on a multi-resolution, frame and hypothesis approach. Two strategies for applying the bank of Gabor filters are considered: an adaptive strategy using the KL transform and a fixed configuration strategy. The regularization under discussion is accomplished in the pyramid building system instance. The filters are steering Gaussians controlled by free parameters, which are adjusted in accordance with a feedback process driven by hints obtained from sequence-of-frames interaction functionals post-processed in the training process, including classification of training set samples as examples. Besides these adjustments there is continuous input-data-sensitive adaptiveness. The experimental assessments focus on two basic issues: the Bhattacharyya distance as a pattern characterization feature, and the combination of the KL transform as feature selection and adaptive criterion with the regularization of the pattern Bhattacharyya distance functional (BDF) behavior, using the BDF state separability and symmetry as the main indicators of an optimum framework parameter configuration.
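For two Gaussian class-conditional feature distributions, the Bhattacharyya distance has a closed form, which makes its use as a separability functional concrete. A minimal sketch (the feature statistics are invented):

```python
import numpy as np

def bhattacharyya_gaussian(m1, S1, m2, S2):
    """Bhattacharyya distance between N(m1, S1) and N(m2, S2):
    D_B = 1/8 (m1-m2)' S^-1 (m1-m2) + 1/2 ln(det S / sqrt(det S1 det S2)),
    with S = (S1 + S2) / 2."""
    S = (S1 + S2) / 2
    d = m1 - m2
    term1 = d @ np.linalg.solve(S, d) / 8
    term2 = 0.5 * np.log(np.linalg.det(S)
                         / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

# Two texture-feature classes; the distance grows as the means separate.
S = np.eye(2)
near = bhattacharyya_gaussian(np.zeros(2), S, np.array([0.5, 0.0]), S)
far = bhattacharyya_gaussian(np.zeros(2), S, np.array([3.0, 0.0]), S)
```

Larger D_B implies a smaller Bhattacharyya bound on classification error, which is why the functional serves as a separability indicator when tuning the filter parameters.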

  12. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
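The block-circulant structure mentioned above is what lets correlation filters be trained in closed form in the Fourier domain: ridge regression over all cyclic shifts of a base sample reduces to elementwise division. The 1-D, linear-kernel sketch below (in the style of MOSSE/KCF filters) illustrates only this ingredient, not the paper's full semisupervised model; all sizes and values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.standard_normal(64)                 # base sample (a 1-D "patch")
# Desired response: a peak at shift 0, decaying over neighboring shifts.
shifts = np.minimum(np.arange(64), 64 - np.arange(64))
y = np.exp(-0.5 * (shifts / 2.0) ** 2)

lam = 1e-2
X, Y = np.fft.fft(x), np.fft.fft(y)
# Ridge regression over all cyclic shifts, solved per frequency bin.
W = np.conj(X) * Y / (np.abs(X) ** 2 + lam)

# Applying the filter to a shifted copy localizes the shift at the peak.
response = np.real(np.fft.ifft(W * np.fft.fft(np.roll(x, 5))))
peak = int(np.argmax(response))
```

Because the training matrix over all shifts is circulant, it is diagonalized by the DFT, so the O(n^3) ridge solve collapses to O(n log n), the property the tracker exploits for efficient online learning.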

  13. Unconscious integration of multisensory bodily inputs in the peripersonal space shapes bodily self-consciousness.

    Science.gov (United States)

    Salomon, Roy; Noel, Jean-Paul; Łukowska, Marta; Faivre, Nathan; Metzinger, Thomas; Serino, Andrea; Blanke, Olaf

    2017-09-01

    Recent studies have highlighted the role of multisensory integration as a key mechanism of self-consciousness. In particular, integration of bodily signals within the peripersonal space (PPS) underlies the experience of the self in a body we own (self-identification) and that is experienced as occupying a specific location in space (self-location), two main components of bodily self-consciousness (BSC). Experiments investigating the effects of multisensory integration on BSC have typically employed supra-threshold sensory stimuli, neglecting the role of unconscious sensory signals in BSC, as tested in other consciousness research. Here, we used psychophysical techniques to test whether multisensory integration of bodily stimuli underlying BSC also occurs for multisensory inputs presented below the threshold of conscious perception. Our results indicate that visual stimuli rendered invisible through continuous flash suppression boost processing of tactile stimuli on the body (Exp. 1), and enhance the perception of near-threshold tactile stimuli (Exp. 2), only once they entered PPS. We then employed unconscious multisensory stimulation to manipulate BSC. Participants were presented with tactile stimulation on their body and with visual stimuli on a virtual body, seen at a distance, which were either visible or rendered invisible. We found that participants reported higher self-identification with the virtual body in the synchronous visuo-tactile stimulation (as compared to asynchronous stimulation; Exp. 3), and shifted their self-location toward the virtual body (Exp.4), even if stimuli were fully invisible. Our results indicate that multisensory inputs, even outside of awareness, are integrated and affect the phenomenological content of self-consciousness, grounding BSC firmly in the field of psychophysical consciousness studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

Full Text Available This paper proposes a presentation of the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types, and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Modeling and unfolding the Platonic and Archimedean polyhedra will be done using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.

  15. The geometric $\\beta$-function in curved space-time under operator regularization

    OpenAIRE

    Agarwala, Susama

    2009-01-01

In this paper, I compare the generators of the renormalization group flow, or the geometric $\beta$-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric $\beta$-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow for conformal scalar-field theories on the same manifolds. The geometr...

  16. Variational analysis of regular mappings theory and applications

    CERN Document Server

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  17. Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data

    Science.gov (United States)

    Lavrentiev, M.; Shchemel, A.; Simonov, K.

    A variant of a modified training functional that allows for inaccurate input data is suggested. A limiting case, in which part of the input data is completely undefined so that a problem of reconstructing hidden parameters must be solved, is also considered, and some numerical experiments are presented. The classic problem definition, widely used in the majority of neural-net algorithms, assumes that a dependence of known output variables on known input ones is to be found. The quality of approximation is evaluated by a performance function; often the error of the task is evaluated as the squared distance between known data and predicted data, multiplied by weight coefficients, which may be called "precision coefficients". When inputs are not known exactly, a natural generalization of the performance function is to add a term responsible for the distance between the known inputs and shifted inputs that lessen the model's error. For training to converge, it is desirable that the set of variable parameters be compact. In the above problem it is possible to choose variants of a priori compactness requirements that allow a meaningful interpretation in terms of the smoothness of the model dependence. Two kinds of regularization were used: the first limits the squares of the coefficients responsible for nonlinearity, and the second limits the product of those coefficients and the linear coefficients. The asymptotic universality of a neural net's ability to approximate various smooth functions with any accuracy as the number of tunable parameters increases is often the basis for selecting a type of neural-net approximation. It can be shown that the neural net used here approaches a Fourier integral transform, whose approximation abilities are known, as the number of tunable parameters increases. In the limiting case, when the input data are given with zero precision, the problem of reconstructing hidden parameters from observed output data appears.

  18. Discharge regularity in the turtle posterior crista: comparisons between experiment and theory.

    Science.gov (United States)

    Goldberg, Jay M; Holt, Joseph C

    2013-12-01

    Intra-axonal recordings were made from bouton fibers near their termination in the turtle posterior crista. Spike discharge, miniature excitatory postsynaptic potentials (mEPSPs), and afterhyperpolarizations (AHPs) were monitored during resting activity in both regularly and irregularly discharging units. Quantal size (qsize) and quantal rate (qrate) were estimated by shot-noise theory. Theoretically, the ratio, σV/(dμV/dt), between synaptic noise (σV) and the slope of the mean voltage trajectory (dμV/dt) near threshold crossing should determine discharge regularity. AHPs are deeper and more prolonged in regular units; as a result, dμV/dt is larger, the more regular the discharge. The qsize is larger and qrate smaller in irregular units; these oppositely directed trends lead to little variation in σV with discharge regularity. Of the two variables, dμV/dt is much more influential than the nearly constant σV in determining regularity. Sinusoidal canal-duct indentations at 0.3 Hz led to modulations in spike discharge and synaptic voltage. Gain, the ratio between the amplitudes of the two modulations, and phase leads re indentation of both modulations are larger in irregular units. Gain variations parallel the sensitivity of the postsynaptic spike encoder, the set of conductances that converts synaptic input into spike discharge. Phase variations reflect both synaptic inputs to the encoder and postsynaptic processes. Experimental data were interpreted using a stochastic integrate-and-fire model. Advantages of an irregular discharge include an enhanced encoder gain and the prevention of nonlinear phase locking. Regular and irregular units are more efficient in the encoding of low- and high-frequency head rotations, respectively.
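    The σV/(dμV/dt) account of discharge regularity lends itself to a quick simulation. Below is an illustrative stochastic integrate-and-fire sketch (not the authors' fitted model; all parameter values are assumed) showing that a steeper mean voltage trajectory at fixed synaptic noise yields a more regular discharge, as measured by the coefficient of variation (CV) of the interspike intervals:

```python
import numpy as np

rng = np.random.default_rng(0)

def discharge_cv(mu_slope, sigma_noise, thresh=1.0, n_spikes=300, dt=1e-3):
    """Toy stochastic integrate-and-fire unit: the voltage ramps toward
    threshold at mean rate mu_slope with Gaussian synaptic noise of
    intensity sigma_noise; returns the CV of the interspike intervals."""
    isis = []
    for _ in range(n_spikes):
        v, t = 0.0, 0.0
        while v < thresh:
            v += mu_slope * dt + sigma_noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        isis.append(t)
    isis = np.asarray(isis)
    return isis.std() / isis.mean()

# Same synaptic noise sigma_V; a steeper mean trajectory d(mu_V)/dt near
# threshold crossing produces a more regular discharge (lower CV).
cv_regular = discharge_cv(mu_slope=50.0, sigma_noise=0.5)
cv_irregular = discharge_cv(mu_slope=10.0, sigma_noise=0.5)
assert cv_regular < cv_irregular
```

    For drift μ, noise σ and threshold a, first-passage theory predicts CV ≈ σ/√(aμ), so quintupling the slope roughly halves the CV again, consistent with the simulation.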

  19. The geometric β-function in curved space-time under operator regularization

    Energy Technology Data Exchange (ETDEWEB)

    Agarwala, Susama [Mathematical Institute, Oxford University, Oxford OX2 6GG (United Kingdom)

    2015-06-15

    In this paper, I compare the generators of the renormalization group flow, or the geometric β-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric β-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow to conformally coupled scalar-field theories on the same manifolds. The geometric β-function in this case is not defined.

  20. The geometric β-function in curved space-time under operator regularization

    International Nuclear Information System (INIS)

    Agarwala, Susama

    2015-01-01

    In this paper, I compare the generators of the renormalization group flow, or the geometric β-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric β-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow for conformally coupled scalar-field theories on the same manifolds. The geometric β-function in this case is not defined.

  1. Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals

    Science.gov (United States)

    Hopper, Christopher P.

    2016-10-01

    We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.

  2. Consistent momentum space regularization/renormalization of supersymmetric quantum field theories: the three-loop β-function for the Wess-Zumino model

    International Nuclear Information System (INIS)

    Carneiro, David; Sampaio, Marcos; Nemes, Maria Carolina; Scarpelli, Antonio Paulo Baeta

    2003-01-01

    We compute the three-loop β-function of the Wess-Zumino model to motivate implicit regularization (IR) as a consistent and practical momentum-space framework to study supersymmetric quantum field theories. In this framework, which works essentially in the physical dimension of the theory, we show that ultraviolet divergences are clearly disentangled from infrared divergences. We obtain consistent results which motivate the method as a good choice to study supersymmetry anomalies in quantum field theories. (author)

  3. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on the manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ also possible).

  4. THE REGULARITIES OF THE SPACE-TEMPORAL DISTRIBUTION OF THE RADIATION BALANCE OF THE UNDERLYING SURFACE IN ARAKS BASIN ON MOUNTAINOUS TERRITORY OF THE REPUBLIC OF ARMENIA

    Directory of Open Access Journals (Sweden)

    V. G. Margaryan

    2017-12-01

    Full Text Available The regularities of the space-temporal distribution of the radiation balance of the underlying surface for the conditions of the mountainous territory of the Republic of Armenia were discussed and analyzed.

  5. Fast metabolite identification with Input Output Kernel Regression

    Science.gov (United States)

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problematic of metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output space and can handle structured output space such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated to the output kernel. The second phase is a preimage problem, consisting in mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
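    The two-phase scheme described in the abstract can be sketched in miniature: kernel ridge regression from an input kernel into the output feature space, followed by a pre-image step that scores a finite candidate set. This is a hedged toy illustration with synthetic data and a linear output kernel, not the authors' implementation; all names and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) input kernel between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy data: X plays the role of spectra, Y of molecular fingerprints.
X_train = rng.normal(size=(40, 5))
W = rng.normal(size=(5, 3))
Y_train = np.tanh(0.5 * X_train @ W)  # hidden "spectra -> molecule" map

# Phase 1: kernel ridge regression from the input (spectra) kernel into the
# output feature space (the output kernel is linear here, so features = Y).
lam = 1e-3
K = rbf_kernel(X_train, X_train)
A = np.linalg.solve(K + lam * np.eye(len(K)), Y_train)  # dual coefficients

def predict_feature(x_new):
    return (rbf_kernel(x_new[None, :], X_train) @ A).ravel()

# Phase 2: pre-image step, realized as a nearest-neighbour search over a
# finite candidate set (standing in for a molecular structure database).
candidates = Y_train
x_test = X_train[7] + 0.01 * rng.normal(size=5)   # a spectrum near example 7
phi = predict_feature(x_test)
best = int(np.argmin(((candidates - phi) ** 2).sum(axis=1)))
assert best == 7
```

    With a non-linear output kernel the pre-image step scores candidates through the kernel instead of Euclidean distance, but the two-phase structure is the same.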

  6. Generalized Bregman distances and convergence rates for non-convex regularization methods

    International Nuclear Information System (INIS)

    Grasmair, Markus

    2010-01-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds, if the regularization term has a slightly faster growth at zero than |t|^p.

  7. Regularized Pre-image Estimation for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    The main challenge in de-noising by kernel Principal Component Analysis (PCA) is the mapping of de-noised feature space points back into input space, also referred to as “the pre-image problem”. Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed...

  8. Mid-space-independent deformable image registration.

    Science.gov (United States)

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-05-15

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric - that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. An oscillating wave energy converter with nonlinear snap-through Power-Take-Off systems in regular waves

    Science.gov (United States)

    Zhang, Xian-tao; Yang, Jian-min; Xiao, Long-fei

    2016-07-01

    Floating oscillating bodies constitute a large class of wave energy converters, especially for offshore deployment. Usually the Power-Take-Off (PTO) system is a direct-drive linear electric generator or a hydraulic motor that drives an electric generator, and it is simplified as a linear spring and a linear damper. However, the conversion is less powerful with wave periods off resonance. Thus, a nonlinear snap-through mechanism with two symmetrically oblique springs and a linear damper is applied in the PTO system. The nonlinear snap-through mechanism is characterized by negative stiffness and a double-well potential. An important nonlinear parameter γ is defined as the ratio of half of the horizontal distance between the two springs to the original length of both springs. A time-domain method is applied to the dynamics of the wave energy converter in regular waves, and a state-space model is used to replace the convolution terms in the time-domain equation. The results show that the energy harvested by the nonlinear PTO system is larger than that by the linear system for low-frequency input, while the power captured by nonlinear converters is slightly smaller than that by linear converters for high-frequency input. The wave amplitude, the damping coefficient of the PTO system and the nonlinear parameter γ affect the power capture performance of nonlinear converters. The oscillation of nonlinear wave energy converters may be local or periodically inter-well for certain values of the incident wave frequency and the nonlinear parameter γ, which differs from the sinusoidal response characteristic of linear converters in regular waves.
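    The snap-through geometry with two symmetric oblique springs can be checked directly: writing γ = d/L0 for the ratio of half the horizontal spring separation d to the natural spring length L0, γ < 1 gives negative stiffness at the origin and a double-well potential. A minimal sketch (spring model and all parameter values assumed, not taken from the paper):

```python
import numpy as np

k, L0 = 100.0, 1.0      # spring stiffness and natural length (assumed values)
gamma = 0.7             # gamma = d / L0 < 1 gives snap-through behaviour
d = gamma * L0

def restoring_force(x):
    # Two symmetric oblique springs; each has current length sqrt(x^2 + d^2)
    return -2.0 * k * x * (1.0 - L0 / np.sqrt(x ** 2 + d ** 2))

def potential(x):
    # Total elastic energy: two springs, each (k/2) * (length - L0)^2
    return k * (np.sqrt(x ** 2 + d ** 2) - L0) ** 2

# Negative stiffness at the origin: the effective stiffness -dF/dx is < 0 there
eps = 1e-6
stiffness_at_0 = -(restoring_force(eps) - restoring_force(-eps)) / (2 * eps)
assert stiffness_at_0 < 0

# Double-well: stable equilibria at x = ±sqrt(L0^2 - d^2), unstable at x = 0
x_star = np.sqrt(L0 ** 2 - d ** 2)
assert potential(x_star) < potential(0.0)
```

    At γ ≥ 1 the two wells merge and the mechanism degenerates to a monostable (hardening) spring, which is why γ controls whether the converter oscillates within one well or periodically inter-well.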

  10. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x)=y_0, where instead of y_0 noisy data y_δ ∈ X with ||y_δ − y_0|| ≤ δ are given and F:X→X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x)+α(x−x*)=y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x*−x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
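    For a linear accretive operator, the scheme F(x) + α(x − x*) = y_δ reduces to solving a single shifted linear system, which makes the stabilizing effect easy to demonstrate numerically. A hedged sketch with a synthetic ill-conditioned operator and noise level (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned symmetric positive definite operator A (plays the role of F)
n = 50
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = Q @ np.diag(np.logspace(0, -10, n)) @ Q.T

x0 = rng.normal(size=n)                   # true solution
y0 = A @ x0
delta = 1e-6
y_delta = y0 + delta * rng.normal(size=n) / np.sqrt(n)  # noisy data

x_star = np.zeros(n)                      # initial guess x*
alpha = 1e-4                              # regularization parameter

# Lavrentiev: solve A x + alpha (x - x_star) = y_delta.
# Note the single shift by alpha*I -- no normal equations A^T A as in Tikhonov.
x_reg = np.linalg.solve(A + alpha * np.eye(n), y_delta + alpha * x_star)

# Essentially unregularized solve: the noise is amplified by ~1/lambda_min
x_naive = np.linalg.solve(A + 1e-14 * np.eye(n), y_delta)

assert np.linalg.norm(x_reg - x0) < np.linalg.norm(x_naive - x0)
```

    The choice of α trades the noise amplification (bounded by 1/α) against the bias α/(λ+α) on each spectral component, mirroring the order-optimality statement in the abstract.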

  11. Multiple Input - Multiple Output (MIMO) SAR

    Data.gov (United States)

    National Aeronautics and Space Administration — This effort will research and implement advanced Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) techniques which have the potential to improve...

  12. Quadratic obstructions to small-time local controllability for scalar-input systems

    Science.gov (United States)

    Beauchard, Karine; Marbach, Frédéric

    2018-03-01

    We consider nonlinear finite-dimensional scalar-input control systems in the vicinity of an equilibrium. When the linearized system is controllable, the nonlinear system is smoothly small-time locally controllable: whatever m > 0 and T > 0, the state can reach a whole neighborhood of the equilibrium at time T with controls arbitrarily small in C^m-norm. When the linearized system is not controllable, we prove that: either the state is constrained to live within a smooth strict manifold, up to a cubic residual, or the quadratic order adds a signed drift with respect to it. This drift holds along a Lie bracket of length 2k+1, is quantified in terms of an H^{-k}-norm of the control, holds for controls small in W^{2k,∞}-norm, and these spaces are optimal. Our proof requires only C^3 regularity of the vector field. This work underlines the importance of the norm used in the smallness assumption on the control, even in finite dimension.

  13. Influence of the input database in detecting fire space-time clusters

    Science.gov (United States)

    Pereira, Mário; Costa, Ricardo; Tonini, Marj; Vega Orozco, Carmen; Parente, Joana

    2015-04-01

    Fire incidence variability is influenced by local environmental variables such as topography, land use, vegetation and weather conditions. These induce a cluster pattern in the distribution of fire events. The space-time permutation scan statistics (STPSS) method developed by Kulldorff et al. (2005) and implemented in the SaTScanTM software (http://www.satscan.org/) proves able to detect space-time clusters in many different fields, even when using incomplete and/or inaccurate input data. Nevertheless, the dependence of the STPSS method on the characteristics of different datasets describing the same environmental phenomenon has not been studied yet. In this sense, the objective of this study is to assess the robustness of the STPSS for detecting real clusters using different input datasets and to justify the obtained results. This study takes advantage of the existence of two very different official fire datasets currently available for Portugal, both provided by the Institute for the Conservation of Nature and Forests. The first one is the aggregated Portuguese Rural Fire Database PRFD (Pereira et al., 2011), which is based on ground measurements and provides detailed information about the ignition and extinction date/time and the area burnt by each fire in forest, scrub and agricultural areas. However, in the PRFD, the location of each fire is indicated by the name of the smallest administrative unit (the parish) where the ignition occurred. Consequently, since the application of the STPSS requires the geographic coordinates of the events, the centroid of the parishes was considered. The second fire dataset is the national mapping of burnt areas (NMBA), which is based on satellite measurements and delivered in shapefile format. The NMBA provides detailed spatial information (the shape and size of each fire) but its temporal information is restricted to the year of occurrence. Besides these differences, the two datasets cover different periods.

  14. Regular Gleason Measures and Generalized Effect Algebras

    Science.gov (United States)

    Dvurečenskij, Anatolij; Janda, Jiří

    2015-12-01

    We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.

  15. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...

  16. A multiresolution method for solving the Poisson equation using high order regularization

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Walther, Jens Honore

    2016-01-01

    We present a novel high-order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained by utilizing the linearity of the Poisson equation, enabling superposition of solutions. We show that the multiresolution Poisson solver produces convergence rates...

  17. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  18. On the average complexity of sphere decoding in lattice space-time coded multiple-input multiple-output channel

    KAUST Repository

    Abediseid, Walid

    2012-12-21

    The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near-optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded. Copyright © 2012 John Wiley & Sons, Ltd.

  19. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored result can differ greatly from the real image because of noise. To solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Since the information in the gradient domain is better suited for estimating the blur kernel, the blur kernel is estimated in the gradient domain; this problem can be implemented quickly in the frequency domain via the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization method is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the process of image restoration, which not only keeps the edges and details of the image but also ensures the accuracy of the results.
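    The L1/L2 ratio on image gradients is scale-invariant and decreases as the gradients become sparser, which is why minimizing it as a penalty favors sharp images over blurred ones. A small sketch verifying this property on a toy step-edge image (all details assumed; this is not the paper's code):

```python
import numpy as np

def l1_over_l2_gradient(img):
    """Scale-invariant sparsity measure on image gradients; smaller = sparser."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    g = np.concatenate([gx.ravel(), gy.ravel()])
    return np.abs(g).sum() / np.linalg.norm(g)

# Toy step-edge image and a box-blurred copy (blur spreads the gradient energy)
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0

kernel = np.ones(7) / 7.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, sharp)

# Blur increases L1/L2 of the gradients, so minimizing it favors sharp images
assert l1_over_l2_gradient(sharp) < l1_over_l2_gradient(blurred)
```

    A plain L1 gradient penalty would not distinguish the two images here (both have roughly the same total variation); it is the L2 denominator that makes the ratio sensitive to how concentrated the gradient energy is.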

  20. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  1. State Estimation of International Space Station Centrifuge Rotor With Incomplete Knowledge of Disturbance Inputs

    Science.gov (United States)

    Sullivan, Michael J.

    2005-01-01

    This thesis develops a state estimation algorithm for the Centrifuge Rotor (CR) system where only relative measurements are available with limited knowledge of both rotor imbalance disturbances and International Space Station (ISS) thruster disturbances. A Kalman filter is applied to a plant model augmented with sinusoidal disturbance states used to model both the effect of the rotor imbalance and the ISS thrusters on the CR relative motion measurement. The sinusoidal disturbance states compensate for the lack of the availability of plant inputs for use in the Kalman filter. Testing confirms that complete disturbance modeling is necessary to ensure reliable estimation. Further testing goes on to show that increased estimator operational bandwidth can be achieved through the expansion of the disturbance model within the filter dynamics. In addition, Monte Carlo analysis shows the varying levels of robustness against defined plant/filter uncertainty variations.

  2. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...

  3. Regular black hole in three dimensions

    OpenAIRE

    Myung, Yun Soo; Yoon, Myungseok

    2008-01-01

    We find a new black hole in three dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare thermodynamics of this black hole with that of non-rotating BTZ black hole. The first-law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.

  4. Dimensional versus lattice regularization within Luescher's Yang Mills theory

    International Nuclear Information System (INIS)

    Diekmann, B.; Langer, M.; Schuette, D.

    1993-01-01

    It is pointed out that the coefficients of Luescher's effective model space Hamiltonian, which is based upon dimensional regularization techniques, can be reproduced by applying folded diagram perturbation theory to the Kogut Susskind Hamiltonian and by performing a lattice continuum limit (keeping the volume fixed). Alternative cutoff regularizations of the Hamiltonian are in general inconsistent, the critical point being the correct prediction of Luescher's tadpole coefficient, which is formally quadratically divergent and which has to become a well defined (negative) number. (orig.)

  5. Regularization and asymptotic expansion of certain distributions defined by divergent series

    Directory of Open Access Journals (Sweden)

    Ricardo Estrada

    1995-01-01

    Full Text Available The regularization of the distribution ∑_{n=−∞}^{∞} δ(x−pn), which gives a regularized value to the divergent series ∑_{n=−∞}^{∞} φ(pn), is obtained in several spaces of test functions. The asymptotic expansion as ϵ→0+ of series of the type ∑_{n=0}^{∞} φ(ϵpn) is also obtained.

  6. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    Science.gov (United States)

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1 and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Considering the weights as reflecting the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.
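    The contrast with Dropout can be illustrated with a toy perturbation rule in the spirit of Shakeout: dropped units have their weights reversed to −c·sign(w) rather than zeroed, and kept units are rescaled so the perturbation is unbiased. This is a hedged sketch of the idea, not the paper's exact formulation; the parameters tau and c are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)

def shakeout_weights(w, tau=0.3, c=0.1):
    """Illustrative Shakeout-style perturbation (a sketch, not the paper's
    exact update rule): with probability tau each unit's weights are
    *reversed* to -c*sign(w) instead of being zeroed as in Dropout; kept
    units are rescaled so that E[w_tilde] = w (unbiased). Setting c = 0
    recovers standard inverted Dropout."""
    s = np.sign(w)
    keep = rng.random(w.shape[1]) >= tau           # one Bernoulli draw per unit
    enhanced = (w + tau * c * s) / (1.0 - tau)     # kept: enhanced contribution
    reversed_ = -c * s                             # dropped: reversed contribution
    return np.where(keep[None, :], enhanced, reversed_)

w = rng.normal(size=(4, 6))
avg = np.mean([shakeout_weights(w) for _ in range(20000)], axis=0)
assert np.allclose(avg, w, atol=0.03)  # unbiased on average
```

    The −c·sign(w) term is what injects an L1-like pull toward zero into the induced regularizer, on top of the L2-like effect of the multiplicative Bernoulli noise.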

  7. Simulation of Canopy CO2/H2O Fluxes for a Rubber (Hevea Brasiliensis) Plantation in Central Cambodia: The Effect of the Regular Spacing of Planted Trees

    Energy Technology Data Exchange (ETDEWEB)

    Kumagai, Tomo' omi; Mudd, Ryan; Miyazawa, Yoshiyuki; Liu, Wen; Giambelluca, Thomas; Kobayashi, N.; Lim, Tiva Khan; Jomura, Mayuko; Matsumoto, Kazuho; Huang, Maoyi; Chen, Qi; Ziegler, Alan; Yin, Song

    2013-09-10

    We developed a soil-vegetation-atmosphere transfer (SVAT) model applicable to simulating CO2 and H2O fluxes from the canopies of rubber plantations, which are characterized by distinct canopy clumping produced by the regular spacing of plantation trees. Rubber (Hevea brasiliensis Müll. Arg.) plantations, which are rapidly expanding into both climatically optimal and sub-optimal environments throughout mainland Southeast Asia, potentially change the partitioning of water, energy, and carbon at multiple scales, compared with the traditional land covers they replace. Describing the biosphere-atmosphere exchange in rubber plantations via SVAT modeling is therefore essential to understanding the impacts on environmental processes. The regular spacing of plantation trees creates a peculiar canopy structure that is not well represented in most SVAT models, which generally assume a non-uniform spacing of vegetation. Herein we develop a SVAT model applicable to rubber plantations and an evaluation method for their canopy structure, and examine how the peculiar canopy structure of rubber plantations affects canopy CO2 and H2O exchanges. Model results are compared with measurements collected at a field site in central Cambodia. Our findings suggest that it is crucial to account for intensive canopy clumping in order to reproduce observed rubber plantation fluxes. These results suggest a potentially optimal spacing of rubber trees to produce high productivity and water use efficiency.

  8. Regular graph construction for semi-supervised learning

    International Nuclear Information System (INIS)

    Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang

    2014-01-01

    Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of the data distribution and by representing submanifolds in Euclidean space. Most methods used in the literature for SSL classification pay little attention to graph construction. However, regular graphs can obtain better classification accuracy than traditional methods such as k-nearest neighbor (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance than methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, such as closeness, generating a regular graph at the end of the process. Experiments using the global and local consistency method for label propagation show that our method provides better or equal classification rates in comparison with kNN.

  9. Regularity of p(ṡ)-superharmonic functions, the Kellogg property and semiregular boundary points

    Science.gov (United States)

    Adamowicz, Tomasz; Björn, Anders; Björn, Jana

    2014-11-01

    We study various boundary and inner regularity questions for $p(\\cdot)$-(super)harmonic functions in Euclidean domains. In particular, we prove the Kellogg property and introduce a classification of boundary points for $p(\\cdot)$-harmonic functions into three disjoint classes: regular, semiregular and strongly irregular points. Regular and especially semiregular points are characterized in many ways. The discussion is illustrated by examples. Along the way, we present a removability result for bounded $p(\\cdot)$-harmonic functions and give some new characterizations of $W^{1, p(\\cdot)}_0$ spaces. We also show that $p(\\cdot)$-superharmonic functions are lower semicontinuously regularized, and characterize them in terms of lower semicontinuously regularized supersolutions.

  10. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Science.gov (United States)

    2011-01-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input

  11. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    Science.gov (United States)

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. 
Definition of common biological underpinnings for music and reading supports the usefulness of music for promoting child literacy, with the potential to improve reading remediation.

  12. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Directory of Open Access Journals (Sweden)

    Strait Dana L

    2011-10-01

    regularities in auditory input. Definition of common biological underpinnings for music and reading supports the usefulness of music for promoting child literacy, with the potential to improve reading remediation.

  13. The fiber bundle formalism for the quantization in curved spaces

    International Nuclear Information System (INIS)

    Wyrozumski, T.

    1989-01-01

    We set up a geometrical formulation of the canonical quantization of free Klein-Gordon field on a gravitational background. We introduce the notion of the Bogolubov bundle as the principal fiber bundle over the space of all Cauchy surfaces belonging to some fixed foliation of space-time, with the Bogolubov group as the structure group, as a tool in considering local Bogolubov transformations. Sections of the associated complex structure bundle have the meaning of attaching Hilbert spaces to Cauchy surfaces. We single out, as physical, sections defined by the equation of parallel transport on the Bogolubov bundle. The connection is then subjected to a certain nonlinear differential equation. We find a particular solution, which happens to coincide with a formula given by L.Parker for Robertson-Walker space-times. Finally, we adopt the adiabatic hypothesis as the physical input to the formalism and fix in this way a free parameter in the connection. Concluding, we comment on a possible geometrical interpretation of the regularization of stress-energy tensor and on generalizations of the formalism toward quantum gravity. 14 refs. (Author)

  14. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
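The discrepancy-principle strategy in this record can be sketched on synthetic data (a generic numpy illustration, not the authors' implementation; ISTA is used here as a stand-in l1 solver, and the problem sizes are arbitrary): scan a grid of regularization parameters and pick the one whose residual variance best matches the known noise variance.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40); x_true[[3, 17]] = [1.0, -2.0]   # sparse "damage" vector
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(80)

# Discrepancy principle: choose lam so the residual variance matches sigma^2.
lams = np.logspace(-3, 1, 30)
resid_var = [np.mean((A @ ista(A, y, l) - y) ** 2) for l in lams]
best = lams[np.argmin(np.abs(np.array(resid_var) - sigma ** 2))]
print(best)
```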

  15. DEEBAR - A BASIC interactive computer programme for estimating mean resonance spacings

    International Nuclear Information System (INIS)

    Booth, M.; Pope, A.L.; Smith, R.W.; Story, J.S.

    1988-02-01

    DEEBAR is a BASIC interactive programme, which uses the theories of Dyson and of Dyson and Mehta, to compute estimates of the mean resonance spacings and associated uncertainty statistics from an input file of neutron resonance energies. In applying these theories the broad scale energy dependence of D-bar, as predicted by the ordinary theory of level densities, is taken into account. The mean spacing D-bar ± δD-bar, referred to zero energy of the incident neutrons, is computed from the energies of the first k resonances, for k = 2,3...K in turn and as if no resonances are missing. The user is asked to survey this set of D-bar and δD-bar values and to form a judgement - up to what value of k is the set of resonances complete and what value, in consequence, does the user adopt as the preferred value of D-bar? When the preferred values for k and D-bar have been input, the programme calculates revised values for the level density parameters, consistent with this value for D-bar and with other input information. Two short tables are printed, illustrating the energy variation and spin dependence of D-bar. Dyson's formula based on his Coulomb gas analogy is used for estimating the most likely energies of the topmost bound levels. Finally the quasi-crystalline character of a single level series is exploited by means of a table in which the resonance energies are set alongside an energy ladder whose rungs are regularly spaced with spacing D-bar(E); this comparative table expedites the search for gaps where resonances may have been missed experimentally. Used in conjunction with the program LJPROB, which calculates neutron strengths and compares them against the expected Porter Thomas distribution, estimates of the statistical parameters for use in the unresolved resonance region may be derived. (author)
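The core estimate in this record — a mean spacing D-bar computed from the first k resonance energies, for k = 2, 3, ..., K, as if no resonances are missing — can be sketched as follows (a generic Python illustration, not the DEEBAR code; the synthetic resonance ladder and its parameters are invented for the example):

```python
import numpy as np

def mean_spacings(energies):
    """D-bar estimates from the first k resonances, for k = 2..K,
    computed as (E_k - E_1) / (k - 1), i.e. assuming no missed levels."""
    E = np.sort(np.asarray(energies, dtype=float))
    ks = np.arange(2, len(E) + 1)
    return ks, (E[ks - 1] - E[0]) / (ks - 1)

# Synthetic ladder: true spacing 25 eV with jitter, two levels "missed".
rng = np.random.default_rng(2)
E = 25.0 * np.arange(1, 31) + rng.normal(0.0, 2.0, 30)
E = np.delete(E, [24, 27])          # simulate experimentally missed levels
ks, dbar = mean_spacings(E)
print(dbar[:10])   # stabilizes near 25 eV while the set is still complete
```

Surveying where the D-bar sequence stops being stable (here, past the missed levels) is exactly the judgement the programme asks of its user.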

  16. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    International Nuclear Information System (INIS)

    Olson, Gordon L.

    2008-01-01

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution

  17. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    Energy Technology Data Exchange (ETDEWEB)

    Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net

    2008-11-15

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.

  18. Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities

    International Nuclear Information System (INIS)

    Flemming, Jens; Hofmann, Bernd

    2011-01-01

    In this paper, we enlighten the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions already known in Hilbert spaces to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances

  19. Hessian regularization based symmetric nonnegative matrix factorization for clustering gene expression and microbiome data.

    Science.gov (United States)

    Ma, Yuanyuan; Hu, Xiaohua; He, Tingting; Jiang, Xingpeng

    2016-12-01

    Nonnegative matrix factorization (NMF) has received considerable attention due to its interpretation of observed samples as combinations of different components, and has been successfully used as a clustering method. As an extension of NMF, symmetric NMF (SNMF) inherits the advantages of NMF. Unlike NMF, however, SNMF takes a nonnegative similarity matrix as input, and two lower-rank nonnegative matrices (H, Hᵀ) are computed as output to approximate the original similarity matrix. Laplacian regularization has improved the clustering performance of NMF and SNMF. However, Laplacian regularization (LR), as a classic manifold regularization method, suffers from some problems because of its weak extrapolating ability. In this paper, we propose a novel variant of SNMF, called Hessian regularization based symmetric nonnegative matrix factorization (HSNMF), for this purpose. In contrast to Laplacian regularization, Hessian regularization fits the data perfectly and extrapolates nicely to unseen data. We conduct extensive experiments on several datasets including text data, gene expression data and HMP (Human Microbiome Project) data. The results show that the proposed method outperforms other methods, which suggests the potential application of HSNMF in biological data clustering.
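The plain SNMF step underlying this record — approximating a nonnegative similarity matrix A by H Hᵀ — can be sketched with the standard damped multiplicative update (shown here without the Hessian regularization term, which is the paper's contribution; the data and sizes are invented for the example):

```python
import numpy as np

def snmf(A, k, n_iter=200, beta=0.5, eps=1e-9, rng=None):
    """Symmetric NMF: approximate a nonnegative symmetric A by H @ H.T.

    Uses the damped multiplicative update
        H <- H * (1 - beta + beta * (A H) / (H H^T H)),
    a standard scheme for min ||A - H H^T||_F with H >= 0.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    H = rng.random((A.shape[0], k))
    for _ in range(n_iter):
        AH = A @ H
        HHtH = H @ (H.T @ H)
        H *= 1.0 - beta + beta * AH / (HHtH + eps)   # stays nonnegative
    return H

rng = np.random.default_rng(3)
B = rng.random((30, 4))
A = B @ B.T                      # a nonnegative, symmetric "similarity" matrix
H = snmf(A, 4)
print(np.linalg.norm(A - H @ H.T) / np.linalg.norm(A))   # small relative error
```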

  20. Nonlocal Regularized Algebraic Reconstruction Techniques for MRI: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Xin Li

    2013-01-01

    We attempt to revitalize researchers' interest in algebraic reconstruction techniques (ART) by expanding their capabilities and demonstrating their potential in speeding up the process of MRI acquisition. Using a continuous-to-discrete model, we experimentally study the application of ART to MRI reconstruction, which unifies previous nonuniform-fast-Fourier-transform- (NUFFT-) based and gridding-based approaches. Under the framework of ART, we advocate the use of nonlocal regularization techniques, which are leveraged from our previous research on modeling photographic images. It is experimentally shown that nonlocal regularization ART (NR-ART) can often outperform its local counterparts in terms of both subjective and objective qualities of reconstructed images. On one real-world k-space data set, we find that nonlocal regularization can achieve satisfactory reconstruction from as few as one-third of the samples. We also address an issue related to image reconstruction from real-world k-space data that is overlooked in the open literature: the consistency of reconstructed images across different resolutions. A resolution-consistent extension of NR-ART is developed and shown to effectively suppress the artifacts arising from frequency extrapolation. Both source codes and experimental results of this work are made fully reproducible.

  1. Input and Intake in Language Acquisition

    Science.gov (United States)

    Gagliardi, Ann C.

    2012-01-01

    This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…

  2. Mathematical Modeling the Geometric Regularity in Proteus Mirabilis Colonies

    Science.gov (United States)

    Zhang, Bin; Jiang, Yi; Minsu Kim Collaboration

    A Proteus mirabilis colony exhibits striking spatiotemporal regularity: concentric ring patterns of alternating high and low bacterial density in space, and periodic repetition of the growth and swarming phases in time. We present a simple mathematical model to explain the spatiotemporal regularity of P. mirabilis colonies. We study a one-dimensional system. Using a reaction-diffusion model with thresholds in cell density and nutrient concentration, we recreated periodic growth and spread patterns, suggesting that nutrient constraint and cell density regulation might be sufficient to explain the spatiotemporal periodicity in P. mirabilis colonies. We further verify this result using a cell-based model.
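A minimal sketch of such a threshold reaction-diffusion scheme in one dimension (our own illustrative discretization with hypothetical parameter values, not the authors' model): cells spread by diffusion only where their density exceeds a swarming threshold, and grow only where the nutrient exceeds a second threshold.

```python
import numpy as np

def step(cells, nutrient, D=0.2, growth=0.5, uptake=0.5,
         nutrient_thresh=0.1, density_thresh=0.2):
    """One explicit step of a 1-D threshold reaction-diffusion model.

    Only sufficiently dense regions swarm (spread by diffusion), and only
    sufficiently fed regions grow; all parameter values are hypothetical.
    """
    cs = np.where(cells > density_thresh, cells, 0.0)   # swarming subpopulation
    lap = np.roll(cs, 1) + np.roll(cs, -1) - 2.0 * cs   # periodic Laplacian
    fed = nutrient > nutrient_thresh
    new_cells = np.clip(cells + D * lap + growth * cells * fed, 0.0, 1.0)
    new_nutrient = np.clip(nutrient - uptake * new_cells * fed, 0.0, 1.0)
    return new_cells, new_nutrient

n = 200
cells = np.zeros(n); cells[:5] = 1.0     # inoculum at one edge
nutrient = np.ones(n)
for _ in range(150):
    cells, nutrient = step(cells, nutrient)
print(cells.sum())   # the colony front has advanced, depleting nutrient behind it
```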

  3. Regular and stochastic particle motion in plasma dynamics

    International Nuclear Information System (INIS)

    Kaufman, A.N.

    1979-08-01

    A Hamiltonian formalism is presented for the study of charged-particle trajectories in the self-consistent field of the particles. The intention is to develop a general approach to plasma dynamics. Transformations of phase-space variables are used to separate out the regular, adiabatic motion from the irregular, stochastic trajectories. Several new techniques are included in this presentation

  4. Intrinsic Regularization in a Lorentz invariant non-orthogonal Euclidean Space

    OpenAIRE

    Tornow, Carmen

    2006-01-01

    It is shown that the Lorentz transformations can be derived for a non-orthogonal Euclidean space. In this geometry one finds the same relations of special relativity as the ones known from the orthogonal Minkowski space. In order to illustrate the advantage of a non-orthogonal Euclidean metric the two-point Green’s function at x = 0 for a self-interacting scalar field is calculated. In contrast to the Minkowski space the one loop mass correction derived from this function gives a convergent r...

  5. Effective action for scalar fields and generalized zeta-function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio

    2004-01-01

    Motivated by the study of quantum fields in a Friedmann-Robertson-Walker space-time, the one-loop effective action for a scalar field defined on the ultrastatic manifold R×H³/Γ, H³/Γ being the finite-volume, noncompact, hyperbolic spatial section, is investigated by a generalization of zeta-function regularization. It is shown that additional divergences may appear at the one-loop level. The one-loop renormalizability of the model is discussed and, making use of a generalization of zeta-function regularization, the one-loop renormalization group equations are derived

  6. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    A measurement result often dictates an interval containing the correct value. Interval data is also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored in model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use, even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only a few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
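The kind of linear-programming formulation described in this record can be sketched as follows (our own reconstruction under generic notation, not the paper's exact formulation): minimize the l1 norm of the coefficients subject to every prediction landing inside its measurement interval, with the usual split w = p − q, p, q ≥ 0.

```python
import numpy as np
from scipy.optimize import linprog

def interval_l1_fit(X, lo, hi):
    """min ||w||_1  s.t.  lo <= X @ w <= hi,   via w = p - q with p, q >= 0."""
    m, d = X.shape
    c = np.ones(2 * d)                          # objective: sum(p) + sum(q)
    A_ub = np.vstack([np.hstack([X, -X]),       #  X w <= hi
                      np.hstack([-X, X])])      # -X w <= -lo
    b_ub = np.concatenate([hi, -lo])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    p, q = res.x[:d], res.x[d:]
    return p - q

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10); w_true[[1, 6]] = [2.0, -1.5]   # only two relevant inputs
y = X @ w_true
w = interval_l1_fit(X, y - 0.5, y + 0.5)              # interval (binned) targets
print(np.round(w, 2))   # weight should concentrate on the two relevant inputs
```

Note how the interval half-width (0.5 here) plays the role of the regularization strength: wider intervals admit sparser feasible models.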

  7. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as total variation (TV) regularization and L1 regularization. In order to better reconstruct piecewise constant and sparse coefficient distributions, the TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method in the spatial variables and the finite element method in the angular variables. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. By comparing with other imaging reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.
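The workhorse of a split Bregman step for an L1 term is the elementwise soft-shrinkage operator; a minimal sketch (the standard formula, independent of this paper's specific discretization):

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: argmin_z gamma*||z||_1 + 0.5*||z - x||^2, elementwise.

    Entries smaller than gamma in magnitude are set to zero, which is what
    drives sparsity in L1-regularized (and split Bregman) iterations.
    """
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

print(shrink(np.array([-2.0, -0.3, 0.0, 0.4, 3.0]), 0.5))
```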

  8. Statistical regularities in the rank-citation profile of scientists.

    Science.gov (United States)

    Petersen, Alexander M; Stanley, H Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c(i)(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c(i)(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c(i)(r) profiles, our results demonstrate the utility of the β(i) scaling parameter in conjunction with h(i) for quantifying individual publication impact. We show that the total number of citations C(i) tallied from a scientist's N(i) papers scales as [Formula: see text]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.

  9. Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression.

    Science.gov (United States)

    Zhen, Xiantong; Yu, Mengyang; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2017-09-01

    Multioutput regression has recently shown great ability to solve challenging problems in both computer vision and medical image analysis. However, due to the huge image variability and ambiguity, it is fundamentally challenging to handle the highly complex input-target relationship of multioutput regression, especially with indiscriminate high-dimensional representations. In this paper, we propose a novel supervised descriptor learning (SDL) algorithm for multioutput regression, which can establish discriminative and compact feature representations to improve the multivariate estimation performance. The SDL is formulated as a generalized low-rank approximation of matrices with a supervised manifold regularization. The SDL is able to simultaneously extract discriminative features closely related to the multivariate targets and remove irrelevant and redundant information by transforming raw features into a new low-dimensional space aligned to the targets. The resulting discriminative yet compact descriptor largely reduces the variability and ambiguity for multioutput regression, which enables more accurate and efficient multivariate estimation. We conduct extensive evaluations of the proposed SDL on both synthetic data and real-world multioutput regression tasks for both computer vision and medical image analysis. Experimental results have shown that the proposed SDL achieves high multivariate estimation accuracy on all tasks and largely outperforms state-of-the-art algorithms. Our method establishes a novel SDL framework for multioutput regression, which can be widely used to boost performance in different applications.

  10. Regularizing Feynman path integrals using the generalized Kontsevich-Vishik trace

    Science.gov (United States)

    Hartung, Tobias

    2017-12-01

    A fully regulated definition of Feynman's path integral is presented here. The proposed re-formulation of the path integral coincides with the familiar formulation whenever the path integral is well defined. In particular, it is consistent with respect to lattice formulations and Wick rotations, i.e., it can be used in Euclidean and Minkowski space-time. The path integral regularization is introduced through the generalized Kontsevich-Vishik trace, that is, the extension of the classical trace to Fourier integral operators. Physically, we are replacing the time-evolution semi-group by a holomorphic family of operators such that the corresponding path integrals are well defined in some half-space of ℂ. The regularized path integral is, thus, defined through analytic continuation. This regularization can be performed by means of stationary phase approximation or computed analytically depending only on the Hamiltonian and the observable (i.e., known a priori). In either case, the computational effort to evaluate path integrals or expectations of observables reduces to the evaluation of integrals over spheres. Furthermore, computations can be performed directly in the continuum, and applications (analytic computations and their implementations) to a number of models, including the non-trivial cases of the massive Schwinger model and a φ⁴ theory, are discussed.

  11. Analysis of regularized inversion of data corrupted by white Gaussian noise

    International Nuclear Information System (INIS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-01-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u∈H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed. (paper)
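A finite-dimensional sketch of the estimator in this record (generic numpy, not from the paper: A is taken as the identity, i.e. a denoising problem, and the H^r penalty with r = 1 becomes a first-difference matrix D):

```python
import numpy as np

def tikhonov(A, m, alpha, D):
    """Minimize ||A u - m||^2 + alpha * ||D u||^2 via the normal equations
    (A^T A + alpha D^T D) u = A^T m."""
    return np.linalg.solve(A.T @ A + alpha * D.T @ D, A.T @ m)

n = 100
t = np.linspace(0, 1, n)
rng = np.random.default_rng(5)
m = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(n)   # noisy measurement
A = np.eye(n)                           # forward map: identity (denoising)
D = np.diff(np.eye(n), axis=0)          # first-difference operator (r = 1)
u = tikhonov(A, m, alpha=5.0, D=D)
# mean-squared error against the clean signal; the noise variance is 0.01,
# and the smoothed estimate should do noticeably better
print(np.mean((u - np.sin(2 * np.pi * t)) ** 2))
```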

  12. 2-regularity and 2-normality conditions for systems with impulsive controls

    Directory of Open Access Journals (Sweden)

    Pavlova Natal'ya

    2007-01-01

    In this paper a controlled system with impulsive controls is investigated in the neighborhood of an abnormal point. The set of pairs (u, μ) is considered as the class of admissible controls, where u is a measurable, essentially bounded function and μ is a finite-dimensional Borel measure such that, for any Borel set B, μ(B) lies in the given convex closed pointed cone. The concepts of 2-regularity and 2-normality are introduced for an abstract mapping Ф acting from a given Banach space into a finite-dimensional space. These concepts play an important role in deriving first- and second-order necessary conditions for the optimal control problem of minimizing a certain functional over the set of admissible processes, and they are also important for obtaining sufficient conditions for local controllability of nonlinear systems. A convenient criterion for 2-regularity along a prescribed direction and necessary conditions for 2-normality of systems that are linear in the control are given as well.

  13. Constraining the loop quantum gravity parameter space from phenomenology

    Science.gov (United States)

    Brahma, Suddhasattwa; Ronco, Michele

    2018-03-01

    Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.

  14. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have received little study. In this paper, the image quality degradation model of the diffractive imaging system is first derived mathematically from diffraction theory, and the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration containing multiple prior constraints is established. An approach is then presented for solving the resulting equation, in which multiple norms coexist and multiple regularization (prior) parameters appear. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combining a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm achieves multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This provides a scientific basis for future space applications of diffractive membrane imaging technology.

  15. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by a Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that sharp edges and texture can eventually be recovered. On the one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least-squares error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.
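
    The single-scale regularization step described above can be illustrated in a deliberately stripped-down form. This sketch is an assumption-laden toy, not the paper's multiscale algorithm: it replaces the image-domain graph by a 1-D chain graph, samples a signal at random, and solves the graph Laplacian regularized least-squares problem in closed form.

```python
import numpy as np

# Graph Laplacian regularized least squares: given samples y on a measured
# subset S of nodes, estimate u by minimizing
#   sum_{i in S} (u_i - y_i)^2 + lam * u^T L u,
# whose minimizer solves (M + lam * L) u = M y, with M the sampling mask.
# The chain graph, mask ratio and lam below are illustrative choices.
rng = np.random.default_rng(1)

n = 100
# Laplacian of a chain graph (each node linked to its neighbors).
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

u_true = np.cos(np.linspace(0, 3 * np.pi, n))
mask = rng.random(n) < 0.6                # about 60% of nodes are measured
y = np.where(mask, u_true + 0.1 * rng.standard_normal(n), 0.0)

M = np.diag(mask.astype(float))
lam = 1.0
u_hat = np.linalg.solve(M + lam * L, M @ y)  # smooth, data-consistent estimate
```

    The data term keeps the estimate close to the measured samples, while the Laplacian term fills in the unmeasured nodes smoothly along the graph.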

  16. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    Science.gov (United States)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety

  17. Multi-view clustering via multi-manifold regularized non-negative matrix factorization.

    Science.gov (United States)

    Zong, Linlin; Zhang, Xianchao; Zhao, Long; Yu, Hong; Zhao, Qianli

    2017-04-01

    Non-negative matrix factorization based multi-view clustering algorithms have shown their competitiveness among different multi-view clustering algorithms. However, non-negative matrix factorization fails to preserve the local geometric structure of the data space. In this paper, we propose a multi-manifold regularized non-negative matrix factorization framework (MMNMF) which can preserve the local geometric structure of the manifolds for multi-view clustering. MMNMF incorporates a consensus manifold and a consensus coefficient matrix with multi-manifold regularization to preserve the local geometric structure of the multi-view data space. We use two methods to construct the consensus manifold and two methods to find the consensus coefficient matrix, which leads to four instances of the framework. Experimental results show that the proposed algorithms outperform existing non-negative matrix factorization based algorithms for multi-view clustering. Copyright © 2017 Elsevier Ltd. All rights reserved.
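
    A single-view building block of such a framework can be sketched with graph-regularized NMF multiplicative updates (in the style of Cai et al.'s GNMF). The data, neighbor graph and weight lam below are illustrative assumptions; the full MMNMF additionally couples several views through the consensus manifold and coefficient matrix.

```python
import numpy as np

# Graph-regularized NMF: factor V ~ W H (all non-negative) while adding
# lam * tr(H L H^T), which keeps coefficient columns of neighboring samples
# similar on the graph with adjacency A and Laplacian L = D - A.
# Multiplicative updates keep W and H non-negative throughout.
rng = np.random.default_rng(5)

m, n, k = 30, 60, 4
V = rng.random((m, n))

# Illustrative neighbor graph over the n samples (a simple chain).
A = np.eye(n, k=1) + np.eye(n, k=-1)
D = np.diag(A.sum(axis=1))

W = rng.random((m, k))
H = rng.random((k, n))
lam = 0.1
eps = 1e-9
for _ in range(200):
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ V + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)

loss = np.linalg.norm(V - W @ H) ** 2 + lam * np.trace(H @ (D - A) @ H.T)
```

    The extra H @ A and H @ D terms are exactly where the manifold regularization enters the otherwise standard NMF updates.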

  18. Load Estimation from Natural input Modal Analysis

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Canteli, Alfonso Fernández

    2005-01-01

    One application of Natural Input Modal Analysis consists in estimating the unknown load acting on structures such as wind loads, wave loads, traffic loads, etc. In this paper, a procedure to determine loading from a truncated modal model, as well as the results of an experimental testing programme...... estimation. In the experimental program a small structure subjected to vibration was used to estimate the loading from the measurements and the experimental modal space. The modal parameters were estimated by Natural Input Modal Analysis and the scaling factors of the mode shapes obtained by the mass change...

  19. Instabilities of the zeta-function regularization in the presence of symmetries

    International Nuclear Information System (INIS)

    Rasetti, M.

    1980-01-01

    The zeta-function regularization method requires the calculation of the spectrum-generating function zeta sub(M) of a generic real, elliptic, self-adjoint differential operator on a manifold M. An asymptotic expansion for zeta sub(M) is given for the class of all symmetric spaces of rank 1, sufficient to compute its Mellin transform and deduce the regularization of the corresponding quadratic path integrals. The summability properties of the generalized zeta-function introduce physical instabilities in the system, such as a negative specific heat. The technique (and the instability as well) is shown to hold, under the assumed symmetry properties, in any dimension (preserving both the global and local properties of the manifold, as opposed to dimensional regularization, where one only adds extra flat dimensions). (author)

  20. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features, because manifold regularization alone is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  1. Fermion-number violation in regularizations that preserve fermion-number symmetry

    Science.gov (United States)

    Golterman, Maarten; Shamir, Yigal

    2003-01-01

    There exist both continuum and lattice regularizations of gauge theories with fermions which preserve chiral U(1) invariance (“fermion number”). Such regularizations necessarily break gauge invariance but, in a covariant gauge, one recovers gauge invariance to all orders in perturbation theory by including suitable counterterms. At the nonperturbative level, an apparent conflict then arises between the chiral U(1) symmetry of the regularized theory and the existence of ’t Hooft vertices in the renormalized theory. The only possible resolution of the paradox is that the chiral U(1) symmetry is broken spontaneously in the enlarged Hilbert space of the covariantly gauge-fixed theory. The corresponding Goldstone pole is unphysical. The theory must therefore be defined by introducing a small fermion-mass term that breaks explicitly the chiral U(1) invariance and is sent to zero after the infinite-volume limit has been taken. Using this careful definition (and a lattice regularization) for the calculation of correlation functions in the one-instanton sector, we show that the ’t Hooft vertices are recovered as expected.

  2. Asymptotic analysis of a pile-up of regular edge dislocation walls

    KAUST Repository

    Hall, Cameron L.

    2011-12-01

    The idealised problem of a pile-up of regular dislocation walls (that is, of planes each containing an infinite number of parallel, identical and equally spaced dislocations) was presented by Roy et al. [A. Roy, R.H.J. Peerlings, M.G.D. Geers, Y. Kasyanyuk, Materials Science and Engineering A 486 (2008) 653-661] as a prototype for understanding the importance of discrete dislocation interactions in dislocation-based plasticity models. They noted that analytic solutions for the dislocation wall density are available for a pile-up of regular screw dislocation walls, but that numerical methods seem to be necessary for investigating regular edge dislocation walls. In this paper, we use the techniques of discrete-to-continuum asymptotic analysis to obtain a detailed description of a pile-up of regular edge dislocation walls. To leading order, we find that the dislocation wall density is governed by a simple differential equation and that boundary layers are present at both ends of the pile-up. © 2011 Elsevier B.V.

  4. On the structure of space-time caustics

    International Nuclear Information System (INIS)

    Rosquist, K.

    1983-01-01

    Caustics formed by timelike and null geodesics in a space-time M are investigated. Care is taken to distinguish the conjugate points in the tangent space (T-conjugate points) from conjugate points in the manifold (M-conjugate points). It is shown that most nonspacelike conjugate points are regular, i.e. with all neighbouring conjugate points having the same degree of degeneracy. The regular timelike T-conjugate locus is shown to be a smooth 3-dimensional submanifold of the tangent space. Analogously, the regular null T-conjugate locus is shown to be a smooth 2-dimensional submanifold of the light cone in the tangent space. The smoothness properties of the null caustic are used to show that if an observer sees focusing in all directions, then there will necessarily be a cusp in the caustic. If, in addition, all the null conjugate points have maximal degree of degeneracy (as in the closed Friedmann-Robertson-Walker universes), then the space-time is closed. (orig.)

  5. Double Sequences and Iterated Limits in Regular Space

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2016-09-01

    First, we define in Mizar [5] the Cartesian product of two filter bases and the Cartesian product of two filters. After comparing the product of two Fréchet filters on ℕ (F₁) with the Fréchet filter on ℕ × ℕ (F₂), we compare lim F₁ and lim F₂ for all double sequences in a non-empty topological space.

  6. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  7. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  8. Perturbative formulation of pure space-like axial gauge QED with infrared divergences regularized by residual gauge fields

    International Nuclear Information System (INIS)

    Nakawaki, Yuji; McCartor, Gary

    2006-01-01

    We construct a new perturbative formulation of pure space-like axial gauge QED in which the inherent infrared divergences are regularized by residual gauge fields. For this purpose, we carry out our calculations in the coordinates x^μ = (x^+, x^-, x^1, x^2), where x^+ = x^0 sinθ + x^3 cosθ and x^- = x^0 cosθ − x^3 sinθ. Here, A ≡ A^0 cosθ + A^3 sinθ = n·A = 0 is taken as the gauge fixing condition. We show in detail that, in perturbation theory, infrared divergences resulting from the residual gauge fields cancel infrared divergences resulting from the physical parts of the gauge field. As a result, we obtain the gauge field propagator proposed by Mandelstam and Leibbrandt. By taking the limit θ → π/4, we are able to construct a light-cone formulation that is free from infrared divergences. With that analysis complete, we next calculate the one-loop electron self-energy, something not previously done in light-cone quantization and the light-cone gauge. (author)

  9. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  10. Asymptotic properties of spherically symmetric, regular and static solutions to Yang-Mills equations

    International Nuclear Information System (INIS)

    Cronstrom, C.

    1987-01-01

    In this paper the author discusses the asymptotic properties of solutions to the Yang-Mills equations with gauge group SU(2), for spherically symmetric, regular and static potentials. It is known that the pure Yang-Mills equations cannot have nontrivial regular solutions which vanish rapidly at space infinity (so-called finite-energy solutions). So, if regular solutions exist, they must have non-trivial asymptotic properties. However, if the asymptotic behaviour of the solutions is non-trivial, then this fact must be explicitly taken into account in constructing the proper action (and energy) for the theory. The elucidation of the appropriate surface correction to the Yang-Mills action (and hence the energy-momentum tensor density) is one of the main motivations behind the present study. In this paper the author restricts attention to the asymptotic behaviour of the static solutions. It is shown that this asymptotic behaviour is such that surface corrections (at space infinity) are needed in order to obtain a well-defined (classical) theory. This is of relevance in formulating a quantum Yang-Mills theory.

  11. Tunneling into quantum wires: regularization of the tunneling Hamiltonian and consistency between free and bosonized fermions

    OpenAIRE

    Filippone, Michele; Brouwer, Piet

    2016-01-01

    Tunneling between a point contact and a one-dimensional wire is usually described with the help of a tunneling Hamiltonian that contains a delta function in position space. Whereas the leading order contribution to the tunneling current is independent of the way this delta function is regularized, higher-order corrections with respect to the tunneling amplitude are known to depend on the regularization. Instead of regularizing the delta function in the tunneling Hamiltonian, one may also obta...

  12. PCC/SRC, PCC and SRC Calculation from Multivariate Input for Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.; Johnson, J.D.

    1995-01-01

    1 - Description of program or function: PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model. 2 - Method of solution: PCC/SRC calculates the coefficients on either the original observations or on the ranks of the original observations. These coefficients provide alternative measures of the relative contribution (importance) of each of the various input variables to the observed variations in output. Relationships between the coefficients and differences in their interpretations are identified. If the computer model output has an associated time or spatial history, PCC/SRC will generate a graph of the coefficients over time or space for each input-variable/output-variable combination of interest, indicating the importance of each input value over time or space. 3 - Restrictions on the complexity of the problem: maxima of 100 observations, 100 different time steps or intervals between successive dependent-variable readings, 50 independent variables (model input), 20 dependent variables (model output), and 10 ordered triples specifying intervals between dependent-variable readings.
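
    The two statistics the program computes can be sketched as follows. The toy linear model is an illustrative assumption; the real code additionally supports rank transforms and time or spatial histories.

```python
import numpy as np

# Standardized regression coefficients (SRC) and partial correlation
# coefficients (PCC) relating multivariate model input X to scalar output y.
rng = np.random.default_rng(2)

n, k = 200, 3
X = rng.standard_normal((n, k))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

# SRC: ordinary least-squares coefficients after standardizing X and y.
Z = (X - X.mean(0)) / X.std(0)
w = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Z, w, rcond=None)

# PCC of input j with y: correlation between the residuals of y and of X_j
# after regressing both on the remaining inputs (plus an intercept).
def pcc(j):
    others = np.delete(X, j, axis=1)
    A = np.column_stack([others, np.ones(n)])
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    return np.corrcoef(ry, rx)[0, 1]

pccs = np.array([pcc(j) for j in range(k)])
```

    Both measures rank X[:, 0] first here, since it dominates the output; on real models the two can disagree, which is exactly why the program reports both.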

  13. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  14. Self-Structured Organizing Single-Input CMAC Control for Robot Manipulator

    Directory of Open Access Journals (Sweden)

    ThanhQuyen Ngo

    2011-09-01

    This paper presents a self-structured organizing single-input control system based on a differentiable cerebellar model articulation controller (CMAC) for an n-link robot manipulator, to achieve high-precision position tracking. In the proposed scheme, the single-input CMAC controller is solely used to control the plant, so the input space dimension of the CMAC can be simplified and no conventional controller is needed. The structure of the single-input CMAC is also self-organizing; that is, the layers of the single-input CMAC grow or prune systematically and their receptive functions can be automatically adjusted. The online tuning laws of the single-input CMAC parameters are derived using the gradient-descent learning method, and a discrete-type Lyapunov function is applied to determine the learning rates of the proposed control system so that its stability can be guaranteed. Simulation results for a robot manipulator are provided to verify the effectiveness of the proposed control methodology.

  15. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method, based on minimizing the l2-norm of the response residual, employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
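
    The l1 idea can be sketched with the simplest solver, iterative soft thresholding (ISTA), standing in for SpaRSA; the random transfer matrix, Dirac dictionary and all parameters below are illustrative assumptions.

```python
import numpy as np

# Sparse force identification as an l1 problem: recover a coefficient
# vector x, sparse in dictionary D, from a response y = H D x by minimizing
#   0.5 * ||H D x - y||^2 + lam * ||x||_1,
# solved here with plain ISTA (gradient step + soft thresholding).
rng = np.random.default_rng(3)

n = 128
H = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in transfer matrix
D = np.eye(n)                                   # Dirac (impulse) dictionary

x_true = np.zeros(n)
x_true[[20, 75]] = [1.0, -0.7]                  # two impact forces
y = H @ D @ x_true + 0.01 * rng.standard_normal(n)

A = H @ D
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the gradient step
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                       # gradient of the data term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```

    The soft-thresholding step is what zeroes out unneeded basis functions, so the number of active components is selected by the optimization itself rather than fixed in advance.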

  16. Regularity criteria for the Navier–Stokes equations based on one component of velocity

    Czech Academy of Sciences Publication Activity Database

    Guo, Z.; Caggio, M.; Skalák, Zdeněk

    2017-01-01

    Roč. 35, June (2017), s. 379-396 ISSN 1468-1218 R&D Projects: GA ČR GA14-02067S Grant - others:Západočeská univerzita(CZ) SGS-2016-003; National Natural Science Foundation of China (CN) 11301394 Institutional support: RVO:67985874 Keywords : Navier–Stokes equations * regularity of solutions * regularity criteria * Anisotropic Lebesgue spaces Subject RIV: BK - Fluid Dynamics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 1.659, year: 2016

  18. Describing chaotic attractors: Regular and perpetual points

    Science.gov (United States)

    Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz

    2018-03-01

    We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have recently been introduced in theoretical investigations, is thoroughly discussed and extended to new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to find attractors is indicated, along with its potential cause. The location of chaotic trajectories and of the sets of considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of the observation of desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.

  19. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    Science.gov (United States)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
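
    A minimal version of GA-based input selection can be sketched as follows. The bit-mask encoding, a linear surrogate standing in for the neural network approximator, and all population sizes and rates are illustrative assumptions, not details from the abstract.

```python
import numpy as np

# Genetic-algorithm input selection: each chromosome is a boolean mask over
# candidate inputs; fitness rewards low residual variance of a simple linear
# fit and penalizes extra inputs, favoring short parameter lists.
rng = np.random.default_rng(4)

n, k = 300, 10
X = rng.standard_normal((n, k))
y = 3 * X[:, 2] + 2 * X[:, 5] + 0.1 * rng.standard_normal(n)

def fitness(mask):
    if not mask.any():
        return -np.inf
    A = X[:, mask]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return -np.var(resid) - 0.01 * mask.sum()   # penalize extra inputs

pop = rng.random((20, k)) < 0.5                 # random initial population
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                   # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, k)                # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(k) < 0.05             # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

    On this toy problem the search reliably keeps the two informative inputs; with a real simulator in the fitness function, the same loop systematizes the otherwise subjective selection step.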

  20. Influence of the volume ratio of solid phase on carrying capacity of regular porous structure

    Directory of Open Access Journals (Sweden)

    Monkova Katarina

    2017-01-01

    Full Text Available Direct metal laser sintering is a widespread technology today. Its main advantage is the ability to produce parts with very complex geometry that could be made only with great difficulty by conventional methods. A special category of such components are parts with a porous structure, which can give the product an extraordinary combination of properties. The article deals with some aspects that influence the manufacturing of regular porous structures, despite the fact that the input technological parameters were the same for all samples. The main goal of the presented research has been to investigate the influence of the volume ratio of the solid phase on the carrying capacity of a regular porous structure. The tests performed indicate that a unit of regular porous structure with a lower volume ratio is able to carry a greater load to failure than a unit with a higher volume ratio.

  1. Regularized quasinormal modes for plasmonic resonators and open cavities

    Science.gov (United States)

    Kamandar Dezfouli, Mohsen; Hughes, Stephen

    2018-03-01

    Optical mode theory and analysis of open cavities and plasmonic particles is an essential component of optical resonator physics, offering considerable insight and efficiency for connecting to classical and quantum optical properties such as the Purcell effect. However, obtaining the dissipative modes in normalized form for arbitrarily shaped open-cavity systems is notoriously difficult, often involving complex spatial integrations, even after performing the necessary full space solutions to Maxwell's equations. The formal solutions are termed quasinormal modes, which are known to diverge in space, and additional techniques are frequently required to obtain more accurate field representations in the far field. In this work, we introduce a finite-difference time-domain technique that can be used to obtain normalized quasinormal modes using a simple dipole-excitation source, and an inverse Green function technique, in real frequency space, without having to perform any spatial integrations. Moreover, we show how these modes are naturally regularized to ensure the correct field decay behavior in the far field, and thus can be used at any position within and outside the resonator. We term these modes "regularized quasinormal modes" and show the reliability and generality of the theory by studying the generalized Purcell factor of dipole emitters near metallic nanoresonators, hybrid devices with metal nanoparticles coupled to dielectric waveguides, as well as coupled cavity-waveguides in photonic crystals slabs. We also directly compare our results with full-dipole simulations of Maxwell's equations without any approximations, and show excellent agreement.

  2. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.

  3. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan

  4. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  5. RBSURFpred: Modeling protein accessible surface area in real and binary space using regularized and optimized regression.

    Science.gov (United States)

    Tarafder, Sumit; Toukir Ahmed, Md; Iqbal, Sumaiya; Tamjidul Hoque, Md; Sohel Rahman, M

    2018-03-14

    Accessible surface area (ASA) of a protein residue is an effective feature for protein structure prediction, binding region identification, fold recognition problems, etc. Improving the prediction of ASA through effective feature variables is a challenging but explorable task, especially in the field of machine learning. Among the existing predictors of ASA, REGAd^3p is a highly accurate predictor based on regularized exact regression with a polynomial kernel of degree 3. In this work, we present a new predictor, RBSURFpred, which extends REGAd^3p on several dimensions by incorporating 58 physicochemical, evolutionary and structural properties into 9-tuple peptides via Chou's general PseAAC, which allowed us to obtain higher accuracies in predicting both real-valued and binary ASA. We have compared RBSURFpred for both real and binary space predictions with state-of-the-art predictors, such as REGAd^3p and SPIDER2. We have also carried out a rigorous analysis of the performance of RBSURFpred in terms of different amino acids and their properties, and with biologically relevant case studies. These results establish RBSURFpred as a useful tool for the community. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
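The core constructs such a reference catalogues (character classes, quantifiers, alternation, word boundaries, capture groups) look like this with Python's `re` module; the sample string is invented for illustration:

```python
import re

log = "2018-03-01 ERROR disk full; 2018-03-02 INFO ok"

dates = re.findall(r"\d{4}-\d{2}-\d{2}", log)          # character classes + quantifiers
level = re.search(r"\b(ERROR|WARN)\b", log)            # alternation + word boundaries
pairs = re.findall(r"(\d{4}-\d{2}-\d{2}) (\w+)", log)  # capture groups
```

The same patterns carry over with minor dialect differences to the other engines the book covers (PCRE, Java, .NET, JavaScript, and so on).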

  7. SQED two-loop beta function in the context of Implicit regularization

    International Nuclear Information System (INIS)

    Cherchiglia, Adriano Lana; Sampaio, Marcos; Nemes, Maria Carolina

    2013-01-01

    Full text: In this work we present the state of the art for Implicit Regularization (IReg) in the context of supersymmetric theories. IReg is a four-dimensional regularization technique in momentum space which disentangles, in a consistent way at arbitrary order, the divergences, regularization-dependent and finite parts of any Feynman amplitude. Since it does not resort to modifications of the physical space-time dimensions of the underlying quantum field theoretical model, it can be consistently applied to supersymmetric theories. First we describe the technique and present previous results for supersymmetric models: the two-loop beta function for the Wess-Zumino model (both in the component and superfield formalisms); the two-loop beta function for Super Yang-Mills (in the superfield formalism using the background field technique). Afterwards, we present our calculation of the two-loop beta function for massless and massive SQED using the superfield formalism, with and without resorting to the background field technique. We find that only in the second case does the two-loop divergence cancel out. We argue this is due to an anomalous Jacobian under the rescaling of the fields in the path integral which is necessary for the application of the supersymmetric background field technique. We find, however, that in both cases the two-loop coefficients of the beta function are nonzero. Finally we briefly discuss the anomaly puzzle in the context of our technique. (author)

  8. Effort variation regularization in sound field reproduction

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis

    2010-01-01

    In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, thus improving the reproduction accuracy...

  9. Real time QRS complex detection using DFA and regular grammar.

    Science.gov (United States)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed Hedi

    2017-02-28

    The detection of the sequence of Q, R, and S peaks (the QRS complex) is a crucial procedure in electrocardiogram (ECG) processing and analysis. We propose a novel approach for QRS complex detection based on deterministic finite automata with the addition of some constraints. This paper confirms that regular grammar is useful for extracting QRS complexes and interpreting normalized ECG signals. A QRS is modeled as a pair of adjacent peaks which meet certain criteria of standard deviation and duration. The proposed method was applied to several kinds of ECG signals from the standard MIT-BIH arrhythmia database; a total of 48 signals were used. For an input signal, several parameters were determined, such as QRS durations, RR distances, and the peaks' amplitudes. The parameters σRR and σQRS were added to quantify the regularity of RR distances and QRS durations, respectively. The sensitivity rate of the suggested method was 99.74% and the specificity rate was 99.86%. Moreover, the variations of the sensitivity and specificity rates with the signal-to-noise ratio were evaluated. Regular grammar with the addition of some constraints and deterministic automata proved effective for ECG signal diagnosis. Compared to statistical methods, the use of grammar provides satisfactory and competitive results, with indices comparable to or even better than those cited in the literature.
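The flavor of the grammar-based idea, symbolizing the signal and then matching a regular pattern for "two adjacent peaks within a short duration", can be sketched as follows. The toy signal, threshold, and pattern are illustrative assumptions and not the authors' actual automaton or constraints:

```python
import re

# Toy illustration: symbolize a signal ('P' above threshold, '.' otherwise), then use a
# regular expression as a stand-in for a regular grammar over the symbol string:
# a QRS-like event is modeled as two adjacent peaks separated by a short sub-threshold run.
signal = [0, 1, 9, 1, 8, 0, 0, 0, 0, 9, 1, 7, 0, 0]
symbols = "".join("P" if s > 5 else "." for s in signal)

qrs = [m.span() for m in re.finditer(r"P\.{1,2}P", symbols)]  # peak pairs at most 2 apart
```

In the paper the acceptance criteria additionally involve standard deviation and duration constraints, which a plain regular expression cannot express; those are the "added constraints" on the automaton.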

  10. Differential regularization and renormalization: a new method of calculation in quantum field theory

    International Nuclear Information System (INIS)

    Freedman, D.Z.; Johnson, K.; Latorre, J.I.

    1992-01-01

    Most primitively divergent Feynman diagrams are well defined in x-space but too singular at short distances for transformation to p-space. A new method of regularization is developed in which singular functions are written as derivatives of less singular functions which contain a logarithmic mass scale. The Fourier transform is then defined by formal integration by parts. The procedure is extended to graphs with divergent subgraphs. No explicit cutoff or counterterms are required, and the method automatically delivers renormalized amplitudes which satisfy Callan-Symanzik equations. These features are thoroughly explored in massless φ 4 theory through 3-loop order, and the method yields explicit functional forms for all amplitudes with less difficulty than conventional methods which use dimensional regularization in p-space. The procedure also appears to be compatible with gauge invariance and the chiral structure of the standard model. This aspect is tested in extensive 1-loop calculations which include the Ward identity in quantum electrodynamics, the chiral anomaly, and the background field algorithm in non-abelian gauge theories. (orig.)

  11. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely used in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
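The pairing of FIR model and impulse input has a simple intuition that can be sketched directly: for a noise-free FIR channel, an impulse applied at the start of the observation interval makes the output equal the tap vector itself, so identification reduces to reading off the first samples. The tap values below are illustrative:

```python
import numpy as np

# An FIR channel is y = u * h (convolution with a finite tap vector h).
# With an impulse input, the noise-free output reproduces h sample by sample.
h = np.array([0.9, 0.4, -0.2, 0.1])   # unknown FIR taps (illustrative values)
u = np.zeros(16)
u[0] = 1.0                            # impulse at the start of the observation interval
y = np.convolve(u, h)[:16]            # observed channel output
h_hat = y[:len(h)]                    # identification: read off the leading samples
```

With measurement noise the same structure carries over, with the worst-case error then governed by the n-width analysis in the paper.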

  12. Regularity of C*-algebras and central sequence algebras

    DEFF Research Database (Denmark)

    Christensen, Martin S.

    The main topic of this thesis is regularity properties of C*-algebras and how these regularity properties are reflected in their associated central sequence algebras. The thesis consists of an introduction followed by four papers [A], [B], [C], [D]. In [A], we show that for the class of simple...... Villadsen algebra of either the first type with seed space a finite dimensional CW complex, or the second type, tensorial absorption of the Jiang-Su algebra is characterized by the absence of characters on the central sequence algebra. Additionally, in a joint appendix with Joan Bosa, we show that the Villadsen...... algebra of the second type with infinite stable rank fails the corona factorization property. In [B], we consider the class of separable C*-algebras which do not admit characters on their central sequence algebra, and show that it has nice permanence properties. We also introduce a new divisibility property...

  13. The Validity of Dimensional Regularization Method on Fractal Spacetime

    Directory of Open Access Journals (Sweden)

    Yong Tao

    2013-01-01

    Full Text Available Svozil developed a regularization method for quantum field theory on fractal spacetime (1987. Such a method can be applied to the low-order perturbative renormalization of quantum electrodynamics but will depend on a conjectural integral formula on non-integer-dimensional topological spaces. The main purpose of this paper is to construct a fractal measure so as to guarantee the validity of the conjectural integral formula.

  14. Machine learning for toxicity characterization of organic chemical emissions using USEtox database: Learning the structure of the input space.

    Science.gov (United States)

    Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico

    2015-10-01

    Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models attached to models of dose-response relationships to assess the effects on a target. Different models and approaches exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect or hardly publicly available (especially for thousands of less common or newly developed chemicals), thereby hampering the assessment in LCA practice. An example is USEtox, a consensual model for the characterization of human toxicity and freshwater ecotoxicity. This paper places itself in a line of research aiming to provide a methodology for reducing the number of input parameters necessary to run multimedia fate models, focusing in particular on the application of the USEtox toxicity model. Two main goals are pursued: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties, with the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network, GRNN), in search of an automatic selection strategy for the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold has been identified as between three and four. The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper the input variables selected as most informative are coherent with prior expectations based on scientific knowledge.
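The kind of linear dimensionality analysis described above can be sketched with plain PCA: synthetic data stand in for the substance-property matrix, with 3 latent factors embedded in 10 observed properties, and the 99% explained-variance cutoff is an illustrative choice, not the paper's criterion:

```python
import numpy as np

# Estimate the intrinsic dimension of a data manifold via PCA explained variance.
rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 3))              # 3 hidden degrees of freedom
mixing = rng.normal(size=(3, 10))               # embedded in 10 observed properties
X = latent @ mixing + 0.01 * rng.normal(size=(500, 10))

Xc = X - X.mean(axis=0)                         # center before SVD
s = np.linalg.svd(Xc, compute_uv=False)
var_ratio = s**2 / np.sum(s**2)                 # explained variance per component
intrinsic_dim = int(np.searchsorted(np.cumsum(var_ratio), 0.99) + 1)
```

For nonlinear manifolds, as the paper suggests, linear PCA only upper-bounds the intrinsic dimension, which is one motivation for the nonlinear GRNN model used alongside PLS.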

  15. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  16. Shape-constrained regularization by statistical multiresolution for inverse problems: asymptotic analysis

    International Nuclear Information System (INIS)

    Frick, Klaus; Marnitz, Philipp; Munk, Axel

    2012-01-01

    This paper is concerned with a novel regularization technique for solving linear ill-posed operator equations in Hilbert spaces from data that are corrupted by white noise. We combine convex penalty functionals with extreme-value statistics of projections of the residuals on a given set of sub-spaces in the image space of the operator. We prove general consistency and convergence rate results in the framework of Bregman divergences which allows for a vast range of penalty functionals. Various examples that indicate the applicability of our approach will be discussed. We will illustrate in the context of signal and image processing that the presented method constitutes a locally adaptive reconstruction method. (paper)

  17. Guangxi crustal structural evolution and the formation and distribution regularities of U-rich strata

    International Nuclear Information System (INIS)

    Kang Zili.

    1989-01-01

    Based on a summary of Guangxi's geotectonic features and evolutionary regularities, this paper discusses the occurrence features, formation conditions and time-space distribution regularities of various U-rich strata during the development of the geosyncline, platform and diwa stages. In particular, during the diwa stage all those U-rich strata might be reworked to a certain degree, resulting in the mobilization of uranium and its enrichment into polygenetic composite uranium ore deposits with stratabound features. This study will be helpful for prospecting in the region

  18. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...

  19. The regular indefinite linear-quadratic problem with linear endpoint constraints

    NARCIS (Netherlands)

    Soethoudt, J.M.; Trentelman, H.L.

    1989-01-01

    This paper deals with the infinite horizon linear-quadratic problem with indefinite cost. Given a linear system, a quadratic cost functional and a subspace of the state space, we consider the problem of minimizing the cost functional over all inputs for which the state trajectory converges to that

  20. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization.

  1. Regularization scheme dependence of virtual corrections to DY and DIS

    International Nuclear Information System (INIS)

    Khalafi, F.; Landshoff, P.V.

    1981-01-01

    One loop virtual corrections to the quark photon vertex are calculated under various assumptions and their sensitivity to the manner in which infra-red and mass singularities are regularized is studied. A method based on the use of Mellin-transforms in the Feynman parametric space is developed and shown to be convenient in calculating virtual diagrams beyond the leading logarithm in perturbative QCD. (orig.)

  2. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  3. Regular and chaotic dynamics in time-dependent relativistic mean-field theory

    International Nuclear Information System (INIS)

    Vretenar, D.; Ring, P.; Lalazissis, G.A.; Poeschl, W.

    1997-01-01

    Isoscalar and isovector monopole oscillations that correspond to giant resonances in spherical nuclei are described in the framework of time-dependent relativistic mean-field theory. Time-dependent and self-consistent calculations that reproduce experimental data on monopole resonances in 208Pb show that the motion of the collective coordinate is regular for isoscalar oscillations, and that it becomes chaotic when initial conditions correspond to the isovector mode. Regular collective dynamics coexists with chaotic oscillations on the microscopic level. Time histories, Fourier spectra, state-space plots, Poincaré sections, autocorrelation functions, and Lyapunov exponents are used to characterize the nonlinear system and to identify chaotic oscillations. Analogous considerations apply to higher multipolarities. copyright 1997 The American Physical Society
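One of the chaos diagnostics listed, the Lyapunov exponent, can be illustrated on a standard textbook system rather than the mean-field equations themselves. For the logistic map at r = 4 the largest exponent is known analytically to be ln 2:

```python
import numpy as np

# Largest Lyapunov exponent of the logistic map x_{n+1} = r x_n (1 - x_n),
# estimated as the orbit average of log|f'(x)| with f'(x) = r (1 - 2x).
r, x = 4.0, 0.3
for _ in range(1000):                 # discard the transient
    x = r * x * (1.0 - x)

total, n = 0.0, 100_000
for _ in range(n):
    total += np.log(abs(r * (1.0 - 2.0 * x)))
    x = r * x * (1.0 - x)

lyap = total / n                      # positive => chaotic; analytically ln 2 for r = 4
```

A positive estimate signals exponential divergence of nearby trajectories, which is the criterion used to distinguish the chaotic isovector mode from the regular isoscalar one.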

  4. Enhanced manifold regularization for semi-supervised classification.

    Science.gov (United States)

    Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong

    2016-06-01

    Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and then incorporated in learning through a smoothness regularization term. Hence the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR has ignored the discriminative ability of the labeled and unlabeled data. To address the problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of labeled data, we firstly employ a semi-supervised clustering method to discover the underlying data space structure of the whole dataset. Then we construct a local discrimination graph to model the discriminative information of labeled and unlabeled data according to the discovered intrinsic structure. Therefore, the data points that may be from different clusters, though similar on the manifold, are enforced far away from each other. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized Kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and face recognition demonstrate the effectiveness of our proposed method.
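The baseline MR smoothness term that the paper enhances can be sketched as Laplacian-regularized least squares with a linear model; the two-cluster data, Gaussian affinity, and parameter values below are illustrative assumptions, and the paper's discrimination graph is an additional term on top of this:

```python
import numpy as np

# Laplacian-regularized least squares (basic manifold regularization):
#   min_w  sum_{labeled} (x_i . w - y_i)^2 + a ||w||^2 + b (Xw)^T Lap (Xw)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.5, size=(30, 2)),   # cluster with label -1
               rng.normal(+2, 0.5, size=(30, 2))])  # cluster with label +1
y = np.zeros(60)
y[0], y[30] = -1.0, 1.0                             # only one labeled point per cluster
labeled = np.abs(y) > 0

# Graph Laplacian from a Gaussian affinity over all (labeled + unlabeled) points
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)
Lap = np.diag(W.sum(1)) - W

J = np.diag(labeled.astype(float))                  # selects the labeled fit terms
a, b = 1e-2, 1e-2
w = np.linalg.solve(X.T @ J @ X + a * np.eye(2) + b * (X.T @ Lap @ X), X.T @ J @ y)
pred = np.sign(X @ w)                               # labels propagate to unlabeled points
```

With only two labels, the Laplacian term forces predictions to vary smoothly over each cluster, so the whole dataset is classified correctly.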

  5. Ensemble manifold regularization.

    Science.gov (United States)

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

    We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.

  6. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity-promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity-constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
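The basic iterated soft-shrinkage scheme mentioned above (ISTA) can be sketched for the linear case with an ℓ1 penalty; problem sizes, the sparse test signal, and the parameter values are illustrative:

```python
import numpy as np

# Iterated soft shrinkage (ISTA) for  min_x 0.5 ||A x - b||^2 + lam ||x||_1
rng = np.random.default_rng(4)
A = rng.normal(size=(40, 100))                 # underdetermined linear operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]         # sparse ground truth
b = A @ x_true

def soft(v, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant of the gradient
lam = 0.1
x = np.zeros(100)
for _ in range(2000):
    x = soft(x - step * A.T @ (A @ x - b), step * lam)
```

Each iteration is a gradient step on the data-fit term followed by the shrinkage operator of the ℓ1 penalty; the nonlinear and non-convex generalizations surveyed in the paper modify exactly these two ingredients.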

  7. Metric modular spaces

    CERN Document Server

    Chistyakov, Vyacheslav

    2015-01-01

    Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology; this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, metric  and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

  8. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  9. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2012-01-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  11. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  12. Analytic semigroups and optimal regularity in parabolic problems

    CERN Document Server

    Lunardi, Alessandra

    2012-01-01

    The book shows how the abstract methods of analytic semigroups and evolution equations in Banach spaces can be fruitfully applied to the study of parabolic problems. Particular attention is paid to optimal regularity results in linear equations. Furthermore, these results are used to study several other problems, especially fully nonlinear ones. Owing to the new unified approach chosen, known theorems are presented from a novel perspective and new results are derived. The book is self-contained. It is addressed to PhD students and researchers interested in abstract evolution equations and in p

  13. Input preshaping with frequency domain information for flexible-link manipulator control

    Science.gov (United States)

    Tzes, Anthony; Englehart, Matthew J.; Yurkovich, Stephen

    1989-01-01

    The application of an input preshaping scheme to flexible manipulators is considered. The resulting control corresponds to a feedforward term that convolves the desired reference input in real time with a sequence of impulses and produces a vibration-free output. The robustness of the algorithm with respect to injected disturbances and modal frequency variations is not satisfactory and can be improved by convolving the input with a longer sequence of impulses. Incorporating the preshaping scheme into a closed-loop plant using acceleration feedback offers satisfactory disturbance rejection, due to the feedback, and cancellation of the flexible-mode effects, due to the preshaping. A frequency-domain identification scheme is used to estimate the modal frequencies on-line and subsequently update the spacing between the impulses. The combined adaptive input preshaping scheme provides the fastest possible slew that results in a vibration-free output.
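The impulse-sequence idea can be illustrated with the classic two-impulse zero-vibration (ZV) shaper. This is a sketch assuming a single dominant mode with known natural frequency and damping; the paper's adaptive frequency-domain update is not shown:

```python
import math

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper for a mode with natural
    frequency wn [rad/s] and damping ratio zeta."""
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    wd = wn * math.sqrt(1.0 - zeta ** 2)           # damped frequency
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]        # amplitudes sum to 1
    times = [0.0, math.pi / wd]                    # half a damped period apart
    return amps, times

def preshape(ref, dt, amps, times):
    """Convolve a sampled reference command with the impulse sequence."""
    out = [0.0] * len(ref)
    for a, t in zip(amps, times):
        k = round(t / dt)
        for i in range(k, len(ref)):
            out[i] += a * ref[i - k]
    return out
```

Because the impulse amplitudes sum to one, the shaped command reaches the same final value as the unshaped one, only slightly delayed.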

  14. Evolution of Boolean networks under selection for a robust response to external inputs yields an extensive neutral space

    Science.gov (United States)

    Szejka, Agnes; Drossel, Barbara

    2010-02-01

    We study the evolution of Boolean networks as model systems for gene regulation. Inspired by biological networks, we select simultaneously for robust attractors and for the ability to respond to external inputs by changing the attractor. Mutations change the connections between the nodes and the update functions. In order to investigate the influence of the type of update function, we perform our simulations with canalizing as well as with threshold functions. We compare the properties of the fitness landscapes that result for different versions of the selection criterion and the update functions. We find that for all studied cases the fitness landscape has a plateau of maximum fitness, so that structurally very different networks are able to fulfill the same task and are connected by neutral paths in network (“genotype”) space. Furthermore, we find a connection between the attractor length and the mutational robustness, and an extremely long memory of the initial evolutionary stage.
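As a minimal illustration of the threshold-function variant, the sketch below iterates a small threshold network synchronously and detects its attractor; the network and weights are invented for illustration:

```python
import itertools

def evolve(state, W):
    """Synchronous threshold update: node i becomes 1 if the weighted sum
    of its inputs is positive, else 0."""
    n = len(state)
    return tuple(1 if sum(W[i][j] * state[j] for j in range(n)) > 0 else 0
                 for i in range(n))

def attractor_length(state, W, max_steps=256):
    """Iterate from `state` until a state repeats; return the length of the
    attractor cycle (1 for a fixed point), or None if none is found."""
    seen = {}
    for t in itertools.count():
        if state in seen:
            return t - seen[state]
        if t > max_steps:
            return None
        seen[state] = t
        state = evolve(state, W)
```

An evolutionary simulation in the paper's spirit would mutate entries of `W` and keep mutants whose attractors remain robust while still switching under external inputs.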

  15. Visualization of virtual slave manipulator using the master input device

    International Nuclear Information System (INIS)

    Kim, S. H.; Song, T. K.; Lee, J. Y.; Yoon, J. S.

    2003-01-01

    To handle high-level radioactive materials such as spent fuel, master-slave manipulators (MSMs) are widely used as remote handling devices in nuclear facilities such as hot cells with sealed and shielded spaces. In this paper, a digital mockup that simulates the remote operation of the Advanced Conditioning Process (ACP) is developed, and the workspace and motion of the slave manipulator, as well as the remote operation task, are analyzed. The process equipment of the ACP and the maintenance/handling device are drawn as 3D CAD models using IGRIP. The manipulator model is assigned various mobility attributes such as relative positions, kinematic constraints, and ranges of mobility. The 3D graphic simulator displays the movement of the manipulator driven by an external spaceball input device. To connect the external input device to the graphic simulator, an interface program for the 6-DOF input device is designed using the Low Level Tele-operation Interface (LLTI). The experimental results show that the developed simulation system gives much-improved human interface characteristics and satisfactory response characteristics in terms of synchronization speed. This should be useful for the development of worker education systems in virtual environments.

  16. Functional differential equations with unbounded delay in extrapolation spaces

    Directory of Open Access Journals (Sweden)

    Mostafa Adimy

    2014-08-01

    We study the existence, regularity and stability of solutions for nonlinear partial neutral functional differential equations with unbounded delay and a Hille-Yosida operator on a Banach space X. We consider two nonlinear perturbations: the first one is a function taking its values in X and the second one is a function belonging to a space larger than X, an extrapolated space. We use the extrapolation techniques to prove the existence and regularity of solutions and we establish a linearization principle for the stability of the equilibria of our equation.

  17. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, autoregressive (AR) regularization and non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models that describe local image structures is pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is captured by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To further restore image details, a global error compensation model based on a weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with the Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A+ (16 atoms) methods, the proposed approach achieves remarkable improvement in peak signal-to-noise ratio, structural similarity and subjective visual perception.

  18. Regularization in global sound equalization based on effort variation

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Sarris, John; Jacobsen, Finn

    2009-01-01

    Sound equalization in closed spaces can be significantly improved by generating propagating waves that are naturally associated with the geometry, as, for example, plane waves in rectangular enclosures. This paper presents a control approach termed effort variation regularization based on this idea. Effort variation equalization involves modifying the conventional cost function in sound equalization, which is based on minimizing least-squares reproduction errors, by adding a term that is proportional to the squared deviations between complex source strengths, calculated independently for the sources...
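In a least-squares reproduction setting, a cost function of this shape admits a closed-form solution. A sketch, where the transfer matrix Z, target pressures p, and independently computed strengths q_bar are illustrative assumptions:

```python
import numpy as np

def effort_variation_solve(Z, p, q_bar, beta):
    """Minimize ||Z q - p||^2 + beta * ||q - q_bar||^2:
    least-squares reproduction error plus a penalty on deviations of the
    complex source strengths q from values q_bar computed independently
    for each source."""
    n = Z.shape[1]
    A = Z.conj().T @ Z + beta * np.eye(n)
    b = Z.conj().T @ p + beta * q_bar
    return np.linalg.solve(A, b)
```

With beta = 0 this reduces to ordinary least squares; as beta grows, the source strengths are pulled toward the independently computed values.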

  19. Color correction optimization with hue regularization

    Science.gov (United States)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.

  20. A study on regularization parameter choice in near-field acoustical holography

    DEFF Research Database (Denmark)

    Gomes, Jesper; Hansen, Per Christian

    2008-01-01

    ... a regularization parameter. These parameter choice methods (PCMs) are attractive, since they require no a priori knowledge about the noise. However, there seems to be no clear understanding of when one PCM is better than another. This paper presents comparisons of three PCMs: GCV, L-curve and Normalized ..., applied to NAH methods including the Equivalent Source Method (ESM). All combinations of the PCMs and the NAH methods are investigated using simulated measurements with different types of noise added to the input. Finally, the comparisons are carried out for a practical experiment. The aim of this work is to create a better understanding of which mechanisms affect the performance of the different PCMs.
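One widely used PCM, GCV, can be sketched for Tikhonov regularization via the SVD. A generic illustration, not tied to the NAH methods of the paper:

```python
import numpy as np

def gcv_choice(A, b, lambdas):
    """Return the Tikhonov parameter from `lambdas` minimizing the GCV
    function G(l) = ||A x_l - b||^2 / trace(I - A A_l^+)^2, computed via
    SVD filter factors f_i = s_i^2 / (s_i^2 + l^2)."""
    n = A.shape[0]
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    extra = max(b @ b - beta @ beta, 0.0)   # part of b outside range(A)
    best_lam, best_g = None, np.inf
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam ** 2)
        resid = np.sum(((1.0 - f) * beta) ** 2) + extra
        g = resid / (n - np.sum(f)) ** 2
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam
```

GCV needs no noise estimate: it balances the residual against the effective number of fitted degrees of freedom, which is exactly the appeal of PCMs noted above.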

  1. Universal regularization prescription for Lovelock AdS gravity

    International Nuclear Information System (INIS)

    Kofinas, Georgios; Olea, Rodrigo

    2007-01-01

    A definite form for the boundary term that produces the finiteness of both the conserved quantities and the Euclidean action for any Lovelock gravity with AdS asymptotics is presented. This prescription distinguishes only between even and odd bulk dimensions, regardless of the particular theory considered, and is valid even for Einstein-Hilbert and Einstein-Gauss-Bonnet AdS gravity. The boundary term is a given polynomial of the boundary extrinsic and intrinsic curvatures (also referred to as the Kounterterms series). Only the coupling constant of the boundary term changes accordingly, such that it always preserves a well-posed variational principle for boundary conditions suitable for asymptotically AdS spaces. The background-independent conserved charges associated with asymptotic symmetries are found. In odd bulk dimensions, this regularization produces a generalized formula for the vacuum energy in Lovelock AdS gravity. The standard entropy for asymptotically AdS black holes is recovered directly from the regularization of the Euclidean action, and not only from the first law of thermodynamics associated with the conserved quantities.

  2. Existence, regularity and representation of solutions of time fractional wave equations

    Directory of Open Access Journals (Sweden)

    Valentin Keyantuo

    2017-09-01

    We study the solvability of the fractional order inhomogeneous Cauchy problem $$ \mathbb{D}_t^\alpha u(t)=Au(t)+f(t), \quad t>0,\; 1<\alpha\le 2, $$ where $A$ is a closed linear operator in some Banach space $X$ and $f:[0,\infty)\to X$ a given function. Operator families associated with this problem are defined and their regularity properties are investigated. In the case where $A$ is a generator of a $\beta$-times integrated cosine family $(C_\beta(t))$, we derive explicit representations of mild and classical solutions of the above problem in terms of the integrated cosine family. We include applications to elliptic operators with Dirichlet, Neumann or Robin type boundary conditions on $L^p$-spaces and on the space of continuous functions.

  3. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV(3)MR) to integrate multiple features. MV(3)MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC'07 and MIR Flickr, and validate the effectiveness of the proposed MV(3)MR for image classification.
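The manifold regularization ingredient, which MV(3)MR extends to vector-valued functions, can be sketched in its scalar Laplacian-regularized least-squares form. A toy version with invented data and parameter values:

```python
import numpy as np

def lap_rls(K, y_labeled, labeled_idx, Lap, gamma_a=1e-3, gamma_i=0.1):
    """Laplacian-regularized least squares (scalar sketch):
        min_f sum_{i in labeled} (y_i - f_i)^2 + gamma_a ||f||_K^2
              + gamma_i f^T Lap f,   with f = K alpha.
    Closed form: alpha = (J K + gamma_a I + gamma_i Lap K)^{-1} y_ext,
    where J marks labeled points and y_ext zero-pads the labels."""
    n = K.shape[0]
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0
    y_ext = np.zeros(n)
    y_ext[labeled_idx] = y_labeled
    alpha = np.linalg.solve(J @ K + gamma_a * np.eye(n) + gamma_i * Lap @ K,
                            y_ext)
    return K @ alpha
```

The graph Laplacian term propagates the two labels along the data manifold, so unlabeled points inherit the label of their cluster.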

  4. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed

  6. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
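The decomposition of a high-dimensional input into unions of low-dimensional inputs can be illustrated with a first-order anchored-ANOVA surrogate. This is a generic sketch of the decomposition idea only, not the paper's reduced-basis collocation machinery:

```python
def anchored_anova_1st(f, c, xs):
    """First-order anchored-ANOVA surrogate: approximate a high-dimensional
    function by an anchor value plus one-dimensional corrections,
        f(x) ~ f(c) + sum_i [ f(c with i-th entry set to x_i) - f(c) ],
    so each term only ever varies one input dimension at a time."""
    f0 = f(c)
    total = f0
    for i, xi in enumerate(xs):
        ci = list(c)
        ci[i] = xi          # vary only dimension i away from the anchor c
        total += f(ci) - f0
    return total
```

The surrogate is exact for additive functions; higher-order ANOVA terms would capture interactions between input dimensions.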

  7. A two-input sliding-mode controller for a planar arm actuated by four pneumatic muscle groups.

    Science.gov (United States)

    Lilly, John H; Quesada, Peter M

    2004-09-01

    Multiple-input sliding-mode techniques are applied to a planar arm actuated by four groups of pneumatic muscle (PM) actuators in opposing pair configuration. The control objective is end-effector tracking of a desired path in Cartesian space. The inputs to the system are commanded input pressure differentials for the two opposing PM groups. An existing model for the muscle is incorporated into the arm equations of motion to arrive at a two-input, two-output nonlinear model of the planar arm that is affine in the input and, therefore, suitable for sliding-mode techniques. Relationships between static input pressures are derived for suitable arm behavior in the absence of a control signal. Simulation studies are reported.
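The sliding-mode idea can be shown on a single-input double-integrator sketch with a boundary-layer switching law; the plant, gains, and disturbance below are invented and far simpler than the pneumatic-muscle arm model:

```python
import math

def smc_track(x0, v0, ref, dt=0.001, lam=5.0, k=10.0, phi=0.05):
    """Sliding-mode tracking of a constant reference for the double
    integrator x'' = u + d(t): sliding surface s = e' + lam*e, control
    u = -k*sat(s/phi), with boundary layer phi to reduce chattering."""
    x, v = x0, v0
    for i, r in enumerate(ref):
        e, de = x - r, v                        # constant reference: r' = 0
        s = de + lam * e
        u = -k * max(-1.0, min(1.0, s / phi))   # saturated switching term
        d = 0.5 * math.sin(0.01 * i)            # bounded matched disturbance
        v += (u + d) * dt                       # explicit Euler integration
        x += v * dt
    return x
```

Because the switching gain k exceeds the disturbance bound, the state reaches the sliding surface and the tracking error decays at rate lam despite the unknown disturbance.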

  8. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  9. Regular Topographic Patterning of Karst Depressions Suggests Landscape Self-Organization

    Science.gov (United States)

    Quintero, C.; Cohen, M. J.

    2017-12-01

    Thousands of wetland depressions that are commonly host to cypress domes dot the sub-tropical limestone landscape of South Florida. The origin of these depression features has been the topic of debate. Here we build upon the work of previous surveyors of this landscape to analyze the morphology and spatial distribution of depressions on the Big Cypress landscape. We took advantage of the emergence and availability of high-resolution Light Detection and Ranging (LiDAR) technology and ArcMap GIS software to analyze the structure and regularity of landscape features with methods unavailable to past surveyors. Six 2.25 km2 LiDAR plots within the preserve were selected for remote analysis and one depression feature within each plot was selected for more intensive sediment and water depth surveying. Depression features on the Big Cypress landscape were found to show strong evidence of regular spatial patterning. Periodicity, a feature of regularly patterned landscapes, is apparent in both variograms and radial spectrum analyses. Size class distributions of the identified features indicate constrained feature sizes, while Average Nearest Neighbor analyses support the inference of dispersed features with non-random spacing. The presence of regular patterning on this landscape strongly implies biotic reinforcement of spatial structure by way of scale-dependent feedbacks. In characterizing the structure of this wetland landscape we add to the growing body of work dedicated to documenting how water, life and geology may interact to shape the natural landscapes we see today.

  10. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  11. Multi input single output model predictive control of non-linear bio-polymerization process

    Energy Technology Data Exchange (ETDEWEB)

    Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Universiti Sains Malaysia, Engineering Campus, Seri Ampangan, 14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

    2015-05-15

    This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly(ε-caprolactone) production. In this research a state-space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The state-space model was created using the System Identification Toolbox of Matlab™ and used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that the MPC is able to track the reference trajectory and gives optimum movement of the manipulated variables.
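An unconstrained linear MPC on a state-space model reduces to a least-squares problem over the input horizon. A single-input, single-output sketch with invented matrices; the paper's MISO hybrid model is not reproduced:

```python
import numpy as np

def mpc_controller(A, B, C, N=5, r_weight=1e-6):
    """Unconstrained linear MPC for a single-input single-output state-space
    model x+ = A x + B u, y = C x: stack predictions over horizon N,
        Y = F x0 + G U,
    and return the input sequence minimizing ||Y - ref||^2 + r ||U||^2."""
    F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
    H = G.T @ G + r_weight * np.eye(N)

    def solve(x0, ref):
        # Least-squares optimal input sequence for the current state x0
        return np.linalg.solve(H, G.T @ (ref - F @ x0))
    return solve
```

In receding-horizon use, only the first element of the returned sequence is applied before re-solving at the next sample; constraints would turn this into a quadratic program.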

  12. Abel transforms with low regularity with applications to x-ray tomography on spherically symmetric manifolds

    Science.gov (United States)

    de Hoop, Maarten V.; Ilmavirta, Joonas

    2017-12-01

    We study ray transforms on spherically symmetric manifolds with a piecewise C^{1,1} metric. Assuming the Herglotz condition, the x-ray transform is injective on the space of L^2 functions on such manifolds. We also prove injectivity results for broken ray transforms (with and without periodicity) on such manifolds with a C^{1,1} metric. To make these problems tractable in low regularity, we introduce and study a class of generalized Abel transforms and study their properties. This low regularity setting is relevant for geophysical applications.

  13. On Orthogonal Decomposition of a Sobolev Space

    OpenAIRE

    Lakew, Dejenie A.

    2016-01-01

    The theme of this short article is to investigate an orthogonal decomposition of a Sobolev space and look at some properties of the inner product therein and the distance defined from the inner product. We also determine the dimension of the orthogonal difference space and show the expansion of spaces as their regularity increases.

  14. Dynamic MRI Using SmooThness Regularization on Manifolds (SToRM).

    Science.gov (United States)

    Poddar, Sunrita; Jacob, Mathews

    2016-04-01

    We introduce a novel algorithm to recover real-time dynamic MR images from highly undersampled k-t space measurements. The proposed scheme models the images in the dynamic dataset as points on a smooth, low-dimensional manifold in high-dimensional space. We propose to exploit the non-linear and non-local redundancies in the dataset by posing its recovery as a manifold-smoothness regularized optimization problem. A navigator acquisition scheme is used to determine the structure of the manifold, or equivalently the associated graph Laplacian matrix. The estimated Laplacian matrix is used to recover the dataset from undersampled measurements. The utility of the proposed scheme is demonstrated by comparisons with state-of-the-art methods in multi-slice real-time cardiac and speech imaging applications.
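The graph-Laplacian ingredient can be sketched in a toy form: with the Laplacian known (SToRM estimates it from navigator data), each pixel's time profile is recovered from masked samples by Laplacian-regularized least squares. All data and names here are synthetic:

```python
import numpy as np

def laplacian_recover(Y, M, Lap, lam=0.1):
    """Per-pixel recovery of time profiles X from masked samples Y:
        min_x ||m * (x - y)||^2 + lam * x^T Lap x
    solved row by row; Lap encodes frame-to-frame smoothness (here just a
    temporal neighbourhood graph standing in for the learned manifold)."""
    X = np.zeros_like(Y)
    for r in range(Y.shape[0]):
        X[r] = np.linalg.solve(np.diag(M[r]) + lam * Lap, M[r] * Y[r])
    return X
```

Unmeasured frames are interpolated from their graph neighbours, which is how smoothness on the manifold fills in the undersampled measurements.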

  15. From Discrete Space-Time to Minkowski Space: Basic Mechanisms, Methods and Perspectives

    Science.gov (United States)

    Finster, Felix

    This survey article reviews recent results on fermion systems in discrete space-time and corresponding systems in Minkowski space. After a basic introduction to the discrete setting, we explain a mechanism of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure. As methods to study the transition between discrete space-time and Minkowski space, we describe a lattice model for a static and isotropic space-time, outline the analysis of regularization tails of vacuum Dirac sea configurations, and introduce a Lorentz invariant action for the masses of the Dirac seas. We mention the method of the continuum limit, which makes it possible to analyze interacting systems. Open problems are discussed.

  16. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.

  17. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  18. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
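A concrete one-dimensional instance of pairing a non-convex regularizer with a convex objective is the firm-threshold (minimax-concave) shrinkage: the penalty is non-convex, yet the scalar objective 0.5*(y - x)^2 + penalty remains convex when the non-convexity parameter mu exceeds 1. Names and parameter values below are illustrative:

```python
def firm_threshold(y, lam, mu):
    """Proximal operator of the minimax-concave penalty (firm threshold).
    For mu > 1 the scalar objective 0.5*(y - x)**2 + MC penalty is strictly
    convex, so the operator is continuous and single-valued."""
    s = 1.0 if y >= 0 else -1.0
    a = abs(y)
    if a <= lam:
        return 0.0                         # small values set to zero
    if a >= mu * lam:
        return float(y)                    # large values pass through unbiased
    return s * mu * (a - lam) / (mu - 1.0)  # linear ramp between the regimes
```

Unlike soft thresholding, large coefficients are not shrunk, which addresses the underestimation of non-zero values discussed above while keeping the optimization convex.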

  19. Wave dynamics of regular and chaotic rays

    International Nuclear Information System (INIS)

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space

  20. History of nutrient inputs to the northeastern United States, 1930-2000

    Science.gov (United States)

    Hale, Rebecca L.; Hoover, Joseph H.; Wollheim, Wilfred M.; Vörösmarty, Charles J.

    2013-04-01

    Humans have dramatically altered nutrient cycles at local to global scales. We examined changes in anthropogenic nutrient inputs to the northeastern United States (NE) from 1930 to 2000. We created a comprehensive time series of anthropogenic N and P inputs to 437 counties in the NE at 5 year intervals. Inputs included atmospheric N deposition, biological N2 fixation, fertilizer, detergent P, livestock feed, and human food. Exports included exports of feed and food and volatilization of ammonia. N inputs to the NE increased throughout the study period, primarily due to increases in atmospheric deposition and fertilizer. P inputs increased until 1970 and then declined due to decreased fertilizer and detergent inputs. Livestock consistently consumed the majority of nutrient inputs over time and space. The area of crop agriculture declined during the study period but consumed more nutrients as fertilizer. We found that stoichiometry (N:P) of inputs and absolute amounts of N matched nutritional needs (livestock, humans, crops) when atmospheric components (N deposition, N2 fixation) were not included. Differences between N and P led to major changes in N:P stoichiometry over time, consistent with global trends. N:P decreased from 1930 to 1970 due to increased inputs of P, and increased from 1970 to 2000 due to increased N deposition and fertilizer and decreases in P fertilizer and detergent use. We found that nutrient use is a dynamic product of social, economic, political, and environmental interactions. Therefore, future nutrient management must take into account these factors to design successful and effective nutrient reduction measures.

  1. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that exhibit very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
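As a generic illustration of the underlying technique (not the CSR GRACE pipeline itself), Tikhonov regularization replaces an ill-posed least-squares problem by a damped one with a closed-form solution. The Vandermonde system, noise level, and damping value below are all invented for the sketch:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Closed-form minimizer of ||A x - b||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-conditioned toy problem: a Vandermonde system with noisy observations.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)
x_true = np.ones(8)
b = A @ x_true + 1e-3 * rng.standard_normal(20)

x_tik = tikhonov_solve(A, b, alpha=1e-6)   # damped, stable solution
```

The damping parameter alpha trades fidelity to the data against the size of the solution, which is what suppresses noise-driven artifacts (the "stripes" in the GRACE context).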

  2. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
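A rough sketch of the idea (a regression analogue, not the authors' classifier): the correntropy objective can be optimized by a half-quadratic scheme that alternates per-sample Gaussian weights with a weighted ridge solve, so outliers receive weights near zero. The function name, kernel width, and toy data are assumptions for illustration:

```python
import numpy as np

def fit_mcc(X, y, sigma=1.0, lam=1e-3, iters=50):
    """Half-quadratic optimization of a regularized correntropy objective:
    alternate per-sample weights q_i = exp(-e_i^2 / (2 sigma^2)) with a
    weighted ridge solve. Outlying samples are automatically down-weighted."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        e = X @ w - y
        q = np.exp(-e**2 / (2 * sigma**2))          # weights in (0, 1]
        Wq = X.T * q                                # X^T diag(q)
        w = np.linalg.solve(Wq @ X + lam * np.eye(X.shape[1]), Wq @ y)
    return w

# Toy regression with grossly mislabeled samples.
X = np.column_stack([np.linspace(-1, 1, 50), np.ones(50)])
y = X @ np.array([2.0, 0.5])
y[:5] += 20.0                                       # label outliers
w_mcc = fit_mcc(X, y)                               # robust fit
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]         # pulled off by outliers
```

Because the correntropy-induced loss is bounded, a grossly wrong label contributes almost nothing to the update, unlike the squared loss which is applied equally to every sample.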

  4. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly under the three different frequencies of acupuncture stimulus. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
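A minimal sketch of the forward half of such a pipeline: a leaky integrate-and-fire neuron driven by a constant input, for which the interspike interval has the known closed form tau * ln(I / (I - v_th)). The parameter values and function name are assumptions for the sketch, not taken from the paper:

```python
import numpy as np

def lif_spike_times(i_input, t_max=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Forward-Euler integration of tau * dV/dt = -V + I with threshold/reset."""
    v, spikes = 0.0, []
    for k in range(int(t_max / dt)):
        v += (dt / tau) * (-v + i_input)
        if v >= v_th:
            spikes.append(k * dt)   # record the spike time, then reset
            v = v_reset
    return np.array(spikes)

# With constant I > v_th, the interspike interval is tau * ln(I / (I - v_th)).
spikes = lif_spike_times(2.0)
isi = np.diff(spikes)
```

Inverting such a relationship, from observed spike statistics back to input parameters, is the reconstruction step the abstract describes.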

  6. Application of dimensional regularization to single chain polymer static properties: Conformational space renormalization of polymers. III

    International Nuclear Information System (INIS)

    Oono, Y.; Ohta, T.; Freed, K.F.

    1981-01-01

    A dimensional regularization approach to the renormalization group treatment of polymer excluded volume is formulated in chain conformation space, where monomers are specified by their spatial positions and their positions along the chain and the polymers may be taken to be monodisperse. The method utilizes basic scale invariance considerations. First, it is recognized that long wavelength macroscopic descriptions must be well defined in the limit that the minimum atomic or molecular scale L is set to zero. Secondly, the microscopic theory is independent of the conveniently chosen macroscopic scale of length k. The freedom of choice of k is exploited, along with the assumed renormalizability of the theory, to provide the renormalization group equations which directly imply the universal scaling laws for macroscopic properties. The renormalizability of the model implies the existence of general relations between the basic macroparameters, such as chain length, excluded volume, etc., and their microscopic counterparts in the microscopic model for the system. These macro-micro relations are defined through the condition that macroscopic quantities be well defined for polymer chains for any spatial dimensionality. The method is illustrated by calculating the end vector distribution function for all values of the end vector R. The evaluation of this distribution function currently requires the use of expansions in ε = 4 - d. In this case our distribution reduces to known limits for R → 0 or R → ∞. Subsequent papers will present calculations of the polymer coherent scattering function, the monomer spatial distribution function, and concentration dependent properties.

  7. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be treated as a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interaction at S-wave velocity is analogous to the kinetic theory of gases: thermal

  8. EIT image reconstruction with four dimensional regularization.

    Science.gov (United States)

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although the data are actually highly correlated, especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison to simpler image models.
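The augmented-system idea can be sketched in a few lines: stack the frames into one vector, build a block-diagonal forward operator, and add a penalty on differences between consecutive frames. The function below is a simplified stand-in (plain Tikhonov terms, dense matrices, invented parameter names), not the paper's algorithm:

```python
import numpy as np

def temporal_tikhonov(J, Y, lam_s, lam_t):
    """Jointly reconstruct T frames from measurements Y (T x m) through a
    linear forward operator J (m x n): a spatial Tikhonov term per frame
    plus a penalty on differences between consecutive frames."""
    T, n = Y.shape[0], J.shape[1]
    A = np.kron(np.eye(T), J)                            # block-diagonal forward map
    D = np.kron(np.diff(np.eye(T), axis=0), np.eye(n))   # temporal first differences
    R = lam_s * np.eye(T * n) + lam_t * (D.T @ D)
    x = np.linalg.solve(A.T @ A + R, A.T @ Y.ravel())
    return x.reshape(T, n)

# Two frames of the same scene measured at different times: a strong
# temporal prior pulls the two reconstructions together.
X = temporal_tikhonov(np.eye(2), np.array([[0.0, 0.0], [2.0, 2.0]]), 1e-6, 100.0)
```

Increasing lam_t exploits the inter-frame correlation the abstract refers to; with lam_t = 0 the frames are reconstructed independently.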

  9. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  10. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
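The core idea, choosing the regularization strength by the validation error rather than the training error, can be illustrated with ridge regression. The paper adapts the parameters iteratively; a simple grid scan over the same criterion is used here as a stand-in, and all data and names below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))
w_true = np.zeros(10); w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.5 * rng.standard_normal(60)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

def ridge(Xt, yt, lam):
    """Weight-decay (L2) regularized least squares."""
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ yt)

# Adapt the regularization parameter by scanning for the validation minimum.
lams = np.geomspace(1e-4, 1e3, 30)
val_err = [np.mean((Xva @ ridge(Xtr, ytr, l) - yva) ** 2) for l in lams]
lam_best = lams[int(np.argmin(val_err))]
```

Too little regularization overfits the training split, too much underfits; the validation error is minimized somewhere in between, which is the quantity the adaptive scheme drives down.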

  11. Finite Word-Length Effects in Digital State-Space Filters

    Directory of Open Access Journals (Sweden)

    B. Psenicka

    1999-12-01

    The state-space description of digital filters involves, in addition to the relationship between input and output signals, an extra set of state variables. State-space structures of digital filters have many positive properties compared with direct canonical structures. The main advantage of digital filter structures developed using the state-space technique is a smaller sensitivity to quantization effects in fixed-point implementations. In our presentation, the emphasis is on the analysis of coefficient quantization and on the existence of zero-input limit cycles in state-space digital filters. A comparison with the direct form II structure is presented.
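The state-space recursion itself is compact enough to sketch directly. The single-input single-output runner below and its first-order example are illustrative assumptions, not taken from the article:

```python
import numpy as np

def state_space_filter(A, B, C, D, x):
    """Run the input sequence x through the SISO state-space recursion
       s[n+1] = A s[n] + B x[n],   y[n] = C s[n] + D x[n]."""
    s = np.zeros(A.shape[0])
    y = []
    for xn in x:
        y.append(float(C @ s + D * xn))   # output from current state and input
        s = A @ s + B * xn                # state update
    return np.array(y)

# First-order IIR y[n] = 0.5 y[n-1] + x[n] realized in state-space form
# (state s[n] = y[n-1]); its impulse response is 1, 0.5, 0.25, ...
h = state_space_filter(np.array([[0.5]]), np.array([1.0]),
                       np.array([0.5]), 1.0, np.r_[1.0, np.zeros(7)])
```

The same transfer function admits many (A, B, C, D) realizations; the article's point is that some realizations are far less sensitive to coefficient quantization than the direct forms.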

  12. Space proliferation versus space-type dissemination: from semantic issues to political issues

    International Nuclear Information System (INIS)

    Gaillard-Sborowsky, Florence

    2015-01-01

    The relationship between space and ballistic capabilities is regularly revisited in forums on international security, in particular regarding the Iranian and North Korean cases. The term 'space proliferation' is commonly used by analogy with nuclear proliferation. However, is this analogy relevant? Beyond the semantic aspects, this shift raises political issues that this paper will consider. The study of the assumptions underlying the analysis of nuclear and missile proliferation and their space counterparts will highlight some approximations and presuppositions, such as the amalgam between sounding rocket, launcher and missile technologies, in order to suggest new ways of thinking about these sensitive issues. (author)

  13. Labour input in construction of composite structures of the Balakovo NPP reactor compartment

    International Nuclear Information System (INIS)

    Alasyuk, G.Ya.

    1988-01-01

    Technical-economical results achieved when constructing the Balakovo NPP second unit reactor compartment structures are presented. Analysis of the obtained data shows that, when the walls of the non-sealed reactor compartment section are built as composite structures, the major part of the labour input (54-59%) falls on the production and mounting of these structures, performed at auxiliary plants. Labour input for works performed at the construction site (unit-cell and space frame mounting, preparation of units for concreting, joint sealing, concrete placement) makes up 41-46%, and labour input for enlarged unit-cell mounting makes up 8%. Labour input per 1 m³ of wall structure of 0.6 and 0.9 m thickness in the monolithic option is, respectively, 19 and 23% higher than the same indices for the composite option.

  14. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  15. TART input manual

    International Nuclear Information System (INIS)

    Kimlinger, J.R.; Plechaty, E.F.

    1982-01-01

    The TART code is a Monte Carlo neutron/photon transport code that runs only on the CRAY computer. All the input cards for the TART code are listed, and definitions for all input parameters are given. The execution and limitations of the code are described, and input for two sample problems is given.

  16. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
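One way to see what a condition-number constraint does is to clip the sample eigenvalues into an interval [u, kappa*u]. The sketch below chooses u by minimizing squared eigenvalue distortion on a grid, a least-squares stand-in for the paper's maximum-likelihood interval choice; the function name and grid size are assumptions:

```python
import numpy as np

def cond_reg_cov(S, kappa):
    """Covariance estimate with condition number at most kappa: clip the
    sample eigenvalues into [u, kappa * u], scanning u on a grid to minimize
    the squared eigenvalue distortion (simplified stand-in for the ML rule)."""
    vals, vecs = np.linalg.eigh(S)
    lo = max(vals.min(), 1e-12)
    best_err, best_u = np.inf, lo
    for u in np.geomspace(lo, vals.max(), 200):
        err = np.sum((np.clip(vals, u, kappa * u) - vals) ** 2)
        if err < best_err:
            best_err, best_u = err, u
    clipped = np.clip(vals, best_u, kappa * best_u)
    return (vecs * clipped) @ vecs.T   # V diag(clipped) V^T

# A badly conditioned 2x2 example, constrained to condition number <= 10.
Sigma = cond_reg_cov(np.diag([1.0, 100.0]), kappa=10.0)
```

By construction every eigenvalue of the result lies in an interval of ratio kappa, so the estimator is invertible and well-conditioned regardless of how degenerate the sample covariance is.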

  17. Multi-label learning with fuzzy hypergraph regularization for protein subcellular location prediction.

    Science.gov (United States)

    Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei

    2014-12-01

    Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.

  18. C^{1,1} regularity for degenerate elliptic obstacle problems

    Science.gov (United States)

    Daskalopoulos, Panagiota; Feehan, Paul M. N.

    2016-03-01

    The Heston stochastic volatility process is a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order, degenerate-elliptic partial differential operator, where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. In mathematical finance, solutions to the obstacle problem for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset. With the aid of weighted Sobolev spaces and weighted Hölder spaces, we establish the optimal C^{1,1} regularity (up to the boundary of the half-plane) for solutions to obstacle problems for the elliptic Heston operator when the obstacle functions are sufficiently smooth.

  19. Some regularities in invertebrate succession in different microhabitats on pine stumps

    OpenAIRE

    Franch, Joan

    1989-01-01

    Sixty-eight pine stumps, felled on known dates from one to sixteen years before the moment of sampling, have been studied in the San Juan de la Peña woodland (province of Huesca). Four microhabitats were distinguished: bark, subcortical space, sapwood and heartwood. The object of the study is to compare the invertebrate macrofauna succession in the different microhabitats in order to find regularities among them. The biocenosis has not been completely studied: Ipidae, Diptera and Annelidae are ...

  20. Initiating and maintaining recreational walking: a longitudinal study on the influence of neighborhood green space.

    Science.gov (United States)

    Sugiyama, Takemi; Giles-Corti, Billie; Summers, Jacqui; du Toit, Lorinne; Leslie, Eva; Owen, Neville

    2013-09-01

    This study examined prospective relationships of green space attributes with adults initiating or maintaining recreational walking. Postal surveys were completed by 1036 adults living in Adelaide, Australia, at baseline (two time points in 2003-04) and follow-up (2007-08). Initiating or maintaining recreational walking was determined using self-reported walking frequency. Green space attributes examined were perceived presence, quality, and proximity, and the objectively measured area (total and largest) and number of green spaces within a 1.6 km buffer drawn from the center of each study neighborhood. Multilevel regression analyses examined the odds of initiating or maintaining walking separately for each green space attribute. At baseline, participants were categorized into non-regular (n = 395), regular (n = 286), and irregular walkers (n = 313). Among non-regular walkers, 30% had initiated walking, while 70% of regular walkers had maintained walking at follow-up. No green space attributes were associated with initiating walking. However, positive perceptions of the presence of and proximity to green spaces and the total and largest areas of green space were significantly associated with a higher likelihood of walking maintenance over four years. Neighborhood green spaces may not assist adults to initiate walking, but their presence and proximity may help them maintain recreational walking over time.

  1. Application of Littlewood-Paley decomposition to the regularity of Boltzmann type kinetic equations

    International Nuclear Information System (INIS)

    EL Safadi, M.

    2007-03-01

    We study the regularity of kinetic equations of Boltzmann type. We essentially use the Littlewood-Paley method from harmonic analysis, which consists mainly in working with dyadic annuli. We are mainly concerned with the homogeneous case, where the solution f(t,x,v) depends only on the time t and on the velocities v, while working with realistic and singular cross-sections (non-cutoff). In the first part, we study the particular case of Maxwellian molecules. Under this hypothesis, the structure of the Boltzmann operator and its Fourier transform take a simple form. We show global C^∞ regularity. Then, we deal with the case of general cross-sections with 'hard potential'. We are interested in the Landau equation, which is the limiting equation of the Boltzmann equation when grazing collisions are taken into account. We prove that any weak solution belongs to the Schwartz space S. We also demonstrate a similar regularity for the case of the Boltzmann equation. Let us note that our method applies directly in all dimensions, and the proofs are often simpler than previous ones. Finally, we finish with the Boltzmann-Dirac equation. In particular, we adapt the regularity result obtained in the work of Alexandre, Desvillettes, Wennberg and Villani, using the dissipation rate connected with the Boltzmann-Dirac equation. (author)
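For reference, the dyadic decomposition underlying the method takes the following standard form (notation as in standard harmonic analysis texts, not taken from this thesis):

```latex
f = \sum_{j \ge -1} \Delta_j f, \qquad
\widehat{\Delta_j f}(\xi) = \varphi\!\left(2^{-j}\xi\right)\hat f(\xi) \ (j \ge 0), \qquad
\widehat{\Delta_{-1} f}(\xi) = \chi(\xi)\,\hat f(\xi),
```

where χ is a smooth bump supported in a ball and φ is supported in the annulus {1/2 ≤ |ξ| ≤ 2}, so each block Δ_j f has frequencies localized near |ξ| ≈ 2^j; regularity is then read off from the decay of the block norms in j.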

  2. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

    We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-to-surface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positively defined and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capable...
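The flavor of constrained iterative regularization can be sketched with projected Landweber iterations: gradient steps on the data misfit, each followed by a projection enforcing nonnegativity (a finite-duration constraint would be an analogous projection zeroing samples outside the allowed support). The kernel and sizes below are invented for the sketch, not the authors' algorithm:

```python
import numpy as np

def projected_landweber(A, b, steps=2000):
    """Iterative regularized deconvolution: Landweber (gradient) updates on
    ||A x - b||^2 with projection onto nonnegative signals after each step."""
    x = np.zeros(A.shape[1])
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size ensuring convergence
    for _ in range(steps):
        x = np.maximum(x + tau * (A.T @ (b - A @ x)), 0.0)
    return x

# Toy deconvolution: a lower-bidiagonal matrix modeling y[i] = x[i] + 0.5 x[i-1].
n = 20
A = np.eye(n) + 0.5 * np.eye(n, k=-1)
x_true = np.zeros(n); x_true[3] = 1.0; x_true[10] = 2.0
x_rec = projected_landweber(A, A @ x_true)
```

Stopping the iteration early acts as regularization in its own right; the projections inject the physical constraints that stabilize the spectral ratio in the seismological setting.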

  3. Design and evaluation of nonverbal sound-based input for those with motor handicapped.

    Science.gov (United States)

    Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong

    2013-03-01

    Most personal computing interfaces rely on the users' ability to use their hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate Windows' Graphical User Interface by producing hums and fricatives through regular microphones. Hierarchically arranged menus are utilized so that only a minimal number of different actions is required at a time. The proposed scheme was found to be accurate and capable of responding promptly compared to other sound-based schemes. Being able to select from multiple item-selecting modes reduced the average time needed to complete tasks in the test scenarios to almost half of that needed when the tasks were performed solely through cursor movements. Still, improvements in helping users select the most appropriate modes for their desired tasks should improve the overall usability of the proposed scheme.

  4. Geometry on the space of geometries

    International Nuclear Information System (INIS)

    Christodoulakis, T.; Zanelli, J.

    1988-06-01

    We discuss the geometric structure of the configuration space of pure gravity. This is an infinite dimensional manifold, M, where each point represents one spatial geometry g_ij(x). The metric on M is dictated by geometrodynamics, and from it, the Christoffel symbols and Riemann tensor can be found. A 'free geometry' tracing a geodesic on the manifold describes the time evolution of space in the strong gravity limit. In a regularization previously introduced by the authors, it is found that M does not have the same dimensionality, D, everywhere, and that D is not a scalar, although it is covariantly constant. In this regularization, it is seen that the path integral measure can be absorbed in a renormalization of the cosmological constant. (author). 19 refs

  5. Constant-work-space algorithms for geometric problems

    Directory of Open Access Journals (Sweden)

    Tetsuo Asano

    2011-07-01

    Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings with slower running times. While the standard data structures allow us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. We then show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.
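The time-for-space trade-off is easy to see on a much simpler problem than the ones in the abstract. The closest-pair routine below (my own illustrative example, not from the paper) stores only loop indices and the current best distance, treating the input as read-only, at the cost of O(n^2) time:

```python
def closest_pair_distance(pts):
    """O(1) work-space, O(n^2) time: recompute everything from the read-only
    input instead of building an auxiliary data structure."""
    best = float("inf")
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            best = min(best, (dx * dx + dy * dy) ** 0.5)
    return best

d = closest_pair_distance([(0.0, 0.0), (3.0, 4.0), (0.0, 1.0)])
```

The paper's contribution is showing that far richer structures (triangulations, trapezoidal decompositions, EMSTs, geodesic paths) can be enumerated feature-by-feature in this same model.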

  6. Socio-Economic Impacts of Space Weather and User Needs for Space Weather Information

    Science.gov (United States)

    Worman, S. L.; Taylor, S. M.; Onsager, T. G.; Adkins, J. E.; Baker, D. N.; Forbes, K. F.

    2017-12-01

    The 2015 National Space Weather Strategy and Space Weather Action Plan (SWAP) details the activities, outcomes, and timelines to build a "Space Weather Ready Nation." NOAA's Space Weather Prediction Center and Abt Associates are working together on two SWAP initiatives: (1) identifying, describing, and quantifying the socio-economic impacts of moderate and severe space weather; and (2) outreach to engineers and operators to better understand user requirements for space weather products and services. Both studies cover four technological sectors (electric power, commercial aviation, satellites, and GNSS users) and rely heavily on industry input. Findings from both studies are essential for decreasing vulnerabilities and enhancing preparedness.

  7. Modelling the Flow Stress of Alloy 316L using a Multi-Layered Feed Forward Neural Network with Bayesian Regularization

    Science.gov (United States)

    Abiri, Olufunminiyi; Twala, Bhekisipho

    2017-08-01

    In this paper, a multilayer feedforward neural network constitutive model with Bayesian regularization is developed for alloy 316L during high strain rate and high temperature plastic deformation. The input variables are strain rate, temperature and strain, while the output value is the flow stress of the material. The results show that the use of the Bayesian regularization technique reduces the potential for overfitting and overtraining. The prediction quality of the model is thereby improved. The model predictions are in good agreement with experimental measurements. The measurement data used for the network training and model comparison were taken from relevant literature. The developed model is robust, as it can be generalized to deformation conditions slightly below or above the training dataset.
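
    As a hedged illustration of the general setup (not the paper's trained network or its experimental data), the sketch below fits a small feedforward network with an L2 weight penalty to a synthetic flow-stress-like function of strain, strain rate and temperature. Bayesian regularization in the MacKay sense additionally adapts the penalty weight from the data, which is omitted here for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for flow-stress data: inputs are strain, log strain
    # rate and temperature, scaled to [0, 1]; the target is an arbitrary
    # smooth function, NOT real alloy 316L measurements.
    X = rng.uniform(0.0, 1.0, size=(200, 3))
    y = (0.5*X[:, 0] + 0.3*np.sin(3*X[:, 1]) - 0.4*X[:, 2] + 0.2).reshape(-1, 1)

    H, lam, lr = 8, 1e-3, 0.05          # hidden units, weight decay, step size
    W1 = 0.5*rng.standard_normal((3, H)); b1 = np.zeros(H)
    W2 = 0.5*rng.standard_normal((H, 1)); b2 = np.zeros(1)

    def penalized_loss():
        out = np.tanh(X @ W1 + b1) @ W2 + b2
        return float(np.mean((out - y)**2) + lam*(np.sum(W1**2) + np.sum(W2**2)))

    loss_before = penalized_loss()
    for _ in range(1000):
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2
        g_out = 2.0*(out - y)/len(X)            # gradient of the MSE term
        gW2 = h.T @ g_out + 2*lam*W2
        gb2 = g_out.sum(0)
        g_h = (g_out @ W2.T) * (1.0 - h**2)     # back-propagate through tanh
        gW1 = X.T @ g_h + 2*lam*W1
        gb1 = g_h.sum(0)
        W1 -= lr*gW1; b1 -= lr*gb1; W2 -= lr*gW2; b2 -= lr*gb2
    loss_after = penalized_loss()
    print(loss_before, loss_after)
    ```
    
    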

  8. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
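
    A simplified sketch of the eigenvalue-truncation idea: the paper's maximum-likelihood estimator selects the truncation level optimally, whereas the naive variant below fixes the lower cut-off from the largest eigenvalue and a target condition number. It is only a stand-in for the actual estimator.

    ```python
    import numpy as np

    def cond_reg_cov(S, kappa_max):
        """Naive eigenvalue-clipping variant of condition-number regularization:
        raise the small eigenvalues so that lambda_max / lambda_min <= kappa_max.
        (The paper derives the truncation level from maximum likelihood; here
        it is fixed from the largest eigenvalue for simplicity.)"""
        vals, vecs = np.linalg.eigh(S)
        lo = vals.max() / kappa_max
        return (vecs * np.clip(vals, lo, None)) @ vecs.T

    rng = np.random.default_rng(1)
    n, p = 20, 30                       # "large p small n": sample covariance is singular
    X = rng.standard_normal((n, p))
    S = X.T @ X / n
    Sk = cond_reg_cov(S, kappa_max=50.0)
    ev = np.linalg.eigvalsh(Sk)
    print(ev.max() / ev.min())          # bounded by 50 (up to round-off)
    ```

    The regularized matrix is invertible and well-conditioned even though the sample covariance has rank at most n < p.
    
    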

  9. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Simon Benjaminsson

    2010-08-01

    Full Text Available Non-parametric data-driven analysis techniques can be used to study datasets with few assumptions about the data and underlying experiment. Variations of Independent Component Analysis (ICA) have been the methods mostly used on fMRI data, e.g. in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis on a voxel level. It has good scalability properties and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to put the voxels in a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turn out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens the way for rapid analysis and visualization of the data on different spatial levels, as well as automatically finding a suitable number of decomposition components.
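
    A toy stand-in for the pipeline described above: binned mutual information between signals, a monotone MI-to-distance conversion, and a low-dimensional embedding by classical multidimensional scaling. The paper's exact distance definition, scaling and final clustering step are not reproduced here, and the "voxels" below are synthetic.

    ```python
    import numpy as np

    def mutual_info(x, y, bins=8):
        """Mutual information from a 2-D histogram (in nats)."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def classical_mds(D, dim=2):
        """Embed points so Euclidean distances approximate D (double centering)."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n))/n
        B = -0.5 * J @ (D**2) @ J
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:dim]
        return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

    rng = np.random.default_rng(0)
    t = np.arange(300)
    # six "voxel" time courses: two groups, each sharing a driving signal
    sigs = [np.sin(0.10*t) + 0.3*rng.standard_normal(300) for _ in range(3)] \
         + [np.cos(0.07*t) + 0.3*rng.standard_normal(300) for _ in range(3)]
    m = len(sigs)
    MI = np.array([[mutual_info(sigs[i], sigs[j]) for j in range(m)] for i in range(m)])
    D = MI.max() - MI               # one simple monotone MI-to-distance conversion
    np.fill_diagonal(D, 0.0)
    emb = classical_mds(D)
    print(emb.shape)                # (6, 2)
    ```

    Voxels sharing a driving signal end up closer in the embedding than voxels from different groups, which is what a subsequent clustering step would exploit.
    
    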

  10. Using random response input in Ibrahim Time Domain

    DEFF Research Database (Denmark)

    Olsen, Peter; Brincker, R.

    2013-01-01

    In this paper the time domain technique Ibrahim Time Domain (ITD) is used to analyze random time data. ITD is known to be a technique for identification of output-only systems. The traditional formulation of ITD is claimed to be limited when identifying closely spaced modes, because of the technique being Single Input Multiple Output (SIMO). It has earlier been shown that identification of time data with closely spaced modes is improved when modifying ITD with Toeplitz matrix averaging. In the traditional formulation of ITD the time data has to be free decays or impulse response functions. In this article it is shown that random time data can be analyzed when using the modified ITD. The application of the technique is displayed by a case study, with simulations and experimental data.

  11. Identifying the relevant dependencies of the neural network response on characteristics of the input space

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    This talk presents an approach to identify those characteristics of the neural network inputs that are most relevant for the response and therefore provides essential information to determine the systematic uncertainties.

  12. Diffusion of charged particles in strong large-scale random and regular magnetic fields

    International Nuclear Information System (INIS)

    Mel'nikov, Yu.P.

    2000-01-01

    The nonlinear collision integral for the Green's function averaged over a random magnetic field is transformed using an iteration procedure taking account of the strong random scattering of particles on the correlation length of the random magnetic field. Under this transformation the regular magnetic field is assumed to be uniform at distances of the order of the correlation length. The single-particle Green's functions of the scattered particles in the presence of a regular magnetic field are investigated. The transport coefficients are calculated taking account of the broadening of the cyclotron and Cherenkov resonances as a result of strong random scattering. The mean-free path lengths parallel and perpendicular to the regular magnetic field are found for a power-law spectrum of the random field. The analytical results obtained are compared with the experimental data on the transport ranges of solar and galactic cosmic rays in the interplanetary magnetic field. As a result, the conditions for the propagation of cosmic rays in the interplanetary space and a more accurate idea of the structure of the interplanetary magnetic field are determined

  13. Learning About Time Within the Spinal Cord II: Evidence that Temporal Regularity is Encoded by a Spinal Oscillator

    Directory of Open Access Journals (Sweden)

    Kuan Hsien Lee

    2016-02-01

    Full Text Available How a stimulus impacts spinal cord function depends upon temporal relations. When intermittent noxious stimulation (shock) is applied and the interval between shock pulses is varied (unpredictable), it induces a lasting alteration that inhibits adaptive learning. If the same stimulus is applied in a temporally regular (predictable) manner, the capacity to learn is preserved and a protective/restorative effect is engaged that counters the adverse effect of variable stimulation. Sensitivity to temporal relations implies a capacity to encode time. This study explores how spinal neurons discriminate variable and fixed spaced stimulation. Communication with the brain was blocked by means of a spinal transection and adaptive capacity was tested using an instrumental learning task. In this task, subjects must learn to maintain a hind limb in a flexed position to minimize shock exposure. To evaluate the possibility that a distinct class of afferent fibers provides a sensory cue for regularity, we manipulated the temporal relation between shocks given to two dermatomes (leg and tail). Evidence for timing emerged when the stimuli were applied in a coherent manner across dermatomes, implying that a central (spinal) process detects regularity. Next, we show that fixed spaced stimulation has a restorative effect when half the physical stimuli are randomly omitted, as long as the stimuli remain in phase, suggesting that stimulus regularity is encoded by an internal oscillator. Research suggests that the oscillator that drives the tempo of stepping depends upon neurons within the rostral lumbar (L1-L2) region. Disrupting communication with the L1-L2 tissue by means of an L3 transection eliminated the restorative effect of fixed spaced stimulation. Implications of the results for step training and rehabilitation after injury are discussed.

  14. A controls engineering approach for analyzing airplane input-output characteristics

    Science.gov (United States)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
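
    The modal-analysis computation described above can be sketched generically: with right eigenvectors V and left eigenvectors (rows of V⁻¹), the influence of each input on each mode and the distribution of each mode over the outputs follow directly. This is a textbook modal decomposition on a toy model of ours, not the paper's scaled-coordinate procedure or its airplane data.

    ```python
    import numpy as np

    # Two-state example (a lightly damped oscillator), one input, one output.
    A = np.array([[0.0, 1.0], [-4.0, -0.4]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    eigvals, V = np.linalg.eig(A)       # columns of V: right eigenvectors (mode shapes)
    W = np.linalg.inv(V)                # rows of W: left eigenvectors
    influence = np.abs(W @ B)           # how strongly each input excites each mode
    distribution = np.abs(C @ V)        # how each mode appears in each output
    for k in range(len(eigvals)):
        print(eigvals[k], influence[k, 0], distribution[0, k])
    ```
    
    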

  15. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
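
    The preprocessing stage rests on slow feature analysis; in the linear case SFA reduces to whitening the input and then taking the direction whose temporal derivative has minimal variance. The sketch below (our synthetic data, not the hierarchical network of the paper) extracts a slow signal hidden in a noisy two-channel mixture.

    ```python
    import numpy as np

    def linear_sfa(X):
        """Linear slow feature analysis: whiten X, then return the input-space
        direction minimizing the variance of the temporal difference signal."""
        Xc = X - X.mean(0)
        cov = Xc.T @ Xc / len(Xc)
        d, U = np.linalg.eigh(cov)
        Wh = U @ np.diag(1.0/np.sqrt(d)) @ U.T      # symmetric whitening matrix
        Z = Xc @ Wh
        dZ = np.diff(Z, axis=0)
        dcov = dZ.T @ dZ / len(dZ)
        dd, P = np.linalg.eigh(dcov)                # ascending: slowest first
        return Wh @ P[:, 0]

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8*np.pi, 2000)
    slow = np.sin(0.25*t)                           # slowly varying source
    fast = rng.standard_normal(len(t))              # fast noise source
    mix = np.column_stack([slow + 0.5*fast, slow - 0.5*fast])
    w = linear_sfa(mix)
    feat = (mix - mix.mean(0)) @ w
    slowness = lambda s: np.mean(np.diff(s)**2) / np.var(s)
    print(slowness(feat) < min(slowness(mix[:, 0]), slowness(mix[:, 1])))  # True
    ```

    The extracted feature is much slower than either raw input channel, which is the property the reinforcement-learning stage then builds on.
    
    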

  16. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of their phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  17. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor. With simple modifications it looks like most of the input-output supervisors which are running now on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr

  18. Statistical learning is constrained to less abstract patterns in complex sensory input (but not the least).

    Science.gov (United States)

    Emberson, Lauren L; Rubinstein, Dani Y

    2016-08-01

    The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1-dog1, bird2-dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1-dog_picture1, bird_picture2-dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual

  19. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examine the role of several cognitive functions including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  20. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
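
    The defining condition of von Neumann regularity (every a has some x with axa = a) can be checked exhaustively in small rings; the helper below is ours. Note that Z/4 fails the test yet is J-regular in the sense above, since Z/4 modulo its Jacobson radical (2) is the field Z/2.

    ```python
    def is_von_neumann_regular_Zn(n):
        """Check von Neumann regularity of Z/nZ: every a needs an x with a*x*a = a."""
        return all(any((a * x * a) % n == a % n for x in range(n)) for a in range(n))

    print(is_von_neumann_regular_Zn(6))  # True:  Z/6 = Z/2 x Z/3, a product of fields
    print(is_von_neumann_regular_Zn(4))  # False: a = 2 admits no such x (2*x*2 = 0 mod 4)
    ```
    
    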

  1. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    Science.gov (United States)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  2. Chiral Thirring–Wess model with Faddeevian regularization

    International Nuclear Information System (INIS)

    Rahaman, Anisur

    2015-01-01

    Replacing the vector type of interaction of the Thirring–Wess model by the chiral type, a new model is presented which is termed here the chiral Thirring–Wess model. Ambiguity parameters of regularization are so chosen that the model falls into the Faddeevian class. The resulting Faddeevian class of model in general does not possess Lorentz invariance. However, we can exploit the arbitrariness admissible in the ambiguity parameters to relate the quantum mechanically generated ambiguity parameters to the classical parameter involved in the mass-like term of the gauge field, which helps to maintain physical Lorentz invariance even though the model lacks manifest Lorentz covariance. The phase space structure and the theoretical spectrum of this class of model have been determined through Dirac's method of quantization of constrained systems

  3. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  4. Modelling of air-conditioned and heated spaces

    Energy Technology Data Exchange (ETDEWEB)

    Moehl, U

    1987-01-01

    A space represents a complex system involving numerous components, manipulated variables and disturbances which need to be described if the dynamic behaviour of space air is to be determined. A justifiable amount of simulation input is achieved by adjusted modelling of the individual components. The determination of natural air exchange in heated spaces and of space-air flow in air-conditioned spaces is a primary source of uncertainties. (orig.).

  5. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminate representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html.
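
    The ENLR models and their closed-form updates are specific to the paper; as a generic hedged sketch of elastic-net regularized regression, here is plain coordinate descent for the standard objective (function and parameter names are ours).

    ```python
    import numpy as np

    def elastic_net(X, y, l1=1.0, l2=1.0, sweeps=200):
        """Coordinate descent for 0.5*||y - Xw||^2 + l1*||w||_1 + 0.5*l2*||w||^2."""
        n, p = X.shape
        w = np.zeros(p)
        r = y - X @ w                        # running residual
        col_sq = (X**2).sum(axis=0)
        for _ in range(sweeps):
            for j in range(p):
                r += X[:, j] * w[j]          # put feature j's contribution back
                rho = X[:, j] @ r            # correlation with partial residual
                w[j] = np.sign(rho) * max(abs(rho) - l1, 0.0) / (col_sq[j] + l2)
                r -= X[:, j] * w[j]
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    w_true = np.zeros(10); w_true[:3] = [2.0, -1.5, 1.0]
    y = X @ w_true + 0.1*rng.standard_normal(100)
    w_hat = elastic_net(X, y)
    print(np.round(w_hat, 2))
    ```

    The soft-thresholding step drives irrelevant coefficients to (near) zero while the quadratic term keeps the solution stable, which is the compactness/robustness trade-off the abstract refers to.
    
    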

  6. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
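
    A hedged sketch of the minimum-residual idea on a small symmetric ill-posed problem (the conjugate-residual form of MINRES; the Hilbert matrix and noise level are arbitrary choices of ours). The number of iterations acts as the regularization parameter: the iterate error versus the exact solution typically dips before the residual is fully reduced, so early stopping regularizes.

    ```python
    import numpy as np

    def conjugate_residual(A, b, iters):
        """Conjugate-residual (minimum-residual) iterations for symmetric
        positive definite A; all iterates are returned so the effect of
        early stopping can be inspected."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        Ar = A @ r
        Ap = Ar.copy()
        rAr = r @ Ar
        xs = [x.copy()]
        for _ in range(iters):
            denom = Ap @ Ap
            if denom <= 0 or rAr <= 0:       # converged to round-off level
                break
            alpha = rAr / denom
            x = x + alpha * p
            r = r - alpha * Ap
            Ar = A @ r
            rAr_new = r @ Ar
            beta = rAr_new / rAr
            rAr = rAr_new
            p = r + beta * p
            Ap = Ar + beta * Ap
            xs.append(x.copy())
        return xs

    n = 8
    A = np.array([[1.0/(i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert: symmetric, ill-conditioned
    rng = np.random.default_rng(0)
    x_true = np.ones(n)
    b = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy right-hand side
    xs = conjugate_residual(A, b, iters=8)
    residuals = [np.linalg.norm(b - A @ x) for x in xs]
    errors = [np.linalg.norm(x - x_true) for x in xs]
    print(int(np.argmin(errors)), len(xs) - 1)
    ```
    
    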

  7. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  8. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
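
    Setting the multiple-graph machinery of MultiG-Rank aside, the underlying single-graph regularized ranking has a compact closed form; a hedged sketch on a toy graph of ours (not protein domains) follows.

    ```python
    import numpy as np

    def graph_rank(Wg, query, alpha=0.9):
        """Single-graph manifold ranking: f = (I - alpha*S)^(-1) y, with S the
        symmetrically normalized affinity matrix and y the query indicator."""
        d = Wg.sum(axis=1)
        Dinv = np.diag(1.0/np.sqrt(np.maximum(d, 1e-12)))
        S = Dinv @ Wg @ Dinv
        y = np.zeros(len(Wg)); y[query] = 1.0
        return np.linalg.solve(np.eye(len(Wg)) - alpha*S, y)

    # Two triangles joined by a single edge; the query (node 0) is in the first.
    Wg = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        Wg[i, j] = Wg[j, i] = 1.0
    scores = graph_rank(Wg, 0)
    print(np.argsort(scores)[::-1])   # the query's triangle ranks ahead of the other
    ```

    MultiG-Rank replaces the single S with a learned convex combination of several candidate graphs, jointly optimizing the combination weights and the ranking scores.
    
    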

  9. FSH: fast spaced seed hashing exploiting adjacent hashes.

    Science.gov (United States)

    Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia

    2018-01-01

    Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be rapidly computed by exploiting the large overlap between consecutive k-mers, spaced seed hashing is usually computed from scratch for each position in the input sequence, thus resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomics reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced seed hashes. In the experiments, our algorithm can compute the hashing values of spaced seeds with a speedup, with respect to the traditional approach, between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced seed hashing is a routine task for several bioinformatics applications. FSH allows this task to be performed efficiently and raises the question of whether other hashings can be exploited to further improve the speedup. This has the potential of major impact in the field, making spaced seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
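
    The speed-up comes from reusing work between adjacent positions. Below is a sketch of the two endpoints of that spectrum: the naive per-position spaced-seed hash, and the O(1) rolling update that is possible when the seed is contiguous; FSH generalizes this reuse to arbitrary spaced seeds via precomputed overlap information. This is our simplified 2-bit encoding, not the paper's implementation.

    ```python
    def spaced_hashes_naive(seq, seed):
        """Hash each window of `seq` w.r.t. `seed`: pack the 2-bit codes of the
        symbols at the seed's '1' (care) positions, recomputed from scratch."""
        code = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
        care = [i for i, c in enumerate(seed) if c == '1']
        out = []
        for p in range(len(seq) - len(seed) + 1):
            h = 0
            for k in care:
                h = (h << 2) | code[seq[p + k]]
            out.append(h)
        return out

    def kmer_hashes_rolling(seq, k):
        """For a contiguous all-'1' seed the next hash is an O(1) update of the
        previous one; FSH generalizes this reuse to arbitrary spaced seeds."""
        code = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
        mask = (1 << (2*k)) - 1
        out, h = [], 0
        for i, ch in enumerate(seq):
            h = ((h << 2) | code[ch]) & mask
            if i >= k - 1:
                out.append(h)
        return out

    s = "ACGTACGTT"
    print(spaced_hashes_naive(s, "1111") == kmer_hashes_rolling(s, 4))  # True
    print(spaced_hashes_naive("ACGT", "1011"))  # [11]: codes of A, G, T packed
    ```
    
    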

  10. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  11. Quantum magnification of classical sub-Planck phase space features

    International Nuclear Information System (INIS)

    Hensinger, W.K.; Heckenberg, N.; Rubinsztein-Dunlop, H.; Delande, D.

    2002-01-01

    Full text: To understand the relationship between quantum mechanics and classical physics a crucial question to be answered is how distinct classical dynamical phase space features translate into the quantum picture. This problem becomes even more interesting if these phase space features occupy a much smaller volume than ℎ in a phase space spanned by two non-commuting variables such as position and momentum. The question of whether phase space structures in quantum mechanics associated with sub-Planck scales have physical signatures has recently evoked a lot of discussion. Here we will show that sub-Planck classical dynamical phase space structures, for example regions of regular motion, can give rise to states whose phase space representation is of size ℎ or larger. This is illustrated using period-1 regions of regular motion (modes of oscillatory motion of a particle in a modulated well) whose volume is distinctly smaller than Planck's constant. They are magnified in the quantum picture and appear as states whose phase space representation is of size ℎ or larger. Cold atoms provide an ideal test bed to probe such fundamental aspects of quantum and classical dynamics. In the experiment a Bose-Einstein condensate is loaded into a far detuned optical lattice. The lattice depth is modulated, resulting in the emergence of regions of regular motion surrounded by chaotic motion in the phase space spanned by position and momentum of the atoms along the standing wave. Sub-Planck scaled phase space features in the classical phase space are magnified and appear as distinct broad peaks in the atomic momentum distribution. The corresponding quantum analysis shows states of size ℎ which can be associated with much smaller classical dynamical phase space features. This effect may be considered as the dynamical equivalent of the Goldstone and Jaffe theorem, which predicts the existence of at least one bound state at a bend in a two or three dimensional spatial potential.

  12. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  13. Generalisation for regular black holes on general relativity to f(R) gravity

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Manuel E. [Universidade Federal do Para Campus Universitario de Abaetetuba, Faculdade de Ciencias Exatas e Tecnologia, Abaetetuba, Para (Brazil); Universidade Federal do Para, Faculdade de Fisica, PPGF, Belem, Para (Brazil); Fabris, Julio C. [Universidade Federal do Espirito Santo, Vitoria, ES (Brazil); National Research Nuclear University MEPhI, Moscow (Russian Federation); Junior, Ednaldo L.B. [Universidade Federal do Para, Faculdade de Fisica, PPGF, Belem, Para (Brazil); Universidade Federal do Para, Campus Universitario de Tucurui, Faculdade de Engenharia da Computacao, Tucurui, Para (Brazil); Marques, Glauber T. [Universidade Federal Rural da Amazonia ICIBE - LASIC, Belem, PA (Brazil)

    2016-05-15

    In this paper, we determine regular black hole solutions using a very general f(R) theory, coupled to a nonlinear electromagnetic field given by a Lagrangian L{sub NED}. The functions f(R) and L{sub NED} are in principle left unspecified. Instead, the model is constructed through a choice of the mass function M(r) presented in the metric coefficients. Solutions which have a regular behaviour of the geometric invariants are found. These solutions have two horizons, the event horizon and the Cauchy horizon. All energy conditions are satisfied in the whole space-time, except the strong energy condition (SEC), which is violated near the Cauchy horizon. We present also a new theorem related to the energy conditions in f(R) gravity, re-obtaining the well-known conditions in the context of general relativity when the geometry of the solution is the same. (orig.)

  14. Ito's formula in UMD Banach spaces and regularity of solution of the Zakai equation

    NARCIS (Netherlands)

    Brzezniak, Z.; Van Neerven, J.M.A.M.; Veraar, M.C.; Weis, L.

    2008-01-01

    Using the theory of stochastic integration for processes with values in a UMD Banach space developed recently by the authors, an Itô formula is proved which is applied to prove the existence of strong solutions for a class of stochastic evolution equations in UMD Banach spaces. The abstract results

  15. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  16. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  17. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  18. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  19. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared among the groups the subjective values of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  20. Dose domain regularization of MLC leaf patterns for highly complex IMRT plans

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Dan; Yu, Victoria Y.; Ruan, Dan; Cao, Minsong; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States); O’Connor, Daniel [Department of Mathematics, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-04-15

    Purpose: The advent of automated beam orientation and fluence optimization enables more complex intensity modulated radiation therapy (IMRT) planning using an increasing number of fields to exploit the expanded solution space. This has created a challenge in converting complex fluences to robust multileaf collimator (MLC) segments for delivery. A novel method to regularize the fluence map and simplify MLC segments is introduced to maximize delivery efficiency, accuracy, and plan quality. Methods: In this work, we implemented a novel approach to regularize optimized fluences in the dose domain. The treatment planning problem was formulated in an optimization framework to minimize the segmentation-induced dose distribution degradation subject to a total variation regularization to encourage piecewise smoothness in fluence maps. The optimization problem was solved using a first-order primal-dual algorithm known as the Chambolle-Pock algorithm. Plans for 2 GBM, 2 head and neck, and 2 lung patients were created using 20 automatically selected and optimized noncoplanar beams. The fluence was first regularized using Chambolle-Pock and then stratified into equal steps, and the MLC segments were calculated using a previously described level reducing method. Isolated apertures with sizes smaller than preset thresholds of 1–3 bixels, which are square units of an IMRT fluence map from MLC discretization, were removed from the MLC segments. Performance of the dose domain regularized (DDR) fluences was compared to direct stratification and direct MLC segmentation (DMS) of the fluences using level reduction without dose domain fluence regularization. Results: For all six cases, the DDR method increased the average planning target volume dose homogeneity (D95/D5) from 0.814 to 0.878 while maintaining equivalent dose to organs at risk (OARs). Regularized fluences were more robust to MLC sequencing, particularly to the stratification and small aperture removal. The maximum and

  1. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
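
    The idea of replacing a regularization term with a projection onto a convex set of admissible solutions can be illustrated with a generic projected-gradient iteration (an illustrative sketch, not the authors' incremental method):

```python
import numpy as np

def projected_gradient(A, b, radius, step, n_iters=500):
    """Gradient descent on ||Ax - b||^2 with each iterate projected onto
    the convex set {x : ||x|| <= radius}, instead of adding a penalty term."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - b)      # data-term gradient step
        nrm = np.linalg.norm(x)
        if nrm > radius:                      # projection onto the convex set
            x *= radius / nrm
    return x
```

    The projection plays the role the regularization term would otherwise play: it keeps iterates inside the set of acceptable candidates while the gradient step fits the data.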

  2. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs' spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data, we incorporate a second order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix which lead to two instances of our rLDS. We show that our rLDS is able to recover well the intrinsic dimensionality of the time series dynamics and it improves the predictive performance when compared to baselines on both synthetic and real-world MTS datasets.
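
    The paper learns a low-rank transition matrix via EM with dedicated priors; as a much simpler illustration of regularizing an LDS transition matrix, a ridge-penalized least-squares estimate (a hypothetical stand-in, not rLDS itself) can be written as:

```python
import numpy as np

def ridge_transition(X, lam):
    """Estimate A in x_{t+1} ~ A x_t from a state sequence X (T x d)
    by ridge regression: A = Y' Z (Z' Z + lam I)^{-1}."""
    Z, Y = X[:-1], X[1:]                      # predictors and one-step targets
    d = X.shape[1]
    return Y.T @ Z @ np.linalg.inv(Z.T @ Z + lam * np.eye(d))
```

    The penalty `lam` shrinks the estimate and stabilizes it when the sequence is short, the simplest instance of the "regularize the transition matrix" theme above.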

  3. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires...... it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: Generation of complete work-ready frequency lists....

  4. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    International Nuclear Information System (INIS)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-01-01

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise
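
    As a toy illustration of bounded-variation-type regularization for denoising blocky signals, consider the 1D problem with a smoothed TV seminorm, solved by plain gradient descent (the paper's primal-dual algorithms are far more capable; `eps` here smooths the nondifferentiable seminorm):

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.02, n_iters=2000):
    """Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
    by gradient descent; a smoothed stand-in for the BV seminorm."""
    u = f.astype(float).copy()
    for _ in range(n_iters):
        d = np.diff(u)
        g = d / np.sqrt(d * d + eps)          # derivative of smoothed |d|
        grad = u - f                          # data-fidelity gradient
        grad[:-1] -= lam * g                  # d/du_i of the TV sum
        grad[1:] += lam * g                   # d/du_{i+1} of the TV sum
        u -= step * grad
    return u
```

    TV-type penalties are preferred over quadratic smoothing for blocky images because they penalize the total jump size, not its square, and so tolerate sharp edges.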

  5. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...

  6. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
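
    The diversification "pressure" of the L2 regularizer can be seen on a simplified minimum-variance version of the problem (the paper works with expected shortfall; this closed form under the budget constraint is only an illustration):

```python
import numpy as np

def l2_regularized_portfolio(cov, lam):
    """Minimize w' C w + lam * ||w||^2 subject to sum(w) = 1.
    The KKT conditions give w proportional to (C + lam*I)^{-1} 1,
    normalized to satisfy the budget constraint."""
    n = cov.shape[0]
    w = np.linalg.solve(cov + lam * np.eye(n), np.ones(n))
    return w / w.sum()
```

    As `lam` grows, the weights are pulled toward the equal-weight portfolio 1/n, which is the diversification effect described above.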

  7. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  8. Space Mathematics, A Resource for Teachers Outlining Supplementary Space-Related Problems in Mathematics.

    Science.gov (United States)

    Reynolds, Thomas D.; And Others

    This compilation of 138 problems illustrating applications of high school mathematics to various aspects of space science is intended as a resource from which the teacher may select questions to supplement his regular course. None of the problems require a knowledge of calculus or physics, and solutions are presented along with the problem…

  9. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
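
    The familiar counting argument behind this result fits in a few lines: q regular p-gons can meet at a vertex on the sphere only if the angle sum exceeds the planar case, i.e. 1/p + 1/q > 1/2, equivalently (p - 2)(q - 2) < 4:

```python
def spherical_tessellations(max_pq=12):
    """Schläfli pairs (p, q) -- q regular p-gons around each vertex -- that
    can tessellate the sphere: (p - 2) * (q - 2) < 4."""
    return [(p, q) for p in range(3, max_pq + 1)
                   for q in range(3, max_pq + 1)
                   if (p - 2) * (q - 2) < 4]
```

    The enumeration yields the five Platonic patterns, whose faces are triangles (p = 3), squares (p = 4) and pentagons (p = 5), matching the abstract.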

  10. Influence of regular proprioceptive and bioenergetic physical activities on balance control in elderly women.

    Science.gov (United States)

    Gauchard, Gérome C; Gangloff, Pierre; Jeandel, Claude; Perrin, Philippe P

    2003-09-01

    Balance disorders increase considerably with age due to a decrease in posture regulation quality, and are accompanied by a higher risk of falling. Conversely, physical activities have been shown to improve the quality of postural control in elderly individuals and decrease the number of falls. The aim of this study was to evaluate the impact of two types of exercise on the visual afferent and on the different parameters of static balance regulation. Static postural control was evaluated in 44 healthy women aged over 60 years. Among them, 15 regularly practiced proprioceptive physical activities (Group I), 12 regularly practiced bioenergetic physical activities (Group II), and 18 controls walked on a regular basis (Group III). Group I participants displayed lower sway path and area values, whereas Group III participants displayed the highest, both in eyes-open and eyes-closed conditions. Group II participants displayed intermediate values, close to those of Group I in the eyes-open condition and those of Group III in the eyes-closed condition. Visual afferent contribution was more pronounced for Group II and III participants than for Group I participants. Proprioceptive exercise appears to have the best impact on balance regulation and precision. Besides, even if bioenergetic activity improves postural control in simple postural tasks, more difficult postural tasks show that this type of activity does not develop a neurosensorial proprioceptive input threshold as well, probably on account of the higher contribution of visual afferent.

  11. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  12. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  13. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  14. The patterning of retinal horizontal cells: normalizing the regularity index enhances the detection of genomic linkage

    Directory of Open Access Journals (Sweden)

    Patrick W. Keeley

    2014-10-01

    Full Text Available Retinal neurons are often arranged as non-random distributions called mosaics, as their somata minimize proximity to neighboring cells of the same type. The horizontal cells serve as an example of such a mosaic, but little is known about the developmental mechanisms that underlie their patterning. To identify genes involved in this process, we have used three different spatial statistics to assess the patterning of the horizontal cell mosaic across a panel of genetically distinct recombinant inbred strains. To avoid the confounding effect of cell density, which varies two-fold across these different strains, we computed the real/random regularity ratio, expressing the regularity of a mosaic relative to a randomly distributed simulation of similarly sized cells. To test whether this latter statistic better reflects the variation in biological processes that contribute to horizontal cell spacing, we subsequently compared the genetic linkage for each of these two traits, the regularity index and the real/random regularity ratio, each computed from the distribution of nearest neighbor (NN) distances and from the Voronoi domain (VD) areas. Finally, we compared each of these analyses with another index of patterning, the packing factor. Variation in the regularity indexes, as well as their real/random regularity ratios, and the packing factor, mapped quantitative trait loci (QTL) to the distal ends of Chromosomes 1 and 14. For the NN and VD analyses, we found that the degree of linkage was greater when using the real/random regularity ratio rather than the respective regularity index. Using informatic resources, we narrow the list of prospective genes positioned at these two intervals to a small collection of six genes that warrant further investigation to determine their potential role in shaping the patterning of the horizontal cell mosaic.
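
    A minimal version of the two statistics discussed above, the nearest-neighbor regularity index and its real/random ratio, can be sketched as follows; the brute-force distance computation is for clarity only:

```python
import numpy as np

def regularity_index(points):
    """Mean nearest-neighbor distance divided by its standard deviation;
    higher values indicate a more regular (less random) mosaic."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)            # exclude self-distances
    nn = dist.min(axis=1)
    return nn.mean() / nn.std()

def real_random_ratio(points, n_sims=20, rng=None):
    """Regularity index of the mosaic relative to the mean index of
    random simulations of the same size, normalizing out cell density."""
    rng = np.random.default_rng(rng)
    lo, hi = points.min(0), points.max(0)
    sims = [regularity_index(rng.uniform(lo, hi, points.shape))
            for _ in range(n_sims)]
    return regularity_index(points) / np.mean(sims)
```

    Because the random simulations share the mosaic's density, the ratio isolates the non-random component of the spacing, which is the property motivating its use for linkage analysis above.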

  15. A variational regularization of Abel transform for GPS radio occultation

    Directory of Open Access Journals (Sweden)

    T.-K. Wee

    2018-04-01

    Full Text Available In the Global Positioning System (GPS radio occultation (RO technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that in the heights and areas that the measurement bias is supposedly small, VR follows AI very closely in the

  16. A variational regularization of Abel transform for GPS radio occultation

    Science.gov (United States)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that in the heights and areas that the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity

  17. Characteristic features of determining the labor input and estimated cost of the development and manufacture of equipment

    Science.gov (United States)

    Kurmanaliyev, T. I.; Breslavets, A. V.

    1974-01-01

    The difficulties in obtaining exact calculation data for the labor input and estimated cost are noted. A method of calculating the labor cost of design work using provisional normative indexes for individual types of operations is proposed. Values of certain coefficients recommended for use in practical calculations of the labor input for the development of new scientific equipment for space research are presented.

  18. A regularized approach for geodesic-based semisupervised multimanifold learning.

    Science.gov (United States)

    Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun

    2014-05-01

    Geodesic distance, as an essential measurement for data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built by geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
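The geodesic distances underlying such methods are commonly approximated by shortest paths on a neighbourhood graph. A minimal Isomap-style sketch (hypothetical data and parameters; the paper's semisupervised graph construction and dimension reduction are omitted):

```python
import heapq
import numpy as np

# Hedged Isomap-style sketch: geodesic distances approximated by
# shortest paths on a k-nearest-neighbour graph.  Data and parameters
# are hypothetical; the paper's semisupervised steps are omitted.
def knn_graph(X, k):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    graph = {i: [] for i in range(len(X))}
    for i in range(len(X)):
        for j in np.argsort(d[i])[1:k + 1]:    # position 0 is the point itself
            graph[i].append((int(j), float(d[i, j])))
            graph[int(j)].append((i, float(d[i, j])))  # keep the graph symmetric
    return graph

def geodesic_from(graph, src):
    dist = {v: float("inf") for v in graph}    # Dijkstra from one source
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist[u]:
            continue
        for v, w in graph[u]:
            if du + w < dist[v]:
                dist[v] = du + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Points on a half circle: the geodesic follows the arc, not the chord.
t = np.linspace(0.0, np.pi, 30)
X = np.c_[np.cos(t), np.sin(t)]
g = geodesic_from(knn_graph(X, 2), 0)
print(g[29])   # close to the arc length pi, not the chord length 2.0
```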

  19. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  20. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    Science.gov (United States)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage that indicates the region of smooth landscape of the optimization problem. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of
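The SCAD penalty enters such algorithms through its coordinate-wise thresholding function. Below is a sketch of the standard SCAD threshold of Fan and Li (2001), which an AMP iteration would apply at each step; the message-passing and state-evolution machinery of the paper is not shown.

```python
import numpy as np

# Hedged sketch: the SCAD thresholding operator (Fan & Li 2001) for a > 2.
# Small inputs are soft-thresholded, a middle band is blended linearly,
# and large inputs pass through unshrunk.
def scad_threshold(z, lam, a=3.7):
    z = np.asarray(z, dtype=float)
    return np.where(
        np.abs(z) <= 2 * lam,
        np.sign(z) * np.maximum(np.abs(z) - lam, 0.0),       # soft threshold
        np.where(
            np.abs(z) <= a * lam,
            ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),  # linear blend
            z,                                               # no shrinkage
        ),
    )

print(scad_threshold([0.5, 1.5, 4.0], lam=1.0))
```

Unlike soft thresholding, SCAD leaves large coefficients untouched (|z| > aλ), which is the source of its reduced estimation bias.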

  1. Inventing a space mission the story of the Herschel space observatory

    CERN Document Server

    Minier, Vincent; Bontems, Vincent; de Graauw, Thijs; Griffin, Matt; Helmich, Frank; Pilbratt, Göran; Volonte, Sergio

    2017-01-01

    This book describes prominent technological achievements within a very successful space science mission: the Herschel space observatory. Focusing on the various processes of innovation it offers an analysis and discussion of the social, technological and scientific context of the mission that paved the way to its development. It addresses the key question raised by these processes in our modern society, i.e.: how knowledge management of innovation set the conditions for inventing the future? In that respect the book is based on a transdisciplinary analysis of the programmatic complexity of Herschel, with inputs from space scientists, managers, philosophers, and engineers. This book is addressed to decision makers, not only in space science, but also in other industries and sciences using or building large machines. It is also addressed to space engineers and scientists as well as students in science and management.

  2. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  3. Strong self-coupling expansion in the lattice-regularized standard SU(2) Higgs model

    International Nuclear Information System (INIS)

    Decker, K.; Weisz, P.; Montvay, I.

    1985-11-01

    Expectation values at an arbitrary point of the 3-dimensional coupling parameter space in the lattice-regularized SU(2) Higgs-model with a doublet scalar field are expressed by a series of expectation values at infinite self-coupling (λ = ∞). Questions of convergence of this 'strong self-coupling expansion' (SSCE) are investigated. The SSCE is a potentially useful tool for the study of the λ-dependence at any value (zero or non-zero) of the bare gauge coupling. (orig.)

  4. Strong self-coupling expansion in the lattice-regularized standard SU(2) Higgs model

    International Nuclear Information System (INIS)

    Decker, K.; Weisz, P.

    1986-01-01

    Expectation values at an arbitrary point of the 3-dimensional coupling parameter space in the lattice-regularized SU(2) Higgs model with a doublet scalar field are expressed by a series of expectation values at infinite self-coupling (λ = ∞). Questions of convergence of this "strong self-coupling expansion" (SSCE) are investigated. The SSCE is a potentially useful tool for the study of the λ-dependence at any value (zero or non-zero) of the bare gauge coupling. (orig.)

  5. Joint Segmentation and Shape Regularization with a Generalized Forward Backward Algorithm.

    Science.gov (United States)

    Stefanoiu, Anca; Weinmann, Andreas; Storath, Martin; Navab, Nassir; Baust, Maximilian

    2016-05-11

    This paper presents a method for the simultaneous segmentation and regularization of a series of shapes from a corresponding sequence of images. Such series arise as time series of 2D images when considering video data, or as stacks of 2D images obtained by slicewise tomographic reconstruction. We first derive a model where the regularization of the shape signal is achieved by a total variation prior on the shape manifold. The method employs a modified Kendall shape space to facilitate explicit computations together with the concept of Sobolev gradients. For the proposed model, we derive an efficient and computationally accessible splitting scheme. Using a generalized forward-backward approach, our algorithm treats the total variation atoms of the splitting via proximal mappings, whereas the data terms are dealt with by gradient descent. The potential of the proposed method is demonstrated on various application examples dealing with 3D data. We explain how to extend the proposed combined approach to shape fields which, for instance, arise in the context of 3D+t imaging modalities, and show an application in this setup as well.
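The forward-backward idea itself can be shown on a small prototype: alternate a gradient ("forward") step on the smooth data term with a proximal ("backward") step on the nonsmooth regularizer. In the hedged sketch below an ℓ1 prox (soft thresholding) stands in for the total variation atoms of the paper, which require their own proximal mappings; the setup is hypothetical.

```python
import numpy as np

# Hedged sketch of forward-backward splitting on a sparse recovery toy:
# forward step = gradient of 0.5*||Ax - b||^2, backward step = l1 prox.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]          # sparse ground truth
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))          # forward: gradient descent
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward: prox
print(np.flatnonzero(np.abs(x) > 0.5))          # support of the recovered signal
```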

  6. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.

  7. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
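The core learning idea, choosing the Tikhonov parameter that minimizes average reconstruction error over training pairs and reusing it for new data, can be sketched in the standard-form case via SVD filter factors. Everything below (operator, signals, noise level) is hypothetical, and the general-form, multi-parameter machinery of the paper is not reproduced.

```python
import numpy as np

# Hedged sketch: learn a single Tikhonov parameter from training pairs
# (b_noisy, x_true) by minimizing the average reconstruction error.
rng = np.random.default_rng(2)
n = 50
A = np.tril(np.ones((n, n))) / n               # ill-posed cumulative operator
U, s, Vt = np.linalg.svd(A)

def tikhonov(b, lam):
    f = s / (s**2 + lam)                       # filter factors s_i / (s_i^2 + lam)
    return Vt.T @ (f * (U.T @ b))

train = []
for _ in range(20):                            # hypothetical training set
    x = np.cumsum(rng.standard_normal(n)) / n  # smooth-ish random signal
    train.append((A @ x + 1e-3 * rng.standard_normal(n), x))

lams = np.logspace(-8, 0, 30)
avg_err = [np.mean([np.linalg.norm(tikhonov(b, l) - x) for b, x in train])
           for l in lams]
lam_best = lams[int(np.argmin(avg_err))]       # reused for future problems
print(lam_best)
```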

  8. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  9. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  10. Total dose induced increase in input offset voltage in JFET input operational amplifiers

    International Nuclear Information System (INIS)

    Pease, R.L.; Krieg, J.; Gehlhausen, M.; Black, J.

    1999-01-01

    Four different types of commercial JFET input operational amplifiers were irradiated with ionizing radiation under a variety of test conditions. All experienced significant increases in input offset voltage (Vos). Microprobe measurement of the electrical characteristics of the de-coupled input JFETs demonstrates that the increase in Vos is a result of the mismatch of the degraded JFETs. (authors)

  11. Differential regularization of a non-relativistic anyon model

    International Nuclear Information System (INIS)

    Freedman, D.Z.; Rius, N.

    1993-07-01

    Differential regularization is applied to a field theory of a non-relativistic charged boson field φ with λ(φ*φ)² self-interaction and coupling to a statistics-changing U(1) Chern-Simons gauge field. Renormalized configuration-space amplitudes for all diagrams contributing to the φ*φ*φφ 4-point function, which is the only primitively divergent Green's function, are obtained up to 3-loop order. The renormalization group equations are explicitly checked, and the scheme dependence of the β-function is investigated. If the renormalization scheme is fixed to agree with a previous 1-loop calculation, the 2- and 3-loop contributions to β(λ, e) vanish, and β(λ, e) itself vanishes when the "self-dual" condition relating λ to the gauge coupling e is imposed. (author). 12 refs, 1 fig

  12. A video of Mixed Interaction Space video

    DEFF Research Database (Denmark)

    Lykke, Olesen, Andreas; Hansen, Thomas Riisgaard; Eriksson, Eva

    Mixed Interaction Space is a new concept that uses the mobile phone to interact with either applications on the phone or in the environment by tracking the position and rotation with the camera in 4 dimensions. Most mobile devices today have a camera onboard. In the project about Mixed Interaction Spaces we use image processing algorithms to track the movement of the mobile phone according to a fixed point and use this information as input to different applications. We are able to track the movement of the device in 3D plus the rotation of the device and use this information as a kind of four-dimensional input device. As a fixed point we use a circle in the first version of Mixis. By tracking the circle we have developed a number of applications that use this technique as input. Above are three examples.

  13. Quantum mechanics on Laakso spaces

    Science.gov (United States)

    Kauffman, Christopher J.; Kesler, Robert M.; Parshall, Amanda G.; Stamey, Evelyn A.; Steinhurst, Benjamin A.

    2012-04-01

    We first review the spectrum of the Laplacian operator on a general Laakso space before considering modified Hamiltonians for the infinite square well, parabola, and Coulomb potentials. Additionally, we compute the spectrum for the Laplacian and its multiplicities when certain regions of a Laakso space are compressed or stretched and calculate the Casimir force experienced by two uncharged conducting plates by imposing physically relevant boundary conditions and then analytically regularizing the resulting zeta function. Lastly, we derive a general formula for the spectral zeta function and its derivative for Laakso spaces with strict self-similar structure before listing explicit spectral values for some special cases.
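The Casimir computation mentioned above rests on spectral zeta regularization. In generic form, for a Laplacian with eigenvalues λ_n (this is the standard construction, not the Laakso-space spectrum itself):

```latex
\zeta_\Delta(s) = \sum_n \lambda_n^{-s}, \qquad
E_{\mathrm{vac}} = \frac{1}{2} \sum_n \sqrt{\lambda_n}
\;\longrightarrow\; \frac{1}{2}\, \zeta_\Delta\!\left(-\tfrac{1}{2}\right),
```

where the divergent mode sum is assigned the finite value of the analytically continued zeta function; the derivative at the origin likewise defines the functional determinant via det Δ = exp(-ζ'_Δ(0)).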

  14. Regularized κ-distributions with non-diverging moments

    Science.gov (United States)

    Scherer, K.; Fichtner, H.; Lazar, M.

    2017-12-01

    For various plasma applications the so-called (non-relativistic) κ-distribution is widely used to reproduce and interpret the suprathermal particle populations exhibiting a power-law distribution in velocity or energy. Despite its reputation the standard κ-distribution as a concept is still disputable, mainly due to the velocity moments M_l which make a macroscopic characterization possible, but whose existence is restricted to low orders l only. The definition of the κ-distribution itself is conditioned by the existence of the moment of order l = 2 (i.e., kinetic temperature), satisfied only for κ > 3/2. In order to resolve these critical limitations we introduce the regularized κ-distribution with non-diverging moments. For the evaluation of all velocity moments a general analytical expression is provided, enabling a significant step towards a macroscopic (fluid-like) description of space plasmas and, in general, any system of κ-distributed particles.
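The moment problem can be illustrated numerically. The sketch below uses a generic isotropic κ-distribution integrand with an exponential cutoff in the spirit of the regularized distribution; the cutoff parameter and normalization are hypothetical, not the paper's exact form.

```python
import numpy as np
from scipy.integrate import quad

# Hedged illustration: the l-th velocity moment of an isotropic
# kappa-distribution involves v**(l+2) * (1 + v**2/kappa)**(-kappa-1),
# which fails to decay for high l; an exponential cutoff exp(-alpha*v**2)
# (the spirit of the regularized kappa-distribution) makes it finite.
kappa, alpha, l = 1.0, 0.01, 2     # l = 2: the kinetic-temperature moment

def standard(v):
    return v**(l + 2) * (1.0 + v**2 / kappa)**(-kappa - 1)

def regularized(v):
    return standard(v) * np.exp(-alpha * v**2)

# For kappa = 1 the standard integrand tends to a constant: divergent moment.
print(standard(1e3), standard(1e6))            # does not decay
m_reg, _ = quad(regularized, 0.0, np.inf)
print(m_reg)                                   # finite regularized moment
```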

  15. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
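The first step described above, adding a continuation that records what remains to be matched after the current expression, fits in a few lines. A hedged sketch (tuple-encoded syntax trees; only characters, concatenation, alternation and Kleene star, with a progress check to keep nullable star bodies from looping):

```python
# Hedged sketch of on-the-fly regex matching in continuation-passing
# style, in the spirit of the paper's first abstract machine.
def match(re, s, k):
    """Does re match a prefix of s, with continuation k accepting the rest?"""
    tag = re[0]
    if tag == "eps":
        return k(s)
    if tag == "chr":
        return bool(s) and s[0] == re[1] and k(s[1:])
    if tag == "cat":
        return match(re[1], s, lambda rest: match(re[2], rest, k))
    if tag == "alt":
        return match(re[1], s, k) or match(re[2], s, k)
    if tag == "star":
        # Either stop, or match the body once (it must consume input) and recurse.
        return k(s) or match(re[1], s,
                             lambda rest: rest != s and match(re, rest, k))
    raise ValueError(tag)

def full_match(re, s):
    return match(re, s, lambda rest: rest == "")

ab_star = ("star", ("cat", ("chr", "a"), ("chr", "b")))  # (ab)*
print(full_match(ab_star, "abab"))   # True
print(full_match(ab_star, "aba"))    # False
```

This backtracking machine is the starting point; the paper's later machines remove the redundant search via pointer equality and lockstep (Thompson-style) execution.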

  16. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  17. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  18. Graph Regularized Meta-path Based Transductive Regression in Heterogeneous Information Network.

    Science.gov (United States)

    Wan, Mengting; Ouyang, Yunbo; Kaplan, Lance; Han, Jiawei

    2015-01-01

    A number of real-world networks are heterogeneous information networks, which are composed of different types of nodes and links. Numerical prediction in heterogeneous information networks is a challenging but significant area because network based information for unlabeled objects is usually limited to make precise estimations. In this paper, we consider a graph regularized meta-path based transductive regression model (Grempt), which combines the principal philosophies of typical graph-based transductive classification methods and transductive regression models designed for homogeneous networks. The computation of our method is time- and space-efficient and the precision of our model can be verified by numerical experiments.

  19. On the interplay of basis smoothness and specific range conditions occurring in sparsity regularization

    International Nuclear Information System (INIS)

    Anzengruber, Stephan W; Hofmann, Bernd; Ramlau, Ronny

    2013-01-01

    The convergence rates results in ℓ1-regularization when the sparsity assumption is narrowly missed, presented by Burger et al (2013 Inverse Problems 29 025013), are based on a crucial condition which requires that all basis elements belong to the range of the adjoint of the forward operator. Partly it was conjectured that such a condition is very restrictive. In this context, we study sparsity-promoting varieties of Tikhonov regularization for linear ill-posed problems with respect to an orthonormal basis in a separable Hilbert space using ℓ1 and sublinear penalty terms. In particular, we show that the corresponding range condition is always satisfied for all basis elements if the problems are well-posed in a certain weaker topology and the basis elements are chosen appropriately related to an associated Gelfand triple. The Radon transform, Symm’s integral equation and linear integral operators of Volterra type are examples for such behaviour, which allows us to apply convergence rates results for non-sparse solutions, and we further extend these results also to the case of non-convex ℓq-regularization with 0 < q < 1. (paper)

  20. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  1. Stargate GTM: Bridging Descriptor and Activity Spaces.

    Science.gov (United States)

    Gaspar, Héléna A; Baskin, Igor I; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre

    2015-11-23

    Predicting the activity profile of a molecule or discovering structures possessing a specific activity profile are two important goals in chemoinformatics, which could be achieved by bridging activity and molecular descriptor spaces. In this paper, we introduce the "Stargate" version of the Generative Topographic Mapping approach (S-GTM) in which two different multidimensional spaces (e.g., structural descriptor space and activity space) are linked through a common 2D latent space. In the S-GTM algorithm, the manifolds are trained simultaneously in two initial spaces using the probabilities in the 2D latent space calculated as a weighted geometric mean of probability distributions in both spaces. S-GTM has the following interesting features: (1) activities are involved during the training procedure; therefore, the method is supervised, unlike conventional GTM; (2) using molecular descriptors of a given compound as input, the model predicts a whole activity profile, and (3) using an activity profile as input, areas populated by relevant chemical structures can be detected. To assess the performance of S-GTM prediction models, a descriptor space (ISIDA descriptors) of a set of 1325 GPCR ligands was related to a B-dimensional (B = 1 or 8) activity space corresponding to pKi values for eight different targets. S-GTM outperforms conventional GTM for individual activities and performs similarly to the Lasso multitask learning algorithm, although it is still slightly less accurate than the Random Forest method.
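The combination rule at the heart of S-GTM, a normalized weighted geometric mean of the per-space probabilities, is easy to isolate. The toy below fabricates responsibilities of one sample over four latent nodes in two spaces (all numbers hypothetical; manifold fitting and EM training are omitted).

```python
import numpy as np

# Hedged toy of the S-GTM combination rule: responsibilities computed in
# two spaces are merged as a normalized weighted geometric mean.
def responsibilities(dist2, beta):
    p = np.exp(-0.5 * beta * dist2)            # Gaussian likelihood per latent node
    return p / p.sum()

def stargate_combine(p_desc, p_act, w=0.5):
    p = p_desc**w * p_act**(1.0 - w)           # weighted geometric mean
    return p / p.sum()                         # renormalize

# Squared distances of one sample to 4 latent-node centres in each space:
p1 = responsibilities(np.array([0.1, 2.0, 3.0, 4.0]), beta=1.0)  # descriptor space
p2 = responsibilities(np.array([3.0, 0.2, 3.0, 4.0]), beta=1.0)  # activity space
print(stargate_combine(p1, p2))  # mass moves to the node both spaces tolerate
```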

  2. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
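The staircase discussion presupposes how plain TV behaves, which a 1D toy makes concrete. The sketch below (hypothetical signal and parameters; gradient descent on a smoothed TV penalty rather than the paper's EIT/FEM setting) denoises a step signal:

```python
import numpy as np

# Hedged 1D TV illustration: minimize 0.5*||u - f||^2 + lam * sum(phi(diff(u)))
# with phi(d) = sqrt(d**2 + eps), a smoothed absolute value, by gradient descent.
rng = np.random.default_rng(3)
f_clean = np.r_[np.zeros(50), np.ones(50)]     # one step edge
f = f_clean + 0.1 * rng.standard_normal(100)   # noisy observation

u, lam, eps, step = f.copy(), 0.5, 1e-2, 0.05
for _ in range(3000):
    d = np.diff(u)
    g = d / np.sqrt(d**2 + eps)                # phi'(d), smoothed sign of the jump
    div = np.r_[g, 0.0] - np.r_[0.0, g]        # negative adjoint of diff
    u = u - step * ((u - f) - lam * div)       # gradient descent step

def tv(x):
    return float(np.abs(np.diff(x)).sum())     # true (unsmoothed) total variation

print(tv(f), tv(u))                            # denoised signal is much flatter
```

The smoothing parameter eps makes the penalty differentiable; genuine TV and TGV minimization use proximal or primal-dual solvers instead, as in the generalized forward-backward literature.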

  3. OpenAnalogInput(): Hybrid Spaces, Self-Making and Power in the Internet of Things

    Science.gov (United States)

    Duarte, Fernanda da Costa Portugal

    2015-01-01

    This dissertation investigates how the emergence of the Internet of Things and the embeddedness of sensors and networked connectivity onto things, physical spaces and biological bodies rearticulates embodied spaces, devises practices of self-making and forms of power in the governance of the self and society. (Abstract shortened by ProQuest.)

  4. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.

  5. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  6. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  7. Quantum effects in non-maximally symmetric spaces

    International Nuclear Information System (INIS)

    Shen, T.C.

    1985-01-01

Non-maximally symmetric spaces provide a more general background than maximally symmetric spaces for exploring the relation between the geometry of a manifold and the quantum fields defined on it. A static Taub universe is used to study the effect of curvature anisotropy on the spontaneous symmetry breaking of a self-interacting scalar field. The one-loop effective potential of a λφ⁴ field with arbitrary coupling ξ is computed by zeta-function regularization. For massless, minimally coupled scalar fields, first-order phase transitions can occur. Keeping the shape invariant but decreasing the curvature radius of the universe induces symmetry breaking. If the curvature radius is held constant, increasing deformation can restore the symmetry. Studies of higher-dimensional Kaluza-Klein theories also focus on the deformation effect. Using dimensional regularization, the effective potentials of free scalar fields in M⁴ × T^N and M⁴ × (Taub)³ spaces are obtained. The stability criteria for the static solutions of the self-consistent Einstein equations are derived. Stable solutions of the M⁴ × S^N topology do not exist. With the Taub space as the internal space, the gauge coupling constants of SU(2) and U(1) can be determined geometrically. The weak angle is therefore predicted by geometry in this model

  8. Regular black holes from semi-classical down to Planckian size

    Science.gov (United States)

    Spallucci, Euro; Smailagic, Anais

In this paper, we review various models of curvature-singularity-free black holes (BHs). In the first part of the review, we describe semi-classical solutions of the Einstein equations which, however, contain a “quantum” input through the matter source. We start by reviewing the early model by Bardeen, where the metric is regularized by hand through a short-distance cutoff, justified in terms of nonlinear electrodynamical effects. This toy model is useful for pointing out the common features shared by all regular semi-classical black holes. Then, we solve the Einstein equations with a Gaussian source encoding the quantum spread of an elementary particle. We identify the a priori arbitrary Gaussian width with the Compton wavelength of the quantum particle. This Compton-Gauss model leads to an estimate of the terminal density that a gravitationally collapsed object can achieve. We identify this density with the Planck density, and reformulate the Gaussian model assuming this as its peak density. All these models are physically reliable as long as the BH mass is large compared with the Planck mass. In the truly Planckian regime, the semi-classical approximation breaks down and a fully quantum BH description is needed. In the last part of this paper, we propose a non-geometrical quantum model of Planckian BHs implementing the Holographic Principle and realizing the “classicalization” scenario recently introduced by Dvali and collaborators. The classical relation between the mass and radius of the BH emerges only in the classical limit, far away from the Planck scale.

  9. Reversibility and the structure of the local state space

    International Nuclear Information System (INIS)

    Al-Safi, Sabri W; Richens, Jonathan

    2015-01-01

The richness of quantum theory’s reversible dynamics is one of its unique operational characteristics, with recent results suggesting deep links between the theory’s reversible dynamics, its local state space and the degree of non-locality it permits. We explore the delicate interplay between these features, demonstrating that reversibility places strong constraints on both the local and global state space. Firstly, we show that all reversible dynamics are trivial (composed of local transformations and permutations of subsystems) in maximally non-local theories whose local state spaces satisfy a dichotomy criterion; this applies to a range of operational models that have previously been studied, such as d-dimensional ‘hyperballs’ and almost all regular polytope systems. By separately deriving a similar result for odd-sided polygons, we show that classical systems are the only regular polytope state spaces whose maximally non-local composites allow for non-trivial reversible dynamics. Secondly, we show that non-trivial reversible dynamics do exist in maximally non-local theories whose state spaces are reducible into two or more smaller spaces. We conjecture that this is a necessary condition for the existence of such dynamics, but that reversible entanglement generation remains impossible even in this scenario. (paper)

  10. Feature determination from powered wheelchair user joystick input characteristics for adapting driving assistance [version 3; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Michael Gillham

    2018-05-01

Full Text Available Background: Many powered wheelchair users find that their medical condition and their ability to drive the wheelchair change over time. In order to maintain their independent mobility, the powered chair will require adjustment over time to suit the user's needs; thus regular input from healthcare professionals is required. These limited resources can result in the user having to wait weeks for appointments, losing independent mobility in the meantime, which affects their quality of life and that of their family and carers. In order to provide an adaptive assistive driving system, a range of features needs to be identified which are suitable for initial system setup and can automatically provide data for re-calibration over the long term. Methods: A questionnaire was designed to collect information from powered wheelchair users with regard to their symptoms and how they changed over time. Another group of volunteer participants was asked to drive a test platform and complete a course, which represented manoeuvring in a very confined space, as quickly as possible. Two of those participants were also monitored over a longer period in their normal daily home environment. Features thought to be suitable were examined using pattern recognition classifiers to determine their suitability for identifying the changing user input over time. Results: The results are not designed to provide absolute insight into individual user behaviour, as no ground truth of ability has been determined; they do nevertheless demonstrate the utility of the measured features to provide evidence of the users’ changing ability over time whilst driving a powered wheelchair. Conclusions: Determining the driving features and adjustable elements provides the initial step towards developing an adaptable assistive technology for the user once the ground truths of the individual and their machine have been learned by a smart pattern recognition system.

  11. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  12. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Because waveform inversion is ill-posed, obtaining reservoir changes accurately and efficiently is a great challenge, particularly when using time-lapse seismic reflection data. Regularization techniques can be used to address this ill-posedness. The regularization parameter controls the smoothness of the inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal value has to be selected. The resulting images are then a trade-off among regions with different smoothness or noise levels: over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, with the spatially-variant scheme, the target regions are well reconstructed while noise is reduced elsewhere. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing computational cost or memory requirements.
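The trade-off described above can be made concrete in a toy linear inverse problem. The sketch below is illustrative only: the blur operator, noise level and target region are invented and bear no relation to the paper's waveform-tomography setup. It compares a constant Tikhonov weight with a spatially-variant one that regularizes an assumed-known target region more weakly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D inverse problem: a Gaussian blur A applied to a model m.
n = 80
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)  # blur operator
m_true = np.zeros(n)
m_true[20:30] = 1.0                    # "target region": sharp change
m_true += 0.1 * np.sin(t / 8.0)        # smooth background
d = A @ m_true + 0.01 * rng.standard_normal(n)  # noisy data

# First-difference operator: penalizing ||L m|| enforces smoothness.
L = np.diff(np.eye(n), axis=0)

def tikhonov(A, d, L, lam):
    """Solve min ||A m - d||^2 + ||diag(lam) L m||^2 (lam may vary in space)."""
    W = np.diag(np.atleast_1d(lam) * np.ones(L.shape[0]))
    WL = W @ L
    return np.linalg.solve(A.T @ A + WL.T @ WL, A.T @ d)

# Constant regularization: one weight everywhere.
m_const = tikhonov(A, d, L, 0.5)

# Spatially-variant: weaker regularization in the (a priori known) target zone.
lam_var = np.full(L.shape[0], 0.5)
lam_var[18:30] = 0.05
m_var = tikhonov(A, d, L, lam_var)

# Fit in the target region under each scheme (values depend on the weights).
err_const = np.linalg.norm(m_const[20:30] - m_true[20:30])
err_var = np.linalg.norm(m_var[20:30] - m_true[20:30])
```

In such synthetic settings the spatially-variant weight typically preserves the sharp target while keeping the smooth background stable, and the closed-form solve shows that letting the weight vary adds essentially no cost: only the penalty rows are reweighted.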

  13. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  14. Input and execution

    International Nuclear Information System (INIS)

    Carr, S.; Lane, G.; Rowling, G.

    1986-11-01

    This document describes the input procedures, input data files and operating instructions for the SYVAC A/C 1.03 computer program. SYVAC A/C 1.03 simulates the groundwater mediated movement of radionuclides from underground facilities for the disposal of low and intermediate level wastes to the accessible environment, and provides an estimate of the subsequent radiological risk to man. (author)

  15. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  16. Longitudinal Phase Space Tomography with Space Charge

    CERN Document Server

    Hancock, S; Lindroos, M

    2000-01-01

    Tomography is now a very broad topic with a wealth of algorithms for the reconstruction of both qualitative and quantitative images. In an extension in the domain of particle accelerators, one of the simplest algorithms has been modified to take into account the non-linearity of large-amplitude synchrotron motion. This permits the accurate reconstruction of longitudinal phase space density from one-dimensional bunch profile data. The method is a hybrid one which incorporates particle tracking. Hitherto, a very simple tracking algorithm has been employed because only a brief span of measured profile data is required to build a snapshot of phase space. This is one of the strengths of the method, as tracking for relatively few turns relaxes the precision to which input machine parameters need to be known. The recent addition of longitudinal space charge considerations as an optional refinement of the code is described. Simplicity suggested an approach based on the derivative of bunch shape with the properties of...

  17. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical ℓ1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
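The core ingredient of wavelet-domain sparsity-promoting regularization, in its simplest ℓ1 form, is soft-thresholding of wavelet coefficients (the proximity operator of the ℓ1 penalty). A minimal sketch with a hand-rolled one-level orthonormal Haar transform follows; this is a toy stand-in for the paper's complex wavelet frames and its more general penalties, not the authors' algorithm:

```python
import numpy as np

def haar_1level(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    dcoef = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, dcoef

def haar_1level_inv(a, dcoef):
    """Inverse of the one-level orthonormal Haar transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + dcoef) / np.sqrt(2)
    x[1::2] = (a - dcoef) / np.sqrt(2)
    return x

def soft(u, t):
    """Soft-thresholding: proximity operator of t * ||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def shrink(x, t):
    """Promote sparsity: shrink detail coefficients, keep the approximation."""
    a, dcoef = haar_1level(x)
    return haar_1level_inv(a, soft(dcoef, t))
```

With threshold 0 the transform pair reconstructs the signal exactly; a positive threshold zeroes small detail coefficients, which is exactly the per-iteration proximal step inside iterative solvers for ℓ1-regularized reconstruction.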

  18. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  19. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  20. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  1. Topological Structures on DMC Spaces

    Directory of Open Access Journals (Sweden)

    Rajai Nasser

    2018-05-01

Full Text Available Two channels are said to be equivalent if they are degraded from each other. The space of equivalent channels with input alphabet X and output alphabet Y can be naturally endowed with the quotient of the Euclidean topology by the equivalence relation. A topology on the space of equivalent channels with fixed input alphabet X and arbitrary but finite output alphabet is said to be natural if and only if it induces the quotient topology on the subspaces of equivalent channels sharing the same output alphabet. We show that every natural topology is σ-compact, separable and path-connected. The finest natural topology, which we call the strong topology, is shown to be compactly generated, sequential and T4. On the other hand, the strong topology is not first-countable anywhere, hence it is not metrizable. We introduce a metric distance on the space of equivalent channels which compares the noise levels between channels. The induced metric topology, which we call the noisiness topology, is shown to be natural. We also study topologies that are inherited from the space of meta-probability measures by identifying channels with their Blackwell measures.

  2. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  3. Supersymmetric black holes with lens-space topology.

    Science.gov (United States)

    Kunduri, Hari K; Lucietti, James

    2014-11-21

    We present a new supersymmetric, asymptotically flat, black hole solution to five-dimensional supergravity. It is regular on and outside an event horizon of lens-space topology L(2,1). It is the first example of an asymptotically flat black hole with lens-space topology. The solution is characterized by a charge, two angular momenta, and a magnetic flux through a noncontractible disk region ending on the horizon, with one constraint relating these.

  4. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods exploit the sparse nature of the target distribution. However, in addition to sparsity, spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than a comparative ℓ1 minimization method in both spatial aggregation and location accuracy.
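A minimal sketch of a joint ℓ1 + Laplacian-regularized reconstruction via proximal-gradient (ISTA-style) iterations; the system matrix, graph and parameter values below are invented for illustration and this is not the authors' FMT model or their Barzilai-Borwein variant:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small FMT-like linear model: measurements y = A x, x sparse.
m_meas, n_vox = 40, 100
A = rng.standard_normal((m_meas, n_vox))
x_true = np.zeros(n_vox)
x_true[40:44] = 1.0                 # spatially clustered target
y = A @ x_true

# Graph Laplacian of a 1-D chain: encourages neighbouring voxels to agree.
Lap = 2 * np.eye(n_vox) - np.eye(n_vox, k=1) - np.eye(n_vox, k=-1)

lam1, lam2 = 0.05, 0.5              # ell_1 and Laplacian weights

def soft(u, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# Proximal gradient on  0.5||A x - y||^2 + lam2 * x^T Lap x + lam1 * ||x||_1:
# gradient step on the smooth part, soft-threshold for the ell_1 part.
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2 * lam2 * np.linalg.norm(Lap, 2))
x = np.zeros(n_vox)
for _ in range(500):
    grad = A.T @ (A @ x - y) + 2 * lam2 * (Lap @ x)
    x = soft(x - step * grad, step * lam1)
```

The step size is the reciprocal of a Lipschitz bound for the smooth part, which guarantees monotone descent of the joint objective; the Laplacian term pulls neighbouring entries together while the ℓ1 term keeps the estimate sparse.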

  5. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...

  6. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and in the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
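The stated relation can be turned into a fixed-point iteration for the hyper-parameter. The sketch below does this for the simplest case, ridge regression with a Gaussian prior and regularizer Ω(w) = ½‖w‖²; it is an illustrative reading of the result on synthetic data, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ridge setup: y = X w + noise, with known noise precision beta.
n, d = 50, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
beta = 25.0                                   # noise precision (sigma = 0.2)
y = X @ w_true + rng.normal(scale=1 / np.sqrt(beta), size=n)

# Gaussian prior w ~ N(0, alpha^{-1} I), regularizer Omega(w) = ||w||^2 / 2.
# Prior expectation of Omega is d/(2 alpha); posterior expectation is
# (||m||^2 + tr(Sigma))/2. Equating them gives the fixed point
#   alpha = d / (||m||^2 + tr(Sigma)),
# which is iterated below.
alpha = 1.0
for _ in range(200):
    Sigma = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)  # posterior cov
    m = beta * Sigma @ X.T @ y                                 # posterior mean
    alpha = d / (m @ m + np.trace(Sigma))

prior_expect = d / (2 * alpha)
post_expect = 0.5 * (m @ m + np.trace(Sigma))
```

At convergence the two expectations coincide, which is exactly the stated optimality condition; the update itself has the familiar form of an EM-style hyper-parameter re-estimation in Bayesian linear regression.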

  7. Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; AbdulJabbar, Mustafa Abdulmajeed

    2012-01-01

Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas, such as pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank non-negative matrices that define a parts-based, linear representation of non-negative data. Recently, Graph regularized NMF (GrNMF) has been proposed to find a compact representation which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea which engages a Multiple Kernel Learning approach in refining the graph structure that reflects the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by the kernel learning, and a novel kernel learning method is then introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF and SVD.
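The GrNMF building block that the kernel-learning step refines can be sketched with the standard multiplicative updates for ‖X − UVᵀ‖² + λ Tr(Vᵀ(D − W)V). This is plain GrNMF on synthetic data, with a chain-graph affinity standing in for a learned kernel matrix; it is not the proposed MKL refinement:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical non-negative data: 30 features x 60 samples.
X = np.abs(rng.standard_normal((30, 60)))

# Affinity graph W on samples (chain graph as a simple stand-in for a kNN
# or kernel affinity), with degree matrix D; the Laplacian is D - W.
n = X.shape[1]
W = np.eye(n, k=1) + np.eye(n, k=-1)
D = np.diag(W.sum(axis=1))
lam, k, eps = 10.0, 5, 1e-9

U = np.abs(rng.standard_normal((30, k)))   # basis (features x k)
V = np.abs(rng.standard_normal((n, k)))    # coefficients (samples x k)

def objective(U, V):
    R = X - U @ V.T
    return np.sum(R * R) + lam * np.trace(V.T @ (D - W) @ V)

obj0 = objective(U, V)

# Multiplicative updates in the GrNMF style: non-negativity is preserved
# because every factor in the updates is non-negative.
for _ in range(100):
    U *= (X @ V) / (U @ (V.T @ V) + eps)
    V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)

obj1 = objective(U, V)
```

The graph term rewards coefficient vectors that vary smoothly over the affinity graph, which is what lets a refined (kernel-learned) graph improve the factorization.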

  8. Water Residence Time estimation by 1D deconvolution in the form of a l2 -regularized inverse problem with smoothness, positivity and causality constraints

    Science.gov (United States)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

The Water Residence Time distribution is the equivalent of the impulse response of a linear system describing the propagation of water through a medium, e.g. the propagation of rain water from the top of a mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting from an initial aquifer base level. Estimating the Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and of the mixing properties of wetlands used as filters in ecological applications, as well as for protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time rely on cross-correlation, parameter fitting and non-parametric deconvolution. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse-problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach for automatically finding a suitable value of the regularization parameter from the input data alone. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
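A minimal sketch of the full-deconvolution idea on synthetic data: causality is built into the convolution matrix (the response is supported on t ≥ 0), smoothness enters as an ℓ2 penalty on first differences, and positivity is enforced by projection. All signals and parameter values are invented, and the base level is omitted for simplicity; this is not the authors' algorithm or their automatic parameter selection:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic example: sparse rain events convolved with an exponential
# residence-time curve, plus measurement noise.
T, K = 200, 40
rain = rng.random(T) * (rng.random(T) < 0.3)
h_true = np.exp(-np.arange(K) / 8.0)
h_true /= h_true.sum()
level = np.convolve(rain, h_true)[:T] + 0.01 * rng.standard_normal(T)

# Causal convolution matrix: level ~ R @ h, with h supported on t >= 0 only.
R = np.zeros((T, K))
for i in range(K):
    R[i:, i] = rain[: T - i]

Dmat = np.diff(np.eye(K), axis=0)   # first differences -> smoothness penalty
lam = 1.0                           # the single regularization parameter

# Projected gradient on 0.5||R h - level||^2 + 0.5*lam*||D h||^2 with h >= 0;
# the max(., 0) projection enforces positivity at every iteration.
step = 1.0 / (np.linalg.norm(R, 2) ** 2 + lam * np.linalg.norm(Dmat, 2) ** 2)
h = np.zeros(K)
for _ in range(2000):
    grad = R.T @ (R @ h - level) + lam * (Dmat.T @ (Dmat @ h))
    h = np.maximum(h - step * grad, 0.0)
```

The single parameter `lam` plays exactly the balancing role described in the abstract: larger values smooth the recovered residence-time curve, smaller values fit the measurements more closely.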

  9. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    Study investigated why teachers exclude children with intellectual disability from the regular classrooms in Nigeria. Participants were, 169 regular teachers randomly selected from Oyo and Ogun states. Questionnaire was used to collect data result revealed that 57.4% regular teachers could not cope with children with ID ...

  10. User input verification and test driven development in the NJOY21 nuclear data processing code

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCartney, Austin Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-21

Before physically-meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g. cross sections and angular distributions). Perhaps the most popular and widely-trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970s. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides validation of user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove more intuitive and easier to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document serves as a discussion of the current state of input checking and testing practices in NJOY21.
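The kind of immediate, helpful input validation described can be sketched as follows; the card name, field names and ranges here are entirely hypothetical and are not NJOY21's actual input schema or API:

```python
# Minimal sketch of card-style input validation with immediate feedback.
# The spec maps each field to (required?, min, max); all names are invented.
def validate_card(name, fields, spec):
    """Check required fields and numeric ranges; return a list of messages."""
    errors = []
    for key, (required, lo, hi) in spec.items():
        if key not in fields:
            if required:
                errors.append(f"{name}: missing required field '{key}'")
            continue
        value = fields[key]
        if not (lo <= value <= hi):
            errors.append(f"{name}: '{key}'={value} outside [{lo}, {hi}]")
    return errors

# Hypothetical spec for a hypothetical card: (required?, min, max).
spec = {"mat": (True, 1, 9999), "temp": (False, 0.0, 5000.0)}

# Two messages come back: 'mat' is missing, 'temp' is out of range.
errors = validate_card("reconr", {"temp": 6000.0}, spec)
```

Reporting every problem at once, with the offending field and the allowed range spelled out, is what makes such validation "rapid and helpful" rather than failing on the first bad value deep inside a processing run; it is also easy to cover exhaustively with unit tests, in keeping with the strictly-increasing-coverage policy described above.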

  11. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  12. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  13. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

    Adam ePonzi

    2012-03-01

Full Text Available The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of the MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long, behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network-generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behaviour.

  14. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.

  15. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.

  16. Critical spaces for quasilinear parabolic evolution equations and applications

    Science.gov (United States)

    Prüss, Jan; Simonett, Gieri; Wilke, Mathias

    2018-02-01

    We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces in case that the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.
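
    As a concrete illustration of the scaling-invariance notion mentioned above, consider the classical Navier-Stokes case (a standard textbook example, not a statement about the paper's general quasilinear setting):

```latex
% If u solves the Navier-Stokes equations, so does the parabolic rescaling
u_\lambda(t,x) = \lambda\, u(\lambda^{2} t, \lambda x), \qquad \lambda > 0.
% A space X of initial data is scaling invariant (critical) when
\|u_\lambda(0,\cdot)\|_{X} = \|u(0,\cdot)\|_{X}
\quad \text{for all } \lambda > 0,
% as holds, for instance, for X = L^{3}(\mathbb{R}^{3})
% or X = \dot{H}^{1/2}(\mathbb{R}^{3}).
```

    The paper's contribution is that its maximal-regularity notion of critical space reproduces exactly such scaling-invariant spaces whenever the equation has a scaling.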

  17. Regularity for 3D Navier-Stokes equations in terms of two components of the vorticity

    Directory of Open Access Journals (Sweden)

    Sadek Gala

    2010-10-01

    Full Text Available We establish regularity conditions for the 3D Navier-Stokes equations via two components of the vorticity vector. It is known that if a Leray-Hopf weak solution $u$ satisfies $$\tilde{\omega}\in L^{2/(2-r)}(0,T;L^{3/r}(\mathbb{R}^3))\quad\hbox{with } 0<r<1,$$ then the solution is regular. We prove the regularity of a Leray-Hopf weak solution $u$ under each of the following two (weaker) conditions: $$\tilde{\omega}\in L^{2/(2-r)}(0,T;\dot{\mathcal{M}}_{2,3/r}(\mathbb{R}^3))\quad\hbox{for } 0<r<1, \quad\dots$$ where $\dot{\mathcal{M}}_{2,3/r}(\mathbb{R}^3)$ is the Morrey-Campanato space. Since $L^{3/r}(\mathbb{R}^3)$ is a proper subspace of $\dot{\mathcal{M}}_{2,3/r}(\mathbb{R}^3)$, our regularity criterion improves the results in Chae-Choe [5].

  18. SSYST-3. Input description

    International Nuclear Information System (INIS)

    Meyder, R.

    1983-12-01

    The code system SSYST-3 is designed to analyse the thermal and mechanical behaviour of a fuel rod during a LOCA. The report contains a complete input-list for all modules and several tested inputs for a LOCA analysis. (orig.)

  19. Input current interharmonics in adjustable speed drives caused by fixed-frequency modulation techniques

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Loh, Poh Chiang

    2016-01-01

    Adjustable Speed Drives (ASDs) based on double-stage conversion systems may inject interharmonic distortion into the grid, other than the well-known characteristic harmonic components. The problems created by interharmonics make it necessary to find their precise sources and to adopt an appropriate strategy for minimizing their effects. This paper investigates the ASD's input current interharmonic sources caused by applying symmetrical regularly sampled fixed-frequency modulation techniques on the inverter. The interharmonics generation process is precisely formulated and comparative results...

  20. Material input of nuclear fuel

    International Nuclear Information System (INIS)

    Rissanen, S.; Tarjanne, R.

    2001-01-01

    The Material Input (MI) of nuclear fuel, expressed in terms of the total amount of natural material needed for manufacturing a product, is examined. The suitability of the MI method for assessing the environmental impacts of fuels is also discussed. Material input is expressed as a Material Input Coefficient (MIC), equal to the total mass of natural material divided by the mass of the completed product. The material input coefficient is, however, only an intermediate result, which should not be used as such for the comparison of different fuels, because the energy content of nuclear fuel is about 100 000-fold compared to the energy content of fossil fuels. As a final result, the material input is expressed in proportion to the amount of generated electricity, which is called MIPS (Material Input Per Service unit). Material input is a simplified and commensurable indicator for the use of natural material, but because it does not take into account the harmfulness of materials or the way the residual material is processed, it does not alone express the amount of environmental impacts. The examination of the mere amount does not differentiate between, for example, coal, natural gas or waste rock, which usually contains just sand. Natural gas is, however, substantially more harmful to the ecosystem than sand. Therefore, other methods should also be used to assess the environmental load of a product. The material input coefficient of nuclear fuel is calculated using data from different types of mines. The calculations are made, among other things, using the data of an open-pit mine (Key Lake, Canada), an underground mine (McArthur River, Canada) and a by-product mine (Olympic Dam, Australia). Furthermore, the coefficient is calculated for nuclear fuel corresponding to the nuclear fuel supply of the Teollisuuden Voima (TVO) company in 2001. Because there is some uncertainty in the initial data, the inaccuracy of the final results can be as much as 20-50 per cent.
The value
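
    The MIC and MIPS quantities defined above reduce to simple ratios. The following sketch uses invented round numbers (not figures from the study) purely to show the arithmetic.

```python
# Hypothetical illustration of MIC and MIPS (numbers are invented):
#   MIC  = total natural material moved / mass of finished fuel
#   MIPS = total natural material moved / generated electricity
ore_and_overburden_kg = 1.5e6   # assumed natural material for one fuel batch
fuel_mass_kg = 500.0            # assumed mass of the finished fuel
electricity_kwh = 2.0e8         # assumed electricity generated from that batch

mic = ore_and_overburden_kg / fuel_mass_kg       # kg material per kg fuel
mips = ore_and_overburden_kg / electricity_kwh   # kg material per kWh

print(mic)              # 3000.0
print(round(mips, 6))   # 0.0075
```

    Note how MIPS, not MIC, is the comparable figure across fuels, since it folds in the roughly 100 000-fold difference in energy content mentioned above.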

  1. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    NARCIS (Netherlands)

    Leus, G.; Petré, F.; Moonen, M.

    2004-01-01

    In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input

  2. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM from a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  3. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  4. Chemical sensors are hybrid-input memristors

    Science.gov (United States)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, with the input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that with respect to the hybrid input, the sensor exhibits features common to memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.

  5. Declaration of input sources in scientific research: should this practice be incorporated to organizational information management?

    Directory of Open Access Journals (Sweden)

    José Osvaldo De Sordi

    Full Text Available This research studies the declaration of input sources for research in scientific communications, more specifically, whether this practice of the academy may be considered a good example to be followed by organizations. Seven hypotheses address two dimensions of input sources: origin (primary or secondary) and nature (data or information). It appears that the declaration of research inputs in the academy is problematic, mostly incomplete or inaccurate. This does not reduce the importance of this practice; it simply indicates that the academy should not be considered a privileged space, with wide dominance and practice excellence. Nevertheless, the information environment of organizations can learn and benefit from the experience of the scientific academy. From the analyses of the research sample, a set of procedures has been developed, which allows organizational analysts and researchers to elaborate a complete and accurate analysis of the input sources to be declared in organizational or scientific communication.

  6. Point interactions of the dipole type defined through a three-parametric power regularization

    International Nuclear Information System (INIS)

    Zolotaryuk, A V

    2010-01-01

    A family of point interactions of the dipole type is studied in one dimension using a regularization by rectangles in the form of a barrier and a well separated by a finite distance. The rectangles and the distance are parametrized by a squeezing parameter ε → 0 with three powers μ, ν and τ describing the squeezing rates for the barrier, the well and the distance, respectively. This parametrization allows us to construct a whole family of point potentials of the dipole type including some other point interactions, such as δ-potentials. Varying the power τ, it is possible to obtain in the zero-range limit the following two cases: (i) the limiting δ'-potential is opaque (the conventional result obtained earlier by some authors) or (ii) this potential admits a resonant tunneling (the opposite result obtained recently by other authors). The structure of resonances (if any) also depends on a regularizing sequence. The sets of the {μ, ν, τ}-space where a non-zero (resonant or non-resonant) transmission occurs are found. For all these cases in the zero-range limit the transfer matrix is shown to involve real parameters χ and g depending on a regularizing sequence. Those cases when χ ≠ 1 and g ≠ 0 mean that the corresponding δ'-potential is accompanied by an effective δ-potential.

  7. Decay property of regularity-loss type of solutions in elastic solids with voids

    KAUST Repository

    Said-Houari, Belkacem; Messaoudi, Salim A.

    2013-01-01

    In this article, we consider two porous systems of nonclassical thermoelasticity in the whole real line. We discuss the long-time behaviour of the solutions in the presence of a strong damping acting, together with the heat effect, on the elastic equation and establish several decay results. Those decay results are shown to be very slow and of regularity-loss type. Some improvements of the decay rates have also been given, provided that the initial data belong to some weighted spaces. © 2013 Copyright Taylor and Francis Group, LLC.

  8. Decay property of regularity-loss type of solutions in elastic solids with voids

    KAUST Repository

    Said-Houari, Belkacem

    2013-12-01

    In this article, we consider two porous systems of nonclassical thermoelasticity in the whole real line. We discuss the long-time behaviour of the solutions in the presence of a strong damping acting, together with the heat effect, on the elastic equation and establish several decay results. Those decay results are shown to be very slow and of regularity-loss type. Some improvements of the decay rates have also been given, provided that the initial data belong to some weighted spaces. © 2013 Copyright Taylor and Francis Group, LLC.

  9. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg

  10. Stickiness in Hamiltonian systems: From sharply divided to hierarchical phase space

    Science.gov (United States)

    Altmann, Eduardo G.; Motter, Adilson E.; Kantz, Holger

    2006-02-01

    We investigate the dynamics of chaotic trajectories in simple yet physically important Hamiltonian systems with nonhierarchical borders between regular and chaotic regions with positive measures. We show that the stickiness to the border of the regular regions in systems with such a sharply divided phase space occurs through one-parameter families of marginally unstable periodic orbits and is characterized by an exponent γ=2 for the asymptotic power-law decay of the distribution of recurrence times. Generic perturbations lead to systems with hierarchical phase space, where the stickiness is apparently enhanced due to the presence of infinitely many regular islands and Cantori. In this case, we show that the distribution of recurrence times can be composed of a sum of exponentials or a sum of power laws, depending on the relative contribution of the primary and secondary structures of the hierarchy. Numerical verification of our main results are provided for area-preserving maps, mushroom billiards, and the newly defined magnetic mushroom billiards.
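
    The sticking of chaotic trajectories near regular islands can be observed in a minimal experiment with the Chirikov standard map, a textbook area-preserving map. The map itself, the kicking strength K = 2.5, the initial condition near the unstable fixed point, and the recurrence region p < 0.5 are all assumptions of this sketch, not choices made in the paper.

```python
import math

def standard_map(theta, p, K):
    """One iteration of the Chirikov standard map (area-preserving)."""
    p = (p + K * math.sin(theta)) % (2.0 * math.pi)
    theta = (theta + p) % (2.0 * math.pi)
    return theta, p

# Collect recurrence times to a slab of phase space; long recurrences arise
# when the orbit sticks near the borders of regular islands.
K = 2.5                      # assumed kicking strength (illustrative)
theta, p = 0.01, 0.01        # start in the chaotic layer near (0, 0)
recurrences, last = [], 0
for n in range(1, 200_000):
    theta, p = standard_map(theta, p, K)
    if p < 0.5:              # assumed "recurrence region" of this sketch
        recurrences.append(n - last)
        last = n

print(len(recurrences), max(recurrences))
```

    A histogram of `recurrences` on log-log axes is the standard way to read off the power-law exponent discussed in the abstract.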

  11. ℓ1/2-norm regularized nonnegative low-rank and sparse affinity graph for remote sensing image segmentation

    Science.gov (United States)

    Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan

    2016-10-01

    Segmentation of real-world remote sensing images is a challenge due to the complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noise. An ℓ1/2-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract the local histogram features. Then, an ℓ1/2-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct an ℓ1/2-regularized nonnegative low-rank and sparse graph (LNNLRS-graph) by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove noise and corrupted data. Last, we introduce the LNNLRS-graph into graph-regularized nonnegative matrix factorization to enhance the segmentation accuracy. The experimental results using remote sensing images show that when compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
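
    The sparsity-promoting behaviour of an ℓ1/2 penalty, relative to ℓ1, can be seen already in the scalar proximal problem. The brute-force grid search below is an illustration only, not the LNNLRS construction of the paper.

```python
# Sketch: scalar proximal problems for two penalties,
#   min_x 0.5*(x - y)^2 + lam * |x|        (l1)
#   min_x 0.5*(x - y)^2 + lam * |x|**0.5   (l1/2)
# solved by brute-force grid search (illustration only).

def prox_grid(y, lam, power, n=200001, span=5.0):
    xs = (span * (2 * i / (n - 1) - 1) for i in range(n))  # grid on [-5, 5]
    return min(xs, key=lambda x: 0.5 * (x - y) ** 2 + lam * abs(x) ** power)

lam = 0.5
small, large = 0.3, 3.0
x1_small, xhalf_small = prox_grid(small, lam, 1.0), prox_grid(small, lam, 0.5)
x1_large, xhalf_large = prox_grid(large, lam, 1.0), prox_grid(large, lam, 0.5)

# Both penalties zero out the small input; l1 shrinks the large input by lam
# (3.0 -> 2.5), while l1/2 shrinks it less (weaker bias on large entries).
print(x1_small, xhalf_small, x1_large, round(xhalf_large, 3))
```

    This weaker bias on large coefficients, combined with aggressive zeroing of small ones, is what makes ℓ1/2-type penalties attractive for sparse graph construction.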

  12. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  13. Analysis in Banach spaces

    CERN Document Server

    Hytönen, Tuomas; Veraar, Mark; Weis, Lutz

    The present volume develops the theory of integration in Banach spaces, martingales and UMD spaces, and culminates in a treatment of the Hilbert transform, Littlewood-Paley theory and the vector-valued Mihlin multiplier theorem. Over the past fifteen years, motivated by regularity problems in evolution equations, there has been tremendous progress in the analysis of Banach space-valued functions and processes. The contents of this extensive and powerful toolbox have been mostly scattered around in research papers and lecture notes. Collecting this diverse body of material into a unified and accessible presentation fills a gap in the existing literature. The principal audience that we have in mind consists of researchers who need and use Analysis in Banach Spaces as a tool for studying problems in partial differential equations, harmonic analysis, and stochastic analysis. Self-contained and offering complete proofs, this work is accessible to graduate students and researchers with a background in functional an...

  14. Ombud's corner: space invaders

    CERN Multimedia

    Sudeshna Datta-Cockerill

    2015-01-01

    When normal communication breaks down and there is no sharing anymore, office-mates can become ‘space invaders’. Very often, the situation can be resolved effectively by taking just a few simple steps...   The lack of office space at CERN is a permanent issue that the various departments regularly have to address. As a result, very often this precious space where we spend the entire day has to be shared with other colleagues. Office-mates may come from different backgrounds and cultures and may have very different habits and behaviours; they may also have different activities during the day, sometimes requiring unusual, (perhaps even strange?) interactions with the space they occupy; finally, their presence might be irregular, making it very difficult for us to establish a stable relationship. Mark and Claire share an office as well as some professional activities. In the beginning, the relationship seems to work normally but, over time, the communication between them ste...

  15. Space-Time Discrete KPZ Equation

    Science.gov (United States)

    Cannizzaro, G.; Matetski, K.

    2018-03-01

    We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.
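
    For orientation, a naive explicit finite-difference space-time discretization of the KPZ equation looks as follows. This is a standard textbook scheme, not necessarily a member of the family analysed in the paper, which also allows non-standard discretizations of the nonlinear product:

```latex
\partial_t h = \nu\,\partial_x^{2} h + \lambda\,(\partial_x h)^{2} + \xi,
\qquad
h_i^{n+1} = h_i^{n} + \Delta t\left[
\nu\,\frac{h_{i+1}^{n} - 2h_i^{n} + h_{i-1}^{n}}{(\Delta x)^{2}}
+ \lambda\left(\frac{h_{i+1}^{n} - h_{i-1}^{n}}{2\Delta x}\right)^{2}
+ \frac{\xi_i^{n}}{\sqrt{\Delta t\,\Delta x}}
\right],
```

    with the $\xi_i^{n}$ i.i.d. standard Gaussians approximating space-time white noise; the subtle point addressed by regularity structures is that the squared-gradient term requires renormalization in the continuum limit.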

  16. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  17. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  18. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.

  19. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  20. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurements of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not eat breakfast regularly were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
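
    With a single binary predictor and no covariates, the regression coefficient on a log-transformed outcome is just the difference of group means of the logs. The sketch below uses invented numbers, not the study's data, to show why a negative beta corresponds to lower B-Pb in the regular-breakfast group.

```python
import math

# Synthetic illustration (invented B-Pb values in ug/dL, not the study's data).
regular = [5.8, 6.1, 6.4, 5.9, 6.3]
irregular = [7.0, 7.4, 6.9, 7.3, 7.1]

def mean_log(xs):
    """Mean of the natural-log-transformed values."""
    return sum(math.log(x) for x in xs) / len(xs)

# Unadjusted regression slope for a 0/1 breakfast indicator:
beta = mean_log(regular) - mean_log(irregular)
print(beta < 0)  # regular breakfast associated with lower log B-Pb
```

    In the actual study the coefficient is adjusted for covariates, so it is not literally this difference of means, but the sign has the same interpretation.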

  1. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R-circunflex-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  2. CMOS single-stage input-powered bridge rectifier with boost switch and duty cycle control

    Science.gov (United States)

    Radzuan, Roskhatijah; Mohd Salleh, Mohd Khairul; Hamzah, Mustafar Kamal; Ab Wahab, Norfishah

    2017-06-01

    This paper presents a single-stage input-powered bridge rectifier with a boost switch for wireless-powered devices such as biomedical implants and wireless sensor nodes. Realised using CMOS process technology, it employs duty-cycle switch control to achieve a high output voltage using the boost technique, leading to high output power conversion. It has only six external connections with the boost inductance. The input frequency of the bridge rectifier is set at 50 Hz, while the switching frequency is 100 kHz. The proposed circuit is fabricated on a single 0.18-micron CMOS die with an area of 0.024 mm². The simulated and measured results show good agreement.
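
    The boost technique referred to above obeys, in the ideal continuous-conduction lossless case, the textbook voltage-conversion relation V_out = V_in / (1 - D). The sketch below encodes this generic relation, not the specific duty-cycle controller of the paper.

```python
def boost_vout(vin, duty):
    """Ideal boost converter output: V_out = V_in / (1 - D), 0 <= D < 1."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return vin / (1.0 - duty)

# Raising the duty cycle raises the conversion ratio (ideal, lossless case).
print(boost_vout(1.2, 0.5))   # 2.4
print(boost_vout(1.2, 0.75))  # 4.8
```

    Real rectifier designs deviate from this ideal curve through switch and diode losses, which is why the paper compares simulated and measured results.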

  3. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2009-08-01

    Full Text Available Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author's own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author's Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  4. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2010-08-01

    Full Text Available Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author's own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author's Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  5. Regularity for a clamped grid equation $u_{xxxx}+u_{yyyy}=f $ on a domain with a corner

    Directory of Open Access Journals (Sweden)

    Tymofiy Gerasimov

    2009-04-01

    Full Text Available The operator $L=\frac{\partial^{4}}{\partial x^{4}}+\frac{\partial^{4}}{\partial y^{4}}$ appears in a model for the vertical displacement of a two-dimensional grid that consists of two perpendicular sets of elastic fibers or rods. We are interested in the behaviour of such a grid that is clamped at the boundary, and more specifically near a corner of the domain. Kondratiev supplied the appropriate setting in the sense of Sobolev-type spaces tailored to find the optimal regularity. Inspired by the Laplacian and Bilaplacian models, one expects, except maybe for some special angles, that the optimal regularity improves when the angle decreases. For the homogeneous Dirichlet problem with this special non-isotropic fourth-order operator such a result does not hold true. We will show the existence of an interval $(\frac{1}{2}\pi,\omega_{\star})$, $\omega_{\star}/\pi\approx 0.528\dots$ (in degrees $\omega_{\star}\approx 95.1\dots^{\circ}$), in which the optimal regularity improves with increasing opening angle.

  6. Predictive features of persistent activity emergence in regular spiking and intrinsic bursting model neurons.

    Directory of Open Access Journals (Sweden)

    Kyriaki Sidiropoulou

    Full Text Available Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC), which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms which PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type-specific differences in persistent activity properties, by implementing a regular spiking (RS) and an intrinsic bursting (IB) model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP) latency and the first few inter-spike intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity, were distinct. Collectively, our results pinpoint specific features of the neuronal response to a given
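
    One response feature mentioned above, action potential latency, already discriminates input strength in the simplest possible neuron model. The leaky integrate-and-fire sketch below (with arbitrary parameter values) is an illustration of that idea, not the paper's compartmental model.

```python
# Minimal leaky integrate-and-fire sketch: first-spike latency shortens as
# the input grows, the kind of response feature the study uses to
# discriminate inputs. All parameter values are illustrative assumptions.
def first_spike_latency(i_input, tau=0.02, v_th=1.0, dt=1e-4, t_max=1.0):
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v + i_input) / tau   # Euler step of dv/dt = (-v + I)/tau
        t += dt
        if v >= v_th:
            return t                     # time of the first spike, seconds
    return None                          # subthreshold input: no spike

weak, strong = first_spike_latency(1.5), first_spike_latency(3.0)
print(weak is not None and strong is not None and strong < weak)
```

    A downstream readout that thresholds on latency could therefore classify inputs within the first few milliseconds, before any sustained firing develops.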

  7. Regular-chaos transition of the energy spectrum and electromagnetic transition intensities in 44V nucleus using the framework of the nuclear shell model

    International Nuclear Information System (INIS)

    Hamoudi, A.K.; Abdul Majeed Al-Rahmani, A.

    2012-01-01

    The spectral fluctuations and the statistics of electromagnetic transition intensities and electromagnetic moments in the 44V nucleus are studied within the framework of the interacting shell model, using the FPD6 as a realistic effective interaction in the isospin formalism for four particles moving in the fp model space with a 40Ca core. To look for a regular-chaos transition in 44V, we perform shell model calculations with various interaction strengths β applied to the off-diagonal matrix elements of the FPD6. The nearest-neighbors level spacing distribution P(s) and the distribution of electromagnetic transition intensities [such as B(M1) and B(E2) transitions] are found to exhibit regular dynamics at β=0, chaotic dynamics at β⩾0.3, and an intermediate situation at 0<β<0.3. For the Δ3 statistic, we have found regular dynamics at β=0, chaotic dynamics at β⩾0.4, and an intermediate situation at 0<β<0.4. It is also found that the statistics of the squares of the M1 and E2 moments, which are consistent with a Porter-Thomas distribution, have no dependence on the interaction strength β.
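
    The regular-chaos diagnostic used here, comparing the nearest-neighbors spacing distribution P(s) against the Poisson (regular) and Wigner (chaotic) limits, can be sketched numerically. A minimal illustration on a GOE-like random matrix standing in for the shell-model Hamiltonian; the crude global unfolding (normalizing to unit mean spacing) is a simplifying assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Symmetric (GOE-like) random matrix as a stand-in for a chaotic Hamiltonian.
    n = 400
    a = rng.normal(size=(n, n))
    h = (a + a.T) / 2
    levels = np.sort(np.linalg.eigvalsh(h))

    # Nearest-neighbor spacings, normalized to unit mean (a crude "unfolding").
    s = np.diff(levels)
    s /= s.mean()

    # Reference distributions used to diagnose regular vs. chaotic dynamics.
    def poisson(x):          # regular (integrable) limit
        return np.exp(-x)

    def wigner(x):           # chaotic (GOE) limit, Wigner surmise
        return (np.pi / 2) * x * np.exp(-np.pi * x**2 / 4)
    ```

    A Poisson-like P(s) peaks at s = 0, while the Wigner surmise vanishes there (level repulsion); that difference is the signature used to place the dynamics on the regular-chaos axis as β varies.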

  8. Grey-box state-space identification of nonlinear mechanical vibrations

    Science.gov (United States)

    Noël, J. P.; Schoukens, J.

    2018-05-01

    The present paper deals with the identification of nonlinear mechanical vibrations. A grey-box, or semi-physical, nonlinear state-space representation is introduced, expressing the nonlinear basis functions using a limited number of measured output variables. This representation assumes that the observed nonlinearities are localised in physical space, which is a generic case in mechanics. A two-step identification procedure is derived for the grey-box model parameters, integrating nonlinear subspace initialisation and weighted least-squares optimisation. The complete procedure is applied to an electrical circuit mimicking the behaviour of a single-input, single-output (SISO) nonlinear mechanical system and to a single-input, multiple-output (SIMO) geometrically nonlinear beam structure.

  9. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
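
    As a point of reference, the ℓ2-regularized least-squares problem that the BPR solution converges to has a simple closed form. A minimal sketch; this is ordinary Tikhonov/ridge estimation, not the BPR parameter-selection algorithm itself, and all names and sizes are illustrative:

    ```python
    import numpy as np

    def ridge_solution(A, y, lam):
        """Closed-form solution of min_x ||A x - y||^2 + lam * ||x||^2,
        i.e. the normal equations (A^T A + lam * I) x = A^T y."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    # Toy problem: a well-posed linear model with mild noise.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(20, 5))
    x_true = rng.normal(size=5)
    y = A @ x_true + 0.01 * rng.normal(size=20)

    x_hat = ridge_solution(A, y, lam=0.1)
    ```

    Increasing lam shrinks the estimate toward zero; choosing lam well, which is the subject of the paper, trades this bias against variance.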

  10. Ad Hoc Physical Hilbert Spaces in Quantum Mechanics

    Czech Academy of Sciences Publication Activity Database

    Fernandez, F. M.; Garcia, J.; Semorádová, Iveta; Znojil, Miloslav

    2015-01-01

    Roč. 54, č. 12 (2015), s. 4187-4203 ISSN 0020-7748 Institutional support: RVO:61389005 Keywords : quantum mechanics * physical Hilbert spaces * ad hoc inner product * singular potentials regularized * low lying energies Subject RIV: BE - Theoretical Physics Impact factor: 1.041, year: 2015

  11. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. MDS MIC Catalog Inputs

    Science.gov (United States)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  13. Dynamical tunneling in systems with a mixed phase space

    International Nuclear Information System (INIS)

    Loeck, Steffen

    2010-01-01

    Tunneling is one of the most prominent features of quantum mechanics. While the tunneling process in one-dimensional integrable systems is well understood, its quantitative prediction for systems with a mixed phase space is a long-standing open challenge. In such systems regions of regular and chaotic dynamics coexist in phase space, which are classically separated but quantum mechanically coupled by the process of dynamical tunneling. We derive a prediction of dynamical tunneling rates which describe the decay of states localized inside the regular region towards the so-called chaotic sea. This approach uses a fictitious integrable system which mimics the dynamics inside the regular domain and extends it into the chaotic region. Excellent agreement with numerical data is found for kicked systems, billiards, and optical microcavities, if nonlinear resonances are negligible. Semiclassically, however, such nonlinear resonance chains dominate the tunneling process. Hence, we combine our approach with an improved resonance-assisted tunneling theory and derive a unified prediction which is valid from the quantum to the semiclassical regime. We obtain results which show a drastically improved accuracy of several orders of magnitude compared to previous studies. (orig.)

  14. Dynamical tunneling in systems with a mixed phase space

    Energy Technology Data Exchange (ETDEWEB)

    Loeck, Steffen

    2010-04-22

    Tunneling is one of the most prominent features of quantum mechanics. While the tunneling process in one-dimensional integrable systems is well understood, its quantitative prediction for systems with a mixed phase space is a long-standing open challenge. In such systems regions of regular and chaotic dynamics coexist in phase space, which are classically separated but quantum mechanically coupled by the process of dynamical tunneling. We derive a prediction of dynamical tunneling rates which describe the decay of states localized inside the regular region towards the so-called chaotic sea. This approach uses a fictitious integrable system which mimics the dynamics inside the regular domain and extends it into the chaotic region. Excellent agreement with numerical data is found for kicked systems, billiards, and optical microcavities, if nonlinear resonances are negligible. Semiclassically, however, such nonlinear resonance chains dominate the tunneling process. Hence, we combine our approach with an improved resonance-assisted tunneling theory and derive a unified prediction which is valid from the quantum to the semiclassical regime. We obtain results which show a drastically improved accuracy of several orders of magnitude compared to previous studies. (orig.)

  15. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  16. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.

  17. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  18. Michelson interferometer with separated inputs and outputs, double pass, and compensation

    Science.gov (United States)

    Mather, J. C.; Jennings, D. E.

    1985-01-01

    A novel configuration is proposed for a Michelson interferometer spectrometer, which will be insensitive to tilts or displacements, and which employs separated inputs and outputs and double passing for higher resolution. The great advantage of such a compensated design is a relaxation of mechanical tolerances, which is especially beneficial for instruments in hostile environments. The Atmospheric Trace Molecule Spectroscopy project, which must work reliably after being subjected to the vibrations of a Space Shuttle launch, would benefit from the use of such an instrument.

  19. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc does not yet offer a remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  20. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc does not yet offer a remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
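
    The ridge step at the core of regularized PLSc can be illustrated in isolation: path coefficients computed from near-singular correlations among latent predictors are stabilized by adding a ridge constant to the diagonal. A simplified sketch, not the full PLSc procedure; the correlation values below are made up for illustration:

    ```python
    import numpy as np

    def ridge_path_coefficients(R_xx, r_xy, lam):
        """Ridge-stabilized path coefficients from correlations among
        latent predictors (R_xx) and with the outcome (r_xy):
        b = (R_xx + lam * I)^{-1} r_xy."""
        k = R_xx.shape[0]
        return np.linalg.solve(R_xx + lam * np.eye(k), r_xy)

    # Nearly collinear latent predictors: plain inversion is unstable.
    R_xx = np.array([[1.0, 0.98],
                     [0.98, 1.0]])
    r_xy = np.array([0.5, 0.49])

    b_ols = ridge_path_coefficients(R_xx, r_xy, 0.0)    # unregularized
    b_ridge = ridge_path_coefficients(R_xx, r_xy, 0.1)  # ridge-stabilized
    ```

    With lam = 0 the near-collinearity pushes all the weight onto one predictor; a small ridge penalty distributes it more plausibly between the two and reduces estimation variance.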

  1. Bilinear Regularized Locality Preserving Learning on Riemannian Graph for Motor Imagery BCI.

    Science.gov (United States)

    Xie, Xiaofeng; Yu, Zhu Liang; Gu, Zhenghui; Zhang, Jun; Cen, Ling; Li, Yuanqing

    2018-03-01

    In off-line training of motor imagery-based brain-computer interfaces (BCIs), the local information contained in test data can be used to enhance the generalization performance of the learned classifier and thereby improve motor imagery performance as well. Further considering that the covariance matrices of electroencephalogram (EEG) signals lie on a Riemannian manifold, in this paper, we construct a Riemannian graph to incorporate the information of training and test data into processing. The adjacency and weights in the Riemannian graph are determined by the geodesic distance on the Riemannian manifold. Then, a new graph embedding algorithm, called bilinear regularized locality preserving (BRLP), is derived upon the Riemannian graph for addressing the problems of high dimensionality frequently arising in BCIs. With a proposed regularization term encoding prior information of EEG channels, the BRLP could obtain more robust performance. Finally, an efficient classification algorithm based on extreme learning machine is proposed to operate on the tangent space of the learned embedding. Experimental evaluations on the BCI competition and in-house data sets reveal that the proposed algorithms obtain significantly higher performance than many competing algorithms after the same filtering process.
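
    For SPD covariance matrices under the affine-invariant metric, the geodesic distance that such a Riemannian graph is built on is d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. A minimal sketch, assuming this standard metric is the one intended:

    ```python
    import numpy as np

    def spd_geodesic_distance(A, B):
        """Affine-invariant Riemannian distance between SPD matrices:
        d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, computed via
        eigendecompositions (no matrix-log routine needed)."""
        w, v = np.linalg.eigh(A)
        A_inv_sqrt = v @ np.diag(1.0 / np.sqrt(w)) @ v.T
        M = A_inv_sqrt @ B @ A_inv_sqrt      # symmetric positive definite
        lam = np.linalg.eigvalsh(M)
        return np.sqrt(np.sum(np.log(lam) ** 2))

    # Two toy "EEG covariance" matrices.
    C1 = np.eye(2)
    C2 = np.diag([4.0, 1.0])
    d = spd_geodesic_distance(C1, C2)        # = log(4)
    ```

    The distance is symmetric and invariant under congruence transformations C → W C Wᵀ, which is what makes it attractive for EEG covariance features.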

  2. Regularity, variability and bi-stability in the activity of cerebellar purkinje cells.

    Science.gov (United States)

    Rokni, Dan; Tal, Zohar; Byk, Hananel; Yarom, Yosef

    2009-01-01

    Recent studies have demonstrated that the membrane potential of Purkinje cells is bi-stable and that this phenomenon underlies bi-modal simple spike firing. Membrane potential alternates between a depolarized state, that is associated with spontaneous simple spike firing (up state), and a quiescent hyperpolarized state (down state). A controversy has emerged regarding the relevance of bi-stability to the awake animal, yet recordings made from behaving cat Purkinje cells have demonstrated that at least 50% of the cells exhibit bi-modal firing. The robustness of the phenomenon in vitro or in anaesthetized systems on the one hand, and the controversy regarding its expression in behaving animals on the other hand suggest that state transitions are under neuronal control. Indeed, we have recently demonstrated that synaptic inputs can induce transitions between the states and suggested that the role of granule cell input is to control the states of Purkinje cells rather than increase or decrease firing rate gradually. We have also shown that the state of a Purkinje cell does not only affect its firing but also the waveform of climbing fiber-driven complex spikes and the associated calcium influx. These findings call for a reconsideration of the role of Purkinje cells in cerebellar function. In this manuscript we review the recent findings on Purkinje cell bi-stability and add some analyses of its effect on the regularity and variability of Purkinje cell activity.

  3. Regularity, variability and bi-stability in the activity of cerebellar Purkinje cells

    Directory of Open Access Journals (Sweden)

    Dan Rokni

    2009-11-01

    Full Text Available Recent studies have demonstrated that the membrane potential of Purkinje cells is bi-stable and that this phenomenon underlies bi-modal simple spike firing. Membrane potential alternates between a depolarized state, that is associated with spontaneous simple spike firing (up state), and a quiescent hyperpolarized state (down state). A controversy has emerged regarding the relevance of bi-stability to the awake animal, yet recordings made from behaving cat Purkinje cells have demonstrated that at least 50% of the cells exhibit bi-modal firing. The robustness of the phenomenon in vitro or in anaesthetized systems on the one hand, and the controversy regarding its expression in behaving animals on the other hand suggest that state transitions are under neuronal control. Indeed, we have recently demonstrated that synaptic inputs can induce transitions between the states and suggested that the role of granule cell input is to control the states of Purkinje cells rather than increase or decrease firing rate gradually. We have also shown that the state of a Purkinje cell does not only affect its firing but also the waveform of climbing fiber-driven complex spikes and the associated calcium influx. These findings call for a reconsideration of the role of Purkinje cells in cerebellar function. In this manuscript we review the recent findings on Purkinje cell bi-stability and add some analyses of its effect on the regularity and variability of Purkinje cell activity.

  4. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

    Full Text Available In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN) bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov’s convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.

  5. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. 
However, we developed algorithms that also work
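
    The RLS building block underlying these algorithms can be sketched compactly for a circular-convolution blur, where the Tikhonov-regularized solution is diagonalized by the FFT. A minimal illustration, not the thesis's parameter-selection methods; here lam is chosen by hand:

    ```python
    import numpy as np

    def rls_deblur(y, psf, lam):
        """Regularized least-squares (Tikhonov) deconvolution under a
        circular-convolution blur model, solved in the Fourier domain:
        X = conj(H) * Y / (|H|^2 + lam)."""
        H = np.fft.fft2(psf, s=y.shape)
        Y = np.fft.fft2(y)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft2(X))

    # Blur a test image with a small box PSF, then deblur.
    rng = np.random.default_rng(0)
    x = rng.random((32, 32))
    psf = np.zeros((32, 32))
    psf[:3, :3] = 1.0 / 9.0                  # normalized 3x3 box blur
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))

    x_hat = rls_deblur(y, psf, lam=1e-6)
    ```

    The regularization parameter lam damps frequencies where the blur's transfer function |H| is small, which is exactly where unregularized inversion would amplify noise.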

  6. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. 
The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  7. Loop-space quantum formulation of free electromagnetism

    International Nuclear Information System (INIS)

    Di Bartolo, C.; Nori, F.; Gambini, R.; Trias, A.

    1983-01-01

    A procedure for direct quantization of free electromagnetism in the loop-space is proposed. Explicit solutions for the loop-dependent vacuum and the Wilson loop-average are given. It is shown that elementary lines of magnetic field appear as extremals in the vacuum state as a result of the regularization procedure

  8. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not built via the subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
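
    The AND-operator on regular languages can also be realized directly on DFAs via the classical product construction, a useful reference point for the NFA-overriding method described above. A minimal sketch with illustrative encodings (transitions as a dict keyed by (state, symbol)):

    ```python
    from collections import deque

    def intersect_dfas(delta1, start1, accept1, delta2, start2, accept2, alphabet):
        """Product construction: build a DFA accepting the intersection
        of the two input DFAs' languages. Product states are pairs
        (q1, q2); only reachable pairs are explored (BFS)."""
        start = (start1, start2)
        delta, accept = {}, set()
        seen, todo = {start}, deque([start])
        while todo:
            q1, q2 = todo.popleft()
            if q1 in accept1 and q2 in accept2:
                accept.add((q1, q2))
            for a in alphabet:
                nxt = (delta1[(q1, a)], delta2[(q2, a)])
                delta[((q1, q2), a)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        return delta, start, accept

    def accepts(delta, start, accept, word):
        """Run a complete DFA on a word."""
        q = start
        for a in word:
            q = delta[(q, a)]
        return q in accept

    # Example: words over {a, b} with an even number of a's (DFA 1),
    # intersected with words ending in b (DFA 2).
    d1 = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
    d2 = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
    delta, start, acc = intersect_dfas(d1, 0, {0}, d2, 0, {1}, 'ab')
    ```

    The same product skeleton yields subtraction by changing the acceptance condition (q1 accepting and q2 not), and complement by complementing a single DFA's accepting set.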

  9. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

    The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regularities of change, during the relaxation process, of the full and specific rates of transition from the intermediate state into the ''non-reversible'' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.

  10. Systematic implementation of implicit regularization for multi-loop Feynman Diagrams

    International Nuclear Information System (INIS)

    Cherchiglia, Adriano Lana; Sampaio, Marcos; Nemes, Maria Carolina

    2011-01-01

    Full text: Implicit Regularization (IR) is a candidate to become an invariant framework in momentum space to perform Feynman diagram calculations to arbitrary loop order. The essence of the method is to write the divergences in terms of loop integrals in one internal momentum which do not need to be explicitly evaluated. Moreover it acts in the physical dimension of the theory and gauge invariance is controlled by regularization dependent surface terms which when set to zero define a constrained version of IR (CIR) and deliver gauge invariant amplitudes automatically. Therefore it is in principle applicable to all physical relevant quantum field theories, supersymmetric gauge theories included. A non-trivial question is whether we can generalize this program to arbitrary loop order in consonance with locality, unitarity and Lorentz invariance, especially when overlapping divergences occur. In this work we present a systematic implementation of our method that automatically displays the terms to be subtracted by Bogoliubov's recursion formula. Therefore, we achieve a twofold objective: we show that the IR program respects unitarity, locality and Lorentz invariance and we show that our method is consistent since we are able to display the divergent content of a multi-loop amplitude in a well-defined set of basic divergent integrals in one internal momentum. We present several examples (from 1-loop to n-loops) using scalar φ³ theory in six dimensions in order to help the reader understand and visualize the essence of the IR program. The choice of a scalar theory does not reduce the generality of the method presented since all other physical theories can be treated within the same strategy after space-time and internal algebra are performed. Another result of this contribution is to show that if the surface terms are not set to zero they will contaminate the renormalization group coefficients.
Thus, we are forced to adopt CIR which is equivalent to demand momentum routing invariance

  11. Input filter compensation for switching regulators

    Science.gov (United States)

    Lee, F. C.; Kelkar, S. S.

    1982-01-01

    The problems caused by the interaction between the input filter, output filter, and the control loop are discussed. The input filter design is made more complicated because of the need to avoid performance degradation and also stay within the weight and loss limitations. Conventional input filter design techniques are then discussed. The concept of pole zero cancellation is reviewed; this concept is the basis for an approach to control the peaking of the output impedance of the input filter and thus mitigate some of the problems caused by the input filter. The proposed approach for control of the peaking of the output impedance of the input filter is to use a feedforward loop working in conjunction with feedback loops, thus forming a total state control scheme. The design of the feedforward loop for a buck regulator is described. A possible implementation of the feedforward loop design is suggested.

  12. Space - A unique environment for process modeling R&D

    Science.gov (United States)

    Overfelt, Tony

    1991-01-01

    Process modeling, the application of advanced computational techniques to simulate real processes as they occur in regular use, e.g., welding, casting and semiconductor crystal growth, is discussed. Using the low-gravity environment of space will accelerate the technical validation of the procedures and enable extremely accurate determinations of the many necessary thermophysical properties. Attention is given to NASA's centers for the commercial development of space; joint ventures of universities, industries, and government agencies to study the unique attributes of space that offer potential for applied R&D and eventual commercial exploitation.

  13. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  14. Phaseless tomographic inverse scattering in Banach spaces

    International Nuclear Information System (INIS)

    Estatico, C.; Fedeli, A.; Pastorino, M.; Randazzo, A.; Tavanti, E.

    2016-01-01

    In conventional microwave imaging, a hidden dielectric object under test is illuminated by microwave incident waves and the field it scatters is measured in magnitude and phase in order to retrieve the dielectric properties by solving the related non-homogeneous Helmholtz equation or its Lippmann-Schwinger integral formulation. Since the measurement of the phase of electromagnetic waves can still be considered expensive in real applications, in this paper only the magnitude of the scattered wave fields is measured in order to allow a reduction of the cost of the measurement apparatus. In this respect, we first analyse the properties of the phaseless scattering nonlinear forward modelling operator in its integral form and we provide an analytical expression for computing its Fréchet derivative. Then, we propose an inexact Newton method to solve the associated nonlinear inverse problems, where each linearized step is solved by an L^p Banach space iterative regularization method which acts on the dual space L^p*. Indeed, it is well known that regularization in special Banach spaces, such as L^p with 1 < p < 2, allows one to promote sparsity and to reduce Gibbs phenomena and over-smoothness. Preliminary results concerning numerically computed field data are shown. (paper)

  15. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
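The score-regularization step described above admits a simple closed form once the similarity matrix is fixed. The following minimal sketch is not the paper's joint optimization: the sparse reconstruction coefficients are assumed to be precomputed and are hand-set here, and only the score-smoothing half of the objective is solved.

```python
import numpy as np

def refine_ranking_scores(W, f0, lam=1.0):
    """Refine initial ranking scores f0 by minimizing
        ||f - f0||^2 + lam * sum_ij W_ij (f_i - f_j)^2,
    where W holds pairwise similarities (e.g. sparse combination
    coefficients).  Since sum_ij W_ij (f_i - f_j)^2 = 2 f^T L f with
    graph Laplacian L = D - W, the closed form is
        f = (I + 2*lam*L)^{-1} f0."""
    W = 0.5 * (W + W.T)                 # symmetrize the coefficient matrix
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian
    n = len(f0)
    return np.linalg.solve(np.eye(n) + 2.0 * lam * L, f0)

# Toy example: objects 0 and 1 are similar, object 2 is isolated.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
f0 = np.array([1.0, 0.0, 0.2])
f = refine_ranking_scores(W, f0, lam=1.0)
```

In the toy run the two similar objects' scores are pulled toward each other while the isolated object's score is untouched; the paper's iterative algorithm additionally re-estimates the coefficients from the data at each step.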

  16. 20 CFR 226.35 - Deductions from regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  17. Downscaling Satellite Precipitation with Emphasis on Extremes: A Variational 1-Norm Regularization in the Derivative Domain

    Science.gov (United States)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.

    2013-01-01

    The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called 1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case
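The duality noted above, 1-norm regularization as a Laplace prior on derivatives, can be illustrated with a deliberately simple 1-D sketch: soft-threshold the first differences of a signal and integrate back. This is not the paper's variational downscaling scheme, just a minimal demonstration of why l1 shrinkage in the derivative domain removes small fluctuations while preserving sharp gradients.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: the MAP estimator under a
    # Laplace prior on the coefficients.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def denoise_sparse_gradient(g, lam):
    """Shrink the first differences (the simplest 'derivative domain')
    and integrate back.  Small noisy fluctuations are removed while
    jumps larger than the threshold survive, mimicking the
    edge-preserving behavior of 1-norm / total-variation penalties."""
    d = soft_threshold(np.diff(g), lam)
    return g[0] + np.concatenate(([0.0], np.cumsum(d)))

# Noisy step signal: small wiggles plus one sharp 5-unit jump.
g = np.array([0.0, 0.1, -0.05, 5.0, 5.1, 4.95])
f = denoise_sparse_gradient(g, lam=0.2)
```

The denoised signal keeps a single sharp jump; a quadratic (Tikhonov) penalty on the same differences would instead smear the edge across several samples.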

  18. Reabsorption kinetics of albumin from pleural space of dogs

    International Nuclear Information System (INIS)

    Miniati, M.; Parker, J.C.; Pistolesi, M.; Cartledge, J.T.; Martin, D.J.; Giuntini, C.; Taylor, A.E.

    1988-01-01

    The reabsorption of albumin from the pleural space was measured in eight dogs receiving a 0.5 ml intrapleural injection of 131I-labeled albumin and a simultaneous intravenous injection of 125I-labeled albumin. Plasma curves for both tracers were obtained over 24 h. The 125I-albumin curve served as the input function of albumin for the interstitial spaces, including the pleura, whereas the 131I-albumin curve represented the output function from the pleural space. The frequency function of albumin transit times from the pleural space to plasma was obtained by deconvolution of the input-output plasma curves. Plasma recovery of 131I-albumin was complete by 24 h, and the mean transit time from pleura to plasma averaged 7.95 +/- 1.57 (SD) h. Albumin reabsorption occurred mainly via lymphatics, as indicated by experiments in 16 additional dogs in which the right lymph ducts or thoracic ducts were ligated before intrapleural injection. A pleural lymph flow of 0.020 +/- 0.003 (SD) ml·kg⁻¹·h⁻¹ was estimated, which is balanced by a comparable filtration of fluid into the pleural space. This suggests that, under physiological conditions, the subpleural lymphatics represent an important control mechanism of pleural liquid pressure
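The deconvolution step described above can be sketched discretely: sampled on a uniform grid, convolution with the input curve becomes a lower-triangular Toeplitz system, so the transit-time distribution follows from a single linear solve. The curves below are hypothetical exponential stand-ins, not the measured dog data.

```python
import numpy as np

dt = 0.5                                 # sampling interval (hours)
t = np.arange(0.0, 12.0, dt)
n = len(t)
inp = np.exp(-0.3 * t)                   # hypothetical plasma input curve
h_true = 0.25 * np.exp(-0.25 * t)        # hypothetical transit-time density
out = np.convolve(inp, h_true)[:n] * dt  # output curve = input (*) h

# Deconvolution: out = A @ h, where A is the lower-triangular Toeplitz
# matrix built from the input curve (discrete convolution operator).
A = np.array([[inp[k - j] * dt if k >= j else 0.0 for j in range(n)]
              for k in range(n)])
h_est = np.linalg.solve(A, out)

# Mean transit time from the recovered distribution (dt cancels).
mean_transit = np.sum(t * h_est) / np.sum(h_est)
```

With noiseless synthetic curves the solve recovers h exactly; with real, noisy tracer data this step is ill-conditioned and is where regularization would enter.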

  19. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

    This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints, and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate and PhD students.

  20. 7 CFR 3430.607 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION... § 3430.607 Stakeholder input. CSREES shall seek and obtain stakeholder input through a variety of forums...

  1. Finite Metric Spaces of Strictly Negative Type

    DEFF Research Database (Denmark)

    Hjorth, Poul; Lisonek, P.; Markvorsen, Steen

    1998-01-01

    of Euclidean spaces. We prove that, if the distance matrix is both hypermetric and regular, then it is of strictly negative type. We show that the strictly negative type finite subspaces of spheres are precisely those which do not contain two pairs of antipodal points. In connection with an open problem raised...

  2. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  3. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying by 24 DNA's protein coding potential. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions, detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger parts, regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.

  4. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.

    2014-01-19

    The objective of this paper is to discuss existence, uniqueness and regularity issues of minimizers of one dimensional calculus of variations problem in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  5. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon

    2014-01-01

    The objective of this paper is to discuss existence, uniqueness and regularity issues of minimizers of one dimensional calculus of variations problem in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  6. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2017-03-01

    Full Text Available With rapid urbanization, highly accurate and semantically rich virtualization of building assets in 3D becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city-scale from remotely sensed data. However, developing a fully-automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city-scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of building boundary in a progressive manner. This study covers a full chain of 3D building modeling from low level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International

  7. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data.

    Science.gov (United States)

    Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho

    2017-03-19

    With rapid urbanization, highly accurate and semantically rich virtualization of building assets in 3D becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city-scale from remotely sensed data. However, developing a fully-automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city-scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of building boundary in a progressive manner. This study covers a full chain of 3D building modeling from low level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International Society for

  8. On the choice of the adjusting mouth electrode profile at the input of a linear accelerator with space-uniform quadrupole focusing

    International Nuclear Information System (INIS)

    Balabin, A.I.; Kapchinskij, I.M.; Lipkin, I.M.

    1983-01-01

    Beam matching of an electrostatic injector with a linear accelerator is an important problem, since the acceptance at the inlet of a linac with space-homogeneous quadrupole focusing (SHQF) does not remain constant but rotates with the frequency of the accelerating field. A possibility of transverse stationary beam matching with the SHQF at the inlet of the linac can be ensured to a considerable extent by means of an initial matching section (matching mouth), along whose length the focusing hardness varies according to a certain law. In this case the purpose of creating beam matching conditions practically independent of the phases of particles at the inlet into the mouth is attained. Transverse beam matching for different laws of focusing hardness variation along the non-modulated mouth is investigated. It is shown that for earlier suggested laws of hardness variation the matching conditions at the mouth inlet are critically dependent on the input phases of particles at high phase densities of the beam current j. Laws of hardness variation ensuring that the matching conditions are practically independent of the phases of particles up to j = 2 A/(cm·mrad) (for protons) are suggested. A case of beam matching by means of the modulated mouth is also considered. Recommendations on mouth modulation laws are given

  9. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At the global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At the regional level, we find that world production is still operated nationally or at most regionally, as the communities detected are either individual economies or geographically well-defined regions. Finally, at the local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that network-based measures such as PageRank centrality and the community coreness measure can give valuable insights into identifying the key industries.
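PageRank centrality, one of the network-based measures mentioned above, is computed by a simple power iteration. A minimal sketch on a hypothetical three-industry flow network (invented numbers, not WIOD data):

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    """Power iteration for PageRank on a weighted adjacency matrix A,
    where A[i, j] is the flow from node i to node j (e.g. monetary
    goods flows between industries).  Assumes every node has at least
    one outgoing edge (no dangling-node correction)."""
    n = A.shape[0]
    # Column-stochastic transition matrix: follow flows out of each node.
    P = (A / A.sum(axis=1, keepdims=True)).T
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1.0 - d) / n + d * P @ r
        if np.linalg.norm(r_new - r, 1) < tol:
            return r_new
        r = r_new

# Toy network: industries 0 and 1 both ship mainly to industry 2.
A = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [1.0, 1.0, 0.0]])
r = pagerank(A)
```

Industry 2, which receives most of the flow, ends up with the highest centrality, which is the sense in which PageRank flags "key industries" in the abstract.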

  10. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  11. 7 CFR 3430.15 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.15 Section 3430.15... Stakeholder input. Section 103(c)(2) of the Agricultural Research, Extension, and Education Reform Act of 1998... RFAs for competitive programs. CSREES will provide instructions for submission of stakeholder input in...

  12. Input description for BIOPATH

    International Nuclear Information System (INIS)

    Marklund, J.E.; Bergstroem, U.; Edlund, O.

    1980-01-01

    The computer program BIOPATH describes the flow of radioactivity within a given ecosystem after a postulated release of radioactive material and the resulting dose for specified population groups. The present report accounts for the input data necessary to run BIOPATH. The report also contains descriptions of possible control cards and an input example as well as a short summary of the basic theory. (author)

  13. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  14. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  15. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization into the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with a small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.

  16. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables so that the resulting equation contains only the system inputs and outputs, and then to derive a least squares parameter identification algorithm from it. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
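For the simplest canonical case, a single state with y(t) = x(t), the state-elimination idea above reduces to an ordinary least-squares regression on past inputs and outputs. A minimal sketch with invented parameter values and noiseless data (the paper treats the general noisy multi-state case):

```python
import numpy as np

# First-order canonical state-space system:
#   x(t+1) = a*x(t) + b*u(t),   y(t) = x(t).
# Eliminating the state gives y(t+1) = a*y(t) + b*u(t), a pure
# input-output regression in the unknown parameters (a, b).
a_true, b_true = 0.7, 2.0
rng = np.random.default_rng(1)
u = rng.standard_normal(200)            # excitation input
y = np.zeros(201)
for t in range(200):
    y[t + 1] = a_true * y[t] + b_true * u[t]

# Least-squares identification from measured input-output data only.
Phi = np.column_stack([y[:-1], u])      # regressors [y(t), u(t)]
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# The states are then recovered from the identified model; here the
# canonical form makes them coincide with the measured outputs.
x_hat = y[:-1]
```

With noiseless data the estimates are exact; the cited paper's contribution is the interleaved parameter/state estimation and its martingale convergence analysis for the noisy case.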

  17. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to the Markov chain Monte Carlo (MCMC) calibration methods with independent sampling with the exception that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)
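The sample-then-weight scheme described above can be sketched in a few lines for a toy scalar problem. The quadratic forward model, prior range, and measurement below are invented stand-ins, not the Hyades 2D / BMARS setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model for a single uncertain input theta.
def model(theta):
    return theta ** 2

y_obs, sigma = 4.0, 0.5     # "measured" response and its error

# 1) Sample the uncertain input from its prior (uniform on [0, 5]).
theta = rng.uniform(0.0, 5.0, size=4000)
# 2) Weight each sample by how well the model reproduces the experiment
#    (Gaussian likelihood), instead of an MCMC accept/reject step.
w = np.exp(-0.5 * ((model(theta) - y_obs) / sigma) ** 2)
w /= w.sum()
# 3) The weighted samples approximate the posterior; a point estimate:
theta_post = np.sum(w * theta)
```

The weighted posterior concentrates near theta = 2 (where the model matches the measurement), whereas the prior mean is 2.5; resampling by the weights would give posterior draws for predicting new experiments.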

  18. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  19. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  20. Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields

    Science.gov (United States)

    Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.

    1992-12-01

    During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintains its identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez et al. (1990) which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards
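Nearest-neighbor statistics of the kind analyzed above are often summarized by the Clark-Evans index: the ratio of the observed mean nearest-neighbor distance to the value expected under complete spatial randomness, with R < 1 indicating clustering and R > 1 regularity. A minimal sketch on synthetic point patterns (not the satellite cloud fields, and ignoring the edge corrections and cloud-size effects the study accounts for):

```python
import numpy as np

def clark_evans(points, area):
    """Clark-Evans index: observed mean nearest-neighbor distance
    divided by 0.5/sqrt(n/area), the expectation for a homogeneous
    Poisson (completely random) pattern of the same density."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    observed = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(len(pts) / area)
    return observed / expected

# Regular 5x5 grid vs. two tight clumps, both 25 points in the unit square.
grid = [(0.1 + 0.2 * i, 0.1 + 0.2 * j) for i in range(5) for j in range(5)]
clumps = [(0.2 + 0.01 * k, 0.2) for k in range(13)] + \
         [(0.8 + 0.01 * k, 0.8) for k in range(12)]
R_grid, R_clumps = clark_evans(grid, 1.0), clark_evans(clumps, 1.0)
```

The grid pattern yields R well above 1 (regular) and the clumped pattern R well below 1 (clustered), the two regimes the mutual-protection and inhibition hypotheses predict.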

  1. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  2. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate but much of the philosophy at least is relevant to univariate inputs as well. 14 refs.

  3. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion enjoyed wide popularity in many areas due to its attractive properties, it appeared to suffer from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the desired problem. Among these alternative criteria is the regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that go in search of minimizing the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
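The regularized least-squares criterion whose parameter COPRA tunes has a standard closed form. A minimal sketch showing how the regularization parameter trades fidelity for stability on an ill-conditioned toy problem (this is plain ridge-style RLS, not the COPRA selection rule itself):

```python
import numpy as np

def regularized_ls(A, y, lam):
    """Regularized least squares: minimize ||A x - y||^2 + lam ||x||^2,
    closed form x = (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Ill-conditioned model matrix: nearly collinear columns.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([1.0, 2.0])
y = A @ x_true                       # noiseless measurements
x0 = regularized_ls(A, y, 0.0)       # ordinary least squares
x1 = regularized_ls(A, y, 0.1)       # regularized (shrunk) solution
```

With noiseless data ordinary least squares is exact while the regularized solution is shrunk; once noise is added, the bias introduced by lam > 0 buys a large variance reduction along the weak singular direction, and choosing lam to minimize the resulting MSE is precisely the problem the thesis addresses.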

  4. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
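On a common 1-D grid the Wasserstein data-fidelity term used above takes a particularly simple form: the W1 distance is the L1 norm of the difference of the cumulative distributions. A minimal sketch of that special case (the paper's regularized transport problem is far more general):

```python
import numpy as np

def wasserstein1_1d(p, q, dx=1.0):
    """W1 distance between two discrete densities p and q supported on
    the same uniform 1-D grid: integrate |CDF_p - CDF_q|, i.e. the L1
    norm of the cumulative-sum difference times the grid spacing."""
    return np.sum(np.abs(np.cumsum(p - q))) * dx

p = np.array([1.0, 0.0, 0.0])   # all mass in bin 0
q = np.array([0.0, 0.0, 1.0])   # all mass in bin 2
w = wasserstein1_1d(p, q)       # unit mass moved a distance of 2 bins
```

Unlike a pointwise L2 fidelity, this metric charges by how far mass must be transported, which is why it gives a unified treatment of discrete samples and continuous densities in the variational model.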

  5. The Analysis of Two-Way Functional Data Using Two-Way Regularized Singular Value Decompositions

    KAUST Repository

    Huang, Jianhua Z.

    2009-12-01

    Two-way functional data consist of a data matrix whose row and column domains are both structured, for example, temporally or spatially, as when the data are time series collected at different locations in space. We extend one-way functional principal component analysis (PCA) to two-way functional data by introducing regularization of both left and right singular vectors in the singular value decomposition (SVD) of the data matrix. We focus on a penalization approach and solve the nontrivial problem of constructing proper two-way penalties from one-way regression penalties. We introduce conditional cross-validated smoothing parameter selection whereby left-singular vectors are cross-validated conditional on right-singular vectors, and vice versa. The concept can be realized as part of an alternating optimization algorithm. In addition to the penalization approach, we briefly consider two-way regularization with basis expansion. The proposed methods are illustrated with one simulated and two real data examples. Supplemental materials available online show that several "natural" approaches to penalized SVDs are flawed and explain why so. © 2009 American Statistical Association.
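The idea of penalizing both singular vectors can be sketched as a "penalized power iteration": each alternating update is a ridge-type solve with a roughness penalty, followed by normalization. This is a simplified rank-one illustration in the spirit of the paper, not its exact algorithm; the second-difference penalty and the parameter values are assumptions.

```python
import numpy as np

def second_diff(n):
    # Second-difference penalty matrix (discrete roughness of a vector).
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def regularized_rank1_svd(X, lam_u=1.0, lam_v=1.0, iters=50):
    """Rank-one SVD with roughness penalties on both singular vectors:
    alternate the ridge-type updates
        u <- (I + lam_u * Du'Du)^{-1} X v, then normalize,
    and symmetrically for v (a penalized power iteration)."""
    m, n = X.shape
    Pu = np.eye(m) + lam_u * second_diff(m).T @ second_diff(m)
    Pv = np.eye(n) + lam_v * second_diff(n).T @ second_diff(n)
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        u = np.linalg.solve(Pu, X @ v)
        u /= np.linalg.norm(u)
        v = np.linalg.solve(Pv, X.T @ u)
        v /= np.linalg.norm(v)
    return u, v

# Smooth rank-one matrix (time x location, say) plus a spike of noise.
s, t = np.linspace(0, 1, 20), np.linspace(0, 1, 15)
X = np.outer(np.sin(np.pi * s), np.cos(np.pi * t))
X[5, 7] += 0.5
u, v = regularized_rank1_svd(X)
```

The recovered left vector stays close to the smooth sine profile despite the spike; the paper's contribution is constructing proper two-way penalties and cross-validating the two smoothing parameters, which are simply fixed here.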

  6. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance with respect to parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  7. Characterizing the Input-Output Function of the Olfactory-Limbic Pathway in the Guinea Pig

    Directory of Open Access Journals (Sweden)

    Gian Luca Breschi

    2015-01-01

Full Text Available Nowadays the neuroscientific community is taking more and more advantage of the continuous interaction between engineers and computational neuroscientists in order to develop neuroprostheses aimed at replacing damaged brain areas with artificial devices. To this end, a technological effort is required to develop neural network models which can be fed with the recorded electrophysiological patterns to yield the correct brain stimulation to recover the desired functions. In this paper we present a machine learning approach to derive the input-output function of the olfactory-limbic pathway in the in vitro whole brain of the guinea pig, which is less complex and more controllable than an in vivo system. We first experimentally characterized the neuronal pathway by delivering different sets of electrical stimuli to the lateral olfactory tract (LOT) and by recording the corresponding responses in the lateral entorhinal cortex (l-ERC). As a second step, we used information theory to evaluate how much information the output features carry about the input. Finally, we used the acquired data to learn the LOT-l-ERC “I/O function,” by means of the kernel regularized least squares method, able to predict l-ERC responses on the basis of LOT stimulation features. Our modeling approach can be further exploited for brain prosthesis applications.
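The kernel regularized least squares method mentioned above solves (K + λnI)α = y for the training kernel matrix K and predicts with f(x) = Σᵢ αᵢ k(xᵢ, x). Below is a self-contained Python sketch with a Gaussian kernel; the parameter values are illustrative, not those of the study.

```python
import math

def rbf(x, z, gamma=50.0):
    """Gaussian kernel on scalars."""
    return math.exp(-gamma * (x - z) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def krls_fit(xs, ys, lam=1e-4):
    """KRLS coefficients: alpha = (K + lam*n*I)^{-1} y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (lam * n if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, ys)

def krls_predict(xs, alpha, x):
    """Predicted response at x: sum_i alpha_i k(x_i, x)."""
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))
```

With a small regularization weight the fitted function nearly interpolates the training responses while remaining stable to noise.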

  8. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) that alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
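How likelihood-ratio decisions produce the Mirror Effect can be illustrated with a toy equal-variance signal detection model. This is a textbook sketch, not the paper's analysis, and the parameter values are illustrative.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rates(d, beta=1.0):
    """Hit and false-alarm rates for an equal-variance observer who
    responds "old" when the likelihood ratio exceeds beta.

    Old items ~ N(+d/2, 1), new items ~ N(-d/2, 1); the likelihood-
    ratio criterion beta corresponds to the cut c = ln(beta)/d on
    the decision axis.
    """
    c = math.log(beta) / d
    hit = 1.0 - phi(c - d / 2.0)
    fa = 1.0 - phi(c + d / 2.0)
    return hit, fa
```

Strengthening memory (larger d') simultaneously raises the hit rate and lowers the false-alarm rate, which is exactly the mirror pattern; shifting beta away from 1 moves both rates in the same direction and can mask it.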

  9. Wave energy input into the Ekman layer

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

This paper is concerned with the wave energy input into the Ekman layer, motivated by observations that surface waves can significantly affect the profile of the Ekman layer. Under the assumption of constant vertical diffusivity, the analytical form of the wave energy input into the Ekman layer is derived. Analysis of the energy balance shows that the energy input to the Ekman layer through the wind stress and through the interaction of the Stokes drift with the planetary vorticity can be divided into two kinds. One is the wind energy input; the other is the wave energy input, which depends on wind speed, wave characteristics, and the wind direction relative to the wave direction. Estimates show that the wave energy input can reach 10% of the wind energy input into the classical Ekman layer in high-latitude, high-wind-speed areas, and more than 20% in the Antarctic Circumpolar Current. The results of this paper are of significance to the study of wave-induced large-scale effects.

  10. Estimation of the synaptic input firing rates and characterization of the stimulation effects in an auditory neuron

    Czech Academy of Sciences Publication Activity Database

    Kobayashi, R.; He, J.; Lánský, Petr

    2015-01-01

    Roč. 9, May 18 (2015), s. 59 ISSN 1662-5188 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : synaptic inputs * statistical inference * state-space models * intracellular recordings * auditory cortex Subject RIV: BD - Theory of Information Impact factor: 2.653, year: 2015

  11. Hydrogen atom in the phase-space formulation of quantum mechanics

    International Nuclear Information System (INIS)

    Gracia-Bondia, J.M.

    1984-01-01

Using a coordinate transformation which regularizes the classical Kepler problem, we show that the hydrogen-atom case may be analytically solved via the phase-space formulation of nonrelativistic quantum mechanics. The problem is essentially reduced to that of a four-dimensional oscillator whose treatment in the phase-space formulation is developed. Furthermore, the method allows us to calculate the Green's function for the H atom in a surprisingly simple way.

  12. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area into a contaminated cell. A passage hole that allows the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area below the passage hole, and are pushed through and transferred successively into the contaminated cell. Because the vessels substantially close the passage hole while being transferred, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, transfer workability is improved remarkably. In addition, because a sealing member that seals the gap between a vessel passing through the passage hole and the partitioning wall at the bottom is disposed at the passage hole, contaminated materials in the contaminated cell are prevented from escaping through this gap to the non-contaminated area. (N.H.)

  13. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of the hinge loss is the key to transferring manifold regularization from the offline to the online setting. Our algorithms are derived by gradient ascent on the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap earlier works and draw connections to them. This paper paves the way for the design and analysis of online manifold regularization algorithms.

  14. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    Science.gov (United States)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

In this paper we consider a parameter identification problem (PIP) for data oscillating in time, that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space

  15. Alternation of regular and chaotic dynamics in a simple two-degree-of-freedom system with nonlinear inertial coupling.

    Science.gov (United States)

    Sigalov, G; Gendelman, O V; AL-Shudeifat, M A; Manevitch, L I; Vakakis, A F; Bergman, L A

    2012-03-01

    We show that nonlinear inertial coupling between a linear oscillator and an eccentric rotator can lead to very interesting interchanges between regular and chaotic dynamical behavior. Indeed, we show that this model demonstrates rather unusual behavior from the viewpoint of nonlinear dynamics. Specifically, at a discrete set of values of the total energy, the Hamiltonian system exhibits non-conventional nonlinear normal modes, whose shape is determined by phase locking of rotatory and oscillatory motions of the rotator at integer ratios of characteristic frequencies. Considering the weakly damped system, resonance capture of the dynamics into the vicinity of these modes brings about regular motion of the system. For energy levels far from these discrete values, the motion of the system is chaotic. Thus, the succession of resonance captures and escapes by a discrete set of the normal modes causes a sequence of transitions between regular and chaotic behavior, provided that the damping is sufficiently small. We begin from the Hamiltonian system and present a series of Poincaré sections manifesting the complex structure of the phase space of the considered system with inertial nonlinear coupling. Then an approximate analytical description is presented for the non-conventional nonlinear normal modes. We confirm the analytical results by numerical simulation and demonstrate the alternate transitions between regular and chaotic dynamics mentioned above. The origin of the chaotic behavior is also discussed.
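The oscillator-rotator model of the paper is not reproduced here, but the flavor of a mixed phase space can be sketched with the classic Chirikov standard map, a kicked rotator whose Poincaré-section-like iterates show regular islands coexisting with chaotic seas as the kick strength K grows. This is a standard stand-in, not the authors' system.

```python
import math

TWO_PI = 2.0 * math.pi

def standard_map(theta, p, K, steps):
    """Iterate the Chirikov standard map on the torus [0, 2*pi)^2.

    For small K orbits lie on near-invariant curves (p nearly
    constant); for large K most initial conditions wander
    chaotically, with regular islands surviving in between.
    """
    pts = []
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % TWO_PI   # kick
        theta = (theta + p) % TWO_PI             # free rotation
        pts.append((theta, p))
    return pts
```

Plotting the returned points for several initial conditions at a moderate K reproduces the familiar mixed picture of islands embedded in a chaotic sea.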

  16. Limit Formulae and Jump Relations of Potential Theory in Sobolev Spaces

    OpenAIRE

    Raskop, Thomas; Grothaus, Martin

    2009-01-01

    In this article we combine the modern theory of Sobolev spaces with the classical theory of limit formulae and jump relations of potential theory. Also other authors proved the convergence in Lebesgue spaces for integrable functions. The achievement of this paper is the L2 convergence for the weak derivatives of higher orders. Also the layer functions F are elements of Sobolev spaces and a two dimensional suitable smooth submanifold in R3, called regular Cm-surface. We are considering the pot...

  17. Regularized principal covariates regression and its application to finding coupled patterns in climate fields

    Science.gov (United States)

    Fischer, M. J.

    2014-02-01

    There are many different methods for investigating the coupling between two climate fields, which are all based on the multivariate regression model. Each different method of solving the multivariate model has its own attractive characteristics, but often the suitability of a particular method for a particular problem is not clear. Continuum regression methods search the solution space between the conventional methods and thus can find regression model subspaces that mix the attractive characteristics of the end-member subspaces. Principal covariates regression is a continuum regression method that is easily applied to climate fields and makes use of two end-members: principal components regression and redundancy analysis. In this study, principal covariates regression is extended to additionally span a third end-member (partial least squares or maximum covariance analysis). The new method, regularized principal covariates regression, has several attractive features including the following: it easily applies to problems in which the response field has missing values or is temporally sparse, it explores a wide range of model spaces, and it seeks a model subspace that will, for a set number of components, have a predictive skill that is the same or better than conventional regression methods. The new method is illustrated by applying it to the problem of predicting the southern Australian winter rainfall anomaly field using the regional atmospheric pressure anomaly field. Regularized principal covariates regression identifies four major coupled patterns in these two fields. The two leading patterns, which explain over half the variance in the rainfall field, are related to the subtropical ridge and features of the zonally asymmetric circulation.

  18. Drug-Target Interaction Prediction with Graph Regularized Matrix Factorization.

    Science.gov (United States)

    Ezzat, Ali; Zhao, Peilin; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2017-01-01

    Experimental determination of drug-target interactions is expensive and time-consuming. Therefore, there is a continuous demand for more accurate predictions of interactions using computational techniques. Algorithms have been devised to infer novel interactions on a global scale where the input to these algorithms is a drug-target network (i.e., a bipartite graph where edges connect pairs of drugs and targets that are known to interact). However, these algorithms had difficulty predicting interactions involving new drugs or targets for which there are no known interactions (i.e., "orphan" nodes in the network). Since data usually lie on or near to low-dimensional non-linear manifolds, we propose two matrix factorization methods that use graph regularization in order to learn such manifolds. In addition, considering that many of the non-occurring edges in the network are actually unknown or missing cases, we developed a preprocessing step to enhance predictions in the "new drug" and "new target" cases by adding edges with intermediate interaction likelihood scores. In our cross validation experiments, our methods achieved better results than three other state-of-the-art methods in most cases. Finally, we simulated some "new drug" and "new target" cases and found that GRMF predicted the left-out interactions reasonably well.
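The core of graph regularized matrix factorization is to fit Y ≈ ABᵀ while penalizing factor norms and roughness over the drug and target similarity graphs. The following Python sketch minimizes ||Y − ABᵀ||² + λ(||A||² + ||B||²) + μ(tr(AᵀL_d A) + tr(BᵀL_t B)) by plain gradient descent on tiny dense matrices; it is a minimal illustration under assumed hyperparameters, not the authors' optimizer.

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def grmf(Y, Ld, Lt, rank=2, lam=0.01, mu=0.01, lr=0.05, iters=1000, seed=0):
    """Gradient-descent sketch of graph-regularized factorization."""
    rng = random.Random(seed)
    n, m = len(Y), len(Y[0])
    A = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n)]
    B = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(m)]
    for _ in range(iters):
        R = [[Y[i][j] - sum(A[i][k] * B[j][k] for k in range(rank))
              for j in range(m)] for i in range(n)]
        gA = matmul(R, B)        # -dL/dA (data term) is 2 R B
        LdA = matmul(Ld, A)
        for i in range(n):
            for k in range(rank):
                A[i][k] += lr * (2*gA[i][k] - 2*lam*A[i][k] - 2*mu*LdA[i][k])
        R = [[Y[i][j] - sum(A[i][k] * B[j][k] for k in range(rank))
              for j in range(m)] for i in range(n)]
        gB = matmul(transpose(R), A)  # -dL/dB (data term) is 2 R^T A
        LtB = matmul(Lt, B)
        for j in range(m):
            for k in range(rank):
                B[j][k] += lr * (2*gB[j][k] - 2*lam*B[j][k] - 2*mu*LtB[j][k])
    return A, B
```

The Laplacian terms pull the latent rows of similar drugs (or targets) toward each other, which is what lets the model extrapolate to "orphan" nodes.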

  19. IT Challenges for Space Medicine

    Science.gov (United States)

    Johnson-Throop, Kathy

    2010-01-01

    This viewgraph presentation reviews the various Information Technology challenges for aerospace medicine. The contents include: 1) Space Medicine Activities; 2) Private Medical Information; 3) Lifetime Surveillance of Astronaut Health; 4) Mission Medical Support; 5) Data Repositories for Research; 6) Data Input and Output; 7) Finding Data/Information; 8) Summary of Challenges; and 9) Solutions and questions.

  20. State Estimation of International Space Station Centrifuge Rotor With Incomplete Knowledge of Disturbance Inputs

    National Research Council Canada - National Science Library

    Sullivan, Michael J

    2005-01-01

    This thesis develops a state estimation algorithm for the Centrifuge Rotor (CR) system where only relative measurements are available with limited knowledge of both rotor imbalance disturbances and International Space Station (ISS...

  1. Compressing the hidden variable space of a qubit

    International Nuclear Information System (INIS)

    Montina, Alberto

    2011-01-01

    In previously exhibited hidden variable models of quantum state preparation and measurement, the number of continuous hidden variables describing the actual state of single realizations is never smaller than the quantum state manifold dimension. We introduce a simple model for a qubit whose hidden variable space is one-dimensional, i.e., smaller than the two-dimensional Bloch sphere. The hidden variable probability distributions associated with quantum states satisfy reasonable criteria of regularity. Possible generalizations of this shrinking to an N-dimensional Hilbert space are discussed.

  2. Regularities of magnetic field penetration into half-space in type-II superconductors

    International Nuclear Information System (INIS)

    Medvedev, Yu.V.; Krasnyuk, I.B.

    2003-01-01

Equations modeling the distributions of magnetic field induction and current density in a half-space, taking an exponential current-voltage characteristic into account, are obtained. The propagation velocity of the magnetization front is determined for a prescribed average rate of change in time of the external magnetic field at the sample boundary. An integral condition on the electric resistance, which depends nonlinearly on the magnetic field, is given; when it is satisfied, the magnetic flux penetrates the sample with finite velocity. An analytical representation of the equation with an exponential boundary mode, modeling the change of the magnetic field at the boundary of the region, is also presented [ru]

  3. A Galerkin Finite Element Method for Numerical Solutions of the Modified Regularized Long Wave Equation

    Directory of Open Access Journals (Sweden)

    Liquan Mei

    2014-01-01

    Full Text Available A Galerkin method for a modified regularized long wave equation is studied using finite elements in space, the Crank-Nicolson scheme, and the Runge-Kutta scheme in time. In addition, an extrapolation technique is used to transform a nonlinear system into a linear system in order to improve the time accuracy of this method. A Fourier stability analysis for the method is shown to be marginally stable. Three invariants of motion are investigated. Numerical experiments are presented to check the theoretical study of this method.

  4. Space Environment Modelling with the Use of Artificial Intelligence Methods

    Science.gov (United States)

    Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.

    1996-12-01

Space based technological systems are affected by the space weather in many ways. Several severe failures of satellites have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We will here present predictions made with the use of artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligent hybrid systems (IHS). The model consists of different forecast modules; each module predicts the space weather on a specific time-scale. The time-scales range from minutes to months, with fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input data. From solar magnetic field measurements, either made on the ground at Wilcox Solar Observatory (WSO) at Stanford, or made from space by the satellite SOHO, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis. However, from SOHO magnetograms will be available every 90 minutes. SOHO magnetograms as input to ANNs will therefore make it possible to even predict solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by means of ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day from the satellite WIND. With the launch of ACE in 1997, solar wind data will on the other hand be available 24 hours per day. The conditions of the satellite environment are not only disturbed at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore

  5. Downscaling Satellite Precipitation with Emphasis on Extremes: A Variational ℓ1-Norm Regularization in the Derivative Domain

    Science.gov (United States)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.

    2014-05-01

The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a data base of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case

  6. Haptic over visual information in the distribution of visual attention after tool-use in near and far space.

    Science.gov (United States)

    Park, George D; Reed, Catherine L

    2015-10-01

Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice, for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs); (2) hidden-tool: tool-end visually obscured (haptic input only); and (3) short-tool: stick missing the tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward the tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of effects in the short-tool group suggested that the hidden-tool group's results were specific to haptic input. In conclusion, (1) the allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when it is visually obscured, suggesting that haptic input provides sufficient information for directing attention along the tool.

  7. Classical and quantum investigations of four-dimensional maps with a mixed phase space

    International Nuclear Information System (INIS)

    Richter, Martin

    2012-01-01

    Systems with more than two degrees of freedom are of fundamental importance for the understanding of problems ranging from celestial mechanics to molecules. Due to the dimensionality the classical phase-space structure of such systems is more difficult to understand than for systems with two or fewer degrees of freedom. This thesis aims for a better insight into the classical as well as the quantum mechanics of 4D mappings representing driven systems with two degrees of freedom. In order to analyze such systems, we introduce 3D sections through the 4D phase space which reveal the regular and chaotic structures. We introduce these concepts by means of three example mappings of increasing complexity. After a classical analysis the systems are investigated quantum mechanically. We focus especially on two important aspects: First, we address quantum mechanical consequences of the classical Arnold web and demonstrate how quantum mechanics can resolve this web in the semiclassical limit. Second, we investigate the quantum mechanical tunneling couplings between regular and chaotic regions in phase space. We determine regular-to-chaotic tunneling rates numerically and extend the fictitious integrable system approach to higher dimensions for their prediction. Finally, we study resonance-assisted tunneling in 4D maps.

  8. Gene selection for microarray data classification via subspace learning and manifold regularization.

    Science.gov (United States)

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

With the rapid development of DNA microarray technology, a large amount of genomic data has been generated. Classification of these microarray data is a challenging task since gene expression data often contain thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold regularized subspace learning problem. In detail, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator of the different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.

  9. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
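A minimal first-stage version of such screening ranks the input variables by the absolute correlation of each input column with the output over the sampled runs. The actual SCREEN code adds stagewise regression and threshold tests; the Python sketch below and all of its names are illustrative.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def screen(samples, output):
    """Rank input variables by |correlation| with the output.

    samples: list of runs, each run a list of input values.
    Returns variable indices, most influential first.
    """
    k = len(samples[0])
    cols = [[run[i] for run in samples] for i in range(k)]
    scores = [(abs(pearson(col, output)), i) for i, col in enumerate(cols)]
    return [i for _, i in sorted(scores, reverse=True)]
```

On runs where the output depends strongly on one input and weakly on another, the ranking recovers that ordering while the irrelevant inputs fall to the bottom.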

  10. Investigations of Escherichia coli promoter sequences with artificial neural networks: new signals discovered upstream of the transcriptional startpoint

    DEFF Research Database (Denmark)

    Pedersen, Anders Gorm; Engelbrecht, Jacob

    1995-01-01

    We present a novel method for using the learning ability of a neural network as a measure of information in local regions of input data. Using the method to analyze Escherichia coli promoters, we discover all previously described signals, and furthermore find new signals that are regularly spaced...

  11. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  12. The Importance of Input and Interaction in SLA

    Institute of Scientific and Technical Information of China (English)

    党春花

    2009-01-01

As is well known, input and interaction play crucial roles in second language acquisition (SLA). Different linguistic schools offer different accounts of input and interaction. Behaviorist theories hold the view that input is composed of stimulus and response, putting more emphasis on the importance of input, while mentalist theories regard input as a necessary but not sufficient condition for SLA. At present, social interaction theories, a branch of cognitive linguistics, suggest that besides input, interaction is also essential to language acquisition. This essay discusses how input and interaction result in SLA.

  13. Testing and documentation of programs used to transform climatological precipitation data to a geographically gridded format

    International Nuclear Information System (INIS)

    Fox, T.D.

    1979-01-01

A procedure was developed for converting climatological hourly precipitation data into a form suitable for input to regional atmospheric transport and removal models. The procedure involves a rearrangement of the original data by date rather than by station, followed by the use of a spatial averaging scheme to interpolate data from randomly spaced stations to a regularly spaced grid. The procedure has been tested and documented for general use.
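The interpolation step above, from randomly spaced stations to a regular grid, can be sketched with inverse-distance weighting, one common spatial averaging scheme; the report does not specify its exact weighting, so this function is illustrative:

```python
import numpy as np

def idw_grid(station_xy, values, grid_x, grid_y, power=2.0):
    """Interpolate irregularly spaced station values onto a regular grid
    using inverse-distance weighting (a plausible stand-in for the
    report's spatial averaging scheme)."""
    gx, gy = np.meshgrid(grid_x, grid_y)             # regular grid nodes
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    # distance from every grid node to every station, shape (nodes, stations)
    d = np.linalg.norm(nodes[:, None, :] - station_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                         # avoid division by zero
    w = d ** -power                                  # inverse-distance weights
    z = (w @ values) / w.sum(axis=1)                 # weighted average per node
    return z.reshape(gy.shape)
```

A grid node that coincides with a station reproduces that station's value; nodes between stations get a distance-weighted average.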

  14. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH can be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even in noisy environments.
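The trace-norm regularization mentioned above has a simple proximal operator: soft-thresholding of the singular values, which is what drives solutions toward low rank. A minimal sketch (the function name and usage are illustrative, not SERAPH's actual implementation):

```python
import numpy as np

def tracenorm_prox(M, lam):
    """Proximal step for trace-norm (nuclear-norm) regularization:
    shrink every singular value of M toward zero by lam, clipping at zero.
    Singular values below lam vanish, so the result has reduced rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt
```

Applied inside a gradient-projection loop, this step encourages the low-dimensional projections the abstract refers to.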

  15. Space Interferometry Science Working Group

    Science.gov (United States)

    Ridgway, Stephen T.

    1992-12-01

Decisions taken by the astronomy and astrophysics survey committee and the interferometry panel which led to the formation of the Space Interferometry Science Working Group (SISWG) are outlined. The SISWG was formed by the NASA astrophysics division to provide scientific and technical input from the community in planning for space interferometry and in support of an Astrometric Interferometry Mission (AIM). The AIM program aims to measure the positions of astronomical objects with a precision of a few millionths of an arcsecond. The SISWG science and technical teams are described and the outcomes of its first meeting are given.

  16. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrative examples are worked through. The issue of the stress tensor is also briefly addressed.

  17. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
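The sparsity prior behind curvelet regularization can be illustrated with iterative soft-thresholding (ISTA) on a generic linear inverse problem. For brevity the sparse basis here is the identity, whereas the paper thresholds curvelet coefficients inside an ADMM scheme; the code is a sketch of the principle, not the authors' reconstruction pipeline:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative
    soft-thresholding. In CT, A would be the forward projector and the
    threshold would act on curvelet coefficients rather than on x itself."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)           # gradient of the data-fit term
        x = soft(x - grad / L, lam / L)    # gradient step, then shrinkage
    return x
```

The shrinkage step suppresses small (noise-dominated) coefficients while keeping large ones, which is how sparse regularization reduces noise while preserving strong features.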

  18. Stability of the Regular Hayward Thin-Shell Wormholes

    Directory of Open Access Journals (Sweden)

    M. Sharif

    2016-01-01

The aim of this paper is to construct regular Hayward thin-shell wormholes and analyze their stability. We adopt the Israel formalism to calculate the surface stresses of the shell and check the null and weak energy conditions for the constructed wormholes. It is found that the stress-energy tensor components violate the null and weak energy conditions, indicating the presence of exotic matter at the throat. We analyze the attractive and repulsive characteristics of wormholes corresponding to ar>0 and ar<0, respectively. We also explore stability conditions for the existence of traversable thin-shell wormholes with an arbitrarily small amount of fluid describing cosmic expansion. We find that the space-time has nonphysical regions which give rise to an event horizon for 0

  19. Second order elastic metrics on the shape space of curves

    DEFF Research Database (Denmark)

    Bauer, Martin; Bruveris, Martins; Harms, Philipp

    2015-01-01

Second order Sobolev metrics on the space of regular unparametrized planar curves have several desirable completeness properties not present in lower order metrics, but numerics are still largely missing. In this paper, we present algorithms to numerically solve the initial and boundary value..., due to its generality, it could be applied to more general spaces of mappings. We demonstrate the effectiveness of our approach by analyzing a collection of shapes representing physical objects.

  20. Application of a regularized model inversion system (REGFLEC) to multi-temporal RapidEye imagery for retrieving vegetation characteristics

    KAUST Repository

    Houborg, Rasmus

    2015-10-14

Accurate retrieval of canopy biophysical and leaf biochemical constituents from space observations is critical to diagnosing the functioning and condition of vegetation canopies across spatio-temporal scales. Retrieved vegetation characteristics may serve as important inputs to precision farming applications and as constraints in spatially and temporally distributed model simulations of water and carbon exchange processes. However, significant challenges remain in the translation of composite remote sensing signals into useful biochemical, physiological or structural quantities and the treatment of confounding factors in spectrum-trait relations. Bands in the red-edge spectrum have particular potential for improving the robustness of retrieved vegetation properties. The development of observationally based vegetation retrieval capacities, effectively constrained by the enhanced information content afforded by bands in the red-edge, is a needed investment towards optimizing the benefit of current and future satellite sensor systems. In this study, a REGularized canopy reFLECtance model (REGFLEC) for joint leaf chlorophyll (Chll) and leaf area index (LAI) retrieval is extended to sensor systems with a band in the red-edge region for the first time. Application to time-series of 5 m resolution multi-spectral RapidEye data is demonstrated over an irrigated agricultural region in central Saudi Arabia, showcasing the value of satellite-derived crop information at this fine scale for precision management. Validation against in-situ measurements in fields of alfalfa, Rhodes grass, carrot and maize indicates improved accuracy of retrieved vegetation properties when exploiting red-edge information in the model inversion process. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).