WorldWideScience

Sample records for variation regularization parameter

  1. Parameter choice in Banach space regularization under variational inequalities

    International Nuclear Information System (INIS)

    Hofmann, Bernd; Mathé, Peter

    2012-01-01

    The authors study parameter choice strategies for the Tikhonov regularization of nonlinear ill-posed problems in Banach spaces. The effectiveness of any parameter choice for obtaining convergence rates depends on the interplay of the solution smoothness and the nonlinearity structure, and it can be expressed concisely in terms of variational inequalities. Such inequalities are link conditions between the penalty term, the norm misfit and the corresponding error measure. The parameter choices under consideration include an a priori choice, the discrepancy principle as well as the Lepskii principle. For the convenience of the reader, the authors review in an appendix a few instances where the validity of a variational inequality can be established. (paper)

  2. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  3. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter in the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses be close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the degree of sensitivity of the damage identification problem to the regularization parameter.
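
    A minimal numerical sketch of the first strategy (the problem sizes, thresholds and solver below are our own illustrative choices, not the paper's beam or frame examples): solve the l1 problem over a grid of regularization parameters with iterative soft thresholding (ISTA), and keep every parameter for which the residual norm and the solution sparsity are both acceptable, which yields a range of usable parameters rather than a single value:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)     # sensitivity-like matrix
x_true = np.zeros(n)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]           # sparse "damage" vector
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements

def ista(lam, iters=500):
    # iterative soft thresholding for 0.5 * ||A x - y||^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L            # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# keep the lambdas whose solutions have both a small residual and few nonzeros
lams = np.logspace(-4, 0, 20)
good = []
for lam in lams:
    x = ista(lam)
    r = np.linalg.norm(A @ x - y)
    nnz = np.count_nonzero(np.abs(x) > 1e-3)
    if r < 0.15 and nnz <= 10:
        good.append(lam)
print(f"acceptable lambda range: [{min(good):.4f}, {max(good):.4f}]")
```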

  4. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity-promoting penalty terms has been one of the dominant topics in the field of inverse problems in recent years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semismooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity-constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some

  5. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be considered as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect for object images with edges and to construct a fast reconstruction scheme, the total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well-posedness of this optimization problem is studied. The minimizer is sought approximately using an iteration process for a series of optimization problems with the Bregman distance as a penalty term. This iterative reconstruction scheme is essentially a new regularizing scheme with the coupling parameter in the cost functional and the iteration stopping time as two regularizing parameters. We give a choice strategy for the regularizing parameters in terms of the noise level of the measurement data, which yields the optimal error estimate on the iterative solution. The series of optimization problems is solved by alternating iteration with explicit exact solutions, and therefore the computational cost is greatly reduced. Numerical implementations are given to support our theoretical analysis on the convergence rate and to show the significant reconstruction improvements. (paper)

  6. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
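
    The learning idea can be sketched for standard-form Tikhonov (a toy of ours with invented sizes and a plain grid search; the paper's general-form, multi-parameter and operator-approximation machinery is not reproduced). Given training pairs of true solutions and noisy data, the SVD filter-factor form of the solution makes the average training error cheap to evaluate for each candidate parameter:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / np.arange(1, n + 1)                # decaying singular values
A = U @ (s[:, None] * V.T)

# training data: known solutions paired with noisy observations
delta = 1e-2
train = []
for _ in range(10):
    x = rng.standard_normal(n)
    y = A @ x + delta * rng.standard_normal(n)
    train.append((x, y))

def solve_tikhonov(y, alpha):
    # standard-form Tikhonov solution via filter factors s_i^2 / (s_i^2 + alpha)
    c = U.T @ y
    return V @ (s * c / (s ** 2 + alpha))

def avg_error(alpha):
    # empirical risk: average reconstruction error over the training set
    return np.mean([np.linalg.norm(solve_tikhonov(y, alpha) - x) for x, y in train])

alphas = np.logspace(-8, 0, 60)
alpha_learned = alphas[np.argmin([avg_error(a) for a in alphas])]
print(f"learned alpha = {alpha_learned:.2e}, avg error = {avg_error(alpha_learned):.3f}")
```

    In the paper the minimization is carried out more efficiently and for general-form and multi-parameter regularizers; the grid search here only conveys the empirical Bayes risk idea.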

  7. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successively applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.

  8. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    Science.gov (United States)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.

  9. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of total variation (TV) image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The TV image deconvolution is performed with the alternating direction method of multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section.
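
    The ADMM machinery involved can be sketched on the simpler 1D TV denoising problem min_x 0.5*||x - y||^2 + lam*||D x||_1 (a stand-in of ours for the paper's deconvolution setting; lam and the penalty rho below are fixed by hand, not chosen by the discrepancy-based rule the paper develops):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x_true = np.zeros(n)
x_true[50:120] = 1.0
x_true[120:170] = -0.5                        # piecewise-constant signal
y = x_true + 0.1 * rng.standard_normal(n)     # noisy observation

D = np.diff(np.eye(n), axis=0)                # forward-difference operator, (n-1) x n
lam, rho = 0.3, 1.0                           # hand-picked regularization / ADMM penalty
x, z, u = y.copy(), D @ y, np.zeros(n - 1)
M = np.eye(n) + rho * D.T @ D                 # x-update system matrix (constant)
for _ in range(200):
    # x-update: quadratic subproblem with z and u held fixed
    x = np.linalg.solve(M, y + rho * D.T @ (z - u))
    # z-update: soft thresholding of D x + u
    w = D @ x + u
    z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
    # dual update
    u = u + D @ x - z

print(f"error: noisy {np.linalg.norm(y - x_true):.3f} -> denoised {np.linalg.norm(x - x_true):.3f}")
```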

  10. Regularization of Nonmonotone Variational Inequalities

    International Nuclear Information System (INIS)

    Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.

    2006-01-01

    In this paper we extend the Tikhonov-Browder regularization scheme from monotone to a rather general class of nonmonotone multivalued variational inequalities. We show that the convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems.

  11. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l{sub 2} data fidelity and a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual norm and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  12. Variational regularization of 3D data experiments with Matlab

    CERN Document Server

    Montegranario, Hebert

    2014-01-01

    Variational Regularization of 3D Data provides an introduction to variational methods for data modelling and its application in computer vision. In this book, the authors identify interpolation as an inverse problem that can be solved by Tikhonov regularization. The proposed solutions are generalizations of one-dimensional splines, applicable to n-dimensional data, and the central idea is that these splines can be obtained by regularization theory using a trade-off between the fidelity of the data and smoothness properties. As a foundation, the authors present a comprehensive guide to the necessary fundamentals of functional analysis and variational calculus, as well as splines. The implementation and numerical experiments are illustrated using MATLAB®. The book also includes the necessary theoretical background for approximation methods and some details of the computer implementation of the algorithms. A working knowledge of multivariable calculus and basic vector and matrix methods should serve as an adequat...

  13. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
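
    The log-barrier mechanism can be sketched on a tiny nonnegative least-squares problem (random data of ours, not a PET model, and plain damped Newton instead of PCG): the constraint x >= 0 is replaced by a penalty -mu * sum(log x_i), and mu is driven toward zero so that the strictly feasible iterates approach the constrained solution:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 4
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)

def barrier_obj(x, mu):
    # least-squares data term plus logarithmic barrier for x > 0
    r = A @ x - y
    return r @ r - mu * np.sum(np.log(x))

x = np.ones(n)                      # strictly feasible starting point
mu = 1.0
H0 = 2.0 * A.T @ A
for _ in range(40):                 # outer loop: shrink the barrier parameter
    for _ in range(50):             # inner loop: damped Newton on the barrier objective
        g = 2.0 * A.T @ (A @ x - y) - mu / x
        if np.linalg.norm(g) < 1e-10:
            break
        H = H0 + np.diag(mu / x ** 2)
        dx = np.linalg.solve(H, -g)
        t = 1.0                     # backtrack: stay feasible and decrease the objective
        while np.any(x + t * dx <= 0) or \
                barrier_obj(x + t * dx, mu) > barrier_obj(x, mu) + 1e-4 * t * (g @ dx):
            t *= 0.5
        x = x + t * dx
    mu *= 0.5
print("x =", x)
```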

  14. Variational analysis of regular mappings theory and applications

    CERN Document Server

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  15. An algorithm for total variation regularized photoacoustic imaging

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Görner, Torsten; Kunis, Stefan

    2014-01-01

    Recovery of image data from photoacoustic measurements asks for the inversion of the spherical mean value operator. In contrast to direct inversion methods for specific geometries, we consider a semismooth Newton scheme to solve a total variation regularized least squares problem. During the iteration, each matrix vector multiplication is realized in an efficient way using a recently proposed spectral discretization of the spherical mean value operator. All theoretical results are illustrated by numerical experiments.

  16. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.

  17. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [Karmanos Cancer Institute]; Duric, Neb [Karmanos Cancer Institute]

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.

  18. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
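
    The multi-step refined grid search is generic enough to sketch in a few lines. Here a stand-in objective with a known peak replaces the actual model SNR of regularized kernel MNF (the function, bounds and grid sizes are invented for illustration; in practice f(sigma, mu) would evaluate the model SNR for kernel parameter sigma and regularization parameter mu):

```python
import numpy as np

def model_snr(sigma, mu):
    # stand-in objective with a known maximum at sigma = 0.3, mu = 2.0
    return -((np.log10(sigma) - np.log10(0.3)) ** 2
             + (np.log10(mu) - np.log10(2.0)) ** 2)

def refined_grid_search(f, lo1, hi1, lo2, hi2, steps=3, k=9):
    # coarse-to-fine logarithmic grid search over two parameters
    for _ in range(steps):
        g1 = np.logspace(np.log10(lo1), np.log10(hi1), k)
        g2 = np.logspace(np.log10(lo2), np.log10(hi2), k)
        vals = [(f(a, b), a, b) for a in g1 for b in g2]
        _, a, b = max(vals)                  # best grid point so far
        lo1, hi1 = a / 3.0, a * 3.0          # shrink the search box around it
        lo2, hi2 = b / 3.0, b * 3.0
    return a, b

sigma_opt, mu_opt = refined_grid_search(model_snr, 1e-3, 1e1, 1e-2, 1e2)
print(f"sigma ~ {sigma_opt:.3f}, mu ~ {mu_opt:.3f}")
```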

  19. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness

  20. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.

  1. 3D first-arrival traveltime tomography with modified total variation regularization

    Science.gov (United States)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacked section shows significant improvements with static corrections from the MTV traveltime tomography.

  2. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as total variation (TV) regularization and L1 regularization. In order to better reconstruct the piecewise constant and sparse coefficient distributions, the TV and L1 norms are combined in the regularization. The forward problem is discretized with the discontinuous Galerkin method on the spatial space and the finite element method on the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. Simulation comparisons with other image reconstruction methods based on TV and L1 regularizations show the validity and efficiency of the proposed method.

  3. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength serves as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method determines the regularization parameter correctly and effectively for reconstruction in NAH.

  4. Adaptive discretizations for the choice of a Tikhonov regularization parameter in nonlinear inverse problems

    International Nuclear Information System (INIS)

    Kaltenbacher, Barbara; Kirchner, Alana; Vexler, Boris

    2011-01-01

    Parameter identification problems for partial differential equations usually lead to nonlinear inverse problems. A typical property of such problems is their instability, which requires regularization techniques, like, e.g., Tikhonov regularization. The main focus of this paper will be on efficient methods for determining a suitable regularization parameter by using adaptive finite element discretizations based on goal-oriented error estimators. A well-established method for the determination of a regularization parameter is the discrepancy principle, where the residual norm, considered as a function i of the regularization parameter, should equal an appropriate multiple of the noise level. We suggest solving the resulting scalar nonlinear equation by an inexact Newton method, where in each iteration step, a regularized problem is solved at a different discretization level. The proposed algorithm is an extension of the method suggested in Griesbaum A et al (2008 Inverse Problems 24 025025) for linear inverse problems, where goal-oriented error estimators for i and its derivative are used for adaptive refinement strategies in order to keep the discretization level as coarse as possible to save computational effort but fine enough to guarantee global convergence of the inexact Newton method. This concept leads to a highly efficient method for determining the Tikhonov regularization parameter for nonlinear ill-posed problems. Moreover, we prove that with the so-obtained regularization parameter and an also adaptively discretized Tikhonov minimizer, usual convergence and regularization results from the continuous setting can be recovered. As a matter of fact, it is shown that it suffices to use stationary points of the Tikhonov functional. The efficiency of the proposed method is demonstrated by means of numerical experiments. (paper)
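
    Restricted to a linear problem on a fixed discretization, the scalar equation i(alpha) = (tau * delta)^2 and its Newton treatment can be sketched as follows (an SVD-based toy of ours; the goal-oriented adaptivity and the inexactness control of the paper are not reproduced). The derivative of i is available in closed form in the SVD basis, and a bracket on the monotone function i safeguards the Newton iteration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
s = 1.0 / np.arange(1, n + 1) ** 1.5         # singular values of the forward operator
x_coef = rng.standard_normal(n)              # true solution in the SVD basis
delta, tau = 1e-2, 1.05
noise = rng.standard_normal(n)
c = s * x_coef + delta * noise / np.linalg.norm(noise)   # data coefficients U^T y

def i_func(alpha):
    # squared residual of the Tikhonov solution: sum_i (alpha/(s_i^2+alpha))^2 c_i^2
    return np.sum((alpha / (s ** 2 + alpha)) ** 2 * c ** 2)

def di_func(alpha):
    # derivative of each term: 2 * alpha * s_i^2 / (s_i^2 + alpha)^3 * c_i^2
    return np.sum(2.0 * alpha * s ** 2 / (s ** 2 + alpha) ** 3 * c ** 2)

target = (tau * delta) ** 2
lo, hi = 1e-12, 1e2                          # bracket: i is monotone increasing
alpha = 1e-4
for _ in range(60):
    f = i_func(alpha) - target
    if f > 0:
        hi = alpha
    else:
        lo = alpha
    step = alpha - f / di_func(alpha)        # Newton step on i(alpha) - target = 0
    alpha = step if lo < step < hi else np.sqrt(lo * hi)   # fall back to bisection
print(f"alpha = {alpha:.3e}, residual/delta = {np.sqrt(i_func(alpha))/delta:.3f}")
```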

  5. Regularization in global sound equalization based on effort variation

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Sarris, John; Jacobsen, Finn

    2009-01-01

    Sound equalization in closed spaces can be significantly improved by generating propagating waves that are naturally associated with the geometry, as, for example, plane waves in rectangular enclosures. This paper presents a control approach termed effort variation regularization based on this idea. Effort variation equalization involves modifying the conventional cost function in sound equalization, which is based on minimizing least-squares reproduction errors, by adding a term that is proportional to the squared deviations between complex source strengths, calculated independently for the sources...

  6. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  7. Regularities And Irregularities Of The Stark Parameters For Single Ionized Noble Gases

    Science.gov (United States)

    Peláez, R. J.; Djurovic, S.; Cirišan, M.; Aparicio, J. A.; Mar, S.

    2010-07-01

    Spectroscopy of ionized noble gases is of great importance for laboratory and astrophysical plasmas. Generally, spectra of inert gases are important for many areas of physics, for example laser physics, fusion diagnostics, photoelectron spectroscopy, collision physics, astrophysics, etc. Stark halfwidths as well as shifts of spectral lines are usually employed for plasma diagnostic purposes. For example, atomic data of argon, krypton and xenon will be useful for the spectral diagnostics of ITER. In addition, the software used for stellar atmosphere simulation, like TMAP and SMART, requires a large amount of atomic and spectroscopic data. Availability of these parameters will be useful for a further development of stellar atmosphere and evolution models. Stark parameter data of spectral lines can also be useful for verification of theoretical calculations and investigation of regularities and systematic trends of these parameters within a multiplet, supermultiplet or transition array. In recent years, different trends and regularities of Stark parameters (halfwidths and shifts of spectral lines) have been analyzed. The conditions related to the atomic structure of the element, as well as the plasma conditions, are responsible for regular or irregular behavior of the Stark parameters. The absence of very close perturbing levels makes Ne II a good candidate for the analysis of regularities. The other two considered elements, Kr II and Xe II, with complex spectra, present strong perturbations, and in some cases irregularities in the Stark parameters appear. In this work we analyze the influence of the perturbations on Stark parameters within the multiplets.

  8. Effort variation regularization in sound field reproduction

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis

    2010-01-01

    In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths… …), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, thus improving the reproduction accuracy…

  9. Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities

    International Nuclear Information System (INIS)

    Flemming, Jens; Hofmann, Bernd

    2011-01-01

    In this paper, we elucidate the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions already known in Hilbert spaces to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances.

  10. A New Method for Optimal Regularization Parameter Determination in the Inverse Problem of Load Identification

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2016-01-01

    Full Text Available. According to the regularization method in the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. Firstly, a quotient function (QF) is defined by utilizing the regularization parameter as a variable, based on the least-squares solution of the minimization problem. Secondly, the quotient function method (QFM) is proposed to select the optimal regularization parameter based on quadratic programming theory. In employing the QFM, the characteristics of the values of the QF with respect to different regularization parameters are taken into consideration. Finally, numerical and experimental examples are utilized to validate the performance of the QFM. Furthermore, the generalized cross-validation (GCV) method and the L-curve method are taken as the comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.
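The QF itself is defined in the paper; as a hedged illustration of the kind of comparison method it is benchmarked against, generalized cross-validation for a linear Tikhonov problem can be evaluated cheaply on a parameter grid via the SVD (all names and the grid are illustrative, not the paper's setup):

```python
import numpy as np

def gcv_alpha(A, y, alphas):
    """Pick the Tikhonov parameter minimizing the GCV function on a grid."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    m = A.shape[0]
    # the part of y outside the range of A adds a constant to the numerator
    y_perp2 = np.linalg.norm(y) ** 2 - np.linalg.norm(beta) ** 2
    best_a, best_g = None, np.inf
    for a in alphas:
        f = s**2 / (s**2 + a)                       # Tikhonov filter factors
        num = np.sum(((1.0 - f) * beta) ** 2) + y_perp2
        den = (m - np.sum(f)) ** 2                  # (trace of I - influence matrix)^2
        g = num / den
        if g < best_g:
            best_a, best_g = a, g
    return best_a
```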

  11. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    International Nuclear Information System (INIS)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-01-01

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
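Denoising a blocky signal with a bounded-variation penalty can be illustrated in one dimension. This is a minimal sketch using a smoothed TV term and plain gradient descent, not the primal-dual algorithms proposed in the paper; lam, eps and the step size are illustrative choices:

```python
import numpy as np

def tv_denoise_1d(y, lam, eps=1e-3, step=0.02, iters=4000):
    """Gradient descent on 0.5||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)   # derivative of the smoothed |d|
        g = x - y
        g[:-1] -= lam * w             # dE/dx_i receives -lam * w_i
        g[1:] += lam * w              # and +lam * w_{i-1}
        x -= step * g
    return x
```

The smoothing parameter eps makes the seminorm differentiable; the true nondifferentiable problem is what motivates the primal-dual machinery in the record above.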

  12. Regularization algorithm within two-parameters for identification heat-coefficient in the parabolic equation

    International Nuclear Information System (INIS)

    Hinestroza Gutierrez, D.

    2006-08-01

    In this work a new and promising algorithm, based on the minimization of a special functional that depends on two regularization parameters, is considered for the identification of the heat conduction coefficient in the parabolic equation. This algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) but the other is associated with the calculated solution. (author)

  13. Regularization algorithm within two-parameters for identification heat-coefficient in the parabolic equation

    International Nuclear Information System (INIS)

    Hinestroza Gutierrez, D.

    2006-12-01

    In this work a new and promising algorithm, based on the minimization of a special functional that depends on two regularization parameters, is considered for the identification of the heat conduction coefficient in the parabolic equation. This algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) but the other is associated with the calculated solution. (author)

  14. Total Variation Regularization for Functions with Values in a Manifold

    KAUST Repository

    Lellmann, Jan

    2013-12-01

    While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows one to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories. © 2013 IEEE.

  15. Total Variation Regularization for Functions with Values in a Manifold

    KAUST Repository

    Lellmann, Jan; Strekalovskiy, Evgeny; Koetter, Sabrina; Cremers, Daniel

    2013-01-01

    While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows one to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories. © 2013 IEEE.

  16. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, which provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns, better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its total variation (TV), as regularization. We apply this method to fMRI data for the first time, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080

  17. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available. Super-resolution (SR) reconstruction is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  18. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    Science.gov (United States)

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

    In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this fails when information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimation results for the different source parameters are close to each other with different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison of simulated and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method. The confidence intervals from linear Tikhonov-PSO are more reasonable than those from the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval with some probability levels can be additionally given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
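The split described above (a linear solve for the strength, a global search for the location) can be sketched with a toy one-dimensional kernel. Grid search stands in for PSO here, and the kernel, noise level and alpha are illustrative assumptions rather than the paper's dispersion model:

```python
import numpy as np

def plume(xs, x0, sigma=1.0):
    """Toy 1-D Gaussian dispersion kernel (stand-in for a real plume model)."""
    return np.exp(-0.5 * ((xs - x0) / sigma) ** 2)

def fit_strength(c, f, alpha):
    """Tikhonov-regularized least squares for a scalar source strength."""
    return (f @ c) / (f @ f + alpha)

def identify_source(xs, c, alpha=1e-3, grid=None):
    """Outer search over location (grid search standing in for PSO),
    inner regularized linear solve for the strength."""
    if grid is None:
        grid = np.linspace(xs.min(), xs.max(), 201)
    best = None
    for x0 in grid:
        f = plume(xs, x0)
        q = fit_strength(c, f, alpha)
        mis = np.linalg.norm(c - q * f)
        if best is None or mis < best[0]:
            best = (mis, x0, q)
    return best[1], best[2]
```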

  19. Boundary Equations and Regularity Theory for Geometric Variational Systems with Neumann Data

    Science.gov (United States)

    Schikorra, Armin

    2018-02-01

    We study boundary regularity of maps from two-dimensional domains into manifolds which are critical with respect to a generic conformally invariant variational functional and which, at the boundary, intersect perpendicularly with a support manifold. For example, harmonic maps, or H-surfaces, with a partially free boundary condition. In the interior it is known, by the celebrated work of Rivière, that these maps satisfy a system with an antisymmetric potential, from which one can derive the interior regularity of the solution. Avoiding a reflection argument, we show that these maps satisfy along the boundary a system of equations which also exhibits a (nonlocal) antisymmetric potential that combines information from the interior potential and the geometric Neumann boundary condition. We then proceed to show boundary regularity for solutions to such systems.

  20. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    Science.gov (United States)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
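An iteratively reweighted least-squares treatment of a TV penalty, as referenced above, can be sketched in one dimension with dense linear algebra (the paper's randomized GSVD and alternating-direction machinery are not reproduced; A, lam and eps are illustrative):

```python
import numpy as np

def irls_tv(A, y, lam, iters=30, eps=1e-6):
    """IRLS for min ||A x - y||^2 + lam * sum_i |(D x)_i|, D the 1-D difference."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                            # (n-1) x n differences
    x = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ y)   # smooth starting guess
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)                 # |d| ~ w * d^2 reweighting
        x = np.linalg.solve(A.T @ A + lam * D.T @ (w[:, None] * D), A.T @ y)
    return x
```

Each iteration solves a quadratic problem whose weights approximate the nondifferentiable TV term; the randomized decomposition in the record above addresses exactly these repeated solves at large scale.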

  1. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work

  2. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  3. An extended L-curve method for choosing a regularization parameter in electrical resistance tomography

    International Nuclear Information System (INIS)

    Xu, Yanbin; Pei, Yang; Dong, Feng

    2016-01-01

    The L-curve method is a popular regularization parameter choice method for the ill-posed inverse problem of electrical resistance tomography (ERT). However, the method cannot always determine a proper parameter for all situations. An investigation into those situations where the L-curve method failed shows that a new corner point appears on the L-curve, and the parameter corresponding to the new corner point can yield a satisfactory reconstructed solution. Thus an extended L-curve method, which determines the regularization parameter associated with either the global corner or the new corner, is proposed. Furthermore, two strategies are provided to determine the new corner: one is based on the second-order differential of the L-curve, and the other is based on the curvature of the L-curve. The proposed method is examined by both numerical simulations and experimental tests, and the results indicate that the extended method can handle the parameter choice problem even in cases where the typical L-curve method fails. Finally, in order to reduce the running time of the method, the extended method is combined with a projection method based on the Krylov subspace, which boosts the extended L-curve method. The results verify that the speed of the extended L-curve method is distinctly improved. The proposed method extends the application of the L-curve in the field of regularization parameter choice with an acceptable running time and can also be used in other kinds of tomography. (paper)
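The curvature-based strategy mentioned above can be sketched for a generic linear problem: sample the L-curve on a parameter grid and return the point of maximum signed curvature. This finds a single corner only; the extended method's handling of a second corner is not reproduced, and the finite-difference curvature is an illustrative simplification:

```python
import numpy as np

def lcurve_corner(A, y, alphas):
    """Alpha at the maximum-curvature point of the sampled L-curve
    (log residual norm vs log solution norm)."""
    n = A.shape[1]
    r, s = [], []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y)
        r.append(np.log(np.linalg.norm(A @ x - y)))
        s.append(np.log(np.linalg.norm(x)))
    r, s = np.array(r), np.array(s)
    dr, ds = np.gradient(r), np.gradient(s)        # parameterized by grid index
    d2r, d2s = np.gradient(dr), np.gradient(ds)
    kappa = (dr * d2s - ds * d2r) / (dr**2 + ds**2) ** 1.5
    return alphas[int(np.argmax(kappa))]
```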

  4. Variation of Parameters in Differential Equations (A Variation in Making Sense of Variation of Parameters)

    Science.gov (United States)

    Quinn, Terry; Rai, Sanjay

    2012-01-01

    The method of variation of parameters can be found in most undergraduate textbooks on differential equations. The method leads to solutions of the non-homogeneous equation of the form y = u₁y₁ + u₂y₂, a sum of function products using solutions to the homogeneous equation y₁ and…
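As a worked illustration of the method (standard textbook material, not taken from the record above): for y'' + y = sec x, the homogeneous solutions are y₁ = cos x and y₂ = sin x with Wronskian W = 1, so

```latex
u_1' = -\frac{y_2\, g}{W} = -\sin x \sec x = -\tan x, \qquad
u_2' = \frac{y_1\, g}{W} = \cos x \sec x = 1,
```

giving u₁ = ln|cos x| and u₂ = x, hence the particular solution y_p = cos x · ln|cos x| + x sin x.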

  5. On a continuation approach in Tikhonov regularization and its application in piecewise-constant parameter identification

    International Nuclear Information System (INIS)

    Melicher, V; Vrábel’, V

    2013-01-01

    We present a new approach to the convexification of the Tikhonov regularization using a continuation method strategy. We embed the original minimization problem into a one-parameter family of minimization problems. Both the penalty term and the minimizer of the Tikhonov functional become dependent on a continuation parameter. In this way we can independently treat two main roles of the regularization term, which are the stabilization of the ill-posed problem and introduction of the a priori knowledge. For zero continuation parameter we solve a relaxed regularization problem, which stabilizes the ill-posed problem in a weaker sense. The problem is recast to the original minimization by the continuation method and so the a priori knowledge is enforced. We apply this approach in the context of topology-to-shape geometry identification, where it allows us to avoid the convergence of gradient-based methods to local minima. We present illustrative results for magnetic induction tomography, which is an example of a PDE-constrained inverse problem. (paper)

  6. Total Variation Based Parameter-Free Model for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Sciacchitano, Federica; Dong, Yiqiu; Andersen, Martin Skovgaard

    2017-01-01

    We propose a new two-phase method for the reconstruction of blurred images corrupted by impulse noise. In the first phase, we use a noise detector to identify the pixels that are contaminated by noise, and then, in the second phase, we reconstruct the noisy pixels by solving an equality constrained total variation minimization problem that preserves the exact values of the noise-free pixels. For images that are only corrupted by impulse noise (i.e., not blurred) we apply the semismooth Newton's method to a reduced problem, and if the images are also blurred, we solve the equality constrained reconstruction problem using a first-order primal-dual algorithm. The proposed model improves the computational efficiency (in the denoising case) and has the advantage of being regularization parameter-free. Our numerical results suggest that the method is competitive in terms of its restoration capabilities…

  7. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with enhanced image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  8. A study on regularization parameter choice in near-field acoustical holography

    DEFF Research Database (Denmark)

    Gomes, Jesper; Hansen, Per Christian

    2008-01-01

    a regularization parameter. These parameter choice methods (PCMs) are attractive, since they require no a priori knowledge about the noise. However, there seems to be no clear understanding of when one PCM is better than another. This paper presents comparisons of three PCMs: GCV, L-curve and Normalized… …), and the Equivalent Source Method (ESM). All combinations of the PCMs and the NAH methods are investigated using simulated measurements with different types of noise added to the input. Finally, the comparisons are carried out for a practical experiment. The aim of this work is to create a better understanding of which mechanisms affect the performance of the different PCMs.

  9. Centered Differential Waveform Inversion with Minimum Support Regularization

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Time-lapse full-waveform inversion has two major challenges. The first is the reconstruction of a reference model (the baseline model for most approaches). The second is inversion for the time-lapse changes in the parameters. The common-model approach utilizes the information contained in all available data sets to build a better reference model for time-lapse inversion. Differential (double-difference) waveform inversion reduces the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI), which combines these two approaches in order to benefit from both of their features. We apply minimum support regularization, commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on a synthetic dataset with random noise and show that, with minimum support regularization, it provides better resolution of velocity changes than total variation and Tikhonov regularizations in time-lapse full-waveform inversion.

  10. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    Science.gov (United States)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and the velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment on the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and to invert the complex salt velocity layer by layer.

  11. Estimation of the global regularity of a multifractional Brownian motion

    DEFF Research Database (Denmark)

    Lebovits, Joachim; Podolskij, Mark

    This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.
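For the special case of a constant Hurst parameter (ordinary fractional Brownian motion), the idea of comparing realized quadratic variations at two frequencies can be sketched directly; the Cholesky-based simulator and the sample size are illustrative, not the estimator of the paper:

```python
import numpy as np

def fbm(n, H, seed=0):
    """Exact fBm simulation via Cholesky of the covariance (fine for moderate n)."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return L @ np.random.default_rng(seed).standard_normal(n)

def hurst_ratio(x):
    """Estimate H from realized quadratic variations at lags 1 and 2,
    using E[V2] / E[V1] ~ 2^(2H - 1)."""
    v1 = np.sum(np.diff(x) ** 2)
    v2 = np.sum((x[2::2] - x[:-2:2]) ** 2)
    return 0.5 * (np.log2(v2 / v1) + 1.0)
```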

  12. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available. Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (a response function to be analysed, or a cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment-scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found to be exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization's initial condition when the very common dimension reduction strategy (i.e., scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
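
    The adjoint-driven, bound-constrained quasi-Newton step described above can be sketched with SciPy's L-BFGS-B, using an analytic gradient as a stand-in for the adjoint-computed one (a hypothetical toy linear model, not the flash-flood code):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the calibration problem: recover parameters theta_true
# of a linear "model" from observations y, with an analytic gradient in
# place of the adjoint-computed one.
A = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 4.0]])
theta_true = np.array([0.8, 1.5])
y = A @ theta_true

def cost_and_grad(theta):
    r = A @ theta - y
    return 0.5 * (r @ r), A.T @ r  # misfit and its gradient (the "adjoint")

res = minimize(cost_and_grad, x0=np.zeros(2), jac=True,
               method="L-BFGS-B", bounds=[(0.0, 2.0), (0.0, 2.0)])
print(res.x)  # recovers theta_true
```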

  13. Regularization parameter estimation for underdetermined problems by the χ 2 principle with application to 2D focusing gravity inversion

    International Nuclear Information System (INIS)

    Vatankhah, Saeed; Ardestani, Vahid E; Renaut, Rosemary A

    2014-01-01

    The χ² principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, when the stabilizing, or regularization, term is considered to be weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ²-distribution with m+p−n degrees of freedom for the model matrix G of size m×n, m⩾n, and regularizer L of size p × n. Then, a Newton root-finding algorithm, employing the generalized singular value decomposition, or the singular value decomposition when L = I, can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case, m < n. Numerical examples illustrate the χ² algorithms when m < n; the χ² principle and the unbiased predictive risk estimator of the regularization parameter are used in this context for the first time. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ² principle and unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general by the other three techniques. Furthermore, the minimum support stabilizer is of general use for the χ² principle when implemented without the desirable knowledge of the mean value of the model. (paper)
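
    As a point of comparison for the χ² principle, the Morozov discrepancy principle mentioned above can be sketched as one-dimensional root finding on the SVD-filtered Tikhonov residual (a toy overdetermined problem with L = I, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import brentq

# Toy Tikhonov problem solved by SVD filtering; the regularization
# parameter is chosen by the Morozov discrepancy principle:
# ||G x_alpha - d||^2 = m * sigma^2.
rng = np.random.default_rng(1)
m, n = 50, 30
G = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
sigma = 0.1
d = G @ x_true + rng.normal(0.0, sigma, m)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
b = U.T @ d

def residual_sq(alpha):
    """||G x_alpha - d||^2 for x_alpha = (G^T G + alpha^2 I)^{-1} G^T d."""
    filt = s**2 / (s**2 + alpha**2)
    return np.sum(((1.0 - filt) * b) ** 2) + (d @ d - b @ b)

alpha = brentq(lambda a: residual_sq(a) - m * sigma**2, 1e-8, 1e4)
print(alpha)  # discrepancy-matching regularization parameter
```

    The residual is monotone in alpha, so the bracketing root-finder converges to the unique discrepancy-matching parameter.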

  14. Fluid queues and regular variation

    NARCIS (Netherlands)

    O.J. Boxma (Onno)

    1996-01-01

    This paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $\zeta$. We show that its fat tail

  15. Variational estimation of process parameters in a simplified atmospheric general circulation model

    Science.gov (United States)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model's mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  16. Fluid queues and regular variation

    NARCIS (Netherlands)

    Boxma, O.J.

    1996-01-01

    This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even

  17. Asymptotics of decreasing solutions of coupled p-Laplacian systems in the framework of regular variation

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Matucci, S.

    2014-01-01

    Roč. 193, č. 3 (2014), s. 837-858 ISSN 0373-3114 Institutional support: RVO:67985840 Keywords : decreasing solution * quasilinear system * Emden-Fowler system * Lane-Emden system * regular variation Subject RIV: BA - General Mathematics Impact factor: 1.065, year: 2014 http://link.springer.com/article/10.1007%2Fs10231-012-0303-9

  18. Studies on seasonal variation in water quality parameters of Rana Pratap Sagar lake (1996-99)

    International Nuclear Information System (INIS)

    Verma, R.; Rout, D.; Purohit, K.C.

    2000-01-01

    Water-chemistry monitoring identifies the concentrations and patterns of fluctuation of chemical constituents. This information is essential for projecting future trends in lake-water chemistry and identifying any potential for affecting plant operation through scaling or corrosion of the circulating and service-water system equipment. Regular water-chemistry monitoring provides a useful record of the past. This record helps in identifying conditions that would impair station operations before their onset, allowing remedial action to be undertaken before plant performance is significantly affected. Preventive action to control the parameters influencing corrosion, scaling, and bio-fouling in the cooling system, in turn, eliminates the excessive maintenance and premature replacement that would otherwise result from damage caused by unforeseen changes in the cooling water. This paper highlights a systematic monitoring approach for the variation of chemical parameters influenced by seasonal changes over a total period of four years. (author)

  19. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a truly gradual transition from diffusion registration to curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
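
    The frequency-domain fractional derivative underlying such a regularization term can be sketched by multiplying Fourier coefficients by (iω)^γ. A minimal 1-D periodic illustration (not the registration code itself), checked against the ordinary derivative for γ = 1:

```python
import numpy as np

def fractional_derivative(f, gamma, L=2 * np.pi):
    """Spectral fractional derivative of a periodic signal: multiply the
    Fourier coefficients by (i*omega)**gamma (gamma = 1 gives the
    ordinary derivative)."""
    n = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular frequencies
    mult = (1j * omega) ** gamma
    mult[0] = 0.0  # zero mode: differentiation removes the mean
    return np.real(np.fft.ifft(mult * np.fft.fft(f)))

t = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
d1 = fractional_derivative(np.sin(t), gamma=1.0)
print(np.max(np.abs(d1 - np.cos(t))))  # near machine precision
```

    Non-integer γ interpolates continuously between these integer-order operators, which is the gradual diffusion-to-curvature transition the abstract describes.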

  20. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    Science.gov (United States)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

    In this paper we consider a parameter identification problem (PIP) for data oscillating in time, which can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space

  1. VARIATIONS IN ELECTROPHYSICAL PARAMETERS ESTIMATED FROM ELECTROMAGNETIC MONITORING DATA AS AN INDICATOR OF FAULT ACTIVITY

    Directory of Open Access Journals (Sweden)

    A. E. Shalaginov

    2018-01-01

    Full Text Available In the regions of high seismic activity, investigations of fault zones are of paramount importance as such zones can generate seismicity. A top task in the regional studies is determining the rates of activity from the data obtained by geoelectrical methods, especially considering the data on the faults covered by sediments. From a practical standpoint, the results of these studies are important for seismic zoning and forecasting of natural and anthropogenic geodynamic phenomena that may potentially occur in the populated areas and zones allocated for construction of industrial and civil objects, pipelines, roads, bridges, etc. Seismic activity in Gorny Altai is regularly monitored after the destructive 2003 Chuya earthquake (M=7.3 by the non-stationary electromagnetic sounding with galvanic and inductive sources of three modifications. From the long-term measurements that started in 2007 and continue in the present, electrical resistivity and electrical anisotropy are determined. Our study aimed to estimate the variations of these electrophysical parameters in the zone influenced by the fault, consider the intensity of the variations in comparison with seismicity indicators, and attempt at determining the degree of activity of the faults. Based on the results of our research, we propose a technique for measuring and interpreting the data sets obtained by a complex of non-stationary sounding modifications. The technique ensures a more precise evaluation of the electrophysical parameters. It is concluded that the electric anisotropy coefficient can be effectively used to characterize the current seismicity, and its maximum variations, being observed in the zone influenced by the fault, are characteristic of the fault activity. The use of two electrophysical parameters enhances the informativeness of the study.

  2. Arbitrary parameters in implicit regularization and democracy within perturbative description of 2-dimensional gravitational anomalies

    International Nuclear Information System (INIS)

    Souza, Leonardo A.M.; Sampaio, Marcos; Nemes, M.C.

    2006-01-01

    We show that the Implicit Regularization Technique is useful to display quantum symmetry breaking in a completely regularization-independent fashion. Arbitrary parameters are expressed by finite differences between integrals of the same superficial degree of divergence, whose value is fixed on physical grounds (symmetry requirements or phenomenology). We study Weyl fermions on a classical gravitational background in two dimensions and show that, assuming Lorentz symmetry, the Weyl and Einstein Ward identities reduce to a set of algebraic equations for the arbitrary parameters, which allows us to study the Ward identities on an equal footing. We conclude in a renormalization-independent way that the axial part of the Einstein Ward identity is always violated. Moreover, whereas we can preserve the pure tensor part of the Einstein Ward identity at the expense of violating the Weyl Ward identities, we may as well violate the former and preserve the latter.

  3. Analysis of the Tikhonov regularization to retrieve thermal conductivity depth-profiles from infrared thermography data

    Science.gov (United States)

    Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo

    2010-09-01

    We analyze the ability of the Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noises, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion are investigated. The analysis shows that the Tikhonov regularization is very well suited to reconstruct smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case hardened AISI1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than the Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.

  4. A function space framework for structural total variation regularization with applications in inverse problems

    Science.gov (United States)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  5. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan

  6. Downscaling Satellite Precipitation with Emphasis on Extremes: A Variational 1-Norm Regularization in the Derivative Domain

    Science.gov (United States)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.

    2013-01-01

    The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case
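
    The Laplace-prior interpretation noted above corresponds, in the simplest orthogonal-transform case, to soft-thresholding of the derivative (wavelet) coefficients, i.e., the proximal operator of the ℓ1 penalty. A minimal sketch (hypothetical coefficient values, purely illustrative):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||w||_1: the MAP shrinkage rule under a
    Laplace prior on the coefficients with Gaussian observation noise."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([2.5, -0.3, 0.8, -1.7])  # hypothetical derivative coefficients
print(soft_threshold(w, 0.5))         # small coefficients are set to zero
```

    Shrinking small coefficients to zero while keeping large ones is exactly what preserves steep rainfall gradients in the ℓ1-regularized solution.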

  7. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    Science.gov (United States)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-square data-fidelity term and the total variation and Besov seminorm for the regularization term. To fully comprehend the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. It allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of non-uniform Fourier transforms. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.

  8. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging, as an accurate and fast acute ischemic stroke examination, has been widely used in the clinic. However, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed other existing approaches in terms of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach yielded accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in cerebral PCT imaging in the case of low-mAs.
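
    A classical Tikhonov-style deconvolution of the kind such methods are compared against can be sketched as follows (a toy residue-function setup with a hypothetical exponential kernel, not the authors' PD-STV algorithm):

```python
import numpy as np

# Toy residue-function deconvolution with classical Tikhonov smoothing
# (second-difference regularizer), the kind of baseline the STV approach
# improves upon.
rng = np.random.default_rng(2)
n = 64
t = np.arange(n)
x_true = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)  # toy "residue function"
kernel = np.exp(-0.3 * t)
kernel /= kernel.sum()
A = np.array([[kernel[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])               # causal convolution matrix
b = A @ x_true + rng.normal(0.0, 1e-3, n)       # noisy "measured" curve

L = np.diff(np.eye(n), 2, axis=0)               # second-difference operator
lam = 1e-3
x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small rel. error
```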

  9. Variations in the Parameters of Background Seismic Noise during the Preparation Stages of Strong Earthquakes in the Kamchatka Region

    Science.gov (United States)

    Kasimova, V. A.; Kopylova, G. N.; Lyubushin, A. A.

    2018-03-01

    The results of the long (2011-2016) investigation of background seismic noise (BSN) in Kamchatka by the method suggested by Doct. Sci. (Phys.-Math.) A.A. Lyubushin with the use of the data from the network of broadband seismic stations of the Geophysical Survey of the Russian Academy of Sciences are presented. For characterizing the BSN field and its variability, continuous time series of the statistical parameters of the multifractal singularity spectra and wavelet expansion calculated from the records at each station are used. These parameters include the generalized Hurst exponent α*, singularity spectrum support width Δα, wavelet spectral exponent β, minimal normalized entropy of wavelet coefficients En, and spectral measure of their coherent behavior. The peculiarities in the spatiotemporal distribution of the BSN parameters as a probable response to the earthquakes with M w = 6.8-8.3 that occurred in Kamchatka in 2013 and 2016 are considered. It is established that these seismic events were preceded by regular variations in the BSN parameters, which lasted for a few months and consisted in the reduction of the median and mean α*, Δα, and β values estimated over all the stations and in the increase of the En values. Based on the increase in the spectral measure of the coherent behavior of the four-variate time series of the median and mean values of the considered statistics, the effect of the enhancement of the synchronism in the joint (collective) behavior of these parameters during a certain period prior to the mantle earthquake in the Sea of Okhotsk (May 24, 2013, M w = 8.3) is diagnosed. The procedures for revealing the precursory effects in the variations of the BSN parameters are described and the examples of these effects are presented.

  10. Performance analysis of pin fins with temperature dependent thermal parameters using the variation of parameters method

    Directory of Open Access Journals (Sweden)

    Cihat Arslantürk

    2016-08-01

    Full Text Available The performance of pin fins transferring heat by convection and radiation and having variable thermal conductivity, variable emissivity, and variable heat transfer coefficient was investigated in the present paper. By nondimensionalizing the fin equation, the problem parameters which affect the fin performance were obtained. The dimensionless nonlinear fin equation was solved with the variation of parameters method, which is quite new in the solution of nonlinear heat transfer problems. The solution by the variation of parameters method was compared with known analytical solutions and some numerical solutions, and the comparisons showed that the solutions are in excellent agreement. The effects of the problem parameters on the heat transfer rate and fin efficiency were investigated, and the results are presented graphically.

  11. Phase-field modelling of ductile fracture: a variational gradient-extended plasticity-damage theory and its micromorphic regularization.

    Science.gov (United States)

    Miehe, C; Teichtmeister, S; Aldakheel, F

    2016-04-28

    This work outlines a novel variational-based theory for the phase-field modelling of ductile fracture in elastic-plastic solids undergoing large strains. The phase-field approach regularizes sharp crack surfaces within a pure continuum setting by a specific gradient damage modelling. It is linked to a formulation of gradient plasticity at finite strains. The framework includes two independent length scales which regularize both the plastic response as well as the crack discontinuities. This ensures that the damage zones of ductile fracture are inside of plastic zones, and guarantees on the computational side a mesh objectivity in post-critical ranges. © 2016 The Author(s).

  12. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  13. Downscaling Satellite Precipitation with Emphasis on Extremes: A Variational ℓ1-Norm Regularization in the Derivative Domain

    Science.gov (United States)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.

    2014-05-01

    The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a data base of coincidental high- and low-resolution observations. 
The proposed method and ideas are illustrated in case

  14. Structural characterization of the packings of granular regular polygons.

    Science.gov (United States)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  15. A variational regularization of Abel transform for GPS radio occultation

    Directory of Open Access Journals (Sweden)

    T.-K. Wee

    2018-04-01

    Full Text Available In the Global Positioning System (GPS radio occultation (RO technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the

  16. A variational regularization of Abel transform for GPS radio occultation

    Science.gov (United States)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity
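
The optimization described in the abstract can be illustrated on a toy discretization. The sketch below is NOT the paper's VR system (no error covariance matrices, no adjoint model, no RO geometry): it merely discretizes a forward Abel transform and inverts it by minimizing a Tikhonov-regularized misfit with gradient descent, where the operator transpose plays the role of the adjoint. All grid sizes, noise levels, and parameter values are illustrative assumptions.

```python
import math
import random

# Toy sketch, not the paper's VR system: discretize the forward Abel
# transform on a small grid and invert it by minimizing the
# Tikhonov-regularized misfit
#     J(x) = ||A x - b||^2 + mu * ||x||^2
# with plain gradient descent; the transpose A^T plays the role of the
# adjoint. Grid, noise level, mu and step size are illustrative choices.

n, h = 8, 0.4
r = [(i + 1) * h for i in range(n)]

# Midpoint-rule discretization, which tames the 1/sqrt singularity:
# (A x)_i ~ sum_{j >= i} 2 * rm / sqrt(rm^2 - r_i^2) * x_j * h, rm = r_j + h/2
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        rm = r[j] + h / 2.0
        A[i][j] = 2.0 * rm / math.sqrt(rm * rm - r[i] * r[i]) * h

x_true = [math.exp(-r[j]) for j in range(n)]   # smooth refractivity-like profile
random.seed(0)
b = [sum(A[i][j] * x_true[j] for j in range(n)) + random.gauss(0.0, 1e-3)
     for i in range(n)]

mu, step = 1e-4, 0.01
x = [0.0] * n
for _ in range(10000):
    res = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
    grad = [2.0 * sum(A[i][j] * res[i] for i in range(n)) + 2.0 * mu * x[j]
            for j in range(n)]
    x = [x[j] - step * grad[j] for j in range(n)]

res_norm = math.sqrt(sum((sum(A[i][j] * x[j] for j in range(n)) - b[i]) ** 2
                         for i in range(n)))
b_norm = math.sqrt(sum(v * v for v in b))
err = max(abs(x[j] - x_true[j]) for j in range(n))
print(res_norm / b_norm, err)
```

Because the fit targets the measurement rather than integrating it, the noise is not accumulated downward; the regularization term then controls how closely the noisy data are reproduced.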

  17. Variational estimates of point-kinetics parameters

    International Nuclear Information System (INIS)

    Favorite, J.A.; Stacey, W.M. Jr.

    1995-01-01

    Variational estimates of the effect of flux shifts on the integral reactivity parameter of the point-kinetics equations and on regional power fractions were calculated for a variety of localized perturbations in two light water reactor (LWR) model problems representing a small, tightly coupled core and a large, loosely coupled core. For the small core, the flux shifts resulting from even relatively large localized reactivity changes (∼600 pcm) were small, and the standard point-kinetics approximation estimates of reactivity were in error by only ∼10% or less, while the variational estimates were accurate to within ∼1%. For the larger core, significant (>50%) flux shifts occurred in response to local perturbations, leading to errors of the same magnitude in the standard point-kinetics approximation of the reactivity worth. For positive reactivity, the error in the variational estimate of reactivity was only a few percent in the larger core, and the resulting transient power prediction was 1 to 2 orders of magnitude more accurate than with the standard point-kinetics approximation. For a large, local negative reactivity insertion resulting in a large flux shift, the accuracy of the variational estimate broke down. The variational estimate of the effect of flux shifts on reactivity in point-kinetics calculations of transients in LWR cores was found to generally result in greatly improved accuracy, relative to the standard point-kinetics approximation, the exception being for large negative reactivity insertions with large flux shifts in large, loosely coupled cores
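
For context, the standard point-kinetics approximation that the paper's variational estimates improve upon can be integrated directly. The sketch below uses a single delayed-neutron group with typical textbook constants (these are assumptions, not the parameters of the LWR model problems in the study) and shows the asymmetric response to equal positive and negative reactivity steps.

```python
# One-delayed-group point-kinetics integration by forward Euler.
# Constants below are typical textbook values, not those of the study,
# and this is the standard point-kinetics approximation, not the
# paper's variational estimate of flux-shift effects.

BETA = 0.0065    # effective delayed neutron fraction
LAMB = 1.0e-4    # prompt neutron generation time Lambda (s)
DECAY = 0.08     # one-group precursor decay constant (1/s)

def power_after(rho, t_end, dt=1e-4):
    """Integrate dn/dt = ((rho - BETA)/LAMB) n + DECAY*C,
                 dC/dt = (BETA/LAMB) n - DECAY*C
    from the equilibrium state n = 1."""
    n = 1.0
    c = BETA / (LAMB * DECAY)           # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - BETA) / LAMB) * n + DECAY * c
        dc = (BETA / LAMB) * n - DECAY * c
        n += dt * dn
        c += dt * dc
    return n

n_up = power_after(+0.001, 5.0)   # +100 pcm step: prompt jump, then slow rise
n_dn = power_after(-0.001, 5.0)   # -100 pcm step: prompt drop, then slow decay
print(n_up, n_dn)
```

The prompt jump to roughly β/(β − ρ) followed by a slow rise on the stable period is the behavior whose reactivity coefficient the variational estimate corrects when flux shifts are significant.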

  18. Temporal variation and scaling of parameters for a monthly hydrologic model

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales, based on monthly and mean annual water balance models that share a common framework. Two parameters of the monthly model, k and m, are allowed to vary from month to month. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between the mean annual parameter ε and the means and coefficients of variation of parameters k and m. Based on this empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The downscaled model yields lower NSEs than the model whose time-variant k and m are calibrated through SCE-UA, but for several study catchments it yields higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.
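
The regression step can be illustrated with synthetic data. The sketch below is hypothetical throughout: the coefficients and data are invented (not fitted to MOPEX catchments), and for brevity it regresses a mean annual parameter eps against only the means of monthly k and m rather than their means and coefficients of variation.

```python
import random

# Hypothetical sketch of the multiple-linear-regression step: fit
# eps ~ a*mean_k + b*mean_m + c by ordinary least squares. All data
# are synthetic; the true relationship must be fitted to catchments.

random.seed(1)
N = 200
mean_k = [random.random() for _ in range(N)]
mean_m = [random.random() for _ in range(N)]
eps = [0.8 * k - 0.5 * m + 0.3 + random.gauss(0.0, 0.01)
       for k, m in zip(mean_k, mean_m)]

# Normal equations X^T X coef = X^T y (predictors: mean_k, mean_m, intercept).
X = [[k, m, 1.0] for k, m in zip(mean_k, mean_m)]
XtX = [[sum(X[i][a] * X[i][b] for i in range(N)) for b in range(3)]
       for a in range(3)]
Xty = [sum(X[i][a] * eps[i] for i in range(N)) for a in range(3)]

def solve3(M, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [vi] for row, vi in zip(M, v)]   # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

coef = solve3(XtX, Xty)   # recovers roughly [0.8, -0.5, 0.3]
print(coef)
```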

  19. Reactor thermal behaviors under kinetics parameters variations in fast reactivity insertion

    Energy Technology Data Exchange (ETDEWEB)

    Abou-El-Maaty, Talal [Reactors Department, Atomic Energy Authority, Cairo 13759 (Egypt)], E-mail: talal22969@yahoo.com; Abdelhady, Amr [Reactors Department, Atomic Energy Authority, Cairo 13759 (Egypt)

    2009-03-15

    This study considers the influence of variations in some of the kinetics parameters affecting reactivity insertion, in order to establish the role that kinetics parameters play in prompt-critical transients from the safety point of view. The kinetics parameter variations are limited to the effective delayed neutron fraction (β_eff) and the prompt neutron generation time (Λ). The reactor thermal behaviors examined under variations in β_eff and Λ include the reactor power, maximum fuel temperature, maximum clad temperature, maximum coolant temperature, and the mass flux variation at the hot channel. The analysis is carried out for a typical swimming-pool, plate-type research reactor with low-enriched uranium. The scram system is disabled during the accident simulations. Calculations were done using the PARET code. The simulations show that the thermal behavior of the reactor (ETRR2) is considerably more sensitive to variation in the effective delayed neutron fraction than to variation in the prompt neutron generation time, and that fast reactivity insertion in both cases causes a flow expansion and contraction at the hot channel exit. The amplitude of the oscillating flow increases as both β_eff and Λ decrease.

  20. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring for geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal value has to be selected. The resulting images are then a trade-off among regions with different smoothness or noise levels: they are over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, utilizing a spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information, without increasing the computational cost or the computer memory requirement.
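
The benefit of a spatially-variant weight can be seen in the simplest possible setting. The sketch below is a zeroth-order Tikhonov denoiser with an identity forward operator, not the waveform-tomography scheme itself; the signal and weight values are invented. With A = I the objective decouples pointwise, so a weight that is small over a target region and large elsewhere preserves the target while suppressing background noise, which a constant strong weight cannot do.

```python
import random

# Zeroth-order Tikhonov sketch (A = I), not the waveform-tomography scheme:
# minimizing sum_j (x_j - b_j)^2 + w_j * x_j^2 decouples pointwise and
# gives x_j = b_j / (1 + w_j), so a spatially-variant w_j can regularize
# a noisy background strongly while leaving a target region nearly intact.

random.seed(3)
n = 60
truth = [1.0 if 40 <= j < 50 else 0.0 for j in range(n)]   # target bump
b = [t + random.gauss(0.0, 0.2) for t in truth]            # noisy data

w_var = [0.1 if 40 <= j < 50 else 10.0 for j in range(n)]  # spatially variant
w_con = [10.0] * n                                         # constant, strong

x_var = [bj / (1.0 + wj) for bj, wj in zip(b, w_var)]
x_con = [bj / (1.0 + wj) for bj, wj in zip(b, w_con)]

bump_var = sum(x_var[40:50]) / 10.0    # target amplitude, variant weights
bump_con = sum(x_con[40:50]) / 10.0    # target amplitude, constant weights
background = max(abs(v) for v in x_var[:40])
print(bump_var, bump_con, background)
```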

  1. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  2. Regularities in Low-Temperature Phosphatization of Silicates

    Science.gov (United States)

    Savenko, A. V.

    2018-01-01

    The regularities of low-temperature phosphatization of silicates are derived from long-term experiments on the interaction between different silicate minerals and phosphate-bearing solutions over a wide range of medium acidity. It is shown that the parameters of the phosphatization reaction for hornblende, orthoclase, and labradorite have the same values as for the clay minerals kaolinite and montmorillonite. This effect may arise if phosphatization proceeds not on silicate minerals of differing structure and composition, but on a secondary silicate phase that forms upon interaction between silicates and water and is stable in a certain pH range. The variation in the reaction parameters at pH ≈ 1.8 is attributed to the stability of a silicate phase different from that at higher pH values.

  3. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.

  4. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
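
The alternating optimization of a correntropy objective can be sketched as a half-quadratic iteration. The example below is a hypothetical 1-D regression with gross outliers, not the paper's classifier; the kernel width sigma and regularization gamma are illustrative choices. Outliers receive weights near zero, so the correntropy fit is far less biased than ordinary least squares.

```python
import math
import random

# Toy correntropy sketch (hypothetical 1-D regression, not the paper's
# classifier): maximize sum_i exp(-(y_i - w x_i)^2 / (2 sigma^2)) - gamma w^2
# by half-quadratic iteration, i.e. iteratively reweighted least squares
# with weights v_i = exp(-r_i^2 / (2 sigma^2)).

random.seed(2)
xs = [random.random() for _ in range(50)]
ys = [2.0 * x + random.gauss(0.0, 0.05) for x in xs]   # true slope 2
xs += [0.5] * 5                                        # five gross outliers
ys += [-5.0] * 5

# Ordinary least squares slope (no intercept): strongly biased by outliers.
w_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

sigma, gamma = 0.5, 1e-3
w = w_ls
for _ in range(30):
    v = [math.exp(-((y - w * x) ** 2) / (2.0 * sigma ** 2))
         for x, y in zip(xs, ys)]
    w = sum(vi * x * y for vi, x, y in zip(v, xs, ys)) / \
        (sum(vi * x * x for vi, x in zip(v, xs)) + gamma)

print(w_ls, w)
```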

  5. Comparison of clinical parameters and environmental noise levels between regular surgery and piezosurgery for extraction of impacted third molars

    OpenAIRE

    Chang, Hao-Hueng; Lee, Ming-Shu; Hsu, You-Chyun; Tsai, Shang-Jye; Lin, Chun-Pin

    2015-01-01

    Impacted third molars can be extracted by regular surgery or piezosurgery. The aim of this study was to compare clinical parameters and device-produced noise levels between regular surgery and piezosurgery for the extraction of impacted third molars. Methods: Twenty patients (18 women and 2 men, 17–29 years of age) with bilateral symmetrical impacted mandibular or maxillary third molars of the same level were included in this randomized crossover clinical trial. The 40 impacted third molar...

  6. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To propose a new variational method that couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually require prior calibration to compensate for PVE and are highly system-dependent. Considering that image restoration and segmentation are tightly coupled and can promote each other, we proposed a variational method to solve the two problems together. Our method integrates total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm is used on edges to protect the edge information, and the L2 norm is used to avoid the staircase effect in no-edge areas. The blur kernel is constrained to a Gaussian model parameterized by its variance, and the variances in the X-Y and Z directions are assumed to differ. The energy functional is iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin's lymphoma and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 relative to using either the TV or the L2 norm alone. The proposed method was clearly superior to the other tested methods: it has an average DSI and CE of 0.80 and 0.41, while the FCM method, the second best, has an average DSI and CE of only 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural

  7. Numerical simulation of electro-osmotic consolidation coupling non-linear variation of soil parameters

    Science.gov (United States)

    Wu, Hui; Hu, Liming; Wen, Qingbo

    2017-06-01

    Electro-osmotic consolidation is an effective method for soft ground improvement. A main limitation of previous numerical models of this technique is that they ignore the non-linear variation of soil parameters. In the present study, a multi-field numerical model is developed that accounts for the non-linear variation of soil parameters during the electro-osmotic consolidation process. Numerical simulations on an axisymmetric model indicated that the non-linear variation of soil parameters has a remarkable impact on the development of the excess pore water pressure and the degree of consolidation. A field experiment with complex geometry, boundary conditions, electrode configuration, and voltage application was further simulated with the developed numerical model. The comparison between field and numerical data indicated that the numerical model coupling the non-linear variation of soil parameters gave more reasonable results. The developed numerical model is capable of analyzing engineering cases with complex operating conditions.

  8. Asymptotic Behaviour of Total Generalised Variation

    KAUST Repository

    Papafitsoros, Konstantinos; Valkonen, Tuomo

    2015-01-01

    © Springer International Publishing Switzerland 2015. The recently introduced second order total generalised variation functional TGV^2_{β,α} has been a successful regulariser for image processing purposes. Its definition involves two positive parameters α and β whose values determine the amount and the quality of the regularisation. In this paper we report on the behaviour of TGV^2_{β,α} in the cases where the parameters α, β as well as their ratio β/α become very large or very small. Among others, we prove that for sufficiently symmetric two dimensional data and large ratio β/α, TGV^2_{β,α} regularisation coincides with total variation (TV) regularisation.

  9. Diurnal variation in blood parameters in the chicken in the hot

    African Journals Online (AJOL)

    Dr Olaleye

    Twelve adult male chickens of the Nigerian local strain were bled every 3 hours for 24 hours. Haematological and serum biochemical parameters were measured in the samples collected. Variations in the levels of these parameters throughout the 24 hours were determined. Thirteen of the parameters measured showed.

  10. Low-dose 4D cone-beam CT via joint spatiotemporal regularization of tensor framelet and nonlocal total variation

    Science.gov (United States)

    Han, Hao; Gao, Hao; Xing, Lei

    2017-08-01

    Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.

  11. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Full Text Available Background: Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results: The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. Conclusions: The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

  12. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
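
The final step of the approach, conventional parameter estimation applied to a system of differential equations, can be sketched on a minimal example. This is not the paper's variational augmentation: the toy network dx/dt = -kx, the noise level, and the coarse-to-fine scan are all assumptions for illustration.

```python
import math
import random

# Conventional ODE parameter-estimation sketch (not the paper's augmented
# variational scheme): recover the rate constant k of the toy network
# dx/dt = -k x from noisy observations by forward-Euler simulation and a
# coarse-to-fine scan of the sum-of-squares objective. All data synthetic.

K_TRUE = 0.7
random.seed(4)
t_obs = [0.2 * i for i in range(26)]                        # observation times
data = [math.exp(-K_TRUE * t) + random.gauss(0.0, 0.01) for t in t_obs]

def simulate(k, n_steps, dt=0.005):
    """Forward-Euler trajectory of dx/dt = -k x with x(0) = 1."""
    x, traj = 1.0, [1.0]
    for _ in range(n_steps):
        x += dt * (-k * x)
        traj.append(x)
    return traj

def sse(k):
    traj = simulate(k, 25 * 40)          # 25 intervals of 0.2 s at dt = 0.005
    return sum((traj[40 * i] - data[i]) ** 2 for i in range(26))

# Coarse grid over [0.1, 2.0], then refine around the best point.
ks = [0.1 + 0.01 * i for i in range(191)]
k_hat = min(ks, key=sse)
ks = [k_hat - 0.01 + 0.0005 * i for i in range(41)]
k_hat = min(ks, key=sse)
print(k_hat)
```

In the paper's setting, the unconstrained input courses would first be absorbed into an augmented ODE system; a fit of this conventional kind is then run on the augmented system.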

  13. Continuous radon measurements in schools: time variations and related parameters

    International Nuclear Information System (INIS)

    Giovani, C.; Cappelletto, C.; Garavaglia, M.; Pividore, S.; Villalta, R.

    2004-01-01

    Some results are reported from observations made within a four-year survey, during different seasons and under different conditions of school building use. Natural radon variations (day-night cycles, seasonal and temperature-dependent variations, etc.) and artificial ones (opening of windows, weekends and vacations, operation of air conditioning or heating systems, etc.) were investigated as parameters affecting time-dependent radon concentrations. (P.A.)

  14. Impacts of clustering on noise-induced spiking regularity in the excitatory neuronal networks of subnetworks.

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2015-01-01

    In this paper, we investigate how clustering factors influence the spiking regularity of a neuronal network of subnetworks. To do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio of intra-connection probability to inter-connection probability R, and the ratio of intra-coupling strength to inter-coupling strength S as control parameters. The simulation results show that the spiking regularity of the neuronal networks varies little with R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level when the uniform neuronal network's spiking regularity is at a high level. Taken together, these results show that clustering factors have little influence on the spiking regularity when the overall energy is fixed, as controlled by the averaged coupling strength and the averaged connection probability.

  15. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L·‖ on R^n under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as the Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data, but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence-constrained and penalized problems. To solve the I-divergence-constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximal problems is an I-divergence-constrained least-squares problem which can be solved, based on Morozov's discrepancy principle, by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks, both for images corrupted by Poisson noise and by multiplicative Gamma noise. (paper)
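
The discrepancy-principle idea behind the parameter relation can be sketched in the simplest case. The example below replaces the I-divergence by a squared-error data term with H = L = I, and uses bisection instead of the paper's Newton iteration; the signal and noise level are invented. For the pointwise Tikhonov denoiser the residual is monotone in the parameter, so the value matching a known noise level can be bracketed.

```python
import math
import random

# Morozov-style parameter selection in the simplest setting (H = L = I,
# squared error instead of the I-divergence; bisection instead of the
# paper's Newton method). For x(lam) = b / (1 + lam) the residual
# ||b - x(lam)|| = (lam / (1 + lam)) * ||b|| increases monotonically
# in lam, so the lam with residual == delta is found by bisection.

random.seed(5)
n = 500
signal = [math.sin(0.05 * i) for i in range(n)]
noise = [random.gauss(0.0, 0.1) for _ in range(n)]
b = [s + e for s, e in zip(signal, noise)]
delta = math.sqrt(sum(e * e for e in noise))     # noise level, assumed known
b_norm = math.sqrt(sum(bi * bi for bi in b))

def residual(lam):
    return math.sqrt(sum((bi - bi / (1.0 + lam)) ** 2 for bi in b))

lo, hi = 0.0, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(mid) < delta:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(lam, residual(lam), delta)
```

In this toy case the root is available in closed form, lam = delta / (||b|| − delta), which the bisection reproduces; the paper's contribution is obtaining the analogous parameter for the I-divergence-constrained problem within the primal–dual iterations.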

  16. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and in the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.

  17. Iterative method of the parameter variation for solution of nonlinear functional equations

    International Nuclear Information System (INIS)

    Davidenko, D.F.

    1975-01-01

    The iteration method of parameter variation is used for solving nonlinear functional equations in Banach spaces. The authors consider some methods for numerical integration of ordinary first-order differential equations and construct the relevant iteration methods of parameter variation, both one- and multifactor. They also discuss problems of mathematical substantiation of the method, study the conditions and rate of convergence, estimate the error. The paper considers the application of the method to specific functional equations

  18. Biosphere modelling for a HLW repository - scenario and parameter variations

    International Nuclear Information System (INIS)

    Grogan, H.

    1985-03-01

    In Switzerland, high-level radioactive wastes have been considered for disposal in deep-lying crystalline formations. The individual doses to man resulting from radionuclides entering the biosphere via groundwater transport are calculated. The main recipient area modelled, which constitutes the base case, is a broad gravel terrace sited along the south bank of the river Rhine. An alternative recipient region, a small valley with a well, is also modelled. A number of parameter variations are performed in order to ascertain their impact on the doses. Finally, two scenario changes are modelled somewhat simplistically; these consider different prevailing climates, namely tundra and a climate warmer than the present. In the base case, negligibly low long-term doses to man resulting from the existence of an HLW repository have been calculated. Cs-135 results in the largest dose (8.4E-7 mrem/y at 6.1E+6 y), while Np-237 gives the largest dose among the actinides (3.6E-8 mrem/y). The response of the model to parameter variations cannot be easily predicted, owing to the non-linear coupling of many of the parameters. However, the calculated doses were negligibly low in all cases, as were those resulting from the two scenario variations. (author)

  19. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  20. Real time QRS complex detection using DFA and regular grammar.

    Science.gov (United States)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed Hedi

    2017-02-28

    Detection of the QRS complex (the sequence of Q, R, and S peaks) is a crucial procedure in electrocardiogram (ECG) processing and analysis. We propose a novel approach for QRS complex detection based on deterministic finite automata with the addition of some constraints. This paper confirms that regular grammar is useful for extracting QRS complexes and interpreting normalized ECG signals. A QRS complex is modeled as a pair of adjacent peaks which meet certain criteria of standard deviation and duration. The proposed method was applied to several kinds of ECG signals from the standard MIT-BIH arrhythmia database; a total of 48 signals were used. For an input signal, several parameters were determined, such as QRS durations, RR distances, and peak amplitudes. The parameters σRR and σQRS were added to quantify the regularity of RR distances and QRS durations, respectively. The sensitivity of the suggested method was 99.74% and the specificity was 99.86%. Moreover, the variation of the sensitivity and specificity with the signal-to-noise ratio was evaluated. Regular grammar with the addition of some constraints, together with deterministic automata, proved functional for ECG signal diagnosis. Compared to statistical methods, the use of grammar provides satisfactory and competitive results, with indices comparable to or even better than those cited in the literature.
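
The regular-language idea can be sketched with a regular expression (equivalent to a DFA) applied to a symbolic version of a synthetic signal. This is a simplified illustration, not the paper's automaton or its MIT-BIH evaluation; the threshold, spike shape, and signal are invented.

```python
import random
import re

# Simplified regular-grammar illustration (not the paper's DFA): threshold
# a synthetic signal into a symbol string ('p' above threshold, 'b' below),
# then use a regular expression -- equivalent to a DFA -- to extract runs
# of 'p' as QRS-like complexes, yielding R positions and RR distances.

random.seed(6)
n = 600
signal = [random.gauss(0.0, 0.02) for _ in range(n)]
for center in (100, 300, 500):                 # triangular "R waves"
    for off, amp in ((-2, 0.33), (-1, 0.66), (0, 1.0), (1, 0.66), (2, 0.33)):
        signal[center + off] += amp

symbols = ''.join('p' if v > 0.5 else 'b' for v in signal)
peaks = [(m.start() + m.end() - 1) // 2 for m in re.finditer(r'p+', symbols)]
rr = [q - p for p, q in zip(peaks, peaks[1:])]
print(peaks, rr)
```

From the detected positions, regularity indices such as σRR would then be computed as the standard deviation of the RR distances.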

  1. Regularization dependence on phase diagram in Nambu–Jona-Lasinio model

    International Nuclear Information System (INIS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2015-01-01

    We study the regularization dependence of meson properties and the phase diagram of quark matter by using the two-flavor Nambu–Jona-Lasinio model. The model also has a parameter dependence within each regularization, so we explicitly give the model parameters for some sets of the input observables, then investigate their effect on the phase diagram. We find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. We therefore conclude that the regularization and the parameters must be chosen carefully when one investigates the QCD critical end point in effective model studies.

  2. Interactive facades analysis and synthesis of semi-regular facades

    KAUST Repository

    AlHalawani, Sawsan; Yang, Yongliang; Liu, Han; Mitra, Niloy J.

    2013-01-01

    Urban facades regularly contain interesting variations due to allowed deformations of repeated elements (e.g., windows in different open or close positions) posing challenges to state-of-the-art facade analysis algorithms. We propose a semi-automatic framework to recover both repetition patterns of the elements and their individual deformation parameters to produce a factored facade representation. Such a representation enables a range of applications including interactive facade images, improved multi-view stereo reconstruction, facade-level change detection, and novel image editing possibilities. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  4. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Full Text Available Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
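A minimal sketch of the dual iteration described in the record, assuming the standard ROF formulation min_u ||u - f||²/(2λ) + TV(u) with forward-difference gradient, its adjoint divergence, and step size τ ≤ 1/8; the image, noise level and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of Chambolle's projection algorithm for the ROF model
# min_u ||u - f||^2 / (2*lam) + TV(u); tau <= 1/8 ensures convergence.

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]   # forward difference along rows
    gy[:, :-1] = u[:, 1:] - u[:, :-1]   # forward difference along columns
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def chambolle_tv(f, lam=0.5, tau=0.125, n_iter=100):
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)   # projected dual update
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)                    # primal solution

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0   # piecewise-constant image
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = chambolle_tv(noisy)
```

The vectorial extension for color images replaces the per-pixel norm with a norm over all channels.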

  5. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method showing promising capabilities over conventional regularization.
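The BMSR program minimizes the L1-norm of transform coefficients with a proximal point method. As a minimal stand-in (not the paper's block-matching transform or its constrained formulation), the sketch below applies ISTA, a basic proximal gradient scheme, to a generic L1-regularized least-squares problem; the matrix, sparsity pattern and penalty are invented for illustration.

```python
import numpy as np

# Stand-in sketch: ISTA (proximal gradient) for
#   min_x 0.5*||A x - y||^2 + lam*||x||_1

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 60))              # underdetermined system
x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true                             # incomplete, noiseless data
x_hat = ista(A, y, lam=0.05)
```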

  6. The effect of statistical analytical measurement variations on the plant control parameters and production costs in cement manufacturing – a case study

    Directory of Open Access Journals (Sweden)

    A. D. Love

    2010-01-01

    Full Text Available Raw materials used in cement manufacturing normally have varying chemical compositions and require regular analyses for plant control purposes. This is achieved by using several analytical instruments, such as XRF and ICP. The values obtained for the major elements Ca, Si, Fe and Al are used to calculate the plant control parameters Lime Saturation Factor (LSF, Silica Ratio (SR and Alumina Modulus (AM. These plant control parameters are used to regulate the mixing and blending of various raw meal components and to operate the plant optimally. Any errors and large fluctuations in these plant parameters not only influence the quality of the cement produced, but also have a major effect on the cost of production of cement clinker through their influence on the energy consumption and residence time in the kiln. This paper looks at the role that statistical variances in the analytical measurements of the major elements Ca, Si, Fe and Al can have on the ultimate LSF, SR and AM values calculated from these measurements. The influence of too high and too low values of the LSF, SR and AM on clinker quality and energy consumption is discussed, and acceptable variances in these three parameters, based on plant experience, are established. The effect of variances in the LSF, SR and AM parameters on the production costs is then analysed, and it is shown that variations as large as 30% and as small as 5% can potentially occur. The LSF calculation incorporates most chemical elements and is therefore prone to the largest number of variations due to statistical variances in the analytical determinations of the chemical elements. Despite all these variations, the LSF values actually had the smallest influence on the production cost of the clinker. It is therefore concluded that the LSF value is the most practical parameter for plant control purposes.
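The three control parameters are standard cement-chemistry moduli computed from oxide mass percentages. A minimal sketch, using the textbook formulas (the oxide composition below is a made-up example, not data from the paper):

```python
# Standard cement-chemistry moduli from oxide mass percentages.

def lime_saturation_factor(cao, sio2, al2o3, fe2o3):
    """LSF = 100*CaO / (2.8*SiO2 + 1.2*Al2O3 + 0.65*Fe2O3)."""
    return 100.0 * cao / (2.8 * sio2 + 1.2 * al2o3 + 0.65 * fe2o3)

def silica_ratio(sio2, al2o3, fe2o3):
    """SR = SiO2 / (Al2O3 + Fe2O3)."""
    return sio2 / (al2o3 + fe2o3)

def alumina_modulus(al2o3, fe2o3):
    """AM = Al2O3 / Fe2O3."""
    return al2o3 / fe2o3

# Illustrative clinker-type oxide values (percent by mass).
lsf = lime_saturation_factor(cao=65.0, sio2=21.0, al2o3=5.5, fe2o3=3.0)
sr = silica_ratio(sio2=21.0, al2o3=5.5, fe2o3=3.0)
am = alumina_modulus(al2o3=5.5, fe2o3=3.0)
```

Since LSF combines all four oxides, an analytical variance in any one of them propagates into LSF, which is why it shows the largest number of variations in the study.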

  7. Material parameters characterization for arbitrary N-sided regular polygonal invisible cloak

    International Nuclear Information System (INIS)

    Wu Qun; Zhang Kuang; Meng Fanyi; Li Lewei

    2009-01-01

    Arbitrary N-sided regular polygonal cylindrical cloaks are proposed and designed based on the coordinate transformation theory. First, the general expressions of constitutive tensors of the N-sided regular polygonal cylindrical cloaks are derived, then there are some full-wave simulations of the cloaks that are composed of inhomogeneous and anisotropic metamaterials, which will bend incoming electromagnetic waves and guide them to propagate around the inner region; such electromagnetic waves will return to their original propagation directions without distorting the waves outside the polygonal cloak. The results of full-wave simulations validate the general expressions of constitutive tensors of the N-sided regular polygonal cylindrical cloaks we derived.

  8. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent...

  9. Study of radon emanation variations in Morocco soil, correlations with seismic activities and atmospheric parameters

    International Nuclear Information System (INIS)

    Boukhal, H.; Cherkaoui, T.E.; Lferde, M.

    1994-01-01

    In order to verify the possibility of radon signal use in earthquake prediction, a study of radon emanation variation in soil was undertaken. Regular measurements have been carried out in five cities of Morocco ( Rabat, Tetouan, Ifrane, Khouribga, Berchid). The measuring method is based on the solid state nuclear track detectors technique. The good correlation between the different seismic activities and the variations of radon emanation rate in the five stations, have shown the interest of radon use in the earthquake prediction. 1 tab., 2 figs., 2 refs. (author)

  10. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
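The record states that the BPR solution converges to that of the ℓ2-regularized least-squares problem. That limiting problem can be sketched via the SVD, which makes the effect of the regularizer on the singular-value structure explicit: each inverse singular value 1/s_i is damped to s_i/(s_i² + λ). The data and λ below are illustrative; the BPR selection rule itself is not implemented here.

```python
import numpy as np

# l2-regularized least squares via the SVD:
#   x = V diag(s_i / (s_i^2 + lam)) U^T y

def regularized_ls(A, y, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + lam)              # damped inverse singular values
    return Vt.T @ (filt * (U.T @ y))

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 8))
x_true = rng.normal(size=8)
y = A @ x_true + 0.05 * rng.normal(size=50)
x_hat = regularized_ls(A, y, lam=0.1)

# Cross-check against the normal-equations form (A^T A + lam*I) x = A^T y.
x_ref = np.linalg.solve(A.T @ A + 0.1 * np.eye(8), A.T @ y)
```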

  11. Rules of parameter variation in homotype series of birdsong can indicate a 'sollwert' significance.

    Science.gov (United States)

    Hultsch, H; Todt, D

    1996-11-01

    Various bird species produce songs which include homotype pattern series, i.e. segments composed of a number of repeated vocal units. We compared such units and analyzed the variation of their parameters, especially in the time and the frequency domain. In addition, we examined whether and how serial changes of both the range and the trend of variation were related to song constituents following the repetitions. Data evaluation showed that the variation of specific serial parameters (e.g., unit pitch or unit duration) occurring in the whistle song-types of nightingales (Luscinia megarhynchos) was converging towards a distinct terminal value. Although song-types differed in this terminal value, it was found to play the role of a key cue ('sollwert'). The continuation of a song depended on a preceding attainment of its specific 'sollwert'. Our results suggest that the study of signal parameters and the rules of their variation makes a useful tool for behavioral access to the properties of the control systems mediating serial signal performances.

  12. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
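The manifold-ranking idea behind MultiG-Rank can be sketched with a fixed convex combination of graph Laplacians (the algorithm also learns the graph weights jointly with the scores; that alternating step is omitted here). With regularization fᵀLf added to a fitting term ||f − y||², the scores have the closed form f = (I + αL)⁻¹y. The toy graphs, weights and α below are invented for illustration.

```python
import numpy as np

# Graph-regularized ranking with a fixed combination of Laplacians:
#   f = (I + alpha * L)^(-1) y,  L = sum_k w_k * L_k

def laplacian(W):
    """Unnormalized graph Laplacian D - W."""
    return np.diag(W.sum(axis=1)) - W

# Two toy 4-node similarity graphs over the same items.
W1 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
W2 = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
weights = [0.6, 0.4]                       # fixed graph weights (illustrative)
L = weights[0] * laplacian(W1) + weights[1] * laplacian(W2)

y = np.array([1.0, 0.0, 0.0, 0.0])         # query indicator on item 0
alpha = 0.5
scores = np.linalg.solve(np.eye(4) + alpha * L, y)
```

Items close to the query on either graph receive elevated scores; MultiG-Rank additionally re-weights the graphs so that the combined Laplacian best matches the intrinsic manifold.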

  13. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Errors of projection matrix parameters of up to 1° projection angle deviation are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method. However, defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system. (paper)
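Definitions of the first three error measures named in the record can be written down directly; exact conventions (reference choice, region definitions) vary between papers, so the versions below are one standard choice, and the images are synthetic.

```python
import numpy as np

# One standard set of conventions for MSE, SNR and CNR of a reconstruction.

def mse(x, ref):
    """Mean square error against a reference image."""
    return np.mean((x - ref) ** 2)

def snr_db(x, ref):
    """Signal-to-noise ratio in dB of reconstruction x against reference ref."""
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((x - ref) ** 2))

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio between a region of interest and background."""
    return abs(img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

rng = np.random.default_rng(4)
ref = np.zeros((16, 16)); ref[4:12, 4:12] = 1.0     # synthetic phantom
recon = ref + 0.05 * rng.normal(size=ref.shape)     # noisy "reconstruction"
roi = ref > 0.5; bg = ~roi
```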

  14. Seasonal variation of photosynthetic model parameters and leaf area index from global Fluxnet eddy covariance data

    Science.gov (United States)

    Groenendijk, M.; Dolman, A. J.; Ammann, C.; Arneth, A.; Cescatti, A.; Dragoni, D.; Gash, J. H. C.; Gianelle, D.; Gioli, B.; Kiely, G.; Knohl, A.; Law, B. E.; Lund, M.; Marcolla, B.; van der Molen, M. K.; Montagnani, L.; Moors, E.; Richardson, A. D.; Roupsard, O.; Verbeeck, H.; Wohlfahrt, G.

    2011-12-01

    Global vegetation models require the photosynthetic parameters maximum carboxylation capacity (Vcm) and quantum yield (α) to parameterize their plant functional types (PFTs). The purpose of this work is to determine how much the scaling of the parameters from leaf to ecosystem level through a seasonally varying leaf area index (LAI) explains the parameter variation within and between PFTs. Using Fluxnet data, we simulate a seasonally variable LAI_F for a large range of sites, comparable to the LAI_M derived from MODIS. There are discrepancies when LAI_F reaches zero levels and LAI_M still provides a small positive value. We find that temperature is the most common constraint for LAI_F in 55% of the simulations, while global radiation and vapor pressure deficit are the key constraints for 18% and 27% of the simulations, respectively; large differences in this forcing still exist when looking at specific PFTs. Despite these differences, the annual photosynthesis simulations are comparable when using LAI_F or LAI_M (r2 = 0.89). We further investigated the seasonal variation of ecosystem-scale parameters derived with LAI_F. Vcm has the largest seasonal variation. This holds for all vegetation types and climates. The parameter α is less variable. By including ecosystem-scale parameter seasonality we can explain a considerable part of the ecosystem-scale parameter variation between PFTs. The remaining unexplained leaf-scale PFT variation still needs further work, including elucidating the precise role of leaf and soil level nitrogen.

  15. Geostatistical Characteristic of Space -Time Variation in Underground Water Selected Quality Parameters in Klodzko Water Intake Area (SW Part of Poland)

    Science.gov (United States)

    Namysłowska-Wilczyńska, Barbara

    2016-04-01

    These data were subjected to spatial analyses using statistical and geostatistical methods. The evaluation of basic statistics of the investigated quality parameters, including their histograms of distributions, scatter diagrams between these parameters and also correlation coefficients r, is presented in this article. The directional semivariogram function and the ordinary (block) kriging procedure were used to build the 3D geostatistical model. The geostatistical parameters of the theoretical models of directional semivariograms of the studied water quality parameters, calculated along the time interval and along the wells depth (taking into account the terrain elevation), were used in the ordinary (block) kriging estimation. The obtained results of estimation, i.e. block diagrams, made it possible to determine the levels of increased values Z* of the studied underground water quality parameters. The analysis of the variability in the selected quality parameters of underground water for the analyzed area in the Klodzko water intake was enriched by referring to the results of geostatistical studies carried out for underground water quality parameters and also for treated water in the Klodzko water supply system (iron Fe, manganese Mn, ammonium ion NH4+ contents), discussed in earlier works. Spatial and time variation in the latter-mentioned parameters was analysed on the basis of the data (2007÷2011, 2008÷2011). Generally, the behaviour of the underground water quality parameters has been found to vary in space and time. Thanks to the spatial analyses of the variation in the quality parameters in the Kłodzko underground water intake area, some regularities (trends) in the variation in water quality have been identified.

  16. Regularizing Unpredictable Variation: Evidence from a Natural Language Setting

    Science.gov (United States)

    Hendricks, Alison Eisel; Miller, Karen; Jackson, Carrie N.

    2018-01-01

    While previous sociolinguistic research has demonstrated that children faithfully acquire probabilistic input constrained by sociolinguistic and linguistic factors (e.g., gender and socioeconomic status), research suggests children regularize inconsistent input, i.e., probabilistic input that is not sociolinguistically constrained (e.g., Hudson Kam &…

  17. Haematology and Serum Biochemistry Parameters and Variations in the Eurasian Beaver (Castor fiber).

    Science.gov (United States)

    Girling, Simon J; Campbell-Palmer, Roisin; Pizzi, Romain; Fraser, Mary A; Cracknell, Jonathan; Arnemo, Jon; Rosell, Frank

    2015-01-01

    Haematology parameters (N = 24) and serum biochemistry parameters (N = 35) were determined for wild Eurasian beavers (Castor fiber) between 6 months and 12 years old. Of the population tested in this study, N = 18 Eurasian beavers were from Norway and N = 17 originated from Bavaria but now live extensively in a reserve in England. All blood samples were collected from beavers via the ventral tail vein. All beavers were chemically restrained using inhalant isoflurane in 100% oxygen prior to blood sampling. Results were determined for haematological and serum biochemical parameters for the species and were compared between the two different populations, with differences in means estimated and significant differences noted. Standard blood parameters for the Eurasian beaver were determined and their ranges characterised using percentiles. Whilst the majority of blood parameters showed no significant variation between the two populations, haemoglobin, packed cell volume, mean cell haemoglobin and white blood cell counts showed significantly greater values (p < 0.05) in one population; no significant differences were noted between male and female beavers or between sexually immature and sexually mature beavers in the animals sampled. With Eurasian beaver reintroduction encouraged by legislation throughout Europe, knowledge of baseline blood values for the species and any variations therein is essential when assessing their health and welfare and the success or failure of any reintroduction program. This is the first study to produce baseline blood values and their variations for the Eurasian beaver.

  18. Physicochemical parameters and seasonal variation of coastal water from Balochistan coast, Pakistan

    Directory of Open Access Journals (Sweden)

    Naeema Elahi

    2015-03-01

    Full Text Available Objective: To determine common physico-chemical parameters of coastal water. Methods: Physicochemical properties of water were determined according to the standards of the American Public Health Association. Generally, all these parameters showed only small variation between stations. The variations in physico-chemical parameters like salinity, temperature, dissolved oxygen and pH at Gwadar (coastal water of Balochistan) were recorded. Results: The air temperature of the coastal water of Balochistan during 2004 and 2006 varied from 25 ºC to 37 ºC, water temperature ranged from 15.00 ºC to 33.00 ºC, pH ranged from 7.08 to 8.95, salinity ranged from 37.4‰ to 41.3‰ and dissolved oxygen ranged from 5.32 to 8.67 mg/L. Conclusions: Results showed that these parameters of the Balochistan coast of Pakistan are not dangerous for marine habitat, and the use of these parameters in monitoring programs to assess ecosystem health has the potential to inform the general public and decision-makers about the state of the coastal ecosystems. To save this vitally important habitat, government agencies and scientists should work together with proper attention.

  19. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    Full Text Available Interest in using fractional mask operators based on fractional calculus has grown in image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of the fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR, using different types of images. Experiments based on visual perception and peak signal-to-noise ratio values show that the improvements in the denoising process are competitive with the standard Gaussian filter and Wiener filter.

  20. Inter-temporal variation in the travel time and travel cost parameters of transport models

    OpenAIRE

    Börjesson, Maria

    2012-01-01

    The parameters for travel time and travel cost are central in travel demand forecasting models. Since valuation of infrastructure investments requires prediction of travel demand for future evaluation years, inter-temporal variation of the travel time and travel cost parameters is a key issue in forecasting. Using two identical stated choice experiments conducted among Swedish drivers with an interval of 13 years, 1994 and 2007, this paper estimates the inter-temporal variation in travel time...

  1. Directional Total Generalized Variation Regularization for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Dong, Yiqiu

    2017-01-01

    this regularizer for directional images is highly advantageous. In order to estimate directions in impulse noise corrupted images, which is much more challenging compared to Gaussian noise corrupted images, we introduce a new Fourier transform-based method. Numerical experiments show that this method is more...

  5. High frequency variations of Earth Rotation Parameters from GPS and GLONASS observations.

    Science.gov (United States)

    Wei, Erhu; Jin, Shuanggen; Wan, Lihua; Liu, Wenjie; Yang, Yali; Hu, Zhenghong

    2015-01-28

    The Earth's rotation undergoes changes under the influence of geophysical factors, such as the Earth's surface fluid mass redistribution of the atmosphere, ocean and hydrology. However, variations of Earth Rotation Parameters (ERP) are still not well understood, particularly the short-period variations (e.g., diurnal and semi-diurnal variations) and their causes. In this paper, hourly time series of Earth Rotation Parameters are estimated using Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), and combined GPS and GLONASS data collected from nearly 80 sites from 1 November 2012 to 10 April 2014. These new observations combining different satellite systems can help to decorrelate orbit biases and ERP, which improves the estimation of ERP. The high frequency variations of ERP are analyzed using a de-trending method. The maxima of the total diurnal and semidiurnal variations are within one milli-arcsecond (mas) in Polar Motion (PM) and 0.5 milliseconds (ms) in UT1-UTC. The semidiurnal and diurnal variations are mainly related to the ocean tides. Furthermore, the impacts of the satellite orbit and the time interval used to determine ERP on the amplitudes of the tidal terms are analyzed. We obtain some small terms that are not described in the ocean tide model of the IERS Conventions 2010, which may be caused by the strategies and models we used or by signal noise as well as artifacts. In addition, there are also small differences in the amplitudes between our results and the IERS conventions. This might be a result of other geophysical excitations, such as the high-frequency variations in atmospheric angular momentum (AAM) and hydrological angular momentum (HAM), which need more detailed analysis with more geophysical data in the future.
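After de-trending, diurnal (24 h) and semidiurnal (12 h) amplitudes can be estimated from an hourly series by linear least squares on sine/cosine pairs at the two frequencies. The sketch below uses a synthetic series with invented amplitudes, not ERP data; real tidal analysis would fit many constituent frequencies.

```python
import numpy as np

# Estimate diurnal and semidiurnal amplitudes of an hourly series by
# least-squares fitting of sine/cosine pairs at the 24 h and 12 h periods.
hours = np.arange(0, 24 * 30, 1.0)          # 30 days of hourly epochs
rng = np.random.default_rng(5)
true_diurnal, true_semi = 0.8, 0.3          # synthetic amplitudes (e.g., mas)
series = (true_diurnal * np.sin(2 * np.pi * hours / 24.0)
          + true_semi * np.cos(2 * np.pi * hours / 12.0)
          + 0.05 * rng.normal(size=hours.size))

# Design matrix with a sine/cosine pair at each tidal frequency.
M = np.column_stack([np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24),
                     np.sin(2 * np.pi * hours / 12), np.cos(2 * np.pi * hours / 12)])
coef, *_ = np.linalg.lstsq(M, series, rcond=None)
amp_diurnal = np.hypot(coef[0], coef[1])    # amplitude from the in/quadrature pair
amp_semidiurnal = np.hypot(coef[2], coef[3])
```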

  6. ANALYSIS THE DIURNAL VARIATIONS ON SELECTED PHYSICAL AND PHYSIOLOGICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    A. MAHABOOBJAN

    2010-12-01

Full Text Available The purpose of the study was to analyze the diurnal variations in selected physical and physiological parameters, such as speed, explosive power, resting heart rate and breath holding time, among college students. To achieve the purpose of this study, a total of twenty players (n=20) from Government Arts College, Salem were selected as subjects. To study the diurnal variation of the players on the selected physiological and performance variables, the data were collected four times a day, at four-hour intervals from 6.00 to 18.00 hours, with time of day treated as a categorical variable. One-way repeated measures analysis of variance (ANOVA) was used to analyze the data. If the obtained F-ratio was significant, Scheffé's post-hoc test was used to find out the significant difference, if any, among the paired means. The level of significance was fixed at the .05 level. It was concluded that both physical and physiological parameters differed significantly with reference to the change of temperature during the day.

  7. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  8. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  9. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  10. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  11. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  12. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most notable among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it appears to suffer from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA

  13. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

Rayo Schiappacasse, Lautaro Jerónimo; Hoteit, Ibrahim

    2012-01-01

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected
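The record above describes selecting the Tikhonov regularization parameter by classical leave-one-out cross-validation. The following self-contained sketch illustrates that underlying idea on a tiny synthetic ridge-regression problem; it is not the EnKF implementation from the paper, and the helper names (`solve2`, `ridge_fit`, `loocv_score`) and all data are made up for this example.

```python
import random

def solve2(a, b):
    # Solve a 2x2 system a x = b by Cramer's rule (enough for the demo).
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def ridge_fit(rows, ys, lam):
    # Tikhonov-regularized normal equations: (A^T A + lam I) x = A^T y.
    ata = [[sum(r[i] * r[j] for r in rows) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(2)]
    return solve2(ata, aty)

def loocv_score(rows, ys, lam):
    # Classical leave-one-out: refit without each observation in turn
    # and accumulate the squared error of predicting the held-out one.
    err = 0.0
    for k in range(len(rows)):
        tr = [r for i, r in enumerate(rows) if i != k]
        ty = [y for i, y in enumerate(ys) if i != k]
        x = ridge_fit(tr, ty, lam)
        pred = sum(c * v for c, v in zip(x, rows[k]))
        err += (ys[k] - pred) ** 2
    return err / len(rows)

random.seed(0)
true_x = [1.0, -2.0]
rows = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(30)]
ys = [sum(c * v for c, v in zip(true_x, r)) + random.gauss(0, 0.3)
      for r in rows]

grid = [10 ** (e / 2) for e in range(-8, 5)]  # candidate lambdas
best = min(grid, key=lambda lam: loocv_score(rows, ys, lam))
print("selected lambda:", best)
```

The grid search over candidate lambdas is the simplest possible selection rule; the EnKF setting of the paper wraps the same criterion around the filter update rather than a plain least-squares fit.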

  14. Comparison of clinical parameters and environmental noise levels between regular surgery and piezosurgery for extraction of impacted third molars.

    Science.gov (United States)

    Chang, Hao-Hueng; Lee, Ming-Shu; Hsu, You-Chyun; Tsai, Shang-Jye; Lin, Chun-Pin

    2015-10-01

Impacted third molars can be extracted by regular surgery or piezosurgery. The aim of this study was to compare clinical parameters and device-produced noise levels between regular surgery and piezosurgery for the extraction of impacted third molars. Twenty patients (18 women and 2 men, 17-29 years of age) with bilaterally symmetrical impacted mandibular or maxillary third molars of the same level were included in this randomized crossover clinical trial. The 40 impacted third molars were divided into a control group (n = 20), in which the third molar was extracted by regular surgery using a high-speed handpiece and an elevator, and an experimental group (n = 20), in which the third molar was extracted by piezosurgery using a high-speed handpiece and a piezotome. The clinical parameters were evaluated by a self-reported questionnaire. The noise levels produced by the high-speed handpiece and the piezotome were measured and compared between the experimental and control groups. Patients in the experimental group had a better feeling about tooth extraction and force delivery during extraction and less facial swelling than patients in the control group. However, there were no significant differences between the control and experimental groups in noise-related disturbance, extraction period, degree of facial swelling, pain score, pain duration, or the noise levels produced by the devices under different circumstances during tooth extraction. The piezosurgery device produced noise levels similar to or lower than those of the high-speed drilling device. However, piezosurgery provides the advantage of increased patient comfort during extraction of impacted third molars. Copyright © 2014. Published by Elsevier B.V.

  15. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
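The idea of capping the non-convexity so that the overall objective stays convex can be illustrated in the scalar case. The sketch below is a standard construction associated with this line of work, not necessarily the thesis's exact method: with the minimax-concave (MC) penalty phi(x; a), the scalar denoising objective 0.5*(y - x)**2 + lam*phi(x; a) remains convex whenever a <= 1/lam, and its minimizer is the firm threshold, which, unlike the soft threshold, does not under-estimate large values.

```python
import math

def soft_threshold(y, lam):
    # Proximal operator of the l1 norm: shrinks every value by lam.
    return math.copysign(max(abs(y) - lam, 0.0), y)

def firm_threshold(y, lam, a):
    # Minimizer of 0.5*(y - x)**2 + lam*phi(x; a), where phi is the
    # minimax-concave (MC) penalty with non-convexity parameter a.
    # The objective stays convex as long as a < 1/lam.
    assert a < 1.0 / lam, "non-convexity must be capped for convexity"
    ay = abs(y)
    if ay <= lam:
        return 0.0
    if ay >= 1.0 / a:
        return y          # large values pass through unshrunk
    return math.copysign((ay - lam) / (1.0 - a * lam), y)

lam, a = 1.0, 0.5  # a < 1/lam, so the scalar objective is convex
print(soft_threshold(4.0, lam))      # 3.0: l1 under-estimates
print(firm_threshold(4.0, lam, a))   # 4.0: unbiased for large values
```

The unique global minimizer can still be found reliably because the capped non-convexity keeps the objective convex, which is precisely the trade-off the abstract describes.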

  16. High Frequency Variations of Earth Rotation Parameters from GPS and GLONASS Observations

    Directory of Open Access Journals (Sweden)

    Erhu Wei

    2015-01-01

Full Text Available The Earth’s rotation undergoes changes under the influence of geophysical factors, such as Earth’s surface fluid mass redistribution of the atmosphere, ocean and hydrology. However, variations of Earth Rotation Parameters (ERP) are still not well understood, particularly the short-period variations (e.g., diurnal and semi-diurnal variations) and their causes. In this paper, the hourly time series of Earth Rotation Parameters are estimated using Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), and combined GPS and GLONASS data collected from nearly 80 sites from 1 November 2012 to 10 April 2014. Combining observations from different satellite systems helps to decorrelate orbit biases and ERP, which improves the estimation of ERP. The high frequency variations of ERP are analyzed using a de-trending method. The maxima of the total diurnal and semidiurnal variations are within one milli-arcsecond (mas) in Polar Motion (PM) and 0.5 milli-seconds (ms) in UT1-UTC. The semidiurnal and diurnal variations are mainly related to the ocean tides. Furthermore, the impacts of the satellite orbit and of the time interval used to determine ERP on the amplitudes of tidal terms are analyzed. We obtain some small terms that are not described in the ocean tide model of the IERS Conventions 2010, which may be caused by the strategies and models we used, or by signal noise and artifacts. In addition, there are also small differences in the amplitudes between our results and the IERS Conventions. These might result from other geophysical excitations, such as high-frequency variations in atmospheric angular momentum (AAM) and hydrological angular momentum (HAM), which need more detailed analysis with more geophysical data in the future.

  17. Stochastic differential equations as a tool to regularize the parameter estimation problem for continuous time dynamical systems given discrete time measurements.

    Science.gov (United States)

    Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats

    2014-05-01

In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the prediction to stay close to the data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator, and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered, and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  19. Multi-parameter variational calculations for the (2+1)-dimensional U(1) lattice gauge theory and the XY model

    International Nuclear Information System (INIS)

    Heys, D.W.; Stump, D.R.

    1987-01-01

    Variational calculations are described that use multi-parameter trial wave functions for the U(1) lattice gauge theory in two space dimensions, and for the XY model. The trial functions are constructed as the exponential of a linear combination of states from the strong-coupling basis of the model, with the coefficients treated as variational parameters. The expectation of the hamiltonian is computed by the Monte Carlo method, using a reweighting technique to evaluate expectation values in finite patches of the parameter space. The trial function for the U(1) gauge theory involves six variational parameters, and its weak-coupling behaviour is in reasonable agreement with theoretical expectations. (orig.)

  20. Time Variations of the Radial Velocity of H2O Masers in the Semi-Regular Variable R Crt

    Science.gov (United States)

    Sudou, Hiroshi; Shiga, Motoki; Omodaka, Toshihiro; Nakai, Chihiro; Ueda, Kazuki; Takaba, Hiroshi

    2017-12-01

H2O maser emission at 22 GHz in the circumstellar envelope is one of the good tracers of the detailed physics and kinematics in the mass loss process of asymptotic giant branch stars. Long-term monitoring of an H2O maser spectrum with high time resolution enables us to clarify acceleration processes of the expanding shell in the stellar atmosphere. We monitored the H2O maser emission of the semi-regular variable R Crt with the Kagoshima 6-m telescope, and obtained a large data set of over 180 maser spectra over a period of 1.3 years with an observational span of a few days. Using an automatic peak detection method based on least-squares fitting, we exhaustively detected peaks as significant velocity components with the radial velocity on a 0.1 km s^{-1} scale. This analysis shows that the radial velocity of red-shifted and blue-shifted components exhibits a change between acceleration and deceleration on a time scale of a few hundred days. These velocity variations are likely to correlate with intensity variations, in particular during flaring states of the H2O masers. It seems reasonable to consider that the velocity variation of the maser source is caused by shock propagation in the envelope due to stellar pulsation. However, it is difficult to explain the relationship between the velocity variation and the intensity variation from shock propagation effects alone. We found that the time delay of the integrated maser intensity with respect to the optical light curve is about 150 days.

  1. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  2. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  3. Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals

    Science.gov (United States)

    Hopper, Christopher P.

    2016-10-01

    We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.

  4. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice, and how this number may depend on the image, remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels to allow the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
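The quantity the study correlates with the critical number of projections is the gradient sparsity: the number of non-zero finite differences in the image. A minimal sketch of how one might count it, on a synthetic piecewise-constant "image" made up for this example (`gradient_sparsity` is a name invented here, not from the paper):

```python
def gradient_sparsity(img, tol=1e-12):
    # Count non-zero horizontal and vertical finite differences,
    # i.e. the number of "edges" a TV regularizer has to pay for.
    nz = 0
    rows, cols = len(img), len(img[0])
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols and abs(img[i][j + 1] - img[i][j]) > tol:
                nz += 1
            if i + 1 < rows and abs(img[i + 1][j] - img[i][j]) > tol:
                nz += 1
    return nz

# A tiny piecewise-constant image: three flat phases, few boundaries.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 2, 2]]
print(gradient_sparsity(img))  # 6
```

On real data one would normalize this count by the number of pixels; the paper's finding is that the number of projections needed for satisfactory TV reconstruction grows almost linearly with this quantity.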

  5. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization, and that the relative regularized variance turns out to be 2/N, where N is the number of fields, thus being independent of the dimension D. Some illustrative examples are worked through. The issue of the stress tensor is also briefly addressed.

  6. Seasonal and spatial variation in broadleaf forest model parameters

    Science.gov (United States)

    Groenendijk, M.; van der Molen, M. K.; Dolman, A. J.

    2009-04-01

Process based, coupled ecosystem carbon, energy and water cycle models are used with the ultimate goal of projecting the effect of future climate change on the terrestrial carbon cycle. A typical dilemma in such exercises is how much detail the model must be given to describe the observations reasonably realistically while remaining general. We use a simple vegetation model (5PM) with five model parameters to study the variability of the parameters. These parameters are derived from the observed carbon and water fluxes in the FLUXNET database. For 15 broadleaf forests the model parameters were derived at different time resolutions. It appears that, in general for all forests, the correlation coefficient between observed and simulated carbon and water fluxes improves with a higher parameter time resolution. The quality of the simulations is thus always better when a higher time resolution is used. These results show that annual parameters are not capable of properly describing weather effects on ecosystem fluxes, and that a two-day time resolution yields the best results. A first indication of the climate constraints can be found in the seasonal variation of the covariance between Jm, which describes the maximum electron transport for photosynthesis, and climate variables. A general seasonality we found is that during winter the covariance with all climate variables is zero. Jm increases rapidly after initial spring warming, resulting in a large covariance with air temperature and global radiation. During summer Jm is less variable, but co-varies negatively with air temperature and vapour pressure deficit and positively with soil water content. A temperature response appears during spring and autumn for broadleaf forests. This shows that an annual model parameter cannot be representative for the entire year, and relations with mean annual temperature are not possible. During summer the photosynthesis parameters are constrained by water availability, soil water content and

  7. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
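The L-curve criterion that the study approximates at scale can be shown on a toy problem. The sketch below is not the Lanczos-bidiagonalization implementation of the paper: it uses an invented diagonal (SVD-reduced) problem, where the Tikhonov solution has a closed form per singular component, traces the log-log curve of residual norm versus solution norm over a lambda grid, and picks the corner as the point of maximum Menger curvature.

```python
import math

# Decaying singular values stand in for an ill-conditioned inverse
# problem; the synthetic "data" carry a small additive noise floor.
s = [10 ** (-i / 2) for i in range(12)]
b = [sv + 0.01 for sv in s]

def lcurve_point(lam):
    # Tikhonov solution in the SVD basis, plus the two L-curve axes:
    # (log residual norm, log solution norm).
    x = [sv * bi / (sv * sv + lam) for sv, bi in zip(s, b)]
    res = math.sqrt(sum((bi - sv * xi) ** 2
                        for sv, bi, xi in zip(s, b, x)))
    sol = math.sqrt(sum(xi * xi for xi in x))
    return (math.log(res), math.log(sol))

def menger(p, q, r):
    # Menger curvature (4 * area / product of side lengths) of the
    # circle through three consecutive points of the L-curve.
    cross = abs((q[0] - p[0]) * (r[1] - p[1])
                - (r[0] - p[0]) * (q[1] - p[1]))
    return 2.0 * cross / (math.dist(p, q) * math.dist(q, r)
                          * math.dist(p, r))

lams = [10 ** (e / 3) for e in range(-30, 1)]
pts = [lcurve_point(l) for l in lams]
k = max(range(1, len(pts) - 1),
        key=lambda i: menger(pts[i - 1], pts[i], pts[i + 1]))
print("corner lambda ~", lams[k])
```

The full L-curve requires one regularized solve per grid point, which is exactly the cost the study avoids by projecting the GRACE-sized problem onto a small bidiagonal one first.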

  8. Haematology and Serum Biochemistry Parameters and Variations in the Eurasian Beaver (Castor fiber).

    Directory of Open Access Journals (Sweden)

    Simon J Girling

Full Text Available Haematology parameters (N = 24) and serum biochemistry parameters (N = 35) were determined for wild Eurasian beavers (Castor fiber) between 6 months and 12 years old. Of the population tested in this study, N = 18 Eurasian beavers were from Norway and N = 17 originated from Bavaria but now live extensively in a reserve in England. All blood samples were collected from beavers via the ventral tail vein. All beavers were chemically restrained using inhalant isoflurane in 100% oxygen prior to blood sampling. Results were determined for haematological and serum biochemical parameters for the species and were compared between the two populations, with differences in means estimated and significant differences noted. Standard blood parameters for the Eurasian beaver were determined and their ranges characterised using percentiles. Whilst the majority of blood parameters showed no significant variation between the two populations, haemoglobin, packed cell volume, mean cell haemoglobin and white blood cell counts showed significantly greater values (p<0.01) in the Bavarian-origin population than in the Norwegian; neutrophil counts, alpha 2 globulins, cholesterol, sodium:potassium ratios and phosphorus levels showed significantly greater values (p<0.05) in the Bavarian versus the Norwegian population; and potassium, bile acids, gamma globulins, urea, creatinine and total calcium values showed significantly greater values (p<0.05) in the Norwegian versus the Bavarian relict population. No significant differences were noted between male and female beavers or between sexually immature (<3 years old) and sexually mature (≥3 years old) beavers in the animals sampled. With Eurasian beaver reintroduction encouraged by legislation throughout Europe, knowledge of baseline blood values for the species and any variations therein is essential when assessing their health and welfare and the success or failure of any reintroduction program. This is the first study to produce

  9. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs

  10. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    Full Text Available This paper proposes a presentation of unfolding regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Platonic and Archimedean polyhedra will be modeled and unfolded using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.

  11. Long-period and short-period variations of ionospheric parameters studied from complex observations performed on Cuba

    Energy Technology Data Exchange (ETDEWEB)

    Laso, B; Lobachevskii, L A; Potapova, N I; Freizon, I A; Shapiro, B S

    1980-09-01

    Cuban data from 1978 are used to study long-period (i.e., diurnal) variations of Doppler shift on a 3000 km path at frequencies of 10 and 15 MHz; these variations are related to variations of parameters on the ionospheric path. Short-period variations were also studied on the basis of Doppler shift data and vertical sounding data in the 0.000111-0.00113 Hz frequency range. The relation between the observed variations and internal gravity waves is discussed.

  12. Identification of a set of macroscopic elastic parameters in a 3D woven composite: Uncertainty analysis and regularization

    KAUST Repository

    Gras, Renaud

    2015-03-01

    Performing a single but complex mechanical test on small structures rather than on coupons to probe multiple strain states/histories for identification purposes is nowadays possible thanks to full-field measurements. The aim is to identify many parameters thanks to the heterogeneity of mechanical fields. Such an approach is followed herein, focusing on a blade root made of 3D woven composite. The performed test, which is analyzed using global Digital Image Correlation (DIC), provides heterogeneous kinematic fields due to the particular shape of the sample. This displacement field is further processed to identify the four in-plane material parameters of the macroscopic equivalent orthotropic behavior. The key point, which may limit the ability to draw reliable conclusions, is the presence of acquisition noise in the original images that has to be tracked along the DIC/identification processing to provide uncertainties on the identified parameters. A further regularization based on a priori knowledge is finally introduced to compensate for possible lack of experimental information needed for completing the identification.

  13. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi

    2011-09-16

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
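
For the linear case, the discrepancy principle mentioned above can be sketched compactly: choose the Tikhonov parameter α so that the residual matches τ·δ for a known noise level δ. This is an illustrative example with an invented operator and invented constants, not the paper's nonlinear setting.

```python
import numpy as np

# Sketch of the discrepancy principle for a linear Tikhonov problem A x = b.
# All matrices and constants here are invented for illustration.

rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)                      # mildly decaying singular values
A = U @ np.diag(s) @ V.T

x_true = np.ones(n)
noise = rng.standard_normal(n)
delta = 1e-3                                 # known noise level ||b - A x_true||
b = A @ x_true + delta * noise / np.linalg.norm(noise)

def tikhonov(alpha):
    # x_alpha = argmin ||A x - b||^2 + alpha ||x||^2
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def residual(alpha):
    return np.linalg.norm(A @ tikhonov(alpha) - b)

# residual(alpha) increases monotonically with alpha, so bisect on log(alpha)
# until the residual matches tau * delta (tau > 1 is a safety factor).
tau = 1.5
lo, hi = 1e-14, 1e2
for _ in range(200):
    mid = np.sqrt(lo * hi)
    if residual(mid) < tau * delta:
        lo = mid
    else:
        hi = mid
alpha_dp = np.sqrt(lo * hi)
x_dp = tikhonov(alpha_dp)
```

With α chosen this way, the solution neither over-fits the noise (residual well below δ) nor over-smooths (residual far above δ).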

  14. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes, such as Kehagias-Sfetsos black holes and regular black holes with the Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points and the equation-of-state parameter on the phase space. (orig.)

  15. Determination of heat transfer parameters by use of finite integral transform and experimental data for regular geometric shapes

    Science.gov (United States)

    Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad

    2017-12-01

    This article offers a study on the estimation of heat transfer parameters (heat transfer coefficient and thermal diffusivity) using analytical solutions and experimental data for regular geometric shapes (such as an infinite slab, an infinite cylinder, and a sphere). Analytical solutions are widely used to determine these parameters experimentally. Here, the method of the Finite Integral Transform (FIT) was used to solve the governing differential equations. The temperature change at the centerline location of the regular shapes was recorded to determine both the thermal diffusivity and the heat transfer coefficient. Aluminum and brass were used for testing. Experiments were performed for different conditions, such as in a highly agitated water medium (T = 52 °C) and in an air medium (T = 25 °C). Then, with the known slope of the temperature ratio vs. time curve and the thickness of the slab or the radius of the cylindrical or spherical materials, the thermal diffusivity value and the heat transfer coefficient may be determined. According to the method presented in this study, the estimated thermal diffusivities of aluminum and brass are 8.395 × 10-5 and 3.42 × 10-5 m2/s for a slab, 8.367 × 10-5 and 3.41 × 10-5 m2/s for a cylindrical rod, and 8.385 × 10-5 and 3.40 × 10-5 m2/s for a spherical shape, respectively. The results showed close agreement between the values estimated here and those already published in the literature. The TAAD% is 0.42 and 0.39 for the thermal diffusivity of aluminum and brass, respectively.
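
The slope method described above can be sketched for the slab. Assuming negligible surface resistance (Bi → ∞), the one-term centerline solution is θ(t) ≈ C1·exp(−λ1²·α·t/L²) with λ1 = π/2 and C1 = 4/π, so α = −slope·L²/λ1². The geometry and diffusivity values below are synthetic, chosen near the aluminum value quoted above.

```python
import numpy as np

# Hedged sketch of the slope method: the slope of ln(theta) vs time gives
# the thermal diffusivity once the geometry is known.  Synthetic data only.

alpha_true = 8.4e-5        # m^2/s, roughly the aluminum value quoted above
L = 0.05                   # half-thickness of the slab, m (invented)
lam1, C1 = np.pi / 2, 4 / np.pi

t = np.linspace(5.0, 60.0, 20)                     # s, past the early transient
theta = C1 * np.exp(-lam1**2 * alpha_true * t / L**2)  # centerline temp. ratio

slope, _ = np.polyfit(t, np.log(theta), 1)         # linear fit of ln(theta) vs t
alpha_est = -slope * L**2 / lam1**2                # recovered diffusivity
```

In practice the early data points (where more than one series term matters) would be discarded before fitting, as done here by starting at t = 5 s.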

  16. Statistical study of interplanetary condition effect on geomagnetic storms: 2. Variations of parameters

    Science.gov (United States)

    Yermolaev, Yu. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Yu.

    2011-02-01

    We investigate the behavior of the mean values of the parameters of the solar wind and the interplanetary magnetic field (IMF) and their absolute and relative variations during magnetic storms generated by various types of the solar wind. In this paper, which is a continuation of paper [1], we have analyzed, on the basis of the OMNI data archive for the period 1976-2000, 798 geomagnetic storms with Dst ≤ -50 nT and their interplanetary sources: corotating interaction regions (CIR), compression regions (Sheath) before interplanetary CMEs, magnetic clouds (MC), “piston” Ejecta, and sources of uncertain type. For the analysis the double superposed epoch analysis method was used, in which the instants of the magnetic storm onset and of the minimum of the Dst index were taken as reference times. It is shown that the set of interplanetary sources of magnetic storms can be subdivided into two basic groups according to their slowly and rapidly varying characteristics: (1) ICME (MC and Ejecta) and (2) CIR and Sheath. The mean values and the absolute and relative variations of all parameters in MC and Ejecta appeared to be at or below the mean level (the mean values of the electric field Ey and of the Bz component of the IMF are higher in absolute value), while in CIR and Sheath they are higher than the mean level. High values of the relative density variation are observed in MC. At the same time, high values of the relative variations of the velocity, the Bz component, and the IMF magnitude are observed in Sheath and CIR. No noticeable distinctions in the relationships between the considered parameters for moderate and strong magnetic storms were observed.

  17. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

    Rayo Schiappacasse, Lautaro Jerónimo

    2012-09-19

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error. Some ideas about the implementation, suitability and conceptual interest of the method are discussed. Finally, what will be called the data cross-validation regularized EnKF (dCVr-EnKF) is implemented in a 2D 2-phase synthetic oil reservoir experiment and the results analyzed.
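
Outside the EnKF setting, the leave-one-out idea can be sketched for a plain Tikhonov (ridge) problem, where the LOO residuals have the closed form e_i = r_i/(1 − H_ii) with hat matrix H = A(AᵀA + αI)⁻¹Aᵀ. The data, grid, and names below are synthetic; this only illustrates the underlying selection principle, not the dCVr-EnKF itself.

```python
import numpy as np

# Leave-one-out cross-validation for the Tikhonov/ridge parameter, using the
# closed-form LOO residuals instead of refitting m times.  Synthetic data.

rng = np.random.default_rng(1)
m, n = 80, 15
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.1 * rng.standard_normal(m)

def loo_press(alpha):
    # Hat matrix H = A (A^T A + alpha I)^-1 A^T; its diagonal entries are < 1.
    H = A @ np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T)
    r = b - H @ b                       # ordinary residuals
    e = r / (1.0 - np.diag(H))          # leave-one-out residuals
    return np.sum(e**2)                 # PRESS: predictive error sum of squares

alphas = np.logspace(-4, 3, 50)
scores = [loo_press(a) for a in alphas]
alpha_cv = alphas[int(np.argmin(scores))]   # parameter minimizing predictive error
```

As in the abstract, the parameter is chosen to minimize predictive (held-out) error rather than the in-sample residual, which would always favor α → 0.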

  18. Discharge regularity in the turtle posterior crista: comparisons between experiment and theory.

    Science.gov (United States)

    Goldberg, Jay M; Holt, Joseph C

    2013-12-01

    Intra-axonal recordings were made from bouton fibers near their termination in the turtle posterior crista. Spike discharge, miniature excitatory postsynaptic potentials (mEPSPs), and afterhyperpolarizations (AHPs) were monitored during resting activity in both regularly and irregularly discharging units. Quantal size (qsize) and quantal rate (qrate) were estimated by shot-noise theory. Theoretically, the ratio, σV/(dμV/dt), between synaptic noise (σV) and the slope of the mean voltage trajectory (dμV/dt) near threshold crossing should determine discharge regularity. AHPs are deeper and more prolonged in regular units; as a result, dμV/dt is larger, the more regular the discharge. The qsize is larger and qrate smaller in irregular units; these oppositely directed trends lead to little variation in σV with discharge regularity. Of the two variables, dμV/dt is much more influential than the nearly constant σV in determining regularity. Sinusoidal canal-duct indentations at 0.3 Hz led to modulations in spike discharge and synaptic voltage. Gain, the ratio between the amplitudes of the two modulations, and phase leads re indentation of both modulations are larger in irregular units. Gain variations parallel the sensitivity of the postsynaptic spike encoder, the set of conductances that converts synaptic input into spike discharge. Phase variations reflect both synaptic inputs to the encoder and postsynaptic processes. Experimental data were interpreted using a stochastic integrate-and-fire model. Advantages of an irregular discharge include an enhanced encoder gain and the prevention of nonlinear phase locking. Regular and irregular units are more efficient in the encoding of low- and high-frequency head rotations, respectively.
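
The σV/(dμV/dt) argument can be caricatured with a perfect (non-leaky) stochastic integrate-and-fire model: with the same noise σ, a larger drift μ (a steeper mean trajectory at threshold) produces a lower coefficient of variation (CV) of interspike intervals. This is a toy sketch with invented parameters, not the paper's model, which includes AHPs and a more elaborate encoder.

```python
import numpy as np

# Toy perfect integrate-and-fire neuron: drift mu, noise sigma, threshold
# theta, instant reset.  For this model CV ~ sigma / sqrt(theta * mu), so
# a steeper mean trajectory (larger mu) at fixed noise fires more regularly.

def isi_cv(mu, sigma, n_spikes=400, dt=1e-3, theta=1.0, seed=5):
    rng = np.random.default_rng(seed)
    v, t, isis = 0.0, 0.0, []
    while len(isis) < n_spikes:
        v += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if v >= theta:
            isis.append(t)
            v, t = 0.0, 0.0            # reset; no refractory period in this sketch
    isis = np.array(isis)
    return isis.std() / isis.mean()    # coefficient of variation of ISIs

cv_regular = isi_cv(mu=20.0, sigma=1.0)    # steep trajectory near threshold
cv_irregular = isi_cv(mu=2.0, sigma=1.0)   # shallow trajectory, same noise
```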

  19. Summation of Divergent Series and Zeldovich's Regularization Method

    International Nuclear Information System (INIS)

    Mur, V.D.; Pozdnyakov, S.G.; Popruzhenko, S.V.; Popov, V.S.

    2005-01-01

    A method for summing divergent series, including perturbation-theory series, is considered. This method is an analog of Zeldovich's regularization method in the theory of quasistationary states. It is shown that the method in question is more powerful than the well-known Abel and Borel methods, but that it is compatible with them (that is, it leads to the same value for the sum of a series). The constraints on the parameter domain that arise upon the removal of the regularization of divergent integrals by this method are discussed. The dynamical Stark shifts and widths of loosely bound s states in the field of a circularly polarized electromagnetic wave are calculated at various values of the Keldysh adiabaticity parameter and the multiquantum parameter
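
As a toy illustration of one of the classical methods the paper compares against, Abel summation assigns the divergent series Σ (−1)ⁿ the value lim_{x→1⁻} Σ (−1)ⁿ xⁿ = lim 1/(1+x) = 1/2; the partial sums below approach that limit numerically. (This does not reproduce Zeldovich's method itself.)

```python
import numpy as np

# Abel summation of the divergent series 1 - 1 + 1 - ...: evaluate the
# power series at x < 1 and let x approach 1 from below.

def abel_sum(coeffs, x):
    # Truncated Abel sum: sum_n coeffs[n] * x**n for 0 <= x < 1
    n = np.arange(len(coeffs))
    return np.sum(coeffs * x**n)

coeffs = np.array([(-1.0)**n for n in range(20000)])
values = [abel_sum(coeffs, x) for x in (0.9, 0.99, 0.999)]
# values approaches 1/2 as x -> 1-
```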

  20. Adaptive Regularization of Neural Networks Using Conjugate Gradient

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.

  1. Variations of interplanetary parameters and cosmic-ray intensities

    International Nuclear Information System (INIS)

    Geranios, A.

    1980-01-01

    Observations of cosmic-ray intensity depressions by earth-bound neutron monitors and measurements of interplanetary parameter variations aboard geocentric satellites in the period January 1972-July 1974 are analysed and grouped according to the correlation among them. From this analysis of about 30 cases it emerged that the majority of the depressions correlate with the average propagation speed of interplanetary shocks as well as with the amplitude of the interplanetary magnetic field after the eruption of a solar flare. About one fourth of the events correlate with corotating fast solar wind streams. As the recovery time of the shock-related depressions depends strongly on the heliographic longitude of the causative solar flare, it seems that the cosmic-ray modulation region has a corotating-like feature. (Auth.)

  2. Spatial variations of order parameter around Kondo impurity for T<=Tsub(c)

    International Nuclear Information System (INIS)

    Yoksan, S.

    1980-04-01

    Analytic expressions for the spatial variations of the order parameter around a Kondo impurity are obtained. The oscillatory contribution due to the impurity scattering is calculated using the t matrix of Matsuura which conveniently yields the general results below Tsub(c). Differences between our values and those of Schlottmann are reported. (author)

  3. Use of regularization method in the determination of ring parameters and orbit correction

    International Nuclear Information System (INIS)

    Tang, Y.N.; Krinsky, S.

    1993-01-01

    We discuss applying the regularization method of Tikhonov to the solution of inverse problems arising in accelerator operations. This approach has been successfully used for orbit correction on the NSLS storage rings, and is presently being applied to the determination of betatron functions and phases from the measured response matrix. Inverse problems for differential equations often lead to sets of integral equations of the first kind, which are ill-conditioned. The regularization method is used to combat this ill-posedness.
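
The orbit-correction application lends itself to a small numerical sketch. The following is not the NSLS implementation; it is a hypothetical response-matrix example (invented sizes and values) showing how a Tikhonov term damps the unphysically large corrector kicks that an ill-conditioned response matrix would otherwise produce.

```python
import numpy as np

# Schematic Tikhonov-regularized orbit correction: given a measured orbit x
# and a corrector response matrix R, solve
#     min_k ||R k + x||^2 + alpha ||k||^2
# so that the correction kicks k stay bounded even when R is ill-conditioned.

rng = np.random.default_rng(2)
n_bpm, n_corr = 40, 30
R = rng.standard_normal((n_bpm, n_corr))
R[:, -1] = R[:, -2] + 1e-6 * rng.standard_normal(n_bpm)  # nearly dependent correctors

x = rng.standard_normal(n_bpm)             # measured orbit distortion at the BPMs

def kicks(alpha):
    return np.linalg.solve(R.T @ R + alpha * np.eye(n_corr), -R.T @ x)

k_ls = kicks(1e-12)                        # (nearly) unregularized least squares
k_tik = kicks(1e-2)                        # Tikhonov-regularized solution
```

The regularized kicks are far smaller in norm while still reducing the orbit distortion, which is exactly why regularization is attractive for operational orbit correction.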

  4. Hypocaloric diet and regular moderate aerobic exercise is an effective strategy to reduce anthropometric parameters and oxidative stress in obese patients.

    Science.gov (United States)

    Gutierrez-Lopez, Liliana; Garcia-Sanchez, Jose Ruben; Rincon-Viquez, Maria de Jesus; Lara-Padilla, Eleazar; Sierra-Vargas, Martha P; Olivares-Corichi, Ivonne M

    2012-01-01

    Studies show that diet and exercise are important in the treatment of obesity. The aim of this study was to determine whether additional regular moderate aerobic exercise during treatment with a hypocaloric diet has a beneficial effect on oxidative stress and molecular damage in the obese patient. Oxidative stress of 16 normal-weight (NW) and 32 obese class 1 (O1) subjects (BMI 30-34.9 kg/m2) was established by biomarkers of oxidative stress in plasma. Recombinant human insulin was incubated with blood from NW or O1 subjects, and the molecular damage to the hormone was analyzed. Two treatment groups, hypocaloric diet (HD) and hypocaloric diet plus regular moderate aerobic exercise (HDMAE), were formed, and their effects in obese subjects were analyzed. The data showed the presence of oxidative stress in O1 subjects. Molecular damage and polymerization of insulin were observed more frequently in the blood from O1 subjects. The treatment of O1 subjects with HD decreased the anthropometric parameters as well as oxidative stress and molecular damage, which were more effectively prevented by the treatment with HDMAE. HD and HDMAE treatments decreased anthropometric parameters, oxidative stress, and molecular damage in O1 subjects. Copyright © 2012 S. Karger GmbH, Freiburg.

  5. Superfluid to normal phase transition and extreme regularity of superdeformed bands

    CERN Document Server

    Pavlichenkov, I M

    2002-01-01

    The exact semiclassical expression for the second inertial parameter B is derived for the superfluid and normal phases. Interpolation between these limiting values shows that the function B(I) changes sign at the spin I_c, which is critical for a rotational spectrum. The quantity B turns out to be a sensitive measure of the change in static pairing correlations. The superfluid-to-normal transition reveals itself in a specific variation of the ratio B/A versus spin I, with a plateau characteristic of the normal phase. This dependence is found to be universal for normal deformed and superdeformed nuclei. The long plateau with a small value B/A ≈ A^(-8/3) explains the extreme regularity of superdeformed bands.

  6. Comparison of hemodynamic and nutritional parameters between older persons practicing regular physical activity, nonsmokers and ex-smokers

    Directory of Open Access Journals (Sweden)

    Rebelatto Marcelo N

    2010-11-01

    Full Text Available Abstract Background A sedentary lifestyle combined with smoking contributes to the development of a set of chronic diseases and to accelerating the course of aging. The aim of the study was to compare the hemodynamic and nutritional parameters of elderly persons practicing regular physical activity, nonsmokers and ex-smokers. Methods The sample comprised 40 elderly people practicing regular physical activity for 12 months, divided into a Nonsmoker Group and an Ex-smoker Group. During one year four trimestrial evaluations were performed, in which hemodynamic (blood pressure, heart rate - HR - and VO2) and nutritional status (measured by body mass index) data were collected. The paired t-test and t-test for independent samples were applied in the intragroup and intergroup analyses, respectively. Results The mean age of the groups was 68.35 years, with the majority of individuals in the Nonsmoker Group being women (n = 15) and the Ex-smoker Group composed of men (n = 11). In both groups the variables studied were within the limits of normality for the age. HR was diminished in the Nonsmoker Group in comparison with the Ex-smoker Group (p = 0.045) between the first and last evaluations. In the intragroup analysis it was verified that, after one year of exercise, there was a significant reduction in HR in the Nonsmoker Group (p = 0.002) and a significant increase in VO2 in the Ex-smoker Group (p = 0.010). There were no significant differences between the hemodynamic and nutritional conditions of the two groups. Conclusion In elderly persons practicing regular physical activity, the studied variables were maintained over the course of a year, and there was no association with the history of smoking, except for HR and VO2.

  7. Summation of divergent series and Zel'dovich's regularization method

    International Nuclear Information System (INIS)

    Mur, V.D.; Pozdnyakov, S.G.; Popruzhenko, S.V.; Popov, V.S.

    2005-01-01

    The method of summation of divergent series, including perturbation-theory series, which is an analog of Zel'dovich's regularization procedure in the theory of quasistationary states, is considered. It is shown that this method is more powerful than the well-known Abel and Borel methods, but compatible with them (i.e., it gives the same value for the sum of the series). The restrictions on the range of parameters which appear after removal of the regularization of integrals by this method are discussed. The dynamical Stark shifts and widths of weakly bound s states in the field of a circularly polarized electromagnetic wave are calculated at different values of the Keldysh adiabaticity parameter and the multiquantum parameter

  8. Variations of some parameters of enzyme induction in chemical workers

    Energy Technology Data Exchange (ETDEWEB)

    Dolara, P. (Univ. of Florence, Italy); Lodovici, M.; Buffoni, F.; Buiatti, E.; Baccetti, S.; Ciofini, O.; Bavazzano, P.; Barchielli, S.; Vannucci, V.

    1982-01-01

    Several parameters related to mono-oxygenase activity were followed in a population of chemical workers and controls. Workers exposed to toluene and xylene had a significant increase of urinary glucaric acid, which was correlated with hippuric acid excretion. On the other hand, workers exposed to pigments showed a marked increase of antipyrine half-life. A dose-related decrease of liver N-demethylase was induced in rats by the administration of a mixture of three of the pigments in use in the plant. Serum gamma-glutamyltranspeptidase was decreased in the workers exposed to pigments, but this variation was not statistically significant. The exposure to different chemicals in the workplace seemed to induce a complicated variation of mono-oxygenase levels, some enzymes being inhibited and others induced in the same group of workers. The sensitivity of these workers to toxic effects of chemicals, carcinogenic compounds and drugs seems to differ markedly from that of the control population.

  9. Effect of camera temperature variations on stereo-digital image correlation measurements

    KAUST Repository

    Pan, Bing; Shi, Wentao; Lubineau, Gilles

    2015-11-25

    In laboratory and especially non-laboratory stereo-digital image correlation (stereo-DIC) applications, the extrinsic and intrinsic parameters of the cameras used in the system may change slightly due to the camera warm-up effect and possible variations in ambient temperature. Because these camera parameters are generally calibrated once prior to measurements and considered to be unaltered during the whole measurement period, the changes in these parameters unavoidably induce displacement/strain errors. In this study, the effect of temperature variations on stereo-DIC measurements is investigated experimentally. To quantify the errors associated with camera or ambient temperature changes, surface displacements and strains of a stationary optical quartz glass plate with near-zero thermal expansion were continuously measured using a regular stereo-DIC system. The results confirm that (1) temperature variations in the cameras and ambient environment have a considerable influence on the displacements and strains measured by stereo-DIC due to the slightly altered extrinsic and intrinsic camera parameters; and (2) the corresponding displacement and strain errors correlate with temperature changes. For the specific stereo-DIC configuration used in this work, the temperature-induced strain errors were estimated to be approximately 30–50 με/°C. To minimize the adverse effect of camera temperature variations on stereo-DIC measurements, two simple but effective solutions are suggested.


  11. On Gap Functions for Quasi-Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Kouichi Taji

    2008-01-01

    Full Text Available For variational inequalities, various merit functions, such as the gap function, the regularized gap function, the D-gap function and so on, have been proposed. These functions lead to equivalent optimization formulations and are used in optimization-based methods for solving variational inequalities. In this paper, we extend the regularized gap function and the D-gap function to quasi-variational inequalities, which generalize variational inequalities and are used to formulate generalized equilibrium problems. These extensions are shown to formulate equivalent optimization problems for quasi-variational inequalities and are shown to be continuous and directionally differentiable.

  12. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2017-11-02

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.
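
The paper's specific perturbation construction is not reproduced here, but the general idea of regularizing by perturbing the ill-conditioned model matrix itself can be sketched. In this hypothetical example, lifting small singular values to a floor `eps` serves as one bounded-norm perturbation (spectral norm ≤ eps) that improves the singular-value structure before solving.

```python
import numpy as np

# Sketch: regularize an ill-conditioned linear model by adding a bounded-norm
# perturbation that lifts the smallest singular values.  Synthetic data only.

rng = np.random.default_rng(3)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)                  # severely decaying spectrum
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

eps = 1e-3                                 # perturbation budget
s_lifted = np.maximum(s, eps)              # |s_lifted - s| <= eps elementwise
A_pert = U @ np.diag(s_lifted) @ V.T       # so ||A_pert - A||_2 <= eps

x_naive = np.linalg.solve(A, b)            # noise explodes on tiny singular values
x_pert = np.linalg.solve(A_pert, b)        # perturbed, better-conditioned solve
```

The perturbed solve trades a small bias on the weak singular directions for a large reduction in noise amplification, the same trade-off the abstract describes.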


  14. Comparison of earthquake source parameters and interseismic plate coupling variations in global subduction zones (Invited)

    Science.gov (United States)

    Bilek, S. L.; Moyer, P. A.; Stankova-Pursley, J.

    2010-12-01

    Geodetically determined interseismic coupling variations have been found in subduction zones worldwide. These coupling variations have been linked to heterogeneities in interplate fault frictional conditions. These connections to fault friction imply that observed coupling variations are also important in influencing details in earthquake rupture behavior. Because of the wealth of newly available geodetic models along many subduction zones, it is now possible to examine detailed variations in coupling and compare to seismicity characteristics. Here we use a large catalog of earthquake source time functions and slip models for moderate to large magnitude earthquakes to explore these connections, comparing earthquake source parameters with available models of geodetic coupling along segments of the Japan, Kurile, Kamchatka, Peru, Chile, and Alaska subduction zones. In addition, we use published geodetic results along the Costa Rica margin to compare with source parameters of small magnitude earthquakes recorded with an onshore-offshore network of seismometers. For the moderate to large magnitude earthquakes, preliminary results suggest a complex relationship between earthquake parameters and estimates of strongly and weakly coupled segments of the plate interface. For example, along the Kamchatka subduction zone, these earthquakes occur primarily along the transition between strong and weak coupling, with significant heterogeneity in the pattern of moment scaled duration with respect to the coupling estimates. The longest scaled duration event in this catalog occurred in a region of strong coupling. Earthquakes along the transition between strong and weakly coupled exhibited the most complexity in the source time functions. Use of small magnitude (0.5 earthquake spectra, with higher corner frequencies and higher mean apparent stress for earthquakes that occur along the Osa Peninsula relative to the Nicoya Peninsula, mimicking the along-strike variations in

  15. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

    In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only establishes mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the true statistics are unknown. We benchmark the performance of our proposed consistent estimator against a classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).

  16. Earth rotation parameter and variation during 2005–2010 solved with LAGEOS SLR data

    Directory of Open Access Journals (Sweden)

    Yi Shen

    2015-01-01

    Full Text Available Time series of Earth rotation parameters were estimated from range data measured by the satellite laser ranging technique to the Laser Geodynamics Satellites (LAGEOS-1/2) from 2005 to 2010 using the dynamic method. Compared with the Earth orientation parameter series EOP C04, released by the International Earth Rotation and Reference Systems Service, the root mean square errors for the measured X and Y of polar motion (PM) and length of day (LOD) were 0.24 and 0.25 milliarcseconds (mas), and 0.068 milliseconds (ms), respectively. Compared with the ILRSA EOP series, those for the X and Y of PM and LOD were 0.27 and 0.30 mas, and 0.054 ms, respectively. The time series were analyzed using the wavelet transformation and least squares methods. Wavelet analysis showed obvious seasonal and interannual variations of LOD, and both annual and Chandler variations of PM; however, the annual variation could not be distinguished from the Chandler variation because the two frequencies are very close. The trends and periodic variations of LOD and PM were obtained in the least squares sense: PM showed semi-annual, annual, and Chandler periods, and semi-annual, annual, and quasi-biennial cycles were detected for LOD. The trend rates of PM in the X and Y directions were 3.17 and −1.60 mas per year, respectively, and the North Pole moved toward 26.8°E relative to the crust during 2005–2010. The trend rate of the LOD change was 0.028 ms per year.
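The trend-plus-periodic decomposition described above can be sketched as an ordinary least-squares fit with a design matrix holding a linear trend and sinusoids at fixed periods (semi-annual, annual, Chandler). This is a generic sketch, not the authors' actual estimation code; time units and period values are illustrative.

```python
import numpy as np

def fit_trend_and_periods(t, x, periods):
    """Least-squares fit of a linear trend plus sinusoids at fixed periods.
    t and the entries of `periods` share the same time unit (e.g. days)."""
    cols = [np.ones_like(t), t]
    for T in periods:
        w = 2 * np.pi / T
        cols.append(np.cos(w * t))
        cols.append(np.sin(w * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef, A @ coef  # coefficients and the fitted series
```

For a PM or LOD series sampled in days, `periods = (182.6, 365.25, 433.0)` would target the semi-annual, annual, and Chandler terms; `coef[1]` is the linear trend rate.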

  17. Period Variations for the Cepheid VZ Cyg

    Science.gov (United States)

    Sirorattanakul, Krittanon; Engle, Scott; Pepper, Joshua; Wells, Mark; Laney, Clifton D.; Rodriguez, Joseph E.; Stassun, Keivan G.

    2017-12-01

    The Cepheid Period-Luminosity law is a key rung on the extragalactic distance ladder. However, numerous Cepheids are known to undergo period variations. Monitoring, refining, and understanding these period variations allows us to better determine the parameters of the Cepheids themselves and of the instability strip in which they reside, and to test models of stellar evolution. VZ Cyg, a classical Cepheid pulsating at ˜4.864 days, has been observed for over 100 years. Combining data from literature observations, the Kilodegree Extremely Little Telescope (KELT) transit survey, and new targeted observations with the Robotically Controlled Telescope (RCT) at Kitt Peak, we find a period change rate of dP/dt = -0.0642 ± 0.0018 s yr-1. However, when only the recent observations are examined, we find a much higher period change rate of dP/dt = -0.0923 ± 0.0110 s yr-1. This higher rate could be due to an apparent long-term (P ≈ 26.5 years) cyclic period variation. The possible interpretations of this single Cepheid’s complex period variations underscore both the need to regularly monitor pulsating variables and the important benefits that photometric surveys such as KELT can have on the field. Further monitoring of this interesting example of Cepheid variability is recommended to confirm and better understand the possible cyclic period variations. Further, Cepheid timing analyses are necessary to fully understand their current behaviors and parameters, as well as their evolutionary histories.
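A standard way to estimate a Cepheid's period change rate, consistent with the dP/dt values quoted above, is a quadratic fit to the O-C residuals of the times of maximum light versus cycle number. The sketch below is a generic illustration of that textbook model, not the paper's actual pipeline; function names are assumptions.

```python
import numpy as np

SECONDS_PER_YEAR = 365.25 * 86400.0

def period_change_rate(times, period):
    """Estimate dP/dt from times of maximum light (in days).

    For a linearly changing period, the time of cycle N is approximately
    t_N = t0 + P*N + 0.5*P*(dP/dt)*N**2, so the quadratic coefficient c2
    of an O-C fit versus cycle number gives dP/dt = 2*c2/P (dimensionless),
    converted here to seconds per year. Assumes the accumulated drift stays
    below half a cycle so cycle counts can be recovered by rounding.
    """
    N = np.round((times - times[0]) / period)    # cycle numbers
    oc = times - (times[0] + period * N)         # O-C residuals (days)
    c2 = np.polyfit(N, oc, 2)[0]
    return (2.0 * c2 / period) * SECONDS_PER_YEAR
```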

  18. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Full Text Available Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the total variation algorithm has attracted abundant attention due to its ability to handle large piecewise and discontinuous conductivity distributions. In industrial process tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with less than 1% error, and a 4D image for 3D velocity profiling shows an error of 4%.
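The cross-correlation step behind this kind of velocity measurement can be sketched as follows: estimate the transit-time lag between signals at two axially separated planes (or voxels) and divide the known spacing by it. This is a generic sketch under assumed names and parameters, not the paper's voxel-voxel implementation.

```python
import numpy as np

def flow_velocity(upstream, downstream, dt, spacing):
    """Estimate flow velocity from the cross-correlation lag between
    signals at two axially separated measurement positions."""
    u = upstream - upstream.mean()
    d = downstream - downstream.mean()
    xc = np.correlate(d, u, mode='full')
    lag = np.argmax(xc) - (len(u) - 1)   # samples by which downstream trails upstream
    return spacing / (lag * dt)
```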

  19. Rotating Hayward’s regular black hole as particle accelerator

    International Nuclear Information System (INIS)

    Amir, Muhammed; Ghosh, Sushant G.

    2015-01-01

    Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward’s regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g>0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M=1, there exist a critical a_E and r_H^E corresponding to a regular extremal black hole with degenerate horizons, and a_E decreases whereas r_H^E increases with increasing g, while a<a_E corresponds to a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward’s regular black hole for different g and demonstrate numerically that E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For a non-extremal case, there always exists a finite upper bound for E_CM, which increases with the deviation parameter g.
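The center-of-mass energy referred to above follows the standard BSW construction; for two colliding particles of equal rest mass m_0 with four-velocities u_1 and u_2 it reads:

```latex
% BSW center-of-mass energy of two colliding particles of rest mass m_0
% with four-velocities u_1, u_2:
E_{\mathrm{CM}} = \sqrt{2}\, m_0 \sqrt{1 - g_{\mu\nu}\, u_1^{\mu} u_2^{\nu}}
```

Near the horizon of an extremal black hole the inner product g_{μν} u_1^μ u_2^ν can diverge for suitably tuned particle angular momenta, which is the origin of the unbounded E_CM.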

  20. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
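A discrete version of the L-curve idea used above can be sketched as follows: sweep the regularization parameter, trace the curve (log residual norm, log solution norm), and take the point of maximum curvature as the corner. This is a generic direct-solve Tikhonov sketch, not the authors' envelope-guided conjugate-gradient scheme; function names and the curvature guard are assumptions.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov solution of min ||Ax - b||^2 + alpha^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, alphas):
    """Return the alpha in `alphas` closest to the L-curve corner, i.e. the
    point of maximum finite-difference curvature of the parametric curve
    (log residual norm, log solution norm)."""
    rho = np.array([np.log(np.linalg.norm(A @ tikhonov(A, b, a) - b)) for a in alphas])
    eta = np.array([np.log(np.linalg.norm(tikhonov(A, b, a))) for a in alphas])
    dr, de = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(dr), np.gradient(de)
    # guard the denominator against flat stretches of the curve
    kappa = np.abs(dr * d2e - de * d2r) / np.maximum((dr**2 + de**2) ** 1.5, 1e-12)
    return alphas[np.argmax(kappa)]
```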

  1. Variation of physicochemical parameters during a composting process

    International Nuclear Information System (INIS)

    Faria C, D.M.; Ballesteros, M.I.; Bendeck, M.

    1999-01-01

    Two composting processes were carried out; they lasted for about 165 days. In one of the processes the decomposition of the material was performed by microorganisms only (direct composting), and in the other one by microorganisms and earthworms (Eisenia foetida) (indirect composting). The first one was carried out in a composting system called camas, and the indirect one was carried out in its initial phase in a system of panelas, after which the wastes were transferred to a cama. The materials were treated in both processes with lime, ammonium nitrate and microorganisms. Periodical samples were taken from different places of the pile and a temperature control was made weekly. The following physicochemical parameters were analyzed in each sample: humidity, color, pH (soil:water ratios of 1:5 and 1:10), ash, organic matter, CIC, contents of carbon and nitrogen, and C/N ratio. In the aqueous extract, the C/N ratio and percentage of hydrosolubles were analyzed. A germination assay was also made, measuring the percentage of garden cress (Lepidium sativum) seeds that germinated in the aqueous extract. The variation of the parameters in each process allowed us to establish that the greatest changes in the material happened in the initial phases of the process (thermophilic and mesophilic phases) and that the presence of microorganisms was the limiting factor in the dynamics of the process; on the other hand, the earthworm addition did not accelerate the mineralization of organic matter. The results also showed that color determination is not an effective parameter for evaluating the degree of maturity of the compost. Other parameters such as temperature and germination percentage can be used as routine tests to determine the process rate. Determination of CIC, ash and hydrosolubles content is recommended to evaluate the optimal maturity degree of the material. Changes are proposed, such as reducing the composting time to a maximum of 100 days and to

  2. New trends in parameter identification for mathematical models

    CERN Document Server

    Leitão, Antonio; Zubelli, Jorge

    2018-01-01

    The proceedings volume contains 16 contributions to the IMPA conference “New Trends in Parameter Identification for Mathematical Models”, Rio de Janeiro, Oct 30 – Nov 3, 2017, integrating the “Chemnitz Symposium on Inverse Problems on Tour”. This conference is part of the “Thematic Program on Parameter Identification in Mathematical Models” organized at IMPA in October and November 2017. One goal is to foster scientific collaboration between mathematicians and engineers from the Brazilian, European and Asian communities. Main topics are iterative and variational regularization methods in Hilbert and Banach spaces for the stable approximate solution of ill-posed inverse problems, novel methods for parameter identification in partial differential equations, problems of tomography, solution of coupled conduction-radiation problems at high temperatures, and the statistical solution of inverse problems with applications in physics.

  3. Regular extra curricular sports practice does not prevent moderate or severe variations in self-esteem or trait anxiety in early adolescents.

    Science.gov (United States)

    Binsinger, Caroline; Laure, Patrick; Ambard, Marie-France

    2006-01-01

    trait anxiety among young adolescents. This activity seems to protect girls from severe variations of self-esteem. Boys do not seem to be protected from moderate or severe variations, either of self-esteem or of trait anxiety, by a regular extracurricular sport practice.

  4. Variations and Regularities in the Hemispheric Distributions in Sunspot Groups of Various Classes

    Science.gov (United States)

    Gao, Peng-Xin

    2018-05-01

    The present study investigates the variations and regularities in the distributions in sunspot groups (SGs) of various classes in the northern and southern hemispheres from Solar Cycles (SCs) 12 to 23. Here, we use the separation scheme that was introduced by Gao, Li, and Li ( Solar Phys. 292, 124, 2017), which is based on A/U ( A is the corrected area of the SG, and U is the corrected umbral area of the SG), in order to separate SGs into simple SGs (A/U ≤ 4.5) and complex SGs (A/U > 6.2). The time series of Greenwich photoheliographic results from 1875 to 1976 (corresponding to complete SCs 12 - 20) and Debrecen photoheliographic data during the period 1974 - 2015 (corresponding to complete SCs 21 - 23) are used to show the distributions of simple and complex SGs in the northern and southern hemispheres. The main results we obtain are reported as follows: i) the larger of the maximum annual simple SG numbers in the two hemispheres and the larger of the maximum annual complex SG numbers in the two hemispheres occur in different hemispheres during SCs 12, 14, 18, and 19; ii) the relative changing trends of two curves - cumulative SG numbers in the northern and southern hemispheres - for simple SGs are different from those for complex SGs during SCs 12, 14, 18, and 21; and iii) there are discrepancies between the dominant hemispheres of simple and complex SGs for SCs 12, 14, 18, and 21.

  5. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2017-01-01

    This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well-acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
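The regularized Tyler estimator discussed above is commonly computed by a fixed-point iteration of the form sketched below. This is a generic implementation with trace normalization; the stopping rule and normalization convention are assumptions, not necessarily those used in the paper.

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    """Regularized Tyler estimator via fixed-point iteration.

    X: (n, p) secondary data samples; rho in (0, 1] is the regularization
    (shrinkage toward identity) parameter."""
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # per-sample quadratic forms x_i^T Sigma^{-1} x_i
        q = np.einsum('ij,jk,ik->i', X, inv, X)
        new = (1 - rho) * (p / n) * (X / q[:, None]).T @ X + rho * np.eye(p)
        new *= p / np.trace(new)   # fix the trace to remove the scale ambiguity
        if np.linalg.norm(new - sigma) < tol:
            return new
        sigma = new
    return sigma
```

The rho*I term keeps every eigenvalue above a positive floor, which is exactly the conditioning property the abstract appeals to.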

  6. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla

    2017-10-25

    This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well-acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.

  7. Nonlinear Variation of Parameters Formula for Impulsive Differential Equations with Initial Time Difference and Application

    Directory of Open Access Journals (Sweden)

    Peiguang Wang

    2014-01-01

    Full Text Available This paper establishes variation of parameters formula for impulsive differential equations with initial time difference. As an application, one of the results is used to investigate stability properties of solutions.

  8. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.

  9. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
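The ridge-type remedy adopted here can be illustrated in a much simpler setting by plain ridge regression on collinear predictors. This sketch is not the PLSc algorithm itself, only a demonstration of how a ridge penalty stabilizes coefficient estimates under multicollinearity.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^{-1} X'y; lam = 0 reduces to OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

With two nearly identical predictors, the OLS system is close to singular and the coefficients are wildly unstable, whereas the ridge estimate splits the shared signal evenly between the two columns.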

  10. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  11. A novel approach of ensuring layout regularity correct by construction in advanced technologies

    Science.gov (United States)

    Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic

    2017-03-01

    In advanced technology nodes, layout regularity has become a mandatory prerequisite to create robust designs less sensitive to variations in manufacturing process in order to improve yield and minimizing electrical variability. In this paper we describe a method for designing regular full custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. Regularity Index of a layout is the direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up-to 5nm reduction in PV band.

  12. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

    The amount of data that is measured, transmitted/received, and stored has dramatically increased in recent years; today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well known structures include sparsity, low-rankness, and block sparsity, with applications ranging over machine learning, medical imaging, signal processing, social networks and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function, which gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, including ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT) that has recently been used to precisely predict performance errors.
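The tuning objective described above can be illustrated empirically: in simulation, where the true signal is known, one can sweep the regularization parameter of ridge regression and locate the MSE-minimizing value. This is a toy sketch of the objective, not the CGMT analysis itself; names and grids are assumptions.

```python
import numpy as np

def ridge_mse_sweep(A, y, x_true, lams):
    """Empirical MSE of ridge solutions over a grid of regularization
    parameters (possible only in simulation, where x_true is known);
    returns the MSE-minimizing parameter and the MSE curve."""
    p = A.shape[1]
    mse = []
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)
        mse.append(np.mean((x - x_true) ** 2))
    mse = np.array(mse)
    return lams[np.argmin(mse)], mse
```

For an ill-conditioned operator the MSE curve is U-shaped: too little regularization amplifies noise, too much biases the solution toward zero, and the optimum sits in between.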

  13. Change regularity of water quality parameters in leakage flow conditions and their relationship with iron release.

    Science.gov (United States)

    Liu, Jingqing; Shentu, Huabin; Chen, Huanyu; Ye, Ping; Xu, Bing; Zhang, Yifu; Bastani, Hamid; Peng, Hongxi; Chen, Lei; Zhang, Tuqiao

    2017-11-01

    The long-term stagnation in metal water supply pipes, usually caused by intermittent consumption patterns, can cause significant iron release and water quality deterioration, especially at the terminus of pipelines. Another common phenomenon at the terminus of pipelines is leakage, which is considered helpful in that it allows seepage of the low-quality drinking water resulting from long-term stagnation. In this study, the effect of laminar flow on alleviating water quality deterioration under different leakage conditions was investigated, and the potential thresholds of the flow rate, which can affect the iron release process, were discussed. Based on a galvanized pipe and ductile cast iron pipe pilot platform established at the terminus of pipelines, this research was carried out by setting a series of leakage rate gradients to analyze the influence of different leakage flow rates on iron release, as well as the relationship with chemical and biological parameters. The results showed that the water quality parameters were clearly influenced by the change in flow velocity. Water quality gradually improved with an increase in flow velocity, but its pattern of change differed across flow rates (p < 0.05). In the water distribution system, when the bulk water was at the critical laminar flow velocity, the concentration of total iron and the quantity and rate of total iron release remained in a relatively ideal and safe situation. Copyright © 2017. Published by Elsevier Ltd.

  14. Seismo-Geochemical Variations in SW Taiwan: Multi-Parameter Automatic Gas Monitoring Results

    Science.gov (United States)

    Yang, T. F.; Fu, C.-C.; Walia, V.; Chen, C.-H.; Chyi, L. L.; Liu, T.-K.; Song, S.-R.; Lee, M.; Lin, C.-W.; Lin, C.-C.

    2006-04-01

    Gas variations of many mud volcanoes and hot springs distributed along the tectonic sutures in southwestern Taiwan are considered to be sensitive to earthquake activity. Therefore, a multi-parameter automatic gas station was built on the bank of one of the largest mud pools in an active fault zone of southwestern Taiwan for continuous monitoring of CO2, CH4, N2 and H2O, the major constituents of its bubbling gases. During the year-round monitoring from October 2001 to October 2002, the gas composition of the mud pool, especially CH4 and CO2, showed significant variations. Taking the CO2/CH4 ratio as the main indicator, anomalous variations can be recognized from a few days to a few weeks before earthquakes and correlate well with events of local magnitude >4.0 and local intensity >2. It is concluded that the gas composition in the area is sensitive to the local crustal stress/strain and is worth monitoring in real time for seismo-geochemical precursors.
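One simple way to flag anomalous variations in an indicator such as the CO2/CH4 ratio is a trailing-window z-score. The window length and threshold below are illustrative assumptions, not the monitoring station's actual criterion.

```python
import numpy as np

def ratio_anomalies(co2, ch4, window=30, z_thresh=3.0):
    """Flag samples where the CO2/CH4 ratio deviates from its trailing
    baseline by more than z_thresh standard deviations."""
    r = np.asarray(co2, float) / np.asarray(ch4, float)
    flags = np.zeros(len(r), dtype=bool)
    for i in range(window, len(r)):
        base = r[i - window:i]
        mu, sd = base.mean(), base.std()
        if sd > 0 and abs(r[i] - mu) > z_thresh * sd:
            flags[i] = True
    return flags
```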

  15. Quantitative Evaluation of Temporal Regularizers in Compressed Sensing Dynamic Contrast Enhanced MRI of the Breast

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2017-01-01

    Full Text Available Purpose. Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) is used in cancer imaging to probe tumor vascular properties. Compressed sensing (CS) theory makes it possible to recover MR images from randomly undersampled k-space data using nonlinear recovery schemes. The purpose of this paper is to quantitatively evaluate common temporal sparsity-promoting regularizers for CS DCE-MRI of the breast. Methods. We considered five ubiquitous temporal regularizers on 4.5x retrospectively undersampled Cartesian in vivo breast DCE-MRI data: Fourier transform (FT), Haar wavelet transform (WT), total variation (TV), second-order total generalized variation (TGVα2), and nuclear norm (NN). We measured the signal-to-error ratio (SER) of the reconstructed images, the error in tumor mean, and concordance correlation coefficients (CCCs) of the derived pharmacokinetic parameters Ktrans (volume transfer constant) and ve (extravascular-extracellular volume fraction) across a population of random sampling schemes. Results. NN produced the lowest image error (SER: 29.1), while TV/TGVα2 produced the most accurate Ktrans (CCC: 0.974/0.974) and ve (CCC: 0.916/0.917). WT produced the highest image error (SER: 21.8), while FT produced the least accurate Ktrans (CCC: 0.842) and ve (CCC: 0.799). Conclusion. TV/TGVα2 should be used as temporal constraints for CS DCE-MRI of the breast.
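The temporal total variation regularizer evaluated here penalizes frame-to-frame changes of the dynamic series. A minimal sketch of the penalty (time along the first axis), as a generic illustration rather than the paper's reconstruction code:

```python
import numpy as np

def temporal_tv(x):
    """Temporal total variation: sum of absolute frame-to-frame differences
    of a dynamic series x whose FIRST axis is time."""
    return np.abs(np.diff(x, axis=0)).sum()
```

A smooth or piecewise-constant enhancement curve has low temporal TV, while rapid oscillations (typically reconstruction noise) are penalized heavily, which is why TV-type constraints favor physiologically plausible time courses.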

  16. Acquiring variation in an artificial language: Children and adults are sensitive to socially conditioned linguistic variation.

    Science.gov (United States)

    Samara, Anna; Smith, Kenny; Brown, Helen; Wonnacott, Elizabeth

    2017-05-01

    Languages exhibit sociolinguistic variation, such that adult native speakers condition the usage of linguistic variants on social context, gender, and ethnicity, among other cues. While the existence of this kind of socially conditioned variation is well-established, less is known about how it is acquired. Studies of naturalistic language use by children provide various examples where children's production of sociolinguistic variants appears to be conditioned on similar factors to adults' production, but it is difficult to determine whether this reflects knowledge of sociolinguistic conditioning or systematic differences in the input to children from different social groups. Furthermore, artificial language learning experiments have shown that children have a tendency to eliminate variation, a process which could potentially work against their acquisition of sociolinguistic variation. The current study used a semi-artificial language learning paradigm to investigate learning of the sociolinguistic cue of speaker identity in 6-year-olds and adults. Participants were trained and tested on an artificial language where nouns were obligatorily followed by one of two meaningless particles and were produced by one of two speakers (one male, one female). Particle usage was conditioned deterministically on speaker identity (Experiment 1), probabilistically (Experiment 2), or not at all (Experiment 3). Participants were given tests of production and comprehension. In Experiments 1 and 2, both children and adults successfully acquired the speaker identity cue, although the effect was stronger for adults and in Experiment 1. In addition, in all three experiments, there was evidence of regularization in participants' productions, although the type of regularization differed with age: children showed regularization by boosting the frequency of one particle at the expense of the other, while adults regularized by conditioning particle usage on lexical items. Overall, results

  17. SU-F-R-32: Evaluation of MRI Acquisition Parameter Variations On Texture Feature Extraction Using ACR Phantom

    International Nuclear Information System (INIS)

    Xie, Y; Wang, J; Wang, C; Chang, Z

    2016-01-01

    Purpose: To investigate the sensitivity of classic texture features to variations of MRI acquisition parameters. Methods: This study was performed on the American College of Radiology (ACR) MRI Accreditation Program Phantom. MR imaging was acquired on a GE 750 3T scanner with XRM gradient, employing T1-weighted images (TR/TE=500/20ms) with the following parameters as the reference standard: number of signal averages (NEX) = 1, matrix size = 256×256, flip angle = 90°, slice thickness = 5mm. The effect of the acquisition parameters on texture features with and without non-uniformity correction was investigated, while all the other parameters were kept at the reference standard. Protocol parameters were set as follows: (a) NEX = 0.5, 2 and 4; (b) phase encoding steps = 128, 160 and 192; (c) matrix size = 128×128, 192×192 and 512×512. 32 classic texture features were generated using the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM) from each image data set. The normalized range ((maximum-minimum)/mean) was calculated to determine variation among the scans with different protocol parameters. Results: For different NEX, 31 out of 32 texture features' ranges are within 10%. For different phase encoding steps, 31 out of 32 texture features' ranges are within 10%. For different acquisition matrix sizes without non-uniformity correction, 14 out of 32 texture features' ranges are within 10%; with non-uniformity correction, 16 out of 32 texture features' ranges are within 10%. Conclusion: Initial results indicated that the texture features whose ranges stay within 10% are less sensitive to variations in T1-weighted MRI acquisition parameters. This suggests that certain texture features might be more reliable as potential biomarkers in MR quantitative image analysis.
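The stability metric used above, normalized range = (maximum − minimum)/mean across scans, together with the 10% screening rule, can be sketched directly. Feature names in the example are hypothetical.

```python
import numpy as np

def normalized_range(values):
    """Normalized range (max - min) / mean, scoring how much a texture
    feature varies across scans with different protocol settings."""
    v = np.asarray(values, float)
    return (v.max() - v.min()) / v.mean()

def robust_features(feature_table, cutoff=0.10):
    """Names of features whose normalized range across scans is <= cutoff."""
    return [name for name, vals in feature_table.items()
            if normalized_range(vals) <= cutoff]
```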

  18. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x 0 of nonlinear ill-posed problems F(x)=y 0 , where instead of y 0 noisy data y δ is an element of X with ||y δ -y 0 || ≤ δ are given and F:X→X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method solutions x α δ are obtained by solving the singularly perturbed nonlinear operator equation F(x)+α(x-x*)=y δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x*-x 0 we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
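A one-dimensional toy version of the scheme F(x)+α(x-x*)=y δ can be sketched as follows. The operator F(x) = x + x³ is an assumed stand-in (monotone and increasing on the real line, the scalar analogue of accretive), and α ~ √δ is one common a priori parameter choice; neither comes from the paper.

```python
import numpy as np

def F(x):
    """A toy monotone increasing operator (scalar analogue of accretive)."""
    return x + x**3

def lavrentiev_solve(y_delta, alpha, x_star, lo=-10.0, hi=10.0, iters=200):
    """Solve F(x) + alpha*(x - x_star) = y_delta by bisection;
    the left-hand side is strictly increasing, so the root is unique."""
    g = lambda x: F(x) + alpha * (x - x_star) - y_delta
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x0 = 1.0                      # exact solution
delta = 1e-3                  # noise level
y_delta = F(x0) + delta       # noisy data with |y_delta - y0| <= delta
alpha = delta ** 0.5          # an a priori choice, alpha ~ sqrt(delta)
x_reg = lavrentiev_solve(y_delta, alpha, x_star=0.0)
err = abs(x_reg - x0)
```

The regularized root stays close to x 0 for small δ, illustrating the stability the record describes; the convergence rates proved in the paper require the smoothness conditions on x*-x 0 stated there.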

  19. The variation of health effects based on the scenarios considering release parameters and meteorological data

    International Nuclear Information System (INIS)

    Jeong, Jong Tae; Ha, Jae Joo

    2000-01-01

The variation of health effects resulting from the severe accidents of the YGN 3 and 4 nuclear power plants was examined based on scenarios considering the release parameters and meteorological data. The release parameters and meteorological data considered in making the basic scenarios are release height, heat content, release time, warning time, wind speed, rainfall rate, and atmospheric stability class. Seasonal scenarios were also made in order to estimate the seasonal variation of health effects by considering the seasonal characteristics of Korea. According to the results, there are large differences in the consequence analysis from scenario to scenario, although an equal amount of radioactive materials is released to the atmosphere. Also, there are large differences in health effects from season to season due to the distinct seasonal characteristics of Korea. Therefore, it is necessary to consider seasonal characteristics in developing optimum emergency response strategies.

  20. Capped Lp approximations for the composite L0 regularization problem

    OpenAIRE

    Li, Qia; Zhang, Na

    2017-01-01

    The composite L0 function serves as a sparse regularizer in many applications. The algorithmic difficulty caused by the composite L0 regularization (the L0 norm composed with a linear mapping) is usually bypassed through approximating the L0 norm. We consider in this paper capped Lp approximations with $p>0$ for the composite L0 regularization problem. For each $p>0$, the capped Lp function converges to the L0 norm pointwisely as the approximation parameter tends to infinity. We point out tha...
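The pointwise convergence claimed in the record can be illustrated numerically. The specific capped form below, min(M·|t|^p, 1), is an assumption for illustration; the paper's exact parameterization may differ, but any capped Lp surrogate behaves the same way: as the approximation parameter M grows, the function tends entrywise to the L0 indicator 1{t ≠ 0}.

```python
import numpy as np

def capped_lp(t, p, M):
    """One common capped-Lp surrogate: min(M * |t|^p, 1)."""
    return np.minimum(M * np.abs(t) ** p, 1.0)

t = np.array([0.0, 1e-3, 0.5, 2.0])
l0 = (t != 0).astype(float)              # the L0 "norm" applied entrywise
gap_small_M = np.max(np.abs(capped_lp(t, p=0.5, M=10.0) - l0))
gap_large_M = np.max(np.abs(capped_lp(t, p=0.5, M=1e8) - l0))
```

For small M the surrogate undercounts small nonzero entries (large gap); for large M it matches the L0 indicator on every entry of this test vector.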

  1. Nycthemeral variations of 99Tcsup(m)-labelled heparin pharmacokinetic parameters

    International Nuclear Information System (INIS)

    Decousus, M.; Gremillet, E.; Decousus, H.; Champailler, A.; Houzard, C.; Perpoint, B.; Jaubert, J.

    1985-01-01

Six healthy volunteers received four i.v. boluses of 99 Tcsup(m)-heparin at 8.00, 14.00, 20.00 and 02.00 hours at seven-day intervals. Nine blood samples were taken covering a period of 2 h after administration. Simultaneously, urine was collected and diuresis noted. Plasma and urinary radioactivity were measured and standard pharmacokinetic parameters were calculated. Nycthemeral variations of these kinetic parameters were detected by means of distribution-free tests. Circadian rhythms were analysed by means of the cosinor method and the Gauss-Marquardt method. The mean raw values of the following parameters, apparent volume of distribution, plasmatic clearance and extra-renal metabolic clearance, increased significantly between 8.00 and 14.00 and decreased between 14.00 and 20.00. A circadian rhythm was found for the plasmatic clearance only. On the other hand, the elimination half-lives and the renal clearance were unaffected by the time of the injections. These results, obtained for low doses of 99 Tcsup(m)-heparin, suggest a circadian rhythm of the bio-availability of heparin in man. This fact should be taken into account for the use of 99 Tcsup(m)-heparin in the diagnosis of deep-vein thrombosis and for the safe adjustment of heparin dosages in the treatment of severe thromboembolism. (author)

  2. Seasonal and sex-specific variations in haematological parameters in 4 to 5.5-month-old infants in Guinea-Bissau, West Africa

    DEFF Research Database (Denmark)

    Bæk, Ole; Jensen, Kristoffer Jarlov; Andersen, Andreas

    2017-01-01

…were wider and generally higher than those from a US population of comparable age, but neutrophil levels were notably lower in Guinea-Bissau. Conclusions: The study indicated that eosinophil and platelet counts of infants were subject to seasonal variations. The reference ranges for haematological … values were comparable to other African populations and corroborated that neutropenia regularly occurs in African infants…

  3. Combined sphere-spheroid particle model for the retrieval of the microphysical aerosol parameters via regularized inversion of lidar data

    Science.gov (United States)

    Samaras, Stefanos; Böckmann, Christine; Nicolae, Doina

    2016-06-01

In this work we propose a two-step advancement of the Mie spherical-particle model accounting for particle non-sphericity. First, a naturally two-dimensional (2D) generalized model (GM) is made, which further triggers analogous 2D re-definitions of microphysical parameters. We consider a spheroidal-particle approach where the size distribution is additionally dependent on aspect ratio. Second, we incorporate the notion of a sphere-spheroid particle mixture (PM) weighted by a non-sphericity percentage. The efficiency of these two models is investigated by running synthetic data retrievals with two different regularization methods to account for the inherent instability of the inversion procedure. Our preliminary studies show that a retrieval with the PM model reduces the fitting errors, improves the microphysical parameter retrieval, and has at least the same efficiency as the GM. While the general trend of the initial size distributions is captured in our numerical experiments, the reconstructions are subject to artifacts. Finally, our approach is applied to a measurement case, yielding acceptable results.

  4. Nictemeral Variation of Physical Chemical and Biological Parameters of Ribeirão das Cruzes, Araraquara-SP

    Directory of Open Access Journals (Sweden)

    Vitor Rocha Santos

    2015-12-01

Full Text Available Improper use of water, its degradation and irregular distribution can affect the quantity and quality needed for future generations, as well as create conflicts of interest between the industrial, urban and agricultural segments. In this context, studies on the quality of water resources based on the analysis of the temporal variation of limnological parameters are of great importance. This study was conducted in the sub-basin of Ribeirão das Cruzes, which contributes around 30% of all the water captured and offered to the population for the water supply of the city of Araraquara (SP). The objective of this research was to compare the water quality of the river upstream and downstream of the effluent discharge from a local treatment station over a 24-hour period (diurnal cycle variation). Data collection, comprising a period of one day, was done in order to observe the dynamics of operation and the range of variation of the ecological processes in the studied system. The parameters analyzed showed significant variations between the sections upstream and downstream of the effluent discharge. The nictemeral analysis makes evident the influence of the effluents on the waters of Ribeirão das Cruzes, especially during certain periods of the day.

  5. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2016-01-01

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.
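To see why regularization matters in linear discrete ill-posed problems like those in this record, the following sketch compares a naive solve with plain Tikhonov regularization on a Hilbert matrix, a classic severely ill-conditioned test case. This is generic Tikhonov with a hand-picked parameter, not the perturbation-based method or the MSE-driven parameter selection the record proposes; the matrix, noise level, and λ are all illustrative assumptions.

```python
import numpy as np

n = 12
# Hilbert matrix: a classic, severely ill-conditioned test matrix
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
rng = np.random.default_rng(0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # slightly noisy right-hand side

x_naive = np.linalg.solve(A, b)                  # unregularized: noise is hugely amplified
lam = 1e-6                                       # regularization parameter (hand-picked here)
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_ls = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

Even a tiny data perturbation destroys the naive solution, while the Tikhonov-stabilized one stays near x_true; methods like the one in this record aim to choose and shape that stabilization automatically.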

  6. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-11-29

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  7. New method for minimizing regular functions with constraints on parameter region

    International Nuclear Information System (INIS)

    Kurbatov, V.S.; Silin, I.N.

    1993-01-01

A new method of function minimization has been developed and its main features are considered. It allows minimization of regular functions of arbitrary structure. For χ 2 -like functions the use of simplified second derivatives is possible, with control of correctness. Constraints of arbitrary structure can be used. Means for fast movement along multidimensional valleys are provided. The method was tested on real data on the K π2 decay from the experiment on rare K − decays. 6 refs

  8. "Plug-and-play" edge-preserving regularization

    DEFF Research Database (Denmark)

    Chen, Donghui; Kilmer, Misha E.; Hansen, Per Christian

    2014-01-01

In many inverse problems it is essential to use regularization methods that preserve edges in the reconstructions, and many reconstruction models have been developed for this task, such as the Total Variation (TV) approach. The associated algorithms are complex and require a good knowledge of large… cosine transform, hence the term "plug-and-play". We do not attempt to improve on TV reconstructions, but rather provide an easy-to-use approach to computing reconstructions with similar properties…

  9. Study of gain variation as a function of physical parameters of GEM foil

    CERN Document Server

    Das, Supriya

    2015-01-01

The ALICE experiment at the LHC has planned to upgrade the TPC by replacing the MWPC with GEM-based detecting elements to restrict the IBF to a tolerable value. However, the variation of the gain as a function of the physical parameters of industrially produced large-size GEM foils needs to be studied as a part of the QA procedure for the detector. The size of the electron avalanche, and consequently the gain, for GEM-based detectors depends on the electric field distribution inside the holes. The geometry of a hole plays an important role in defining the electric field inside it. In this work we have studied the variation of the gain as a function of the hole diameters using the Garfield++ simulation package.

  10. Stark widths regularities within spectral series of sodium isoelectronic sequence

    Science.gov (United States)

    Trklja, Nora; Tapalaga, Irinel; Dojčinović, Ivan P.; Purić, Jagoš

    2018-02-01

Stark widths within spectral series of the sodium isoelectronic sequence have been studied. This is a unique approach that includes both neutrals and ions. Two levels of the problem are considered: if the required atomic parameters are known, Stark widths can be calculated by one of the known methods (in the present paper a modified semiempirical formula has been used), but if there is a lack of parameters, regularities enable determination of Stark broadening data. In the framework of regularity research, the dependence of Stark broadening on environmental conditions and certain atomic parameters has been investigated. The aim of this work is to give a simple model, with a minimum of required parameters, which can be used for calculation of Stark broadening data for any chosen transitions within sodium-like emitters. The obtained relations were used for predictions of Stark widths for transitions that have not yet been measured or calculated. This system enables fast data processing by using the proposed theoretical model, and it provides quality control and verification of the obtained results.

  11. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    Science.gov (United States)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.

  12. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  13. Thin-shell wormholes from the regular Hayward black hole

    Energy Technology Data Exchange (ETDEWEB)

    Halilsoy, M.; Ovgun, A.; Mazharimousavi, S.H. [Eastern Mediterranean University, Department of Physics, Mersin 10 (Turkey)

    2014-03-15

We revisit the regular black hole found by Hayward in 4-dimensional static, spherically symmetric spacetime. To find a possible source for such a spacetime we resort to the nonlinear electrodynamics in general relativity. It is found that a magnetic field within this context gives rise to the regular Hayward black hole. By employing such a regular black hole we construct a thin-shell wormhole for the case of various equations of state on the shell. We abbreviate a general equation of state by p = ψ(σ) where p is the surface pressure which is a function of the mass density (σ). In particular, linear, logarithmic, Chaplygin, etc. forms of equations of state are considered. In each case we study the stability of the thin shell against linear perturbations. We plot the stability regions by tuning the parameters of the theory. It is observed that the role of the Hayward parameter is to make the TSW more stable. Perturbations of the throat with the small velocity condition are also studied. The matter of our TSWs, however, remains exotic. (orig.)

  14. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti

    2011-01-01

    operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a

  15. Uncertainty Quantification and Global Sensitivity Analysis of Subsurface Flow Parameters to Gravimetric Variations During Pumping Tests in Unconfined Aquifers

    Science.gov (United States)

    Maina, Fadji Zaouna; Guadagnini, Alberto

    2018-01-01

    We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic
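The first-order Sobol' indices used in the record can be estimated with a simple pick-and-freeze scheme. The two-input additive model below is a toy stand-in (not the Mishra and Neuman drawdown/gravity model), chosen because its index is known analytically, so the estimator can be checked.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
a, b = 3.0, 1.0                          # assumed sensitivities of the two inputs
model = lambda x1, x2: a * x1 + b * x2   # toy stand-in for the drawdown/gravity model

x1 = rng.standard_normal(N)              # "pick" a sample of the parameter of interest
x2 = rng.standard_normal(N)
x2_resampled = rng.standard_normal(N)    # "freeze" x1, resample the other input

yA = model(x1, x2)
yB = model(x1, x2_resampled)
S1 = np.cov(yA, yB)[0, 1] / yA.var()     # first-order Sobol index of x1
S1_exact = a**2 / (a**2 + b**2)          # analytic value, 0.9 for this additive model
```

The covariance between runs sharing x1 isolates the variance fraction x1 explains; the same construction, applied input by input to a flow model, yields the sensitivity rankings discussed in the record.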

  16. The Jump Set under Geometric Regularization. Part 1: Basic Technique and First-Order Denoising

    KAUST Repository

    Valkonen, Tuomo

    2015-01-01

© 2015 Society for Industrial and Applied Mathematics. Let u ∈ BV(Ω) solve the total variation (TV) denoising problem with L2-squared fidelity and data f. Caselles, Chambolle, and Novaga [Multiscale Model. Simul., 6 (2008), pp. 879-894] have shown the containment H^(m-1)(J_u \ J_f) = 0 of the jump set J_u of u in that of f. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularizers, such as total generalized variation and Euler's elastica. These have received increased attention in recent times due to their better practical regularization properties compared to conventional TV or wavelets. We prove analogous jump set containment properties for a general class of regularizers. We do this with novel Lipschitz transformation techniques and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularizers, while in Part 2 we will extend it to higher-order regularizers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularized TV. We also demonstrate that the technique would apply to nonconvex TV models as well as the Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.
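The TV denoising problem with L2-squared fidelity that this record analyzes can be demonstrated in one dimension. The solver below is a Chambolle-style projected gradient ascent on the dual problem, a standard scheme but an assumption here (it is not taken from the paper), and the step size 1/(4λ) follows from the Lipschitz constant of the 1D difference operator.

```python
import numpy as np

def tv_denoise_1d(f, lam, iters=2000):
    """Minimize 0.5*||u - f||^2 + lam*||Du||_1 (D = forward difference)
    by projected gradient ascent on the dual (a Chambolle-style scheme)."""
    f = np.asarray(f, dtype=float)
    p = np.zeros(len(f) - 1)                 # dual variable, one entry per difference
    for _ in range(iters):
        dtp = np.concatenate(([0.0], p)) - np.concatenate((p, [0.0]))  # D^T p
        u = f - lam * dtp                    # primal from dual: u = f - lam * D^T p
        p = np.clip(p + np.diff(u) / (4.0 * lam), -1.0, 1.0)
    return u

rng = np.random.default_rng(1)
clean = np.concatenate((np.zeros(50), np.ones(50)))   # piecewise-constant signal, one jump
noisy = clean + 0.1 * rng.standard_normal(100)
den = tv_denoise_1d(noisy, lam=0.5)
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(den - clean)
```

The denoised signal flattens the noisy plateaus while keeping the single jump, a small-scale illustration of the jump-set containment property (the output creates no jumps where the data has none) that the paper proves rigorously.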

  17. Diffusion tensor imaging of the human calf : Variation of inter- and intramuscle-specific diffusion parameters

    NARCIS (Netherlands)

    Schlaffke, Lara; Rehmann, Robert; Froeling, Martijn; Kley, Rudolf; Tegenthoff, Martin; Vorgerd, Matthias; Schmidt-Wilcke, Tobias

    2017-01-01

    Purpose: To investigate to what extent inter- and intramuscular variations of diffusion parameters of human calf muscles can be explained by age, gender, muscle location, and body mass index (BMI) in a specific age group (20-35 years). Materials and Methods: Whole calf muscles of 18 healthy

  18. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti

    2012-01-16

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  19. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti; Maass, Peter

    2012-01-01

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  20. Impact of seasonal variation, age and smoking status on human semen parameters: The Massachusetts General Hospital experience

    Science.gov (United States)

    Chen, Zuying; Godfrey-Bailey, Linda; Schiff, Isaac; Hauser, Russ

    2004-01-01

Background: To investigate the relationship of human semen parameters with season, age and smoking status. Methods: The present study used data from subjects recruited into an ongoing cross-sectional study on the relationship between environmental agents and semen characteristics. Our population consisted of 306 patients who presented to the Vincent Memorial Andrology Laboratory of Massachusetts General Hospital for semen evaluation. Sperm concentration and motility were measured with computer aided sperm analysis (CASA). Sperm morphology was scored using Tygerberg Kruger strict criteria. Regression analyses were used to investigate the relationships between semen parameters and season, age and smoking status, adjusting for abstinence interval. Results: Sperm concentration in the spring was significantly higher than in winter, fall and summer. There were no statistically significant relationships between semen parameters and smoking status, though current smokers tended to have lower sperm concentration. We also did not find a statistically significant relationship between age and semen parameters. Conclusions: We found seasonal variations in sperm concentration and suggestive evidence of seasonal variation in sperm motility and percent sperm with normal morphology. Although smoking status was not a significant predictor of semen parameters, this may have been due to the small number of current smokers in the study. PMID:15507127

  1. Joint reconstruction of dynamic PET activity and kinetic parametric images using total variation constrained dictionary sparse coding

    Science.gov (United States)

    Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng

    2017-05-01

    Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.

  2. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  3. Further investigation on "A multiplicative regularization for force reconstruction"

    Science.gov (United States)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  4. Revealing the physical insight of a length-scale parameter in metamaterials by exploiting the variational formulation

    Science.gov (United States)

    Abali, B. Emek

    2018-04-01

For micro-architectured materials with a substructure, called metamaterials, we can realize a direct numerical simulation at the microscale by using classical mechanics. This method is accurate but computationally costly. Instead, a solution of the same problem at the macroscale is possible by means of generalized mechanics. In this case, no detailed modeling of the substructure is necessary; however, new parameters emerge. A physical interpretation of these metamaterial parameters is challenging, leading to a lack of experimental strategies for their determination. In this work, we exploit the variational formulation based on action principles and obtain a direct relation between a parameter used in the kinetic energy and a metamaterial parameter in the case of a viscoelastic model.

  5. A Total Variation Model Based on the Strictly Convex Modification for Image Denoising

    Directory of Open Access Journals (Sweden)

    Boying Wu

    2014-01-01

Full Text Available We propose a strictly convex functional in which the regularization term consists of the total variation term and an adaptive logarithm-based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system are also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made with the more classical methods of the traditional total variation (TV), the Perona-Malik (PM), and the more recent D-α-PM method. A further distinction from the other methods is that the number of parameters requiring manual tuning in the proposed algorithm is reduced to essentially one.

  6. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
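The single-graph GrNMF building block that MultiGrNMF extends can be sketched with the standard multiplicative updates of Cai et al. for the objective ||X - UVᵀ||² + λ·tr(Vᵀ L V). The chain-graph affinity, the dimensions, and λ below are illustrative assumptions; MultiGrNMF would additionally learn combination weights over several such graphs.

```python
import numpy as np

rng = np.random.default_rng(0)
f, s, k, lam = 15, 40, 3, 0.1
X = rng.random((f, s))                   # nonnegative data, one sample per column

# chain-graph affinity between consecutive samples (stand-in for a k-NN graph)
W = np.zeros((s, s))
idx = np.arange(s - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0
D = np.diag(W.sum(axis=1))
L = D - W                                # graph Laplacian

U = rng.random((f, k))                   # basis matrix
V = rng.random((s, k))                   # graph-regularized representations
eps = 1e-10                              # guards against division by zero

def objective(U, V):
    return np.linalg.norm(X - U @ V.T) ** 2 + lam * np.trace(V.T @ L @ V)

j0 = objective(U, V)
for _ in range(200):                     # multiplicative updates (Cai et al.'s GNMF)
    U *= (X @ V) / (U @ (V.T @ V) + eps)
    V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
j1 = objective(U, V)
```

The updates keep both factors entrywise nonnegative and monotonically decrease the regularized objective, which is the property the iterative alternating optimization in MultiGrNMF relies on as well.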

  7. Model-based estimation with boundary side information or boundary regularization

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography), and have reported difficulties with boundary estimation in low-contrast and low-count-rate situations. In this paper, the authors propose using boundary side information (obtainable from high-resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function that includes auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with the other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization by formulating a penalized log-likelihood function, and demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives perfusion estimation accuracy comparable to that obtained with boundary side information.

  8. Multi-subject atlas-based auto-segmentation reduces interobserver variation and improves dosimetric parameter consistency for organs at risk in nasopharyngeal carcinoma: A multi-institution clinical study

    International Nuclear Information System (INIS)

    Tao, Chang-Juan; Yi, Jun-Lin; Chen, Nian-Yong; Ren, Wei; Cheng, Jason; Tung, Stewart; Kong, Lin; Lin, Shao-Jun; Pan, Jian-Ji; Zhang, Guang-Shun; Hu, Jiang; Qi, Zhen-Yu; Ma, Jun; Lu, Jia-De; Yan, Di; Sun, Ying

    2015-01-01

    Background and purpose: To assess whether consensus guideline-based atlas-based auto-segmentation (ABAS) reduces interobserver variation and improves dosimetric parameter consistency for organs at risk (OARs) in nasopharyngeal carcinoma (NPC). Materials and methods: Eight radiation oncologists from 8 institutes contoured 20 OARs on planning CT images of 16 patients via manual contouring and manually edited ABAS contouring. Interobserver variation [volume coefficient of variation (CV), Dice similarity coefficient (DSC), three-dimensional isocenter difference (3D-ICD)] and dosimetric parameters were compared between the two contouring methods for each OAR. Results: Interobserver variation was significant for all OARs in manual contouring, resulting in significant dosimetric parameter variation (P < 0.05). Edited ABAS significantly improved multiple metrics and reduced dosimetric parameter variation for most OARs; the brainstem, spinal cord, cochleae, temporomandibular joint (TMJ), larynx and pharyngeal constrictor muscle (PCM) benefited most (the range of mean DSC, volume CV and main ICD values was 0.36–0.83, 12.1–84.3% and 2.2–5.0 mm for manual contouring versus 0.42–0.86, 7.2–70.6% and 1.2–3.5 mm for edited ABAS contouring; range of dose CV reduction: 1.0–3.0%). Conclusion: Substantial objective interobserver differences occur during manual contouring, resulting in significant dosimetric parameter variation. Edited ABAS reduced interobserver variation and improved dosimetric parameter consistency, particularly for the brainstem, spinal cord, cochleae, TMJ, larynx and PCM.

  9. Mixed Higher Order Variational Model for Image Recovery

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2014-01-01

    Full Text Available A novel mixed higher-order regularizer involving the first- and second-degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Exploiting this equivalent formulation, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under the majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme through experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second-degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
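
For readers unfamiliar with the FISTA family of solvers named above, the plain (non-monotone) FISTA iteration for an L1-regularized least-squares problem is sketched below, as a simpler stand-in for the paper's FPG-MFISTA on the mixed L1-L2 norm.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (illustrative sketch)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)  # proximal (soft-threshold) step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The soft-thresholding step is the proximal operator of the L1 norm; the momentum term is what accelerates plain ISTA.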

  10. Regularization Techniques for ECG Imaging during Atrial Fibrillation: a Computational Study

    Directory of Open Access Journals (Sweden)

    Carlos Figuera

    2016-10-01

    Full Text Available The inverse problem of electrocardiography is usually analyzed during stationary rhythms. However, the performance of regularization methods under fibrillatory conditions has not been fully studied. In this work, we assessed different regularization techniques during atrial fibrillation (AF) for estimating four target parameters, namely, epicardial potentials, dominant frequency (DF), phase maps, and singularity point (SP) location. We used a realistic mathematical model of the atria and torso anatomy with three different electrical activity patterns (i.e., sinus rhythm, simple AF and complex AF). Body surface potentials (BSPs) were simulated using the boundary element method and corrupted with white Gaussian noise of different powers. The noisy BSPs were used to obtain the epicardial potentials on the atrial surface using fourteen different regularization techniques, and DF, phase maps and SP location were computed from the estimated epicardial potentials. Inverse solutions were evaluated using a set of performance metrics adapted to each clinical target. For SP location, an assessment methodology based on the spatial mass function of the SP location and four spatial error metrics was proposed. The role of the regularization parameter for Tikhonov-based methods, and the effect of noise level and imperfections in the knowledge of the transfer matrix, were also addressed. Results showed that the Bayes maximum-a-posteriori method clearly outperforms the rest of the techniques but requires a priori information about the epicardial potentials. Among the purely non-invasive techniques, Tikhonov-based methods performed as well as more complex techniques in realistic fibrillatory conditions, with a slight gain between 0.02 and 0.2 in terms of the correlation coefficient. Also, the use of a constant regularization parameter may be advisable, since the performance was similar to that obtained with a variable parameter (indeed there was no difference for the zero
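
The Tikhonov-based methods assessed above share a common closed form. Below is a minimal sketch of zero-order Tikhonov via the SVD, where the regularization parameter `lam` damps the small singular values that would otherwise amplify measurement noise.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zero-order Tikhonov solution via SVD filter factors.

    x_lam = sum_i  s_i / (s_i^2 + lam^2) * (u_i . b) * v_i
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s**2 + lam**2)          # damped inverse singular values
    return Vt.T @ (f * (U.T @ b))
```

This is algebraically identical to solving the regularized normal equations (A^T A + lam^2 I) x = A^T b; the SVD form makes explicit how `lam` filters each singular component.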

  11. Nonlinear variational inequalities of semilinear parabolic type

    Directory of Open Access Journals (Sweden)

    Park Jong-Yeoul

    2001-01-01

    Full Text Available The existence of solutions for the nonlinear functional differential equation governed by the variational inequality is studied. The regularity and a variation of solutions of the equation are also given.

  12. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    Science.gov (United States)

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating the data required to address microorganism variability in risk modeling, given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters from turbidity data collected with a Bioscreen instrument, and to characterize variability in the growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures, either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time-to-detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time-to-detection methods were therefore selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or the lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. We suggest applying a time-to-detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider the implications of strain variability for predictive modeling and risk assessment.
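
The time-to-detection idea can be sketched compactly: under pure exponential growth to a fixed detection level, the detection time is linear in the log of the inoculum, and the maximum specific growth rate is minus the reciprocal of the slope. The function below is an illustrative simplification that ignores lag time.

```python
import numpy as np

def growth_rate_from_ttd(log_inoculum, ttd):
    """Estimate mu_max from times to detection of serially diluted cultures.

    Assumes exponential growth to a fixed detection level N_det:
        TTD = (ln(N_det) - ln(N0)) / mu   =>   slope of TTD vs ln(N0) is -1/mu.
    """
    slope, _ = np.polyfit(log_inoculum, ttd, 1)
    return -1.0 / slope
```

With real turbidity data, the lag time appears as the intercept rather than the slope, which is why the slope-based estimate of mu_max is comparatively robust.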

  13. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla

    2015-10-26

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold: first, they guarantee by construction a good conditioning of the estimate, and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the setting of the regularization parameter ρ. While a high value of ρ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the more difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.
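
The RTE itself is computed by a simple fixed-point iteration; a NumPy sketch of the standard form, with the regularization parameter written as `rho`:

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    """Regularized Tyler estimator of scatter via fixed-point iteration.

    X: (n, N) array of n observations of dimension N; rho in (0, 1].
    Sigma <- (1-rho) * (N/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i) + rho*I
    """
    n, N = X.shape
    Sigma = np.eye(N)
    for _ in range(n_iter):
        # quadratic forms x_i^T Sigma^{-1} x_i for all observations at once
        Q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
        S_new = (1 - rho) * (N / n) * (X / Q[:, None]).T @ X + rho * np.eye(N)
        if np.linalg.norm(S_new - Sigma) < tol * np.linalg.norm(Sigma):
            return S_new
        Sigma = S_new
    return Sigma
```

The rho*I term bounds the smallest eigenvalue of the estimate below by rho, which is the conditioning guarantee mentioned in the abstract.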

  14. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2015-01-01

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold: first, they guarantee by construction a good conditioning of the estimate, and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the setting of the regularization parameter ρ. While a high value of ρ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the more difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.

  15. Blind image fusion for hyperspectral imaging with the directional total variation

    Science.gov (United States)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsically low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem in which both the fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.

  16. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

    Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...

  17. Application of Tikhonov regularization method to wind retrieval from scatterometer data II: cyclone wind retrieval with consideration of rain

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Fei Jian-Fang; Du Hua-Dong; Zhang Liang

    2011-01-01

    According to the conclusions of the simulation experiments in paper I, the Tikhonov regularization method is applied to cyclone wind retrieval with a geophysical model function that takes rain effects into consideration (called GMF+Rain). The GMF+Rain model, which is based on the NASA scatterometer-2 (NSCAT2) GMF, is presented to compensate for the effects of rain on cyclone wind retrieval. With the multiple solution scheme (MSS), the noise of the wind retrieval is effectively suppressed, but the influence of the background increases, which can cause a large wind direction error in ambiguity removal when the background error is large. However, this can be mitigated by the new Tikhonov-regularization ambiguity removal method, as proved in the simulation experiments. A case study of an extratropical cyclone observed with SeaWinds at 25-km resolution shows that, for the GMF+Rain model, the retrieved wind speed in areas with rain is in better agreement with that derived from the best track analysis, but the wind direction obtained with the two-dimensional variational (2DVAR) ambiguity removal is incorrect. The new Tikhonov regularization method effectively improves the performance of wind direction ambiguity removal through appropriate choices of the regularization parameters, and the retrieved wind speed is almost the same as that obtained from the 2DVAR.

  18. Geostatistical analysis of space variation in underground water various quality parameters in Kłodzko water intake area (SW part of Poland

    Directory of Open Access Journals (Sweden)

    Namysłowska-Wilczyńska Barbara

    2016-09-01

    Full Text Available This paper presents selected results of research connected with the development of a 3D geostatistical hydrogeochemical model of the Kłodzko Drainage Basin, dedicated to the spatial variation in different quality parameters of underground water in the water intake area (SW part of Poland). The research covers the period 2011-2012. Spatial analyses of the variation in various quality parameters, i.e., the contents of iron, manganese, ammonium ion, nitrate ion, phosphate ion and total organic carbon, as well as pH, redox potential and temperature, were carried out on the basis of chemical determinations of the quality parameters of underground water samples taken from wells in the water intake area. Spatial variation in the parameters was analyzed on the basis of data obtained (November 2011) from tests of water taken from 14 existing wells with depths ranging from 9.5 to 38.0 m b.g.l. The latest data (January 2012) were obtained from 3 new piezometers, made in other locations in the relevant area, with depths of 9-10 m.
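
A geostatistical workflow of this kind typically starts from the empirical semivariogram of the well data. Below is a minimal sketch of the classical (Matheron) estimator; the bin edges and inputs are illustrative, not taken from the study.

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Classical (Matheron) semivariogram estimator on irregular sample points.

    gamma(h) = 1/(2*N(h)) * sum over pairs in the distance bin of (z_i - z_j)^2
    coords: (n, d) sample locations; values: (n,) measured parameter.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dz2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    d, dz2 = d[iu], dz2[iu]
    gamma = np.empty(len(bin_edges) - 1)
    for k in range(len(gamma)):
        m = (d >= bin_edges[k]) & (d < bin_edges[k + 1])
        gamma[k] = 0.5 * dz2[m].mean() if m.any() else np.nan
    return gamma
```

A parametric model (spherical, exponential, ...) fitted to these binned values then drives kriging of the water-quality parameter between wells.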

  19. Impacts of meteorological parameters and emissions on decadal, interannual, and seasonal variations of atmospheric black carbon in the Tibetan Plateau

    Directory of Open Access Journals (Sweden)

    Yu-Hao Mao

    2016-09-01

    Full Text Available We quantified the impacts of variations in meteorological parameters and emissions on the decadal, interannual, and seasonal variations of atmospheric black carbon (BC) in the Tibetan Plateau for 1980–2010 using a global 3-dimensional chemical transport model driven by the Modern Era Retrospective-analysis for Research and Applications (MERRA) meteorological fields. From 1980 to 2010, simulated surface BC concentrations and the all-sky direct radiative forcing at the top of the atmosphere due to atmospheric BC increased by 0.15 μg m−3 (63%) and by 0.23 W m−2 (62%), respectively, averaged over the Tibetan Plateau (75–105°E, 25–40°N). Simulated annual mean surface BC concentrations were in the range of 0.24–0.40 μg m−3 averaged over the plateau for 1980–2010, with decadal trends of 0.13 μg m−3 per decade in the 1980s and 0.08 μg m−3 per decade in the 2000s. The interannual variations were −5.4% to 7.0% (deviation from the mean), 0.0062 μg m−3 (mean absolute deviation), and 2.5% (absolute percent departure from the mean). Model sensitivity simulations indicated that the decadal trends of surface BC concentrations were mainly driven by changes in emissions, while the interannual variations depended on variations of both meteorological parameters and emissions. Meteorological parameters played a crucial role in driving the interannual variations of BC, especially in the monsoon season.
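
The three interannual-variation metrics quoted above (deviation from the mean, mean absolute deviation, and absolute percent departure from the mean) are straightforward to compute from a series of annual means; a small illustrative sketch with hypothetical input values:

```python
import numpy as np

def interannual_variation(annual_means):
    """Simple interannual-variation metrics for a time series of annual means.

    Returns percent departures from the long-term mean, the mean absolute
    deviation, and the mean absolute percent departure.
    """
    x = np.asarray(annual_means, dtype=float)
    mean = x.mean()
    pct_departure = 100.0 * (x - mean) / mean
    mad = np.mean(np.abs(x - mean))
    abs_pct = np.mean(np.abs(pct_departure))
    return pct_departure, mad, abs_pct
```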

  20. Traveling waves of the regularized short pulse equation

    International Nuclear Information System (INIS)

    Shen, Y; Horikis, T P; Kevrekidis, P G; Frantzeskakis, D J

    2014-01-01

    The properties of the so-called regularized short pulse equation (RSPE) are explored, with a particular focus on the traveling wave solutions of this model. We theoretically analyze and numerically evolve two sets of such solutions. First, using a fixed point iteration scheme, we numerically integrate the equation to find solitary waves. It is found that these solutions are well approximated by a finite sum of powers of hyperbolic secants. The dependence of the soliton's parameters (height, width, etc.) on the parameters of the equation is also investigated. Second, by developing a multiple scale reduction of the RSPE to the nonlinear Schrödinger equation, we are able to construct (both standing and traveling) envelope-wave breather-type solutions of the former, based on the solitary wave structures of the latter. Both the regular and the breathing traveling wave solutions identified are found to be robust and should thus be amenable to observations in the form of few-optical-cycle pulses. (paper)

  1. Vibration control of an MR vehicle suspension system considering both hysteretic behavior and parameter variation

    International Nuclear Information System (INIS)

    Choi, Seung-Bok; Seong, Min-Sang; Ha, Sung-Hoon

    2009-01-01

    This paper presents vibration control responses of a controllable magnetorheological (MR) suspension system considering the two most important characteristics of the system: the field-dependent hysteretic behavior of the MR damper and the parameter variation of the suspension. In order to achieve this goal, a cylindrical MR damper applicable to a middle-sized passenger car is designed and manufactured. After verifying the damping force controllability, the field-dependent hysteretic behavior of the MR damper is identified using the Preisach hysteresis model. The full-vehicle suspension model is then derived by considering vertical, pitch and roll motions. An H∞ controller is designed by treating the sprung mass of the vehicle as a parameter variation and integrating it with the hysteretic compensator, which produces an additional control input. In order to demonstrate the effectiveness and robustness of the proposed control system, the hardware-in-the-loop simulation (HILS) methodology is adopted by integrating the suspension model with the proposed MR damper. Vibration control responses of the vehicle suspension system, such as vertical acceleration, are evaluated under both bump and random road conditions.

  2. Parameter estimation for a cohesive sediment transport model by assimilating satellite observations in the Hangzhou Bay: Temporal variations and spatial distributions

    Science.gov (United States)

    Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu

    2018-01-01

    Model parameters in suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from the Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China: settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern of the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained by the combination of the flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which is related to the grain size of the seabed sediments under different current velocities. In addition, the estimated inflow open boundary conditions reach their local maxima near low water slack conditions, and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can inform improvements to the parameterization of cohesive sediment transport models.

  3. Diffusion tensor imaging of the human calf: Variation of inter- and intramuscle-specific diffusion parameters.

    Science.gov (United States)

    Schlaffke, Lara; Rehmann, Robert; Froeling, Martijn; Kley, Rudolf; Tegenthoff, Martin; Vorgerd, Matthias; Schmidt-Wilcke, Tobias

    2017-10-01

    To investigate to what extent inter- and intramuscular variations in diffusion parameters of human calf muscles can be explained by age, gender, muscle location, and body mass index (BMI) in a specific age group (20-35 years). Whole calf muscles of 18 healthy volunteers were evaluated. Magnetic resonance imaging (MRI) was performed using a 3T scanner and a 16-channel Torso XL coil. Diffusion-weighted images were acquired to perform fiber tractography and diffusion tensor imaging (DTI) analysis for each muscle of both legs. Fiber tractography was used to separate seven lower leg muscles. Associations between DTI parameters and confounds were evaluated. All muscles were additionally separated into seven identical segments along the z-axis to evaluate intramuscular differences in diffusion parameters. Fractional anisotropy (FA) and mean diffusivity (MD) were obtained for each muscle with low standard deviations (SDs) (SD of FA: 0.01-0.02; SD of MD: 0.07-0.14 × 10⁻³). We found significant differences in FA values of the tibialis anterior muscle (AT) and extensor digitorum longus (EDL) muscles between men and women for whole-muscle FA (two-sample t-tests; AT: P = 0.0014; EDL: P = 0.0004). We showed significant intramuscular differences in diffusion parameters between adjacent segments in most calf muscles (P < 0.001). Whereas muscle insertions showed higher SDs (0.03-0.06) than muscle bellies (0.01-0.03), no relationships between FA or MD and age or BMI were found. Inter- and intramuscular variations in diffusion parameters of the calf were shown, which are not related to age or BMI in this age group. Differences between muscle belly and insertion should be considered when interpreting datasets that do not include whole muscles. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:1137-1148. © 2017 International Society for Magnetic Resonance in Medicine.
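
The two DTI scalars reported above, FA and MD, follow from the eigenvalues of the diffusion tensor by standard formulas; a minimal sketch:

```python
import numpy as np

def fa_md(eigvals):
    """Fractional anisotropy and mean diffusivity from diffusion tensor
    eigenvalues (lambda_1, lambda_2, lambda_3).

    MD = mean(lambda_i)
    FA = sqrt( 3/2 * sum((lambda_i - MD)^2) / sum(lambda_i^2) )
    """
    l = np.asarray(eigvals, dtype=float)
    md = l.mean()
    fa = np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))
    return fa, md
```

FA is 0 for isotropic diffusion (all eigenvalues equal) and approaches 1 for strongly directional diffusion along a single fiber direction.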

  4. Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes

    International Nuclear Information System (INIS)

    Schee, Jan; Stuchlík, Zdeněk

    2015-01-01

    We study the deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. The flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter as a function of the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images, comparing them to those of the standard direct images of the Keplerian discs. The difference between the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison, we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference from the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to low-impact-parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena can occur in black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetime having a nearly flat central region.

  5. Chiral Thirring–Wess model with Faddeevian regularization

    International Nuclear Information System (INIS)

    Rahaman, Anisur

    2015-01-01

    Replacing the vector-type interaction of the Thirring–Wess model by a chiral-type interaction, a new model is presented, termed here the chiral Thirring–Wess model. The ambiguity parameters of regularization are chosen so that the model falls into the Faddeevian class. The resulting Faddeevian class of models does not, in general, possess Lorentz invariance. However, we can exploit the arbitrariness admissible in the ambiguity parameters to relate the quantum mechanically generated ambiguity parameters to the classical parameter involved in the masslike term of the gauge field, which helps to maintain physical Lorentz invariance despite the absence of manifest Lorentz covariance of the model. The phase space structure and the theoretical spectrum of this class of models have been determined through Dirac's method of quantization of constrained systems.

  6. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a data set where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
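
The core shrinkage idea, pulling variable-specific moment estimates toward a pooled value to trade variance for bias, can be sketched in a few lines. This is a deliberately simplified stand-in for the MVR procedure (no similarity-based clustering; `w` is a hypothetical fixed shrinkage weight rather than an adaptively chosen one):

```python
import numpy as np

def shrink_moments(data, w=0.5):
    """Stein-flavored joint shrinkage of per-variable means and variances
    toward their across-variable pooled values (illustrative sketch).

    data: (n_samples, n_variables) array; w in [0, 1] is the shrinkage weight.
    """
    m = data.mean(axis=0)
    v = data.var(axis=0, ddof=1)
    m_shrunk = (1 - w) * m + w * m.mean()   # pull means toward pooled mean
    v_shrunk = (1 - w) * v + w * v.mean()   # pull variances toward pooled variance
    return m_shrunk, v_shrunk
```

Shrinking the variances stabilizes the denominators of t-like statistics when the per-variable degrees of freedom are small, which is the power gain described in the abstract.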

  7. Regular cell design approach considering lithography-induced process variations

    OpenAIRE

    Gómez Fernández, Sergio

    2014-01-01

    The deployment delays for EUVL force IC design to continue using 193 nm wavelength lithography, with innovative and costly techniques, in order to faithfully print sub-wavelength features and to combat lithography-induced process variations. The effect of the lithography gap in current and upcoming technologies is to cause severe distortions, due to optical diffraction, in the printed patterns, thus degrading manufacturing yield. Therefore, a paradigm shift in layout design is mandatory towards ...

  8. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge, and compare our approach to the other approaches developed simultaneously for the same challenge. On two test sets, we demonstrate our segmentation performance and achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.

  9. Sensitivity analysis with respect to observations in variational data assimilation for parameter estimation

    Directory of Open Access Journals (Sweden)

    V. Shutyaev

    2018-06-01

    The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find unknown parameters of the model. The observation data, and hence the optimal solution, may contain uncertainties. A response function is considered as a functional of the optimal solution after assimilation. Based on the second-order adjoint techniques, the sensitivity of the response function to the observation data is studied. The gradient of the response function is related to the solution of a nonstandard problem involving the coupled system of direct and adjoint equations. The nonstandard problem is studied, based on the Hessian of the original cost function. An algorithm to compute the gradient of the response function with respect to observations is presented. A numerical example is given for the variational data assimilation problem related to sea surface temperature for the Baltic Sea thermodynamics model.

  10. Optimizing Photosynthetic and Respiratory Parameters Based on the Seasonal Variation Pattern in Regional Net Ecosystem Productivity Obtained from Atmospheric Inversion

    Science.gov (United States)

    Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.

    2014-12-01

    In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. In parallel, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of BEPS with the default parameter values. These results demonstrate the potential of using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
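    The respiration half of such a parameter optimization can be illustrated with the standard Q10 model, R(T) = R_ref * Q10^((T - T_ref)/10), fitted by log-linear least squares. This is a hedged sketch of the estimation idea only, not the BEPS/atmospheric-inversion scheme; `fit_q10` is an invented helper.

    ```python
    import numpy as np

    def fit_q10(temps, resp, t_ref=15.0):
        """Estimate (R_ref, Q10) for R(T) = R_ref * Q10**((T - t_ref)/10).

        Linearize with logs, log R = log R_ref + ((T - t_ref)/10) * log Q10,
        and solve by ordinary least squares.
        """
        x = (np.asarray(temps) - t_ref) / 10.0
        slope, intercept = np.polyfit(x, np.log(resp), 1)
        return np.exp(intercept), np.exp(slope)
    ```

    On noise-free synthetic data the log-linear fit recovers the generating parameters exactly; with real flux data the same regression gives the least-squares estimate in log space.
    
    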

  11. Estimation of the location parameter of distributions with known coefficient of variation by record values

    Directory of Open Access Journals (Sweden)

    N. K. Sajeevkumar

    2014-09-01

    In this article, we derive the best linear unbiased estimator (BLUE) of the location parameter of certain distributions with known coefficient of variation by record values. Efficiency comparisons are also made between the proposed estimator and some of the usual estimators. Finally, we present real-life data to illustrate the utility of the results developed in this article.
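    Record values, the ordered extremes that the BLUE construction above is built on, are easy to extract from a data sequence. The estimator itself is distribution-specific and not reproduced here; `upper_records` is an invented helper for the generic record-extraction step.

    ```python
    def upper_records(seq):
        """Return the upper record values of a sequence: each observation
        strictly larger than all previous observations."""
        records, current = [], float("-inf")
        for x in seq:
            if x > current:
                records.append(x)
                current = x
        return records
    ```

    For example, `upper_records([3, 1, 4, 1, 5, 9, 2, 6])` keeps only the running maxima 3, 4, 5 and 9.
    
    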

  12. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    Science.gov (United States)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods in the literature, which usually employ a first-order-in-time gradient-like system (such as steepest descent methods) to numerically solve the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order-in-time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
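    The second-order dissipative flow idea can be sketched on the simplest regularized problem, Tikhonov least squares, integrated with a semi-implicit (symplectic-style) Euler step. This is a minimal sketch with a fixed rather than dynamically selected regularization parameter, and a linear objective in place of the PDE-constrained one; `damped_flow` and its arguments are invented for the illustration.

    ```python
    import numpy as np

    def damped_flow(A, b, alpha, eta=1.0, dt=0.1, steps=2000):
        """Second-order dissipative flow  x'' + eta*x' = -grad F(x)  for
        F(x) = 0.5*||A x - b||^2 + 0.5*alpha*||x||^2,
        integrated with a semi-implicit Euler step (implicit in the damping)."""
        x = np.zeros(A.shape[1])
        v = np.zeros_like(x)
        for _ in range(steps):
            grad = A.T @ (A @ x - b) + alpha * x
            v = (v - dt * grad) / (1.0 + dt * eta)  # damping handled implicitly
            x = x + dt * v                          # position update with new velocity
        return x
    ```

    At the fixed point the velocity vanishes and the gradient is zero, so the trajectory settles on the Tikhonov solution of the normal equations (A^T A + alpha I) x = A^T b.
    
    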

  13. Daily variation of the radon concentration indoors and outdoors and the influence of meteorological parameters

    International Nuclear Information System (INIS)

    Porstendoerfer, J.; Butterweck, G.; Reineking, A.

    1994-01-01

    Series of continuous radon measurements in the open atmosphere and in a dwelling, including parallel measurements of meteorological parameters, were performed over a period of several weeks. The radon concentration in indoor and outdoor air depends on meteorological conditions. In the open atmosphere the radon concentration varies between 1 and 100 Bq m⁻³, depending on weather conditions and time of day. During periods of low turbulent air exchange (high-pressure weather with a clear night sky), especially in the night and early morning hours (night inversion layer), the diurnal variation of the radon concentration showed a pronounced maximum. Cloudy and windy weather conditions yield a small diurnal variation of the radon concentration. Indoors, the average level and the diurnal variation of the radon concentration are also influenced by meteorological conditions. The measurements are consistent with a dependence of indoor radon concentrations on indoor-outdoor pressure differences. 11 refs., 4 figs

  14. Long-period variations of wind parameters in the mesopause region and the solar cycle dependence

    International Nuclear Information System (INIS)

    Greisiger, K.M.; Schminder, R.; Kuerschner, D.

    1987-01-01

    A solar dependence of wind parameters below 100 km was found by Sprenger and Schminder on the basis of long-term continuous ionospheric drift measurements. For winter they obtained a positive correlation of the prevailing wind with solar activity and a negative correlation for the amplitude of the semi-diurnal tidal wind. However, after the years 1973-1974 we found a significant negative correlation with solar activity, with an indication of a new change after 1983. We conclude that this long-term behaviour points to a climatic variation with an internal atmospheric cause rather than to direct solar control. Recent satellite data on solar UV radiation and upper stratospheric ozone have shown that the possible variation of the thermal tidal excitation during the solar cycle amounts to only a few per cent. This is, therefore, insufficient to account for the 40-70% variation of the tidal amplitudes. Some other possibilities for explaining this result are discussed. (author)

  15. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    Science.gov (United States)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
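    The backbone of the method, the Chambolle-Pock primal-dual iteration, can be illustrated on the simplest related problem: ROF total-variation denoising, with plain isotropic TV in place of the Huber/TGV regularizers and the diffeomorphic setting of the paper. All function names below are invented for the sketch.

    ```python
    import numpy as np

    def grad(u):
        """Forward-difference gradient (last row/column difference set to zero)."""
        gx = np.zeros_like(u)
        gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div(px, py):
        """Divergence, the negative adjoint of grad."""
        d = np.zeros_like(px)
        d[:-1, :] += px[:-1, :]
        d[1:, :] -= px[:-1, :]
        d[:, :-1] += py[:, :-1]
        d[:, 1:] -= py[:, :-1]
        return d

    def tv_denoise_cp(f, lam=8.0, n_iter=200):
        """Chambolle-Pock iteration for the ROF model
        min_u TV(u) + (lam/2)*||u - f||^2."""
        tau = 0.25
        sigma = 1.0 / (tau * 8.0)      # tau*sigma*L^2 <= 1 with L^2 = 8 for grad
        u = f.copy()
        u_bar = f.copy()
        px = np.zeros_like(f)
        py = np.zeros_like(f)
        for _ in range(n_iter):
            gx, gy = grad(u_bar)       # dual ascent step on p
            px += sigma * gx
            py += sigma * gy
            norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
            px /= norm                 # project p onto the pointwise unit ball
            py /= norm
            u_old = u                  # primal step with prox of the data term
            u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
            u_bar = 2.0 * u - u_old    # over-relaxation / extrapolation
        return u
    ```

    The step sizes satisfy the convergence condition tau*sigma*L^2 <= 1; on a noisy near-constant image the iteration visibly flattens the noise.
    
    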

  16. Analysis of the average daily radon variations in the soil air

    International Nuclear Information System (INIS)

    Holy, K.; Matos, M.; Boehm, R.; Stanys, T.; Polaskova, A.; Hola, O.

    1998-01-01

    In this contribution, the search for a relation between the daily variations of the radon concentration and the regular daily oscillations of the atmospheric pressure is presented. The deviation of the radon activity concentration in the soil air from the average daily value reaches only a few percent. For the dry summer months, the average daily course of the radon activity concentration can be described by the obtained equation. The analysis of the average daily courses could give information on the depth of the gas-permeable soil layer, a soil parameter that is determined only with difficulty by other methods.

  17. Improved resolution and reliability in dynamic PET using Bayesian regularization of MRTM2

    DEFF Research Database (Denmark)

    Agn, Mikael; Svarer, Claus; Frokjaer, Vibe G.

    2014-01-01

    This paper presents a mathematical model that regularizes dynamic PET data within a Bayesian framework. We base the model on the well-known two-parameter multilinear reference tissue method MRTM2 and regularize on the assumption that spatially close regions have similar parameters. The developed model is compared to the conventional approach to improving the low signal-to-noise ratio of PET data, i.e., spatial filtering of each time frame independently by a Gaussian kernel. We show that the model handles high levels of noise better than the conventional approach, while at the same time...

  18. Experimental investigation on the effect of intake air temperature and air-fuel ratio on cycle-to-cycle variations of HCCI combustion and performance parameters

    Energy Technology Data Exchange (ETDEWEB)

    Maurya, Rakesh Kumar; Agarwal, Avinash Kumar [Engine Research Laboratory, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India)

    2011-04-15

    Combustion in HCCI engines is a controlled auto-ignition of well-mixed fuel, air and residual gas. Since the onset of HCCI combustion depends on the auto-ignition of the fuel/air mixture, there is no direct control over the start of the combustion process. Therefore, HCCI combustion becomes unstable rather easily, especially at lower and higher engine loads. In this study, cycle-to-cycle variations of an HCCI combustion engine fuelled with ethanol were investigated on a modified two-cylinder engine. A port injection technique was used to prepare the homogeneous charge for HCCI combustion. The experiments were conducted at varying intake air temperatures and air-fuel ratios at a constant engine speed of 1500 rpm, and p-θ diagrams of 100 consecutive combustion cycles were recorded for each test condition at steady-state operation. Consequently, cycle-to-cycle variations of the main combustion parameters and performance parameters were analyzed. To evaluate the cycle-to-cycle variations of HCCI combustion parameters, the coefficient of variation (COV) of every parameter was calculated for each engine operating condition. The critical optimum parameters that can be used to define HCCI operating ranges are the maximum rate of pressure rise and the COV of indicated mean effective pressure (IMEP). (author)
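    The COV metric used above to quantify cycle-to-cycle variation is simply the sample standard deviation over the mean, evaluated per parameter across the recorded cycles. A generic sketch follows; the synthetic IMEP values are illustrative, not measured data.

    ```python
    import numpy as np

    def cov_percent(values):
        """Coefficient of variation in percent: 100 * sample std / mean."""
        values = np.asarray(values, dtype=float)
        return 100.0 * values.std(ddof=1) / values.mean()

    # Illustrative: COV of IMEP over 100 synthetic consecutive cycles.
    rng = np.random.default_rng(3)
    imep_cycles = rng.normal(loc=5.0, scale=0.15, size=100)  # bar
    cov_imep = cov_percent(imep_cycles)
    ```

    A commonly cited stability criterion is COV(IMEP) staying below a few percent; the same function applies unchanged to peak pressure or maximum rate of pressure rise per cycle.
    
    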

  19. Variability in Regularity: Mining Temporal Mobility Patterns in London, Singapore and Beijing Using Smart-Card Data.

    Science.gov (United States)

    Zhong, Chen; Batty, Michael; Manley, Ed; Wang, Jiaqiu; Wang, Zijia; Chen, Feng; Schmitt, Gerhard

    2016-01-01

    To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales, but when we zoom into finer scales, considerable heterogeneity and diversity is observed instead. The fundamental question we address in this paper is at what scales the regularities we detect are stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities, focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns of three world cities, namely London, Singapore and Beijing, using one week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours, thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities, and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity in fact varies from city to city, with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on making the travel experience more amenable.
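    The scale-dependence of such a variability measure can be mimicked with synthetic entry counts: aggregating to coarser temporal bins and recomputing a CV-style variability reproduces the qualitative decrease with temporal resolution. This is a generic sketch; the paper's actual metric and smart-card data differ, and `regularity_variability` is an invented helper.

    ```python
    import numpy as np

    def regularity_variability(counts_per_min, resolution):
        """counts_per_min: (n_days, 1440) array of station entries per minute.

        Aggregate to the given resolution (in minutes) and return the
        across-days coefficient of variation, averaged over bins: a basic
        variability measure in the spirit of the paper, not its exact metric.
        """
        n_days, n_min = counts_per_min.shape
        n_bins = n_min // resolution
        binned = (counts_per_min[:, :n_bins * resolution]
                  .reshape(n_days, n_bins, resolution)
                  .sum(axis=2))
        mean = binned.mean(axis=0)
        std = binned.std(axis=0)
        cv = np.where(mean > 0, std / mean, 0.0)
        return float(cv.mean())
    ```

    With Poisson-like counts the per-bin CV shrinks as bins grow, so variability computed at a 60-minute resolution comes out well below the 1-minute value, matching the qualitative trend reported above.
    
    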

  2. Classification of the coefficients of variation of parameters evaluated in Japanese quail experiments

    Directory of Open Access Journals (Sweden)

    DHV Leal

    2014-06-01

    The objective of this study was to design a classification range for the coefficients of variation (CV) of traits used in experiments with egg-type Japanese quails (Coturnix coturnix japonica). The journal Revista Brasileira de Zootecnia was systematically reviewed, using the keyword 'quail', for the period from January 2000 to 2010. The CVs of feed intake (g/bird/d), egg production (%/bird/d), egg weight (g), egg mass (g/bird/d), feed conversion ratio per dozen eggs (g/dozen), feed conversion ratio per egg mass (g/g), and egg specific gravity (g/mL) were collected. For each parameter, the CVs were classified using the median (MD) and pseudo-sigma (PS) as follows: low (CV ≤ MD - PS), medium (MD - PS < CV ≤ MD + PS), high (MD + PS < CV ≤ MD + 2PS), and very high (CV > MD + 2PS). According to the results, it was concluded that each parameter has a specific classification range that should be taken into account when evaluating experimental precision.
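    The banding rule can be sketched directly. The comparison operators in the abstract were lost in extraction, so the boundaries below (MD ± PS and MD + 2PS, with PS taken as the conventional IQR/1.349) are an assumed reading and should be checked against the original paper; `classify_cv` is an invented helper.

    ```python
    import numpy as np

    def classify_cv(cv_values):
        """Classify coefficients of variation as low/medium/high/very high
        using the median (MD) and pseudo-sigma (PS = IQR / 1.349) of the
        collected CVs. The band boundaries are the assumed standard reading
        of the MD/PS scheme described in the abstract."""
        cv_values = np.asarray(cv_values, dtype=float)
        md = np.median(cv_values)
        q1, q3 = np.percentile(cv_values, [25, 75])
        ps = (q3 - q1) / 1.349

        def label(cv):
            if cv <= md - ps:
                return "low"
            if cv <= md + ps:
                return "medium"
            if cv <= md + 2 * ps:
                return "high"
            return "very high"

        return [label(cv) for cv in cv_values]
    ```

    Because MD and PS are computed from the collected CVs themselves, each trait gets its own trait-specific bands, which is exactly the point the study makes.
    
    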

  3. Gait parameters are differently affected by concurrent smartphone-based activities with scaled levels of cognitive effort.

    Directory of Open Access Journals (Sweden)

    Carlotta Caramia

    The widespread and pervasive use of smartphones for sending messages, calling, and entertainment, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking might, under some specific conditions, significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking by evaluating the variations of several gait parameters. Ten young healthy students (all proficient smartphone users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison between normal walking and the different dual-task demands. An overall significant effect of task type on these groups of parameters was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed decrease, while step time increases, as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual-task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking during concurrent smartphone use.
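    A common way to get a step-regularity parameter from trunk acceleration is the peak of the normalized autocorrelation within a plausible step-time window: values near 1 indicate highly regular steps. This is a generic IMU-gait sketch, not the exact pipeline of the study; `step_regularity` and the window bounds are invented for the illustration.

    ```python
    import numpy as np

    def step_regularity(acc, fs, min_lag_s=0.3, max_lag_s=1.0):
        """Step regularity from a trunk acceleration trace.

        Returns (regularity, step_time_s): the peak of the biased normalized
        autocorrelation within [min_lag_s, max_lag_s] and the lag at which it
        occurs. acc is a 1D acceleration signal sampled at fs Hz.
        """
        acc = np.asarray(acc, dtype=float) - np.mean(acc)
        ac = np.correlate(acc, acc, mode="full")[len(acc) - 1:]
        ac = ac / ac[0]                      # normalize so lag 0 equals 1
        lo, hi = int(min_lag_s * fs), int(max_lag_s * fs)
        lag = lo + int(np.argmax(ac[lo:hi]))
        return float(ac[lag]), lag / fs
    ```

    On a perfectly periodic signal the autocorrelation peak sits at one period and stays close to 1; dual-task-induced irregularity lowers the peak.
    
    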

  4. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
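    The GCV criterion mentioned above can be sketched for plain Tikhonov/ridge regularization on a dense, small-scale problem. This is illustrative only: the paper applies the criterion inside the semilinear lens-inversion loop rather than to a generic least-squares system, and `gcv` with its candidate grid is invented for the sketch.

    ```python
    import numpy as np

    def gcv(A, b, alphas):
        """Pick the Tikhonov parameter alpha minimizing the GCV function
        GCV(a) = n * ||(I - H(a)) b||^2 / (n - trace(H(a)))^2,
        where H(a) = A (A^T A + a I)^{-1} A^T is the influence matrix."""
        n = len(b)
        best = None
        for a in alphas:
            H = A @ np.linalg.solve(A.T @ A + a * np.eye(A.shape[1]), A.T)
            resid = b - H @ b
            score = n * (resid @ resid) / (n - np.trace(H)) ** 2
            if best is None or score < best[0]:
                best = (score, a)
        return best[1]
    ```

    The trace of the influence matrix also gives the effective number of degrees of freedom in the regularized solution, which is how such criteria justify an estimate of the source degrees of freedom.
    
    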

  5. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial-extent events, such as the Great Sumatra Andaman Earthquake of 2004, are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, like the Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or ...
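    The ingredients of such a parameter-choice computation can be sketched for a generic Tikhonov problem: sweep the regularization parameter and record the residual and solution norms that an L-curve or L-ribbon analysis is built on. This is illustrative only; the GRACE implementation operates on spherical-harmonic normal equations, and `l_curve` is an invented helper that omits the corner-finding step.

    ```python
    import numpy as np

    def l_curve(A, b, alphas):
        """Residual and solution norms of the Tikhonov solutions over a range
        of regularization parameters: the raw material of L-curve-style
        parameter choice."""
        pts = []
        for a in alphas:
            x = np.linalg.solve(A.T @ A + a * np.eye(A.shape[1]), A.T @ b)
            pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
        return pts
    ```

    As alpha grows, the residual norm can only increase and the solution norm can only decrease; the corner of that trade-off curve is the classic heuristic choice of regularization parameter.
    
    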

  6. Regularization destriping of remote sensing imagery

    Science.gov (United States)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set that represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.

  7. Variations of the bacterial foraging algorithm for the extraction of PV module parameters from nameplate data

    International Nuclear Information System (INIS)

    Awadallah, Mohamed A.

    2016-01-01

    Highlights: • The bacterial foraging algorithm is used to extract PV model parameters from nameplate data. • Five variations of the bacterial foraging algorithm are compared on a simple objective function. • Best results are obtained when swarming is neglected, the step size is varied, and the global best is preserved. • The technique is successfully applied to single- and double-diode models. • Matching between computation and measurements validates the obtained set of parameters. - Abstract: The paper formulates the task of parameter extraction of photovoltaic (PV) modules as a nonlinear optimization problem. The concerned parameters are the series resistance, shunt resistance, diode ideality factor, and diode reverse saturation current for both the single- and double-diode models. An error function representing the mismatch between computed and targeted performance is minimized using different versions of the bacterial foraging (BF) algorithm for global search and heuristic optimization. The targeted performance is obtained from the nameplate data of the PV module. Five distinct variations of the BF algorithm are used to solve the problem independently for the single- and double-diode models. The best optimization results are obtained when swarming is eliminated, the chemotactic step size is dynamically varied, and the global best is preserved, all acting together. Under such conditions, the best global minimum of 0.0028 is reached in an average best time of 94.4 sec for the single-diode model, whereas it takes an average of 153 sec to reach the best global minimum of 0.0021 in the case of the double-diode model. An experimental verification study compares the computed performance to measurements on an Eclipsall PV module. It is shown that all variants of the BF algorithm could reach equivalent-circuit parameters with accepted accuracy by solving the optimization problem. The good matching between analytical and experimental results indicates the effectiveness of the ...
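    Evaluating any candidate parameter set requires solving the implicit single-diode equation for the current, which is the building block of the nameplate-mismatch objective that the BF algorithm minimizes. The sketch below uses Newton's method with a module-level modified ideality factor a = n·Ns·Vt; the parameter values in the test are illustrative, not those of the Eclipsall module in the paper, and `diode_current` is an invented helper.

    ```python
    import numpy as np

    def diode_current(v, i_ph, i_0, a, r_s, r_sh, iters=50):
        """Solve the implicit single-diode equation
            I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
        for I with Newton's method. a = n*Ns*Vt is the module-level
        modified ideality factor (thermal voltage times cells in series
        times the ideality factor)."""
        i = i_ph                              # photocurrent is a good start
        for _ in range(iters):
            e = np.exp((v + i * r_s) / a)
            f = i_ph - i_0 * (e - 1.0) - (v + i * r_s) / r_sh - i
            df = -i_0 * e * r_s / a - r_s / r_sh - 1.0
            i -= f / df                       # Newton update
        return i
    ```

    At short circuit (V = 0) the solution sits just below the photocurrent, and past the open-circuit voltage the computed current goes negative; an objective function would compare such computed points against the nameplate Isc, Voc and Pmp values.
    
    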

  8. Restrictive metric regularity and generalized differential calculus in Banach spaces

    Directory of Open Access Journals (Sweden)

    Bingwu Wang

    2004-10-01

    We consider nonlinear mappings f:X→Y between Banach spaces and study the notion of restrictive metric regularity of f around some point x̄, that is, metric regularity of f from X into the metric space E=f(X). Some sufficient as well as necessary and sufficient conditions for restrictive metric regularity are obtained, which particularly include an extension of the classical Lyusternik-Graves theorem to the case when f is strictly differentiable at x̄ but its strict derivative ∇f(x̄) is not surjective. We develop applications of the results obtained, and of some other techniques in variational analysis, to generalized differential calculus involving normal cones to nonsmooth and nonconvex sets, coderivatives of set-valued mappings, and first-order and second-order subdifferentials of extended real-valued functions.

  9. A Regularization SAA Scheme for a Stochastic Mathematical Program with Complementarity Constraints

    Directory of Open Access Journals (Sweden)

    Yu-xin Li

    2014-01-01

    Full Text Available To reflect uncertain data in practical problems, stochastic versions of the mathematical program with complementarity constraints (MPCC) have drawn much attention in the recent literature. Our concern is the detailed analysis of convergence properties of a regularization sample average approximation (SAA) method for solving a stochastic mathematical program with complementarity constraints (SMPCC). The analysis of this regularization method is carried out in three steps: First, the almost sure convergence of optimal solutions of the regularized SAA problem to those of the true problem is established by the notion of epiconvergence in variational analysis. Second, under MPCC-MFCQ, which is weaker than MPCC-LICQ, we show that any accumulation point of Karush-Kuhn-Tucker points of the regularized SAA problem is almost surely a kind of stationary point of SMPCC as the sample size tends to infinity. Finally, some numerical results are reported to show the efficiency of the proposed method.

  10. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    Science.gov (United States)

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, which results in partial volume effects (PVEs). In this work, we present a fast and easy-to-implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least-squares iterative deconvolution approach of the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [¹¹C]raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3%, while it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectivity. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  11. Asymptotic analysis of the role of spatial sampling for covariance parameter estimation of Gaussian processes

    International Nuclear Information System (INIS)

    Bachoc, Francois

    2014-01-01

    Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)
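
    The maximum-likelihood side of this setup can be sketched directly. Below, a Gaussian process with an exponential covariance is simulated on a randomly perturbed regular grid, and the range (correlation-length) parameter is recovered by a grid search over the negative log-likelihood; the covariance family, perturbation amplitude, and true parameter value are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# randomly perturbed regular grid; 0.4 plays the role of the regularity parameter
x = np.sort(np.arange(n) + rng.uniform(-0.4, 0.4, n))

def expo_cov(x, theta):
    """Exponential covariance exp(-|xi - xj| / theta), unit variance."""
    d = np.abs(x[:, None] - x[None, :])
    return np.exp(-d / theta)

true_theta = 3.0
K = expo_cov(x, true_theta) + 1e-8 * np.eye(n)
y = np.linalg.cholesky(K) @ rng.standard_normal(n)  # one GP realisation

def neg_log_lik(theta):
    """Negative Gaussian log-likelihood of y (up to additive constants)."""
    K = expo_cov(x, theta) + 1e-8 * np.eye(n)
    L = np.linalg.cholesky(K)
    z = np.linalg.solve(L, y)
    return np.sum(np.log(np.diag(L))) + 0.5 * z @ z

thetas = np.linspace(0.5, 10.0, 40)
theta_hat = min(thetas, key=neg_log_lik)  # maximum likelihood by grid search
```

    Working through the Cholesky factor keeps the likelihood evaluation numerically stable; the same scaffolding works for the cross-validation estimator by swapping the objective.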

  12. Variation of parasite load and immune parameters in two species of New Zealand shore crabs.

    Science.gov (United States)

    Dittmer, Jessica; Koehler, Anson V; Richard, Freddie-Jeanne; Poulin, Robert; Sicard, Mathieu

    2011-09-01

    While parasites are likely to encounter several potential intermediate hosts in natural communities, a parasite's actual range of compatible hosts is limited by numerous biological factors ranging from behaviour to immunology. In crustaceans, two major components of immunity are haemocytes and the prophenoloxidase system involved in the melanisation of foreign particles. Here, we analysed metazoan parasite prevalence and loads in the two sympatric crab species Hemigrapsus crenulatus and Macrophthalmus hirtipes at two sites. In parallel, we analysed the variation in haemocyte concentration and amount of circulating phenoloxidase (PO) in the haemolymph of the same individuals in an attempt to (a) explain differences in parasite prevalence and loads in the two species at two sites and (b) assess the impact of parasites on these immune parameters. M. hirtipes harboured more parasites but also exhibited higher haemocyte concentrations than H. crenulatus independent of the study site. Thus, higher investment in haemocyte production for M. hirtipes does not seem to result in higher resistance to parasites. Analyses of variation in immune parameters for the two crab species between the two sites that differed in parasite prevalence showed common trends. (a) In general, haemocyte concentrations were higher at the site experiencing higher parasitic pressure while circulating PO activity was lower and (b) haemocyte concentrations were influenced by microphallid trematode metacercariae in individuals from the site with higher parasitic pressure. We suggest that the higher haemocyte concentrations observed in both crab species exposed to higher parasitic pressure may represent an adaptive response to the impact of parasites on this immune parameter.

  13. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  14. Analysis of regularized inversion of data corrupted by white Gaussian noise

    International Nuclear Information System (INIS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-01-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = argmin_{u∈H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed. (paper)
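
    A finite-dimensional caricature of this estimator is easy to write down. The sketch below discretizes a smoothing forward operator, adds white noise, and computes the Tikhonov minimizer with a first-derivative (r = 1) penalty in closed form; the operator, noise level, and α are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.linspace(0.0, 1.0, n)
u_true = np.sin(2 * np.pi * x)                   # unknown function
A = np.tril(np.ones((n, n))) / n                 # discretized integration operator
delta = 0.01
m = A @ u_true + delta * rng.standard_normal(n)  # noisy measurement m = Au + delta*eps

# first-derivative penalty (r = 1): D is a scaled forward-difference matrix
D = (np.eye(n, k=1) - np.eye(n))[:-1] * n

def tikhonov(alpha):
    """Closed-form minimizer of ||Au - m||^2 + alpha * ||Du||^2."""
    return np.linalg.solve(A.T @ A + alpha * (D.T @ D), A.T @ m)

u_hat = tikhonov(1e-4)
u_naive = np.linalg.solve(A, m)  # unregularized inversion differentiates the noise
```

    The unregularized solve applies a difference operator to the noise and amplifies it by a factor of n, while the penalized solution damps exactly those high-frequency components.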

  15. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier, A.].

  16. Study of variations of radon emanations from soil in Morocco using solid state nuclear track detectors. Correlations with atmospheric parameters and seismic activities

    International Nuclear Information System (INIS)

    Boukhal, H.

    1993-01-01

    This study investigates the time variations of the quantity of radon emanating from soil. It aims to assess the possibility of using the radon signal for earthquake prediction. Regular measurements of radon concentration in soil were carried out over the two years 1991 and 1992 at five stations in Morocco: the towns of Rabat, Tetouan, Ifrane and Khouribga, and the geophysical observatory of Ibn Rochd (Berchid region). The measuring method is based on the solid state nuclear track detector technique. The obtained results show an influence of atmospheric effects on the radon emanation. The experiment proved that, on the one hand, the variations of this influence are correlated with variations of the pluviometry and the atmospheric temperature and, on the other hand, there is no notable effect of atmospheric pressure or atmospheric humidity. The good correlations between the different seismic activities and the variations of the radon emanation rate at the five measurement stations show the interest of radon use in the earthquake prediction field. 81 refs., 100 figs., 17 tabs. (F. M.)

  17. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm, Soft-Impute, iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
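
    The core iteration is compact enough to sketch. Below is a minimal Soft-Impute loop on a small synthetic low-rank matrix: missing entries are filled with the current estimate, a full SVD is soft-thresholded, and the process repeats. Matrix sizes, rank, and the threshold λ are illustrative assumptions; the paper's algorithm additionally exploits structure to use low-rank SVDs at scale.

```python
import numpy as np

rng = np.random.default_rng(2)
m_rows, n_cols, rank = 40, 30, 3
M = rng.standard_normal((m_rows, rank)) @ rng.standard_normal((rank, n_cols))
mask = rng.random((m_rows, n_cols)) < 0.5       # True where the entry is observed

def soft_impute(M, mask, lam, n_iter=200):
    """Iteratively fill missing entries, then soft-threshold the singular values."""
    Z = np.zeros_like(M)
    for _ in range(n_iter):
        filled = np.where(mask, M, Z)           # observed data + current guess
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt  # nuclear-norm proximal step
    return Z

Z = soft_impute(M, mask, lam=0.5)
rel_err = np.linalg.norm((Z - M)[~mask]) / np.linalg.norm(M[~mask])
```

    Running the loop over a decreasing grid of λ values, warm-starting each from the previous solution, traces the regularization path described in the abstract.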

  18. Sparse reconstruction by means of the standard Tikhonov regularization

    International Nuclear Information System (INIS)

    Lu Shuai; Pereverzev, Sergei V

    2008-01-01

    It is a common belief that the Tikhonov scheme with ‖·‖_{L²}-penalty fails in sparse reconstruction. We are going to show, however, that this standard regularization can help if the stability, measured in the L¹-norm, is properly taken into account in the choice of the regularization parameter. The crucial point is that now a stability bound may depend on the bases with respect to which the solution of the problem is assumed to be sparse. We discuss how such stability can be estimated numerically and present the results of computational experiments giving evidence of the reliability of our approach.

  19. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  20. The patterning of retinal horizontal cells: normalizing the regularity index enhances the detection of genomic linkage

    Directory of Open Access Journals (Sweden)

    Patrick W. Keeley

    2014-10-01

    Full Text Available Retinal neurons are often arranged as non-random distributions called mosaics, as their somata minimize proximity to neighboring cells of the same type. The horizontal cells serve as an example of such a mosaic, but little is known about the developmental mechanisms that underlie their patterning. To identify genes involved in this process, we have used three different spatial statistics to assess the patterning of the horizontal cell mosaic across a panel of genetically distinct recombinant inbred strains. To avoid the confounding effect of cell density, which varies two-fold across these different strains, we computed the real/random regularity ratio, expressing the regularity of a mosaic relative to a randomly distributed simulation of similarly sized cells. To test whether this latter statistic better reflects the variation in biological processes that contribute to horizontal cell spacing, we subsequently compared the genetic linkage for each of these two traits, the regularity index and the real/random regularity ratio, each computed from the distribution of nearest neighbor (NN) distances and from the Voronoi domain (VD) areas. Finally, we compared each of these analyses with another index of patterning, the packing factor. Variation in the regularity indexes, as well as their real/random regularity ratios, and the packing factor, mapped quantitative trait loci (QTL) to the distal ends of Chromosomes 1 and 14. For the NN and VD analyses, we found that the degree of linkage was greater when using the real/random regularity ratio rather than the respective regularity index. Using informatic resources, we narrow the list of prospective genes positioned at these two intervals to a small collection of six genes that warrant further investigation to determine their potential role in shaping the patterning of the horizontal cell mosaic.
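
    The two spacing statistics compared above can be sketched directly. The code below computes a nearest-neighbor regularity index (mean NN distance divided by its standard deviation) for a jittered lattice standing in for a cell mosaic, and normalizes it by the mean index of random patterns at the same density to obtain the real/random regularity ratio; the point patterns are synthetic illustrations, not retinal data.

```python
import numpy as np

rng = np.random.default_rng(3)

def regularity_index(points):
    """Mean nearest-neighbor distance divided by its standard deviation."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore self-distances
    nn = d.min(axis=1)
    return nn.mean() / nn.std()

# a jittered 14 x 14 lattice stands in for a regular mosaic
grid = np.stack(np.meshgrid(np.arange(14), np.arange(14)), axis=-1).reshape(-1, 2) / 14.0
mosaic = grid + rng.normal(0.0, 0.005, grid.shape)
ri_mosaic = regularity_index(mosaic)

# real/random regularity ratio: normalize by random simulations of the same density
sims = [regularity_index(rng.random(mosaic.shape)) for _ in range(20)]
rr_ratio = ri_mosaic / np.mean(sims)
```

    Because the random simulations share the mosaic's density, the ratio is comparable across strains with different cell counts, which is exactly the normalization argument made in the abstract.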

  1. Biological variation of platelet parameters determined by the Sysmex XN hematology analyzer.

    Science.gov (United States)

    Buoro, Sabrina; Seghezzi, Michela; Manenti, Barbara; Pacioni, Aurelio; Carobene, Anna; Ceriotti, Ferruccio; Ottomano, Cosimo; Lippi, Giuseppe

    2017-07-01

    This study was aimed to define the short- and medium-term biological variation (BV) estimates, the index of individuality and the reference change value (RCV) of the platelet count, platelet distribution width, mean platelet volume, platelet larger cell ratio, plateletcrit and immature platelet fraction. The study population consisted of 43 healthy subjects, who participated in the assessment of the medium-term (21 subjects; blood sampling once a week for 5 consecutive weeks) and short-term (22 subjects; blood sampling once a day for 5 consecutive days) BV study, using a Sysmex XN-module. Eight subjects were also scheduled to participate in both phases. The data were subjected to outlier analysis prior to CV-ANOVA to determine the BV estimates with their relative confidence intervals. The medium-term and short-term within-subject BV (CV_I) were comprised between 2.3-7.0% and 1.1-8.6%, whereas the medium-term and short-term between-subject BV (CV_G) were comprised between 7.1-20.7% and 6.8-48.6%. The index of individuality and index of heterogeneity were always 0.63 for all the parameters, in both arms of the study. The RCVs were similar for all parameters, in both arms of the study. This study allowed us to define the BV estimates of many platelet parameters, some of them previously unavailable in the literature. The kinetics of platelet turnover suggests the use of short-term BV data for calculating analytical goals and RCV. The correct clinical interpretation of platelet parameters also necessitates that each laboratory estimates local RCV values. Copyright © 2017 Elsevier B.V. All rights reserved.
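
    The reference change value mentioned here has a standard closed form, RCV = 2^{1/2} · Z · (CV_A² + CV_I²)^{1/2}, combining analytical (CV_A) and within-subject (CV_I) variation. The snippet below implements that textbook formula, together with a simplified index of individuality, using made-up CV values for illustration; nothing here is computed from the study's data.

```python
import math

def rcv(cv_a, cv_i, z=1.96):
    """Bidirectional reference change value (%) at 95% probability:
    RCV = sqrt(2) * Z * sqrt(CV_A^2 + CV_I^2)."""
    return math.sqrt(2.0) * z * math.sqrt(cv_a ** 2 + cv_i ** 2)

def index_of_individuality(cv_i, cv_g):
    """Simplified index of individuality CV_I / CV_G; values well below 1 mean
    population reference intervals are insensitive for the individual."""
    return cv_i / cv_g

# Illustrative (not study) values: CV_A = 2%, CV_I = 4%, CV_G = 20%
change_threshold = rcv(2.0, 4.0)
ii = index_of_individuality(4.0, 20.0)
```

    A low index of individuality, as reported in the abstract, is precisely the situation where the RCV, rather than a population reference interval, should drive the interpretation of serial results.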

  2. Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing.

    Science.gov (United States)

    Elmoataz, Abderrahim; Lezoray, Olivier; Bougleux, Sébastien

    2008-07-01

    We introduce a nonlocal discrete regularization framework on weighted graphs of arbitrary topology for image and manifold processing. The approach considers the problem as a variational one, which consists of minimizing a weighted sum of two energy terms: a regularization term that uses a discrete weighted p-Dirichlet energy and an approximation term. This is the discrete analogue of recent continuous Euclidean nonlocal regularization functionals. The proposed formulation leads to a family of simple and fast nonlinear processing methods based on the weighted p-Laplace operator, parameterized by the degree p of regularity, the graph structure and the graph weight function. These discrete processing methods provide a graph-based version of recently proposed semi-local or nonlocal processing methods used in image and mesh processing, such as the bilateral filter, the TV digital filter or the nonlocal means filter. It works with equal ease on regular 2-D and 3-D images, manifolds or any data. We illustrate the abilities of the approach by applying it to various types of images, meshes, manifolds, and data represented as graphs.
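
    For p = 2 the minimization reduces to a sparse linear system that simple fixed-point sweeps solve. The sketch below applies that discrete regularization to a noisy signal on a weighted path graph, the simplest graph topology; the graph, weights, and fidelity parameter λ are illustrative assumptions, not the nonlocal image graphs of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
signal = np.sign(np.sin(np.linspace(0.0, 3.0 * np.pi, n)))  # piecewise-constant signal
f = signal + 0.3 * rng.standard_normal(n)                   # noisy observation

w = np.ones(n - 1)   # edge weights of a path graph: node i -- node i+1
lam = 2.0            # fidelity weight

def graph_regularize(f, w, lam, n_iter=500):
    """Jacobi sweeps for min_u  sum_edges w_ij (u_i - u_j)^2 + lam * ||u - f||^2."""
    u = f.copy()
    for _ in range(n_iter):
        nb_sum = np.zeros_like(u)
        nb_w = np.zeros_like(u)
        nb_sum[:-1] += w * u[1:]   # contribution from right neighbor
        nb_w[:-1] += w
        nb_sum[1:] += w * u[:-1]   # contribution from left neighbor
        nb_w[1:] += w
        u = (lam * f + nb_sum) / (lam + nb_w)  # per-node stationarity condition
    return u

u = graph_regularize(f, w, lam)
```

    Replacing the path edges with nonlocal edges weighted by patch similarity turns the same sweep into a graph-based analogue of the nonlocal means filter, which is the generalization the abstract describes.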

  3. Joint Segmentation and Shape Regularization with a Generalized Forward Backward Algorithm.

    Science.gov (United States)

    Stefanoiu, Anca; Weinmann, Andreas; Storath, Martin; Navab, Nassir; Baust, Maximilian

    2016-05-11

    This paper presents a method for the simultaneous segmentation and regularization of a series of shapes from a corresponding sequence of images. Such series arise as time series of 2D images when considering video data, or as stacks of 2D images obtained by slicewise tomographic reconstruction. We first derive a model where the regularization of the shape signal is achieved by a total variation prior on the shape manifold. The method employs a modified Kendall shape space to facilitate explicit computations together with the concept of Sobolev gradients. For the proposed model, we derive an efficient and computationally accessible splitting scheme. Using a generalized forward-backward approach, our algorithm treats the total variation atoms of the splitting via proximal mappings, whereas the data terms are dealt with by gradient descent. The potential of the proposed method is demonstrated on various application examples dealing with 3D data. We explain how to extend the proposed combined approach to shape fields which, for instance, arise in the context of 3D+t imaging modalities, and show an application in this setup as well.

  4. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  5. Variation of equation of state parameters in the Mg2(Si 1-xSnx) alloys

    KAUST Repository

    Pulikkotil, Jiji Thomas Joseph

    2010-08-03

    Thermoelectric performance peaks for intermediate Mg2(Si1-xSnx) alloys, but not for the isomorphic and isoelectronic Mg2(Si1-xGex) alloys. A comparative study of the equation of state parameters is performed using density functional theory, the Green's function technique, and the coherent potential approximation. Anomalous variation of the bulk modulus is found in Mg2(Si1-xSnx) but not in the Mg2(Si1-xGex) analogs. Assuming a Debye model, linear variations of the unit cell volume and pressure derivative of the bulk modulus suggest that lattice effects are important for the thermoelectric response. From the electronic structure perspective, Mg2(Si1-xSnx) is distinguished by a strong renormalization of the anion-anion hybridization. © 2010 IOP Publishing Ltd.

  6. Power variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

    We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path of the process ... a chaos representation.

  7. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
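
    A minimal stand-in for TV-regularized inversion (denoising rather than a full velocity inversion, and plain gradient descent on a smoothed TV term rather than the Bregmanized operator splitting of the paper) still shows how blockiness is preserved; the profile, noise level, and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120
v_true = np.repeat([2.0, 3.5, 2.5, 4.0], n // 4)   # blocky velocity-like profile
data = v_true + 0.2 * rng.standard_normal(n)

def tv_denoise(f, lam=1.0, eps=1e-2, step=0.02, n_iter=3000):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps),
    a smoothed total-variation objective."""
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)
        g = du / np.sqrt(du ** 2 + eps)            # derivative of smoothed |du|
        grad = u - f
        grad[:-1] -= lam * g                       # chain rule through the difference
        grad[1:] += lam * g
        u = u - step * grad
    return u

u = tv_denoise(data)
```

    The smoothing parameter eps only rounds off the kink of |·| near zero; jumps much larger than sqrt(eps) survive, so the recovered profile stays blocky while the within-layer noise is flattened.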

  8. Variation of Magnetic Field (By , Bz) Polarity and Statistical Analysis of Solar Wind Parameters during the Magnetic Storm Period

    OpenAIRE

    Ga-Hee Moon

    2011-01-01

    It is generally believed that the occurrence of a magnetic storm depends upon the solar wind conditions, particularly the southward interplanetary magnetic field (IMF) component. To understand the relationship between solar wind parameters and magnetic storms, variations in magnetic field polarity and solar wind parameters during magnetic storms are examined. A total of 156 storms during the period 1997-2003 are used. According to the interplanetary driver, magnetic storms are ...

  9. Regular black holes in Einstein-Gauss-Bonnet gravity

    Science.gov (United States)

    Ghosh, Sushant G.; Singh, Dharm Veer; Maharaj, Sunil D.

    2018-05-01

    Einstein-Gauss-Bonnet theory, a natural generalization of general relativity to a higher dimension, admits a static spherically symmetric black hole which was obtained by Boulware and Deser. This black hole is similar to its general relativity counterpart with a curvature singularity at r = 0. We present an exact 5D regular black hole metric, with parameter (k > 0), that interpolates between the Boulware-Deser black hole (k = 0) and the Wiltshire charged black hole (r ≫ k). Owing to the appearance of the exponential correction factor (e^(−k/r²)), responsible for regularizing the metric, the thermodynamical quantities are modified, and it is demonstrated that the Hawking-Page phase transition is achievable. The heat capacity diverges at a critical radius r = r_C, where incidentally the temperature is maximum. Thus, we have a regular black hole with Cauchy and event horizons, and evaporation leads to a thermodynamically stable double-horizon black hole remnant with vanishing temperature. The entropy does not satisfy the usual exact horizon-area result of general relativity.

  10. Fried-Yennie gauge in dimensionally regularized QED

    International Nuclear Information System (INIS)

    Adkins, G.S.

    1993-01-01

    The Fried-Yennie gauge in QED is a covariant gauge with agreeable infrared properties. That is, the mass-shell renormalization scheme can be implemented without introducing artificial infrared divergences, and terms having spuriously low orders in α disappear in certain bound-state calculations. The photon propagator in the Fried-Yennie gauge has the form D^β_{μν}(k) = (−1/k²)[g_{μν} + β k_μ k_ν/k²], where β is the gauge parameter. In this work, I show that the Fried-Yennie gauge parameter is β = 2/(1 − 2ε) when dimensional regularization (with n = 4 − 2ε dimensions of spacetime) is used to regulate the theory.

  11. ANALYSIS OF THE INTRA-CITY VARIATION OF URBAN HEAT ISLAND AND ITS RELATION TO LAND SURFACE/COVER PARAMETERS

    Directory of Open Access Journals (Sweden)

    D. Gerçek

    2016-06-01

    Full Text Available Along with urbanization, the sealing of vegetated land and evaporation surfaces by impermeable materials leads to changes in urban climate. This phenomenon is observed as temperatures several degrees higher in densely urbanized areas than in the rural land at the urban fringe, particularly at night: the so-called Urban Heat Island. The Urban Heat Island (UHI) effect is related to urban form, pattern and building materials, as well as to meteorological conditions, air pollution, and excess heat from cooling. The UHI effect has negative influences on human health, as well as other environmental problems such as higher energy demand, air pollution, and water shortage. The UHI effect has long been studied through observations of air temperature from thermometers. However, with the advent and proliferation of remote sensing technology, synoptic coverage and better representations of the spatial variation of surface temperature became possible. This has opened new avenues for the observation capabilities and research of UHIs. In this study, "UHI effect and its relation to factors that cause it" is explored for İzmit city, which has been subject to excess urbanization and industrialization during the past decades. The spatial distribution and variation of the UHI effect in İzmit is analysed using Landsat 8 and ASTER day & night images from the summer of 2015. Surface temperature data derived from the thermal bands of the images were analysed for the UHI effect. Higher temperatures were classified into 4 grades of UHIs and mapped for both day and night. Inadequate urban form, pattern, density, high buildings and paved surfaces at the expense of soil ground and vegetation cover are the main factors that cause microclimates giving rise to spatial variations in temperature across cities. These factors, quantified as land surface/cover parameters for the study, include vegetation index (NDVI), imperviousness (NDISI), albedo, solar insolation, Sky View Factor (SVF), building

  12. Analysis of the Intra-City Variation of Urban Heat Island and its Relation to Land Surface/cover Parameters

    Science.gov (United States)

    Gerçek, D.; Güven, İ. T.; Oktay, İ. Ç.

    2016-06-01

    Along with urbanization, the sealing of vegetated land and evaporation surfaces by impermeable materials leads to changes in urban climate. This phenomenon is observed as temperatures several degrees higher in densely urbanized areas than in the rural land at the urban fringe, particularly at night: the so-called Urban Heat Island. The Urban Heat Island (UHI) effect is related to urban form, pattern and building materials, as well as to meteorological conditions, air pollution, and excess heat from cooling. The UHI effect has negative influences on human health, as well as other environmental problems such as higher energy demand, air pollution, and water shortage. The UHI effect has long been studied through observations of air temperature from thermometers. However, with the advent and proliferation of remote sensing technology, synoptic coverage and better representations of the spatial variation of surface temperature became possible. This has opened new avenues for the observation capabilities and research of UHIs. In this study, "UHI effect and its relation to factors that cause it" is explored for İzmit city, which has been subject to excess urbanization and industrialization during the past decades. The spatial distribution and variation of the UHI effect in İzmit is analysed using Landsat 8 and ASTER day & night images from the summer of 2015. Surface temperature data derived from the thermal bands of the images were analysed for the UHI effect. Higher temperatures were classified into 4 grades of UHIs and mapped for both day and night. Inadequate urban form, pattern, density, high buildings and paved surfaces at the expense of soil ground and vegetation cover are the main factors that cause microclimates giving rise to spatial variations in temperature across cities. These factors, quantified as land surface/cover parameters for the study, include vegetation index (NDVI), imperviousness (NDISI), albedo, solar insolation, Sky View Factor (SVF), building envelope

  13. Asymptotic performance of regularized quadratic discriminant analysis based classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-12-13

    This paper carries out a large dimensional analysis of the standard regularized quadratic discriminant analysis (QDA) classifier designed on the assumption that data arise from a Gaussian mixture model. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the covariances and means associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized QDA and can be used to determine the optimal regularization parameter that minimizes the misclassification error probability. Despite being valid only for Gaussian data, our theoretical findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from popular databases, thereby making an interesting connection between theory and practice.
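
    A small regularized-QDA classifier is easy to sketch: fit per-class Gaussian means and covariances, add a ridge term γI to each covariance before inversion, and classify by the larger Gaussian log-density. The data dimensions, class parameters, and γ below are illustrative assumptions, not the regimes analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
p, n_train = 10, 200
mu0, mu1 = np.zeros(p), np.full(p, 0.7)
X0 = rng.standard_normal((n_train, p)) + mu0          # class 0: N(mu0, I)
X1 = 1.5 * rng.standard_normal((n_train, p)) + mu1    # class 1: N(mu1, 2.25*I)

def fit_rqda(X0, X1, gamma):
    """Per-class Gaussian fit with ridge-regularized covariances."""
    model = []
    for X in (X0, X1):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False) + gamma * np.eye(X.shape[1])
        model.append((mu, np.linalg.inv(S), np.linalg.slogdet(S)[1]))
    return model

def predict(x, model):
    scores = []
    for mu, S_inv, logdet in model:
        d = x - mu
        scores.append(-0.5 * (d @ S_inv @ d + logdet))  # Gaussian log-density + const
    return int(np.argmax(scores))

model = fit_rqda(X0, X1, gamma=0.1)
Xt0 = rng.standard_normal((200, p)) + mu0
Xt1 = 1.5 * rng.standard_normal((200, p)) + mu1
acc = np.mean([predict(x, model) == 0 for x in Xt0]
              + [predict(x, model) == 1 for x in Xt1])
```

    The paper's contribution is an asymptotic formula for the error of exactly this kind of classifier, which lets γ be tuned analytically instead of by cross-validating over a grid.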

  14. Variations of 57Fe hyperfine parameters in medicaments containing ferrous fumarate and ferrous sulfate

    International Nuclear Information System (INIS)

    Oshtrakh, M. I.; Novikov, E. G.; Dubiel, S. M.; Semionkin, V. A.

    2010-01-01

    Several commercially available medicaments containing ferrous fumarate (FeC4H2O4) and ferrous sulfate (FeSO4) as sources of ferrous iron were studied using high velocity resolution Mössbauer spectroscopy. A comparison of the 57Fe hyperfine parameters revealed small variations for the main components in both medicaments, indicating some differences in the ferrous fumarates and ferrous sulfates. It was also found that all spectra contained additional minor components, probably related to ferrous and ferric impurities or to partially modified main components.

  15. Local regularity analysis of strata heterogeneities from sonic logs

    Directory of Open Access Journals (Sweden)

    S. Gaci

    2010-09-01

    Borehole logs provide geological information about the rocks crossed by the wells. Several properties of rocks can be interpreted in terms of lithology, type and quantity of the fluid filling the pores and fractures.

    Here, the logs are assumed to be nonhomogeneous Brownian motions (nhBms), which are generalized fractional Brownian motions (fBms) indexed by depth-dependent Hurst parameters H(z). Three techniques, the local wavelet approach (LWA), the average-local wavelet approach (ALWA), and the Peltier algorithm (PA), are suggested to estimate the Hurst functions (or regularity profiles) from the logs.

    First, two synthetic sonic logs with different parameters, generated by the successive random additions (SRA) algorithm, are used to demonstrate the potential of the proposed methods. The obtained Hurst functions are close to the theoretical ones, and the transitions between the modeled layers are marked by discontinuities in the Hurst values. It is also shown that PA yields the best Hurst value estimates.

    Second, we investigate the multifractional properties of sonic log data recorded at two scientific deep boreholes: the pilot hole VB and the ultra-deep main hole HB, drilled for the German Continental Deep Drilling Program (KTB). All the regularity profiles obtained independently for the logs show a clear correlation with lithology, and from each regularity profile we derive a similar segmentation in terms of lithological units. The lithological discontinuities (strata bounds and fault contacts) are located at the local extrema of the Hurst functions. Moreover, the regularity profiles are compared with the KTB estimated porosity logs, showing a significant relation between the local extrema of the Hurst functions and the fluid-filled fractures. The Hurst function may thus constitute a tool for characterizing underground heterogeneities.
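A rough numerical sketch of the idea of a depth-dependent regularity profile can be built from the fBm scaling law E|X(z+l) − X(z)|² ∝ l^(2H), estimated in sliding windows. This generic variance-scaling estimator is not the paper's LWA/ALWA/PA; window size, step and lags are arbitrary choices:

```python
# Sketch: windowed estimation of a depth-dependent Hurst exponent H(z)
# from the log-log slope of increment variances versus lag.
# Window/step/lag values are illustrative, not from the paper.
import numpy as np

def local_hurst(x, window=256, step=16, lags=(1, 2, 4, 8, 16)):
    lags = np.asarray(lags)
    half = window // 2
    centers, H = [], []
    for c in range(half, len(x) - half, step):
        seg = x[c - half:c + half]
        var = [np.mean((seg[l:] - seg[:-l]) ** 2) for l in lags]
        slope = np.polyfit(np.log(lags), np.log(var), 1)[0]
        centers.append(c)
        H.append(slope / 2.0)          # log-log slope equals 2H
    return np.array(centers), np.array(H)

rng = np.random.default_rng(1)
bm = np.cumsum(rng.normal(size=4096))  # ordinary Brownian motion, H = 0.5
z, H = local_hurst(bm)
print(H.mean())
```

For a homogeneous Brownian motion the estimated profile should hover near H = 0.5; on real logs the local extrema of H(z) are what the paper relates to lithological boundaries.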

  16. Image segmentation with a novel regularized composite shape prior based on surrogate study

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: Incorporating training data into image segmentation is a good approach to achieving additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed on both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement in image segmentation accuracy compared with a multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization that achieves superior segmentation performance relative to typical benchmark schemes.

  17. Image segmentation with a novel regularized composite shape prior based on surrogate study

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Purpose: Incorporating training data into image segmentation is a good approach to achieving additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed on both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement in image segmentation accuracy compared with a multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization that achieves superior segmentation performance relative to typical benchmark schemes.

  18. Transition from regular to irregular reflection of cylindrical converging shock waves over convex obstacles

    Science.gov (United States)

    Vignati, F.; Guardone, A.

    2017-11-01

    An analytical model for the evolution of regular reflections of cylindrical converging shock waves over circular-arc obstacles is proposed. The model is based on a new local parameter, the perceived wedge angle, which substitutes for the global wedge angle of planar surfaces and accounts for the time-dependent curvature of both the shock and the obstacle at the reflection point. The new model compares fairly well with numerical results. Results from numerical simulations of the regular-to-Mach transition, which eventually occurs further downstream along the obstacle, point to the perceived wedge angle as the most significant parameter for identifying regular-to-Mach transitions. Indeed, at the transition point, the value of the perceived wedge angle is between 39° and 42° for all investigated configurations, whereas, e.g., the absolute local wedge angle varies between 10° and 45° under the same conditions.

  19. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms

    International Nuclear Information System (INIS)

    Hohage, Thorsten; Werner, Frank

    2016-01-01

    Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce Positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years. (topical review)
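A toy instance of the penalized maximum likelihood estimators discussed in the review can be written down directly: minimize the negative Poisson log-likelihood of a linear model plus a quadratic penalty. The forward operator, true signal and penalty weight below are invented for illustration:

```python
# Sketch: penalized MLE for Poisson data y ~ Poisson(Ax), with a
# quadratic (Tikhonov-type) penalty and a nonnegativity constraint.
# A, x_true and alpha are illustrative toy values.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n = 120, 20
A = rng.uniform(0.0, 1.0, (m, n))          # toy linear forward operator
x_true = rng.uniform(1.0, 5.0, n)
y = rng.poisson(A @ x_true)                # Poisson-distributed counts

def neg_penalized_loglik(x, alpha=0.1, eps=1e-9):
    Ax = A @ x
    # Poisson negative log-likelihood (up to constants) + penalty.
    return np.sum(Ax - y * np.log(Ax + eps)) + alpha * np.sum(x ** 2)

res = minimize(neg_penalized_loglik, x0=np.ones(n),
               bounds=[(0.0, None)] * n, method="L-BFGS-B")
rel = np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true)
print(rel)
```

This is the generic convex minimization the review refers to; the specialized algorithms it surveys exploit the problem structure far more efficiently than a black-box solver.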

  20. Long-period variations of wind parameters in the mesopause region and the solar cycle dependence

    International Nuclear Information System (INIS)

    Greisiger, K.M.; Schminder, R.; Kuerschner, D.

    1987-01-01

    The solar cycle dependence of wind parameters below 100 km is discussed on the basis of long-term continuous ionospheric drift measurements in the low-frequency range. For the meridional prevailing wind, no significant variation was found. The same comparison as for winter was made for summer, where previous investigations had found no correlation. Now the radar meteor wind measurements, too, showed a significant negative correlation of the zonal prevailing wind with solar activity for the years 1976 to 1983. The ionospheric drift measurements at Collm show the same tendency but a larger dispersion, owing to the lower accuracy of the harmonic analysis caused by the shorter daily measuring interval in summer. Continuous wind observations in the upper mesopause region over more than 20 years revealed distinct long-term variations, the origin of which cannot be explained with present knowledge.

  1. Use of regularized algebraic methods in tomographic reconstruction

    International Nuclear Information System (INIS)

    Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.

    1997-01-01

    Algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. We have tested on a phantom the use of regularization (the a priori introduction of information), as well as the modelling of the spatial resolution variation with depth (SRVD). We compared the performances of two back-projection filtering (BPF) methods and two algebraic methods (AM) in terms of FWHM (measured with a point source), of the reduction of background noise (σ/m) on the homogeneous part of Jaszczak's phantom, and of reconstruction speed (time unit = BPF). The BPF methods use a grade filter (maximal resolution, no noise treatment), either alone or combined with a Hann low-pass filter (f c = 0.4), together with an attenuation correction. The AM, which incorporate attenuation and scattering corrections, are, on the one hand, OS EM (Ordered Subsets Expectation Maximization, with partitioning and rearranging of the projection matrix) without regularization or SRVD correction and, on the other hand, OS MAP EM (Maximum A Posteriori), regularized and incorporating the SRVD correction. A table gives, for each method (grade, Hann, OS EM and OS MAP EM), the values of FWHM, σ/m and time. The OS MAP EM algebraic method improves both the resolution, by taking the SRVD into account in the reconstruction process, and the noise treatment, by regularization. Moreover, thanks to the OS technique, the reconstruction times remain acceptable
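The EM iteration underlying OS EM type methods can be sketched in a few lines. This is the plain MLEM multiplicative update on a toy system matrix, without ordered subsets, regularization, attenuation or scatter corrections (all sizes are invented):

```python
# Sketch: plain MLEM reconstruction x <- x / (A^T 1) * A^T(y / Ax)
# on a toy system matrix; OS-EM applies the same update over subsets
# of the rows of A. Dimensions and counts are illustrative.
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 16
A = rng.uniform(0.1, 1.0, (m, n))          # toy system matrix
x_true = rng.uniform(1.0, 4.0, n)
y = rng.poisson(A @ x_true).astype(float)  # Poisson projection data

x = np.ones(n)                             # uniform initial image
sens = A.sum(axis=0)                       # sensitivity image, A^T 1
for _ in range(300):
    ratio = y / np.clip(A @ x, 1e-12, None)
    x = x / sens * (A.T @ ratio)           # multiplicative EM update

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The update preserves nonnegativity automatically, which is one reason EM-type iterations are popular in emission tomography.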

  2. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. The algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that this covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and the regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
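A minimal sketch of regression in a sum of two Gaussian RKHSs: since the reproducing kernel of a sum space is (under the usual conditions) the sum of the component kernels, kernel ridge regression with K1 + K2 lets the large-scale kernel capture the low-frequency component and the small-scale kernel the high-frequency one. Bandwidths, the regularization constant and the test function are illustrative assumptions, not the paper's algorithm:

```python
# Sketch: kernel ridge regression with a summed Gaussian kernel
# K = K_largescale + K_smallscale, fitting a "nonflat" target that
# mixes low and high frequencies. All constants are made up.
import numpy as np

def gauss_kernel(X, Z, s):
    d2 = (X[:, None] - Z[None, :]) ** 2
    return np.exp(-d2 / (2 * s ** 2))

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(40 * np.pi * x) \
    + 0.05 * rng.normal(size=x.size)       # low + high frequency + noise

lam = 1e-3
K = gauss_kernel(x, x, 0.3) + gauss_kernel(x, x, 0.01)  # sum of kernels
alpha = np.linalg.solve(K + lam * np.eye(x.size), y)    # ridge system

xt = np.linspace(0, 1, 400)
Kt = gauss_kernel(xt, x, 0.3) + gauss_kernel(xt, x, 0.01)
y_hat = Kt @ alpha
print(y_hat.shape)
```

A single bandwidth would have to compromise between the two frequency scales; the summed kernel fits both at once, which is the intuition behind the sum-space construction.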

  3. Shift versus no-shift in local regularization of Chern-Simons theory

    International Nuclear Information System (INIS)

    Giavarini, G.; Parma Univ.; Martin, C.P.; Ruiz Ruiz, F.

    1994-01-01

    We consider a family of local BRS-invariant higher covariant derivative regularizations of SU(N) Chern-Simons theory that do not shift the value of the Chern-Simons parameter k to k + sign(k) c v at one loop. (orig.)

  4. A regularized relaxed ordered subset list-mode reconstruction algorithm and its preliminary application to undersampling PET imaging

    International Nuclear Information System (INIS)

    Cao, Xiaoqing; Xie, Qingguo; Xiao, Peng

    2015-01-01

    The list mode format is commonly used in modern positron emission tomography (PET) image reconstruction because of certain special advantages. In this work, we propose a list-mode-based regularized relaxed ordered subset (LMROS) algorithm for static PET imaging. LMROS works with regularization terms that can be formulated as twice-differentiable convex functions; this versatility makes LMROS a convenient and general framework for realizing different regularized list-mode reconstruction methods. LMROS was applied to two simulated undersampled PET imaging scenarios to verify its effectiveness. Convex quadratic, total variation, non-local means and dictionary-learning-based regularization methods were successfully realized for the different cases. The results showed that the LMROS algorithm is effective and that some regularization methods greatly reduce the distortions and artifacts caused by undersampling. (paper)

  5. Variation of different humification parameters during two composting types with lignicellulosics residual of roses

    International Nuclear Information System (INIS)

    Farias Camero, Diana Maria; Ballesteros G, Maria Ines; Bendeck L, Myriam

    2000-01-01

    Two composting processes, each lasting about 165 days, were carried out. In one, decomposition of the material was performed by microorganisms only (direct composting); in the other, by microorganisms and earthworms, Eisenia foetida (indirect composting). Periodic samples were taken from different places in the pile, and temperature was monitored weekly. Total organic carbon was analyzed in each sample, and organic matter was extracted and fractionated with a mixture of 1 M sodium hydroxide and 1 M sodium pyrophosphate. Total organic carbon was quantified in the separated fractions (humic extract, humic acids, fulvic acids and humins), and different humification parameters were calculated from these results: humification ratio, humification index, polymerization ratio, percentage of humic acids, and the ratio of non-extractable to extractable organic carbon. The E4/E6 ratio, the oxygen, hydrogen, nitrogen and carbon contents, and the C/H, C/O and C/N ratios were determined for the humic acids. The variation of the humification parameters shows that the transformation and formation dynamics of the humic substances are constrained by the composting system and by the temperature generated and maintained. It was established that the extractable carbon percentage and the CNoExt/CExt ratio cannot be considered satisfactory parameters for evaluating the degree of compost stabilization; the polymerization ratio and the humification index are the most adequate parameters for determining the degree of humification of the material

  6. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    Science.gov (United States)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  7. Revisiting Boltzmann learning: parameter estimation in Markov random fields

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Andersen, Lars Nonboe; Kjems, Ulrik

    1996-01-01

    This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization...... and generalization in the context of Boltzmann machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov field. The regularized adaptation produces a parameter set that closely resembles the “teacher” parameters, hence, will produce segmentations that closely reproduce...

  8. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions

  9. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp

  10. An investigation of the general regularity of size dependence of reaction kinetics of nanoparticles

    International Nuclear Information System (INIS)

    Cui, Zixiang; Duan, Huijuan; Xue, Yongqiang; Li, Ping

    2015-01-01

    In the preparation and application of nanomaterials, chemical reactions of nanoparticles are often involved, and particle size has a dramatic influence on reaction kinetics. Nevertheless, there are many conflicts in the reported regularities of the size dependence of reaction kinetic parameters, and these conflicts have not been explained so far. In this paper, taking the reaction of nano-ZnO (average diameter from 20.96 to 53.31 nm) with acrylic acid solution as a model system, the influence of particle size on the kinetic parameters was investigated. The observed regularities are consistent with those in most of the literature but inconsistent with a few reports; the reasons for these conflicts are interpreted and can be attributed to two factors: improper data processing with too few data points, and the difference between solid and porous particles. A general regularity of the size dependence of reaction kinetics for solid particles was obtained: as the size of the nanoparticles decreases, the rate constant and the reaction order increase, while the apparent activation energy and the pre-exponential factor decrease; moreover, the logarithm of the rate constant, the logarithm of the pre-exponential factor, and the apparent activation energy each depend linearly on the reciprocal of the particle size

  11. Effects of 12-week supervised treadmill training on spatio-temporal gait parameters in patients with claudication.

    Science.gov (United States)

    Konik, Anita; Kuklewicz, Stanisław; Rosłoniec, Ewelina; Zając, Marcin; Spannbauer, Anna; Nowobilski, Roman; Mika, Piotr

    2016-01-01

    The purpose of the study was to evaluate selected temporal and spatial gait parameters in patients with intermittent claudication after completion of a 12-week supervised treadmill walking programme. The study included 36 patients (26 males and 10 females), mean age 64 (SD 7.7) years, with intermittent claudication. All patients were tested on a treadmill (Gait Trainer, Biodex). Before the programme and after its completion, the following gait biomechanical parameters were tested: step length (cm), step cycle (cycles/s), leg support time (%) and coefficient of step variation (%); pain-free walking time (PFWT) and maximal walking time (MWT) were also measured. Training was conducted in accordance with the current TASC II guidelines. After 12 weeks of training, patients showed a significant change in gait biomechanics, consisting in a decreased step-cycle frequency (p < 0.05); gait was also more regular, as expressed by a statistically significant decrease in the coefficient of variation (p < 0.05). A 12-week treadmill walking training programme may lead to significant improvement of temporal and spatial gait parameters, as well as of pain-free walking time and maximum walking time, in patients with intermittent claudication.

  12. Major earthquakes occur regularly on an isolated plate boundary fault.

    Science.gov (United States)

    Berryman, Kelvin R; Cochran, Ursula A; Clark, Kate J; Biasi, Glenn P; Langridge, Robert M; Villamor, Pilar

    2012-06-29

    The scarcity of long geological records of major earthquakes, on different types of faults, makes testing hypotheses of regular versus random or clustered earthquake recurrence behavior difficult. We provide a fault-proximal major earthquake record spanning 8000 years on the strike-slip Alpine Fault in New Zealand. Cyclic stratigraphy at Hokuri Creek suggests that the fault ruptured to the surface 24 times, and event ages yield a 0.33 coefficient of variation in recurrence interval. We associate this near-regular earthquake recurrence with a geometrically simple strike-slip fault, with high slip rate, accommodating a high proportion of plate boundary motion that works in isolation from other faults. We propose that it is valid to apply time-dependent earthquake recurrence models for seismic hazard estimation to similar faults worldwide.
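The recurrence statistic used here, the coefficient of variation (CV) of inter-event times, is straightforward to compute: CV near 0 indicates quasi-periodic recurrence, near 1 Poisson-like randomness, and above 1 clustering. The event ages below are invented for illustration, not the Hokuri Creek record:

```python
# Sketch: coefficient of variation of earthquake recurrence intervals.
# The event ages (years before present) are hypothetical toy data.
import numpy as np

ages = np.array([7900, 7600, 7350, 6900, 6600, 6300, 5900, 5700,
                 5300, 5000, 4700, 4300, 4000, 3700, 3400, 3000,
                 2700, 2400, 2000, 1700, 1400, 1100,  800,  500])
intervals = -np.diff(ages)             # years between successive events
cv = intervals.std(ddof=1) / intervals.mean()
print(round(cv, 2))
```

A low CV like the paper's 0.33 is what motivates time-dependent (renewal) hazard models rather than a memoryless Poisson model.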

  13. Convex variational problems linear, nearly linear and anisotropic growth conditions

    CERN Document Server

    Bildhauer, Michael

    2003-01-01

    The author emphasizes a non-uniform ellipticity condition as the main approach to regularity theory for solutions of convex variational problems with different types of non-standard growth conditions. This volume first focuses on elliptic variational problems with linear growth conditions. Here the notion of a "solution" is not obvious and the point of view has to be changed several times in order to get some deeper insight. Then the smoothness properties of solutions to convex anisotropic variational problems with superlinear growth are studied. In spite of the fundamental differences, a non-uniform ellipticity condition serves as the main tool towards a unified view of the regularity theory for both kinds of problems.

  14. Application of the Tikhonov regularization method to wind retrieval from scatterometer data I. Sensitivity analysis and simulation experiments

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang

    2011-01-01

    A scatterometer is an instrument that provides all-day, large-scale wind field information, and its application, especially to wind retrieval, has always attracted meteorologists. Since several factors cause large direction errors, it is important to find where the error mainly comes from: the background field, the normalized radar cross-section (NRCS), or the wind retrieval method. First, based on SDP2.0, the simulated 'true' NRCS is calculated from the simulated 'true' wind through the geophysical model function NSCAT2. The simulated background field is constructed by adding noise to the simulated 'true' wind under a non-divergence constraint, and the simulated 'measured' NRCS is formed by adding noise to the simulated 'true' NRCS. Then sensitivity experiments are performed, and a new regularization method is used to improve the ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than to noise in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially when the background error is large. This work provides important information and a new method for wind retrieval with real data.
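Independent of the wind retrieval application, the core of Tikhonov regularization is the damped least-squares solution x(λ) = argmin ‖Ax − b‖² + λ‖x‖², available in closed form. The operator, data and λ below are toy stand-ins, not the retrieval problem itself:

```python
# Sketch: closed-form Tikhonov solution (A^T A + lambda I)^{-1} A^T b
# on a nearly rank-deficient toy problem, compared with an (almost)
# unregularized solve. All matrices and noise levels are invented.
import numpy as np

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 30))
A[:, -1] = A[:, 0] + 1e-6 * rng.normal(size=30)   # near rank-deficiency
x_true = rng.normal(size=30)
b = A @ x_true + 0.01 * rng.normal(size=30)       # noisy data

x_naive = tikhonov(A, b, 1e-14)    # essentially unregularized
x_reg = tikhonov(A, b, 1e-2)       # regularized
print(np.linalg.norm(x_reg - x_true), np.linalg.norm(x_naive - x_true))
```

On ill-conditioned problems the unregularized solution amplifies the data noise enormously, while a modest λ trades a small bias for a large variance reduction; choosing λ well is exactly the parameter-choice problem discussed throughout this collection.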

  15. Pattern and Variation in the Timing of Aksak Meter: Commentary on Goldberg

    OpenAIRE

    Rainer Polak

    2016-01-01

    Daniel Goldberg (2015, this issue) explores relations between timing variations, grouping structure, and musical form in the percussive accompaniment of Balkan folk dance music. A chronometric re-analysis of one of the target article’s two audio samples finds a regular metric timing pattern to consistently underlie the variations Goldberg uncovered. Read together, the target article and this commentary demonstrate the complex interplay of a regular timing pattern with several levels of nuance...

  16. Ground Motion Prediction for Great Interplate Earthquakes in Kanto Basin Considering Variation of Source Parameters

    Science.gov (United States)

    Sekiguchi, H.; Yoshimi, M.; Horikawa, H.

    2011-12-01

    Broadband ground motions are estimated in the Kanto sedimentary basin, which contains the Tokyo metropolitan area, for anticipated great interplate earthquakes along the surrounding plate boundaries. Possible scenarios of great earthquakes along the Sagami trough are modeled by combining characteristic properties of the source area with adequate variation in the source parameters, in order to evaluate the possible ground motion variation due to the next Kanto earthquake. South of the rupture area of the 2011 Tohoku earthquake along the Japan trench, we consider a possible M8 earthquake. The ground motions are computed with a four-step hybrid technique. We first calculate low-frequency ground motions at the engineering basement. We then calculate higher-frequency ground motions at the same position, and combine the lower- and higher-frequency motions using a matched filter. We finally calculate ground motions at the surface by computing the response of the alluvium-diluvium layers to the combined motions at the engineering basement.

  17. Linear deflectometry - Regularization and experimental design [Lineare deflektometrie - Regularisierung und experimentelles design

    KAUST Repository

    Balzer, Jonathan

    2011-01-01

    Specular surfaces can be measured with deflectometric methods. The solutions form a one-parameter family whose properties are discussed in this paper. We show in theory and experiment that the shape sensitivity of solutions decreases with growing distance from the optical center of the imaging component of the sensor system, and we propose a novel regularization strategy. Recommendations for the construction of a measurement setup aim to benefit this strategy as well as the contrarian standard approach of regularization by specular stereo. © Oldenbourg Wissenschaftsverlag.

  18. Lead-position dependent regular oscillations and random fluctuations of conductance in graphene quantum dots

    International Nuclear Information System (INIS)

    Huang Liang; Yang Rui; Lai Yingcheng; Ferry, David K

    2013-01-01

    Quantum interference causes a wavefunction to have sensitive spatial dependence, and this has a significant effect on quantum transport. For example, in a quantum-dot system, the conductance can depend on the lead positions. We investigate, for graphene quantum dots, the conductance variations with the lead positions. Since for graphene the types of boundaries, e.g., zigzag and armchair, can fundamentally affect the quantum transport characteristics, we focus on rectangular graphene quantum dots, for which the effects of boundaries can be systematically studied. For both zigzag and armchair horizontal boundaries, we find that changing the positions of the leads can induce significant conductance variations. Depending on the Fermi energy, the variations can be either regular oscillations or random conductance fluctuations. We develop a physical theory to elucidate the origin of the conductance oscillation/fluctuation patterns. In particular, quantum interference leads to standing-wave-like patterns in the quantum dot which, in the absence of leads, are regulated by the energy-band structure of the corresponding vertical graphene ribbon. The observed 'coexistence' of regular oscillations and random fluctuations in the conductance can be exploited for the development of graphene-based nanodevices. (paper)

  19. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for the successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to the joint regularization of the amplitude and interferometric phase in urban area SAR images.

  20. Speckle Noise Reduction via Nonconvex High Total Variation Approach

    Directory of Open Access Journals (Sweden)

    Yulian Wu

    2015-01-01

    Full Text Available We address the problem of speckle noise removal. The classical total variation is extensively used in this field to solve such problems, but this method suffers from staircase-like artifacts and the loss of image details. In order to resolve these problems, a nonconvex total generalized variation (TGV) regularization is used to preserve both edges and details of the images. The TGV regularization, which is able to remove the staircase effect, has strong theoretical guarantees by virtue of its high-order smoothness. Our method combines the merits of both the TGV method and the nonconvex variational method and avoids their main drawbacks. Furthermore, we develop an efficient algorithm for solving the nonconvex TGV-based optimization problem. We experimentally demonstrate the excellent performance of the technique, both visually and quantitatively.

  1. Sensitivity analysis in oxidation ditch modelling: the effect of variations in stoichiometric, kinetic and operating parameters on the performance indices

    NARCIS (Netherlands)

    Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.

    2001-01-01

    This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis

  2. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

    This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis in practical, large but finite, dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield a high accuracy in predicting the performance achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
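
    The shrinkage idea behind regularized discriminant analysis can be sketched in a few lines: each class covariance is blended with the identity before inversion, and a test point goes to the class with the smallest Mahalanobis-type distance. The sketch below is a deliberately simplified illustration, not the article's setup: the class data and the γ value are made up, and the log-determinant and prior terms of the full discriminant are omitted.

```python
def mean_vec(xs):
    n = len(xs)
    return [sum(x[i] for x in xs) / n for i in range(len(xs[0]))]

def cov_mat(xs, mu):
    n, d = len(xs), len(mu)
    return [[sum((x[i] - mu[i]) * (x[j] - mu[j]) for x in xs) / n
             for j in range(d)] for i in range(d)]

def regularize(cov, gamma):
    # Shrink the sample covariance toward the identity: (1-gamma)*S + gamma*I.
    d = len(cov)
    return [[(1 - gamma) * cov[i][j] + (gamma if i == j else 0.0)
             for j in range(d)] for i in range(d)]

def inv2(m):
    # Closed-form inverse of a 2x2 matrix.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def classify(x, classes, gamma=0.1):
    # Assign x to the class with the smallest Mahalanobis-type distance
    # under the regularized covariance (log-det and prior terms omitted).
    best, best_score = None, float("inf")
    for label, pts in classes.items():
        mu = mean_vec(pts)
        s_inv = inv2(regularize(cov_mat(pts, mu), gamma))
        dx = [x[0] - mu[0], x[1] - mu[1]]
        score = sum(dx[i] * s_inv[i][j] * dx[j]
                    for i in range(2) for j in range(2))
        if score < best_score:
            best, best_score = label, score
    return best
```

    As γ approaches 1 the classifier degenerates to a nearest-mean rule; the RMT analysis in the article is precisely about choosing γ to minimize the misclassification error.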

  3. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  4. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression.
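
    The fixed-point rule E* = 1 + E × E* can be checked concretely on languages truncated to a maximum word length. The following is a small sketch under stated assumptions: finite sets of strings stand in for regular expressions, and the example language and length bound are arbitrary choices, not from the paper.

```python
def concat(l1, l2, k):
    # Concatenation of two languages, truncated to words of length <= k.
    return {u + v for u in l1 for v in l2 if len(u + v) <= k}

def star(lang, k):
    # Kleene star by closure: keep appending factors until nothing new fits.
    result, frontier = {""}, {""}
    while frontier:
        new = concat(frontier, lang, k) - result
        result |= new
        frontier = new
    return result
```

    For E = {"a", "bb"} and k = 6, star(E, 6) coincides with {""} | concat(E, star(E, 6), 6); the two sides of E* = 1 + E × E* agree on every word up to the length bound.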

  5. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. The extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed

  6. Intelligent Controller Design for Quad-Rotor Stabilization in Presence of Parameter Variations

    Directory of Open Access Journals (Sweden)

    Oualid Doukhi

    2017-01-01

    Full Text Available The paper presents the mathematical model of a quadrotor unmanned aerial vehicle (UAV) and the design of a robust self-tuning PID controller based on fuzzy logic, which offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and parameter uncertainty. The proposed controller is applied to the inner and outer loops for heading and position trajectory tracking control to handle the external disturbances caused by the variation in the payload weight during the flight period. The results of numerical simulation using the Gazebo physics engine simulator and real-time experiments using the AR drone 2.0 test bed demonstrate the effectiveness of this intelligent control strategy, which can improve the robustness of the whole system and achieve accurate trajectory tracking control, compared with a conventional proportional-integral-derivative (PID) controller.
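
    The flavor of a self-tuning PID loop can be sketched on a toy first-order plant. Everything below is an illustrative assumption, not the paper's controller: the plant model, the gains, and the gain schedule, which is only a crude stand-in for a fuzzy rule base.

```python
def tuned_kp(err, low=1.0, high=3.0):
    # Stand-in for a fuzzy tuner: larger error -> more aggressive proportional gain.
    return low + (high - low) * min(abs(err), 1.0)

def simulate(setpoint=1.0, ki=1.0, kd=0.05, dt=0.01, steps=3000):
    # Track a setpoint on the first-order plant dx/dt = -x + u
    # with a PID controller whose proportional gain is rescheduled each step.
    x, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = tuned_kp(err) * err + ki * integ + kd * deriv
        prev_err = err
        x += dt * (-x + u)   # explicit Euler step of the plant
    return x
```

    A real fuzzy self-tuning controller replaces tuned_kp with a rule base over error and error rate, but the structure of the loop is the same.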

  7. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization by external variables.

  8. Variations of {sup 57}Fe hyperfine parameters in medicaments containing ferrous fumarate and ferrous sulfate

    Energy Technology Data Exchange (ETDEWEB)

    Oshtrakh, M. I., E-mail: oshtrakh@mail.utnet.ru; Novikov, E. G. [Ural Federal University (The former Ural State Technical University-UPI), Faculty of Physical Techniques and Devices for Quality Control (Russian Federation); Dubiel, S. M. [AGH University of Science and Technology, Faculty of Physics and Computer Science (Poland); Semionkin, V. A. [Ural Federal University (The former Ural State Technical University-UPI), Faculty of Physical Techniques and Devices for Quality Control (Russian Federation)

    2010-04-15

    Several commercially available medicaments containing ferrous fumarate (FeC{sub 4}H{sub 2}O{sub 4}) and ferrous sulfate (FeSO{sub 4}) as a source of ferrous iron were studied using Moessbauer spectroscopy with a high velocity resolution. A comparison of the {sup 57}Fe hyperfine parameters revealed small variations for the main components in both types of medicaments, indicating some differences in the ferrous fumarates and ferrous sulfates. It was also found that all spectra contained additional minor components, probably related to ferrous and ferric impurities or to partially modified main components.

  9. Variational inequalities and flow in porous media

    International Nuclear Information System (INIS)

    Chipot, M.

    1984-01-01

    This book is concerned with regularity theory for obstacle problems, and with the dam problem, which, in the rectangular case, is one of the most interesting applications of variational inequalities with an obstacle

  10. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  11. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  12. Universal regularization prescription for Lovelock AdS gravity

    International Nuclear Information System (INIS)

    Kofinas, Georgios; Olea, Rodrigo

    2007-01-01

    A definite form for the boundary term that produces the finiteness of both the conserved quantities and the Euclidean action for any Lovelock gravity with AdS asymptotics is presented. The prescription distinguishes only between even and odd bulk dimensions, regardless of the particular theory considered, and is valid even for Einstein-Hilbert and Einstein-Gauss-Bonnet AdS gravity. The boundary term is a given polynomial of the boundary extrinsic and intrinsic curvatures (also referred to as the Kounterterms series). Only the coupling constant of the boundary term changes accordingly, such that it always preserves a well-posed variational principle for boundary conditions suitable for asymptotically AdS spaces. The background-independent conserved charges associated with asymptotic symmetries are found. In odd bulk dimensions, this regularization produces a generalized formula for the vacuum energy in Lovelock AdS gravity. The standard entropy for asymptotically AdS black holes is recovered directly from the regularization of the Euclidean action, and not only from the first law of thermodynamics associated with the conserved quantities

  13. Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method

    Directory of Open Access Journals (Sweden)

    Andreas Langer

    2018-01-01

    Full Text Available In this paper, we investigate the usefulness of adding a box-constraint to the minimization of functionals consisting of a data-fidelity term and a total variation regularization term. In particular, we show that in certain applications an additional box-constraint does not affect the solution at all, i.e., the solution is the same whether a box-constraint is used or not. On the contrary, i.e., for applications where a box-constraint may influence the solution, we investigate how much it affects the quality of the restoration, especially when the regularization parameter, which weights the importance of the data term and the regularizer, is chosen suitably. In particular, for such applications, we consider the case of a squared L2 data-fidelity term. For computing a minimizer of the respective box-constrained optimization problems, a primal-dual semi-smooth Newton method is presented, which guarantees superlinear convergence.
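
    A projected-gradient sketch makes the interplay of data term, TV term and box constraint concrete. This is only an illustration under assumptions: the smoothing of |·| and every numeric choice (λ, step size, the box [0, 1]) are invented here, and the paper itself uses a primal-dual semi-smooth Newton method rather than this simple scheme.

```python
import math

def tv_denoise_box(f, lam=0.3, lo=0.0, hi=1.0, step=0.05, iters=1000, eps=1e-2):
    # Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
    # subject to lo <= u[i] <= hi, by projected gradient descent.
    u = list(f)
    n = len(u)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(n)]          # gradient of the data term
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = lam * d / math.sqrt(d * d + eps)     # gradient of the smoothed |d|
            g[i] -= w
            g[i + 1] += w
        # gradient step followed by projection onto the box
        u = [min(hi, max(lo, u[i] - step * g[i])) for i in range(n)]
    return u
```

    On a noisy step signal the box constraint clips the overshoots while the TV term flattens the plateaus, which is exactly the regime where the paper asks whether the box changes the minimizer.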

  14. Variation in plasmonic (electronic) spectral parameters of Pr (III) and Nd (III) with varied concentration of moderators

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Shubha, E-mail: shubhamishra03@gmail.com [School of Studies in Physics, Vikram University, Ujjain (M. P.) (India); Limaye, S. N., E-mail: snl222@yahoo.co.in [Department of Chemistry, Dr. H.S. Gour University, A Central University, Sagar (M.P.) (India)

    2015-07-31

    It is said that the 4f shells behave as core shells and are least perturbed by changes in the surroundings of the metal ion. However, there is evidence that the 4f shells are partially involved in direct moderator interaction. A systematic investigation of the plasmonic (electronic) spectra of some rare earths [RE(III).Mod], where RE(III) = Pr(III), Nd(III) and Mod (moderator) = Y(III), La(III), Gd(III) and Lu(III), has been carried out, with the moderator concentration increased from 0.01 mol dm{sup −3} to 0.025 mol dm{sup −3} while keeping the metal ion concentration at 0.01 mol dm{sup −3}. Variations in oscillator strengths (f), Judd-Ofelt parameters (T{sub λ}), inter-electronic repulsion Racah parameters (δE{sup k}), nephelauxetic ratio (β) and radiative parameters (S{sub ED}, A{sub T}, β{sub R}, T{sub R}) have been determined. The values of the oscillator strengths and Judd-Ofelt parameters are discussed in the light of the coordination number of the RE(III) metal ions and the denticity and basicity of the moderators. The [RE(III).Mod] bonding pattern has been studied in the light of the changes in the Racah parameters and the nephelauxetic ratio.

  15. Variation in immune parameters and disease prevalence among Lesser Black-backed Gulls (Larus fuscus sp.) with different migratory strategies.

    Directory of Open Access Journals (Sweden)

    Elena Arriero

    Full Text Available The ability to control infections is a key trait for migrants that must be balanced against other costly features of the migratory life. In this study we explored the links between migration and disease ecology by examining natural variation in parasite exposure and immunity in several populations of Lesser Black-backed Gulls (Larus fuscus) with different migratory strategies. We found higher activity of natural antibodies in long distance migrants from the nominate subspecies L. f. fuscus. Circulating levels of IgY showed large variation at the population level, while immune parameters associated with antimicrobial activity showed extensive variation at the individual level irrespective of population or migratory strategy. Pathogen prevalence showed large geographical variation. However, the seroprevalence of one of the gull-specific subtypes of avian influenza (H16) was associated with the migratory strategy, with lower prevalence among the long-distance migrants, suggesting that migration may play a role in the disease dynamics of certain pathogens at the population level.

  16. Application of a PID controller based on fuzzy logic to reduce variations in the control parameters in PWR reactors

    International Nuclear Information System (INIS)

    Vasconcelos, Wagner Eustaquio de; Lira, Carlos Alberto Brayner de Oliveira; Brito, Thiago Souza Pereira de; Afonso, Antonio Claudio Marques; Cruz Filho, Antonio Jose da; Marques, Jose Antonio; Teixeira, Marcello Goulart

    2013-01-01

    Nuclear reactors are nonlinear systems by nature, and their parameters vary with time as a function of power level. These characteristics must be considered when large power variations occur in power plant operational regimes, such as in load-following conditions. A PWR reactor has a component called the pressurizer, whose function is to supply the necessary high pressure for its operation and to contain pressure variations in the primary cooling system. The use of control systems capable of reducing fast variations of the operation variables and maintaining the stability of this system is of fundamental importance. The best-known controllers used in industrial control processes are proportional-integral-derivative (PID) controllers, due to their simple structure and robust performance in a wide range of operating conditions. However, designing a fuzzy controller is a much less difficult task: once a fuzzy logic controller is designed for a particular set of parameters of the nonlinear element, it yields satisfactory performance for a range of these parameters. The objective of this work is to develop fuzzy proportional-integral-derivative (fuzzy-PID) control strategies to control the water level in the reactor. In the study of the pressurizer, several computer codes are used to simulate its dynamic behavior. In the fuzzy-PID control strategy, the fuzzy logic controller is exploited to extend the finite sets of PID gains to the possible combinations of PID gains in the stable region. Thus the fuzzy logic controller tunes the gains of the PID controller to adapt to changes in the water level of the reactor. The simulation results showed a favorable performance with the use of fuzzy-PID controllers. (author)

  17. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, K; Lee, M; Kang, S; Yoon, J; Park, S; Hwang, T; Kim, H; Kim, K; Han, T; Bae, H [Hallym University College of Medicine, Anyang (Korea, Republic of)

    2014-06-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: sample standard deviation of respiration period, sample standard deviation of amplitude, and the results of simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as a Euclidean norm of a newly derived variable using principal component analysis (PCA) for the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ=ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ.
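
    The index is easy to reproduce once the four fluctuation factors are extracted. The sketch below is a simplified approximation: it skips the PCA decorrelation step described in the abstract and takes a plain Euclidean norm of the raw factors, so its δ only approximates the paper's, and the example signals are invented.

```python
import math
from statistics import mean, pstdev

def regularity_index(periods, amplitudes, baseline):
    # Factors 1-2: spread of the breathing period and amplitude.
    sd_period = pstdev(periods)
    sd_amp = pstdev(amplitudes)
    # Factors 3-4: simple linear regression of the baseline drift
    # (slope, and standard deviation of the residuals).
    t = list(range(len(baseline)))
    tm, bm = mean(t), mean(baseline)
    slope = (sum((ti - tm) * (bi - bm) for ti, bi in zip(t, baseline))
             / sum((ti - tm) ** 2 for ti in t))
    resid = [bi - (bm + slope * (ti - tm)) for ti, bi in zip(t, baseline)]
    sd_resid = pstdev(resid)
    # Overall irregularity delta, then rho = ln(1 + 1/delta)/2.
    delta = math.sqrt(sd_period**2 + sd_amp**2 + slope**2 + sd_resid**2)
    return math.log(1 + 1 / delta) / 2 if delta > 0 else float("inf")
```

    Higher ρ means a more regular pattern; in the study, ρ < 0.3 flagged poor regularity and ρ > 0.7 was considered suitable for RGRT.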

  18. Haematological and electrocardiographic variations during menstrual cycle

    International Nuclear Information System (INIS)

    Rajnee, A.; Binawara, B.K.; Choudhary, S.; Chawla, V.K.; Choudhary, R.

    2010-01-01

    Menstruation, the periodic bleeding that accompanies shedding of the uterine mucosa, has attracted interest, especially regarding the haematological changes during the different phases of the menstrual cycle. Methods: The present study was carried out on 30 healthy female medical students in the age group of 18 to 23 years with a normal menstrual cycle of 30 +- 3 days. The various haematological parameters and the electrocardiogram were studied on the second, eleventh, fourteenth and twenty-second day of the menstrual cycle. Results: The study reveals that the total leukocyte count and total platelet count significantly increased (p<0.001) around mid cycle, whereas the total eosinophil count significantly decreased (p<0.05) during the same period. Differential leukocyte count, bleeding time, clotting time, heart rate, P-R interval and Q-T interval did not show any significant change during the different phases of the menstrual cycle, although some mild changes were observed. Conclusion: This study was a modest attempt to determine the regular variation in different haematological parameters and the ECG during the different phases of the menstrual cycle in normal healthy females and to evaluate conflicting reports on the subject. (author)

  19. Assessment of thermodynamic parameters of plasma shock wave

    International Nuclear Information System (INIS)

    Vasileva, O V; Isaev, Yu N; Budko, A A; Filkov, A I

    2014-01-01

    The work is devoted to the solution of the one-dimensional equations of gas dynamics for the coaxial magnetoplasma accelerator by means of a modified Lax-Wendroff algorithm with an optimal choice of the artificial-viscosity regularization parameter. The partial differential equations are replaced by finite differences. The optimal artificial-viscosity regularization parameter is chosen using the known exact solution of the Sod problem. The developed algorithm for calculating the thermodynamic parameters at the stagnation point is validated. Thermodynamic parameters of the shock wave in front of the plasma piston of the coaxial magnetoplasma accelerator are calculated on the basis of the proposed algorithm. Unstable high-frequency oscillations are smoothed in the model, which narrows the region of ambiguity. The calculated gas-dynamic parameters at the stagnation point agree with literature data. Chart 3 shows the dynamics of the speed and of the thermodynamic parameters of the shock wave, i.e. the pressure, density and temperature, just before the plasma piston
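
    The scheme's building block is easy to state for the linear advection equation u_t + c u_x = 0. The Euler system solved in the paper is more involved, so this periodic-grid sketch with an added artificial-viscosity coefficient ν is only illustrative, and all numeric choices are assumptions.

```python
def lax_wendroff_step(u, c, dt, dx, nu=0.0):
    # One Lax-Wendroff step for u_t + c*u_x = 0 on a periodic grid,
    # plus an optional artificial-viscosity term nu*(u[i+1] - 2u[i] + u[i-1]).
    n = len(u)
    r = c * dt / dx  # Courant number
    return [u[i]
            - 0.5 * r * (u[(i + 1) % n] - u[(i - 1) % n])
            + (0.5 * r * r + nu) * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
            for i in range(n)]
```

    At Courant number r = 1 the step reduces to an exact shift of the grid values; for r < 1, a small ν damps the dispersive oscillations that Lax-Wendroff produces at discontinuities, which is the kind of regularization whose strength the paper tunes against the exact Sod solution.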

  20. Regularized friction and continuation: Comparison with Coulomb's law

    OpenAIRE

    Vigué, Pierre; Vergez, Christophe; Karkar, Sami; Cochelin, Bruno

    2016-01-01

    International audience; Periodic solutions of systems with friction are difficult to investigate because of the irregular nature of friction laws. This paper examines periodic solutions and most notably stick-slip, on a simple one-degree-of-freedom system (mass, spring, damper, belt), with Coulomb's friction law, and with a regularized friction law (i.e. the friction coefficient becomes a function of relative speed, with a stiffness parameter). With Coulomb's law, the stick-slip solution is co...

  1. Characterization of PDMS samples with variation of its synthesis parameters for tunable optics applications

    Science.gov (United States)

    Marquez-Garcia, Josimar; Cruz-Félix, Angel S.; Santiago-Alvarado, Agustin; González-García, Jorge

    2017-09-01

    Nowadays the elastomer known as polydimethylsiloxane (PDMS, Sylgard 184), due to its physical properties, low cost and easy handling, has become a frequently used material for the elaboration of optical components such as variable focal length liquid lenses, optical waveguides, solid elastic lenses, etc. In recent years, we have been working on the characterization of this material for applications in the visual sciences. In this work, we describe the elaboration of PDMS-made samples and present physical and optical properties of the samples obtained by varying synthesis parameters such as the base:curing agent ratio and both the curing time and temperature. In the case of the mechanical properties, tensile and compression tests were carried out with a universal testing machine to obtain the respective stress-strain curves, and to obtain information regarding the optical properties, UV-vis spectroscopy was applied to the samples to obtain transmittance and absorbance curves. The variation of the index of refraction was obtained with an Abbe refractometer. Results from the characterization will determine the proper synthesis parameters for the elaboration of tunable refractive surfaces for potential applications in robotics.

  2. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  3. Quantifying the performance of in vivo portal dosimetry in detecting four types of treatment parameter variations

    International Nuclear Information System (INIS)

    Bojechko, C.; Ford, E. C.

    2015-01-01

    Purpose: To quantify the ability of electronic portal imaging device (EPID) dosimetry used during treatment (in vivo) in detecting variations that can occur in the course of patient treatment. Methods: Images of transmitted radiation from in vivo EPID measurements were converted to a 2D planar dose at isocenter and compared to the treatment planning dose using a prototype software system. Using the treatment planning system (TPS), four different types of variability were modeled: overall dose scaling, shifting the positions of the multileaf collimator (MLC) leaves, shifting of the patient position, and changes in the patient body contour. The gamma pass rate was calculated for the modified and unmodified plans and used to construct a receiver operator characteristic (ROC) curve to assess the detectability of the different parameter variations. The detectability is given by the area under the ROC curve (AUC). The TPS was also used to calculate the impact of the variations on the target dose–volume histogram. Results: Nine intensity-modulated radiation therapy plans were measured for four different anatomical sites consisting of 70 separate fields. Results show that in vivo EPID dosimetry was most sensitive to variations in the machine output, AUC = 0.70 − 0.94, changes in patient body habitus, AUC = 0.67 − 0.88, and systematic shifts in the MLC bank positions, AUC = 0.59 − 0.82. These deviations are expected to have a relatively small clinical impact [planning target volume (PTV) D99 change <7%]. Larger variations have even higher detectability. Displacements in the patient’s position and random variations in MLC leaf positions were not readily detectable, AUC < 0.64. The D99 of the PTV changed by up to 57% for the patient position shifts considered here. Conclusions: In vivo EPID dosimetry is able to detect relatively small variations in overall dose, systematic shifts of the MLCs, and changes in the patient habitus. Shifts in the patient

  4. Quantifying the performance of in vivo portal dosimetry in detecting four types of treatment parameter variations

    Energy Technology Data Exchange (ETDEWEB)

    Bojechko, C.; Ford, E. C., E-mail: eford@uw.edu [Department of Radiation Oncology, University of Washington, 1959 NE Pacific Street, Seattle, Washington 98195 (United States)

    2015-12-15

    Purpose: To quantify the ability of electronic portal imaging device (EPID) dosimetry used during treatment (in vivo) in detecting variations that can occur in the course of patient treatment. Methods: Images of transmitted radiation from in vivo EPID measurements were converted to a 2D planar dose at isocenter and compared to the treatment planning dose using a prototype software system. Using the treatment planning system (TPS), four different types of variability were modeled: overall dose scaling, shifting the positions of the multileaf collimator (MLC) leaves, shifting of the patient position, and changes in the patient body contour. The gamma pass rate was calculated for the modified and unmodified plans and used to construct a receiver operator characteristic (ROC) curve to assess the detectability of the different parameter variations. The detectability is given by the area under the ROC curve (AUC). The TPS was also used to calculate the impact of the variations on the target dose–volume histogram. Results: Nine intensity modulation radiation therapy plans were measured for four different anatomical sites consisting of 70 separate fields. Results show that in vivo EPID dosimetry was most sensitive to variations in the machine output, AUC = 0.70 − 0.94, changes in patient body habitus, AUC = 0.67 − 0.88, and systematic shifts in the MLC bank positions, AUC = 0.59 − 0.82. These deviations are expected to have a relatively small clinical impact [planning target volume (PTV) D{sub 99} change <7%]. Larger variations have even higher detectability. Displacements in the patient’s position and random variations in MLC leaf positions were not readily detectable, AUC < 0.64. The D{sub 99} of the PTV changed by up to 57% for the patient position shifts considered here. Conclusions: In vivo EPID dosimetry is able to detect relatively small variations in overall dose, systematic shifts of the MLC’s, and changes in the patient habitus. Shifts in the

  5. Total variation superiorized conjugate gradient method for image reconstruction

    Science.gov (United States)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
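
    The least-squares CG that the paper superiorizes reduces to a few lines. The sketch below solves the normal equations AᵀA x = Aᵀb; it includes no superiorization step, and the toy matrix in the usage example is an invented assumption, not data from the paper.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(m, v):
    return [dot(row, v) for row in m]

def cg_least_squares(a, b, iters=100, tol=1e-12):
    # Conjugate gradient applied to the normal equations A^T A x = A^T b.
    at = [list(col) for col in zip(*a)]
    x = [0.0] * len(at)
    r = matvec(at, b)          # residual of the normal equations at x = 0
    p = list(r)
    rs = dot(r, r)
    for _ in range(iters):
        ap = matvec(at, matvec(a, p))
        alpha = rs / dot(p, ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

    Superiorization, in the paper's sense, would interleave small TV-reducing perturbations of x between these CG iterations without destroying convergence of the residual.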

  6. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  7. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    International Nuclear Information System (INIS)

    Cheong, K; Lee, M; Kang, S; Yoon, J; Park, S; Hwang, T; Kim, H; Kim, K; Han, T; Bae, H

    2014-01-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of the variables newly derived by principal component analysis (PCA) from the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ=ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ ≥ 0.7 were found suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of
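
The index ρ=ln(1+(1/δ))/2 described above can be sketched directly. The record does not specify how the PCA scores are combined into δ, so the whitening of the principal-component scores below is one plausible reading, not the authors' exact recipe, and the input data are synthetic:

```python
import numpy as np

def regularity_index(fluct):
    """Respiration regularity index rho = ln(1 + 1/delta)/2 from four
    fluctuation parameters per session: [sd(period), sd(amplitude),
    baseline-drift slope, sd(baseline residuals)].
    The exact PCA weighting is not given in the record; whitened
    principal-component scores are used here as one plausible reading."""
    X = (fluct - fluct.mean(axis=0)) / fluct.std(axis=0)   # standardize
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = (X @ Vt.T) / (S / np.sqrt(len(X)))            # whitened PCA scores
    delta = np.linalg.norm(scores, axis=1)                 # overall irregularity
    return np.log(1.0 + 1.0 / delta) / 2.0                 # higher = more regular

rng = np.random.default_rng(0)
fluct = np.abs(rng.normal(size=(20, 4)))   # 20 synthetic sessions
rho = regularity_index(fluct)
```

Because ρ is a strictly decreasing function of δ, sessions with small combined fluctuation automatically receive the higher (more regular) scores.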

  8. Non regular variations in the LOD from European medieval eclipses

    Science.gov (United States)

    Martinez, M. J.; Marco, F. J.

    2012-12-01

    The study of ancient eclipses has demonstrated its utility for approximating some astronomical constants, in particular in the field of the Earth's rotation. It is well known that the rate of rotation of the Earth is slowly decreasing in time. There are many possible causes, both internal and external. The most important external causes are lunar and solar tides. Internal causes are more diverse: short-term effects include changing wind patterns and electromagnetic coupling between the Earth's fluid core and the lower mantle, while sea-level fluctuations associated with climatic variations are an example of long-term effects. In any case, the most important cause is tidal friction.

  9. Real-time simulation of response to load variation for a ship reactor based on point-reactor double regions and lumped parameter model

    Energy Technology Data Exchange (ETDEWEB)

    Wang Qiao; Zhang De [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China); Chen Wenzhen, E-mail: Cwz2@21cn.com [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China); Chen Zhiyun [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China)

    2011-05-15

    Research highlights: > We calculate the variation of the main parameters of the reactor core using Simulink. > The Simulink calculation software (SCS) deals well with the stiff problem. > High calculation precision is reached in less time, and the results can be easily displayed. > Quick calculation of ship reactor transients can be achieved by this method. - Abstract: Based on the point-reactor double-region and lumped parameter model, when the second-loop load of the nuclear power plant is increased or decreased quickly, the Simulink calculation software (SCS) is adopted to calculate the variation of the main physical and thermal-hydraulic parameters of the reactor core. The calculation results are compared with those of a three-dimensional simulation program. It is indicated that the SCS deals well with the stiff problem of the point-reactor kinetics equations and the coupled problem of neutronics and thermal-hydraulics. High calculation precision can be reached in less time, and quick calculation of the parameters of the response to load disturbances of the ship reactor can be achieved. The calculation results can also be displayed quickly and clearly by the SCS, which is very significant for guaranteeing safe reactor operation.

  10. An entropy regularization method applied to the identification of wave distribution function for an ELF hiss event

    Science.gov (United States)

    Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé

    2006-06-01

    An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The generalized cross-validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF that exhibits the largest entropy and avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.
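
Morozov's discrepancy principle mentioned in this record chooses the regularization parameter so that the residual norm matches the measurement noise level. A hedged sketch on a simple Tikhonov (ridge) problem, not the WDF inversion itself, exploits the fact that the residual grows monotonically with the regularization parameter and can therefore be bracketed by bisection:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimizer of ||Ax - b||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def morozov_alpha(A, b, noise_level, lo=1e-8, hi=1e2, iters=60):
    """Bisection in log(alpha): the residual norm grows monotonically
    with alpha, so find the alpha where ||A x_alpha - b|| matches the
    noise level (Morozov's discrepancy principle)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        resid = np.linalg.norm(A @ tikhonov_solve(A, b, mid) - b)
        if resid < noise_level:
            lo = mid              # residual too small: under-regularized
        else:
            hi = mid              # residual too large: over-regularized
    return np.sqrt(lo * hi)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
noise = 0.1 * rng.normal(size=50)
b = A @ x_true + noise
alpha = morozov_alpha(A, b, np.linalg.norm(noise))
x = tikhonov_solve(A, b, alpha)
```

Unlike the L-curve and generalized cross-validation criteria that the abstract reports as failing, this principle requires an explicit estimate of the noise level.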

  11. Joint analysis of short-period variations of ionospheric parameters in Siberia and the Far East and processes of the tropical cyclogenesis

    Science.gov (United States)

    Chernigovskaya, M. A.; Kurkin, V. I.; Orlov, I. I.; Sharkov, E. A.; Pokrovskaya, I. V.

    2009-04-01

    In this work, the possibility that strong meteorological disturbances in the Earth's lower atmosphere manifest themselves in variations of ionospheric parameters in a zone remote from the disturbance source has been studied. A spectral analysis of short-period variations (tens of minutes to hours) in the maximum observed frequencies (MOF) of one-hop signals of oblique sounding has been carried out. These variations were induced by changes in the upper-atmosphere parameters along the Magadan-Irkutsk oblique-incidence sounding path, superimposed on the diurnal variations of the parameter under study. MOF measurements with a sampling interval of approximately 5 min during the equinoxes (September, March) of 2005-2007 were used. The analysis was made using the improved ISTP-developed technique for determining periodicities in time series. The increase of signal spectrum energy at certain frequencies is interpreted as the manifestation of traveling ionospheric disturbances (TIDs) associated with the propagation of internal gravity waves in the atmosphere. The analysis revealed TIDs of the temporal scales under consideration. The question of localizing the possible sources of the revealed disturbances is discussed. Tropospheric meteorological disturbances of enormous energy (tropical cyclones, typhoons) are considered as potential sources of the observed TIDs. The necessary information on tropical cyclones that occurred in the northern Indian Ocean and the south-west and central Pacific Ocean in 2005-2007 is taken from the electronic database of satellite data on global tropical cyclogenesis "Global-TC" (ISR RAS). In order to effectively separate disturbances associated with magnetospheric-ionospheric interaction from disturbances induced by the influence of the lower atmosphere on the upper atmosphere, we analyze tropical cyclogenesis events that occurred in quiet helio-geomagnetic conditions. 
The study was supported by the Program of RAS Presidium N 16 (Part 3) and the RFBR Grant N 08-05-00658.

  12. Variation of thermal parameters in two different color morphs of a diurnal poison toad, Melanophryniscus rubriventris (Anura: Bufonidae).

    Science.gov (United States)

    Sanabria, Eduardo A; Vaira, Marcos; Quiroga, Lorena B; Akmentins, Mauricio S; Pereyra, Laura C

    2014-04-01

    We study the variation in thermal parameters in two contrasting populations of Yungas Redbelly Toads (Melanophryniscus rubriventris) with different discrete color phenotypes, comparing field body temperatures, critical thermal maxima and heating rates. We found significant differences in the field body temperatures of the different morphs. Temperatures were higher in toads with a high extent of dorsal melanization. No variation was registered in operative temperatures between the study locations at the moment of capture and processing. The critical thermal maximum of toads was positively related to the extent of dorsal melanization. Furthermore, we found significant differences in heating rates between morphs: individuals with a high extent of dorsal melanization showed greater heating rates than toads with lower dorsal melanization. The observed relationship between color pattern and thermal parameters may influence the activity patterns and body size of individuals. Body temperature is a modulator of physiological and behavioral functions in amphibians, influencing daily and seasonal activity, locomotor performance, digestion rate and growth rate. It is possible that some growth constraints arise from the relationship between color pattern and metabolism, allowing different morphs to attain similar sizes at different locations instead of forming body-size clines. Copyright © 2014. Published by Elsevier Ltd.

  13. Salt-body Inversion with Minimum Gradient Support and Sobolev Space Norm Regularizations

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Full-waveform inversion (FWI) is a technique which solves the ill-posed seismic inversion problem of fitting modeled data to those measured in the field. FWI is capable of providing high-resolution estimates of the model and of handling wave propagation of arbitrary complexity (visco-elastic, anisotropic); yet, it often fails to retrieve high-contrast geological structures such as salt. One of the reasons for this failure is that the updates at early iterations are too smooth to capture the sharp edges of the salt boundary. We compare several regularization approaches which promote sharpness of the edges. Minimum gradient support (MGS) regularization focuses the inversion on blocky models, even more than total variation (TV) does. However, both approaches try to invert undesirable high wavenumbers in the model too early for a model of complex structure. Therefore, we apply the Sobolev space norm as a regularizing term in order to maintain a balance between sharp and smooth updates in FWI. We demonstrate the application of these regularizations on a Marmousi model enriched by a chunk of salt. The model turns out to be too complex in some parts to retrieve its full velocity distribution, yet the salt shape and contrast are retrieved.

  14. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study

    Science.gov (United States)

    Chen, Yingxuan; Yin, Fang-Fang; Zhang, Yawei; Zhang, You; Ren, Lei

    2018-04-01

    Purpose: compressed sensing reconstruction using total variation (TV) tends to over-smooth edge information by uniformly penalizing the image gradient. The goal of this study is to develop a novel prior contour based TV (PCTV) method to enhance the edge information in compressed sensing reconstruction for CBCT. Methods: the edge information is extracted from the prior planning-CT via edge detection. The prior CT is first registered with the on-board CBCT reconstructed with the TV method through rigid or deformable registration. The edge contours in the prior CT are then mapped to CBCT and used as the weight map for TV regularization to enhance edge information in CBCT reconstruction. The PCTV method was evaluated using the extended-cardiac-torso (XCAT) phantom, a physical CatPhan phantom and brain patient data. Results were compared with both the TV and edge-preserving TV (EPTV) methods, which are commonly used for limited-projection CBCT reconstruction. Relative error was used to quantify pixel value differences, and edge cross-correlation was defined as the similarity of edge information between reconstructed images and the ground truth in the quantitative evaluation. Results: compared to TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in the XCAT reconstruction and of complex bony structures in the brain patient CBCT. In the XCAT study using 45 half-fan CBCT projections, compared with the ground truth, relative errors were 1.5%, 0.7% and 0.3% and edge cross-correlations were 0.66, 0.72 and 0.78 for TV, EPTV and PCTV, respectively. PCTV is more robust to reductions in projection number. Edge enhancement was reduced slightly with noisy projections, but PCTV was still superior to the other methods. PCTV can maintain resolution while reducing noise in the low-mAs CatPhan reconstruction. Low-contrast edges were preserved better with PCTV compared with TV and EPTV. Conclusion: PCTV preserved edge information as well as reduced streak artifacts and noise in low dose CBCT reconstruction

  15. The three-point function in split dimensional regularization in the Coulomb gauge

    International Nuclear Information System (INIS)

    Leibbrandt, G.

    1998-01-01

    We use a gauge-invariant regularization procedure, called split dimensional regularization, to evaluate the quark self-energy Σ(p) and the quark-quark-gluon vertex function Λ_μ(p′,p) in the Coulomb gauge, ∇·A^a = 0. The technique of split dimensional regularization was designed to regulate Coulomb-gauge Feynman integrals in non-Abelian theories. The technique, which is based on two complex regulating parameters, ω and σ, is shown to generate a well-defined set of Coulomb-gauge integrals. A major component of this project deals with the evaluation of four-propagator and five-propagator Coulomb integrals, some of which are non-local. It is further argued that the standard one-loop BRST identity relating Σ and Λ_μ should by rights be replaced by a more general BRST identity which contains two additional contributions from ghost vertex diagrams. Despite the appearance of non-local Coulomb integrals, both Σ and Λ_μ are local functions which satisfy the appropriate BRST identity. Application of split dimensional regularization to two-loop energy integrals is briefly discussed. (orig.)

  16. Models of Solar Irradiance Variations: Current Status Natalie A ...

    Indian Academy of Sciences (India)

    Abstract. Regular monitoring of solar irradiance carried out since 1978 has shown that solar total and spectral irradiance varies on different time scales. Whereas variations on time scales of minutes to hours are due to solar oscillations and granulation, variations on longer time scales are driven by the evolution of the ...

  17. Solar cooling. Dynamic computer simulations and parameter variations; Solare Kuehlung. Dynamische Rechnersimulationen und Parametervariationen

    Energy Technology Data Exchange (ETDEWEB)

    Adam, Mario; Lohmann, Sandra [Fachhochschule Duesseldorf (Germany). E2 - Erneuerbare Energien und Energieeffizienz

    2011-05-15

    The research project 'Solar cooling in the Hardware-in-the-Loop Test' is funded by the BMBF and deals with the modeling of a pilot plant for solar cooling, based on the 17.5 kW absorption chiller of Yazaki, in the MATLAB/Simulink simulation environment with the Stateflow and CARNOT toolboxes. Dynamic simulations and parameter variations following the work-efficient design-of-experiments methodology are used to select meaningful system configurations, control strategies and component dimensioning. The results of these simulations are presented, and an outlook is given on the use of the acquired knowledge for the planned laboratory tests on a hardware-in-the-loop test stand. (orig.)

  18. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    International Nuclear Information System (INIS)

    Bildhauer, Michael; Fuchs, Martin

    2012-01-01

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  19. Timetable Attractiveness Parameters

    DEFF Research Database (Denmark)

    Schittenhelm, Bernd

    2008-01-01

    Timetable attractiveness is influenced by a set of key parameters that are described in this article. Regarding the superior structure of the timetable, the trend in Europe goes towards periodic regular interval timetables. Regular departures and a focus on optimal transfer possibilities make these timetables attractive. The travel time in the timetable depends on the characteristics of the infrastructure and rolling stock, the heterogeneity of the planned train traffic and the necessary number of transfers on the passenger’s journey. Planned interdependencies between trains, such as transfers and heterogeneous traffic, add complexity to the timetable. The risk of spreading initial delays to other trains and parts of the network increases with the level of timetable complexity.

  20. Prospective regularization design in prior-image-based reconstruction

    International Nuclear Information System (INIS)

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-01-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  1. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated by experimental design, we use the trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results on various real-life data sets have demonstrated the superiority of the proposed algorithms.
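
The covariance-size criterion described above can be sketched in a deliberately simplified form: greedily pick the features whose design submatrix minimizes the trace of the regularized parameter covariance (an A-optimality-style score). Plain ridge regularization replaces the paper's Laplacian term here, so this illustrates only the experimental-design idea, not the authors' algorithm:

```python
import numpy as np

def greedy_trace_selection(X, k, alpha=1.0):
    """Greedily select k features minimizing
    trace((X_S^T X_S + alpha I)^-1), the size of the parameter
    covariance of a ridge regression restricted to features S.
    (The paper uses a Laplacian-regularized model instead of ridge.)"""
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        best, best_score = None, np.inf
        for j in remaining:
            S = selected + [j]
            Xs = X[:, S]
            M = Xs.T @ Xs + alpha * np.eye(len(S))
            score = np.trace(np.linalg.inv(M))   # covariance "size"
            if score < best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 8))
X[:, 3] *= 5.0        # a high-variance feature shrinks the covariance most
picked = greedy_trace_selection(X, k=3)
```

For the first pick the score reduces to 1/(||x_j||² + α), so the highest-variance column is chosen first, which matches the intuition that informative features reduce parameter uncertainty.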

  2. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, i.e., a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA); it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.

  3. Contribution of long-term accounting for raindrop size distribution variations on quantitative precipitation estimation by weather radar: Disdrometers vs parameter optimization

    Science.gov (United States)

    Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.

    2015-12-01

    Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, with work focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from Parsivel and Thies LPM disdrometers, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). In case the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product. However
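
The link between DSD parameters and the Z-R relation can be illustrated numerically: integrating reflectivity and rain rate from a parametric DSD and sweeping one DSD parameter traces out a power law Z = aR^b. The sketch below uses an exponential DSD (a gamma DSD with shape μ = 0) and an Atlas-type fall-speed law, both standard textbook choices and assumptions here, not values from this study:

```python
import numpy as np

def z_r_from_dsd(N0, lam_values):
    """Reflectivity Z and rain rate R integrated from an exponential
    DSD N(D) = N0 * exp(-lam * D), i.e. a gamma DSD with shape mu = 0.
    The fall-speed law v(D) = 3.78 * D**0.67 m/s is an assumption."""
    D = np.linspace(0.01, 8.0, 4000)          # drop diameter [mm]
    dD = D[1] - D[0]
    v = 3.78 * D**0.67                        # fall speed [m/s]
    Z, R = [], []
    for lam in lam_values:
        N = N0 * np.exp(-lam * D)             # concentration [m^-3 mm^-1]
        Z.append(np.sum(N * D**6) * dD)       # reflectivity [mm^6 m^-3]
        R.append(6e-4 * np.pi * np.sum(N * v * D**3) * dD)  # rain rate [mm/h]
    return np.array(Z), np.array(R)

# Sweeping the slope parameter of the DSD traces out a power law Z = a R^b
Z, R = z_r_from_dsd(8000.0, np.linspace(1.5, 4.0, 20))
b, log_a = np.polyfit(np.log(R), np.log(Z), 1)
```

With only the slope parameter varying, the exponent follows analytically as b = 7/4.67 ≈ 1.5, which shows how the Z-R parameters are tied directly to the DSD parameters, as the abstract proposes.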

  4. 27-day variation in solar-terrestrial parameters: Global characteristics and an origin based approach of the signals

    Science.gov (United States)

    Poblet, Facundo L.; Azpilicueta, Francisco

    2018-05-01

    The Earth and the near interplanetary medium are affected by the Sun in different ways. The processes generated in the Sun that induce perturbations in the Magnetosphere-Ionosphere system are called geoeffective processes and show a wide range of temporal variations: the 11-year solar cycle (long-term variations), the ∼27-day variation (recurrent variations), solar storms lasting some days, particle acceleration events lasting some hours, etc. In this article, the periodicity of ∼27 days associated with the solar synodic rotation period is investigated. The work mainly focuses on studying the resulting 27-day periodic signal in the magnetic activity, through the analysis of the horizontal component of the magnetic field registered at a set of 103 magnetic observatories distributed around the world. For this, a new two-step method to isolate the periodicity of interest has been developed: first, the linear trend of every calendar year is removed from the data series; second, a smoothed version of the resulting series, obtained by applying a 30-day moving average, is subtracted from it. The result of this process is a data series in which all signals with periods longer than 30 days are canceled. The most important characteristics observed in the resulting signals are two main amplitude modulations: the first and most prominent is related to the 11-year solar cycle, and the second has a semiannual pattern. In addition, the amplitude of the signal shows a dependence on the geomagnetic latitude of the observatory, with a significant discontinuity at approximately ±60°. The processing scheme was also applied to other parameters that are widely used to characterize the energy transfer from the Sun to the Earth: the F10.7 and Mg II indices and the ionospheric vertical total electron content (vTEC) were considered for radiative interactions, and the solar wind velocity for the non
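
The two-step filter described in this record (per-calendar-year linear detrend, then subtraction of a 30-day running mean) is easy to sketch on a synthetic daily series; the toy data, daily sampling, and 365-day "calendar years" below are illustrative assumptions, not the observatory data:

```python
import numpy as np

def isolate_short_periods(days, values, window=30):
    """Two-step filter from the record: (1) remove each calendar year's
    linear trend, (2) subtract a `window`-day running mean, cancelling
    signals with periods longer than ~30 days. Daily sampling assumed."""
    years = days // 365                      # toy calendar-year index
    detrended = np.empty_like(values, dtype=float)
    for y in np.unique(years):
        m = years == y
        coef = np.polyfit(days[m], values[m], 1)
        detrended[m] = values[m] - np.polyval(coef, days[m])
    kernel = np.ones(window) / window
    smooth = np.convolve(detrended, kernel, mode="same")
    return detrended - smooth

days = np.arange(3 * 365)
signal = (0.002 * days                            # slow secular trend
          + np.sin(2 * np.pi * days / 27)         # 27-day rotation signal
          + 0.5 * np.sin(2 * np.pi * days / 365)) # annual modulation
out = isolate_short_periods(days, signal)
```

After the filter, the slow trend and annual modulation are suppressed while the ∼27-day component survives, which is exactly the separation the authors need before analyzing the amplitude modulations.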

  5. An algorithmic framework for Mumford–Shah regularization of inverse problems in imaging

    International Nuclear Information System (INIS)

    Hohm, Kilian; Weinmann, Andreas; Storath, Martin

    2015-01-01

    The Mumford–Shah model is a very powerful variational approach for edge-preserving regularization of image reconstruction processes. However, it is algorithmically challenging because one has to deal with a non-smooth and non-convex functional. In this paper, we propose a new efficient algorithmic framework for Mumford–Shah regularization of inverse problems in imaging. It is based on a splitting into specific subproblems that can be solved exactly. We derive fast solvers for the subproblems, which are key to an efficient overall algorithm. Our method requires a priori knowledge neither of the gray or color levels nor of the shape of the discontinuity set. We demonstrate the wide applicability of the method for different modalities. In particular, we consider reconstruction from Radon data, inpainting, and deconvolution. Our method can be easily adapted to many further imaging setups. The relevant condition is that the proximal mapping of the data fidelity can be evaluated within a reasonable time. In other words, it can be used whenever classical Tikhonov regularization is possible. (paper)

  6. Tumor response parameters for head and neck cancer derived from tumor-volume variation during radiation therapy

    International Nuclear Information System (INIS)

    Chvetsov, Alexei V.

    2013-01-01

    -and-neck squamous cell carcinoma (SCC) is equal to 3.8 mean potential doubling times, which agrees with 4.0 mean potential doubling times obtained previously for lung SCC. Conclusions: The distribution of cell survival fractions obtained in this study support the hypothesis that the tumor-volume variation during radiotherapy treatment for head and neck cancer can be described by the two-level cell population tumor-volume model. This model can be used for in vivo evaluation of patient-specific radiobiological parameters that are needed for tumor-control probability evaluation.

  7. Toward robust high resolution fluorescence tomography: a hybrid row-action edge preserving regularization

    Science.gov (United States)

    Behrooz, Ali; Zhou, Hao-Min; Eftekhar, Ali A.; Adibi, Ali

    2011-02-01

    Depth-resolved localization and quantification of fluorescence distribution in tissue, called fluorescence molecular tomography (FMT), is highly ill-conditioned, as depth information must be extracted from a limited number of surface measurements. Inverse solvers resort to regularization algorithms that penalize the Euclidean norm of the solution to overcome ill-posedness. While these regularization algorithms offer good accuracy, their smoothing effects result in continuous distributions that lack the high-frequency edge-type features of the actual fluorescence distribution and hence limit the resolution offered by FMT. We propose an algorithm that penalizes the total variation (TV) norm of the solution to preserve sharp transitions and high-frequency components in the reconstructed fluorescence map while overcoming ill-posedness. The hybrid algorithm is composed of two levels: 1) an algebraic reconstruction technique (ART), performed on FMT data for fast recovery of a smooth solution that serves as an initial guess for the iterative TV regularization; 2) a time-marching TV regularization algorithm, inspired by the Rudin-Osher-Fatemi TV image restoration, performed on the initial guess to further enhance the resolution and accuracy of the reconstruction. The performance of the proposed method in resolving fluorescent tubes inserted in a liquid tissue phantom imaged by a non-contact CW trans-illumination FMT system is studied and compared to conventional regularization schemes. It is observed that the proposed method performs better in resolving fluorescence inclusions at greater depths.
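
The time-marching TV step referenced above (Rudin-Osher-Fatemi style) can be sketched as explicit gradient descent on a smoothed TV-plus-fidelity energy. The step size, smoothing parameter and iteration count below are illustrative choices for a toy denoising problem, not the FMT reconstruction settings:

```python
import numpy as np

def rof_time_marching(f, lam=2.0, tau=0.05, eps=0.05, steps=200):
    """Explicit time marching on a smoothed ROF energy
    E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2) * ||u - f||^2.
    tau, eps and steps are illustrative, not from the record."""
    u = f.copy()
    for _ in range(steps):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        # divergence of the normalized gradient (TV curvature term)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u += tau * (div - lam * (u - f))      # descend the energy gradient
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                       # piecewise-constant square
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = rof_time_marching(noisy)
```

Because the curvature term vanishes across straight edges while flattening oscillations, the square's sharp boundary survives while the noise is suppressed, which is the edge-preserving behavior the abstract contrasts with Euclidean-norm penalties.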

  8. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have received little study. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration containing multiple prior constraints is established. After that, an approach for solving the resulting equation, in which multiple norms coexist with multiple regularization parameters (the priors' parameters), is presented. Subsequently, a space-variant-PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multiple improvements, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, producing satisfactory visual quality. This provides a scientific basis for applications and has potential for future space applications of diffractive membrane imaging technology.

  9. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    Science.gov (United States)

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which increases the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph-regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that, during learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Finally, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
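    A minimal sketch of graph-regularized NMF with multiplicative updates, of the general kind discussed above. The objective assumed here is the Cai-et-al.-style ‖X − UVᵀ‖²_F + λ·tr(VᵀLV) with graph Laplacian L = D − W; the function name, update rules and default values are illustrative, not the paper's improved cost function.

```python
import numpy as np

def gnmf(X, W_adj, k=2, lam=0.1, n_iter=200, seed=0):
    """Graph-regularized NMF via multiplicative updates (a sketch).
    Minimizes ||X - U V^T||_F^2 + lam * tr(V^T L V), with L = D - W_adj,
    where W_adj is a nonnegative sample-affinity matrix (n x n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + 1e-3   # nonnegative init
    V = rng.random((n, k)) + 1e-3
    D = np.diag(W_adj.sum(axis=1))  # degree matrix
    eps = 1e-12
    for _ in range(n_iter):
        # standard NMF update for the basis
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        # graph-regularized update for the coefficients
        V *= (X.T @ U + lam * (W_adj @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```

Because both updates are ratios of nonnegative quantities, nonnegativity of U and V is preserved automatically, which is the defining property of this family of algorithms.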

  10. Variation of the Moyer Model Parameter, H0, with primary proton energy

    International Nuclear Information System (INIS)

    Liu, K.L.; Stevenson, G.R.; Thomas, R.H.; Thomas, S.V.

    1982-08-01

    Experimental values of the Moyer Model parameter H0 were summarized and presented as a function of proton energy, E_p. The variation of H0(E_p) with E_p was studied by regression analysis. Regression analysis of the data under log-log transformation gave a best value for the exponent m of 0.77 ± 0.26, but a t-test did not reject m = 1 (p ≈ 20%). Since m = 1 was not excluded, and a Fisher's F-test did not exclude linearity, a linear regression analysis was performed. A line passing through the origin was not rejected (Student's t-test, p = 30%) and has the equation H0(E_p) = (1.61 ± 0.19) × 10⁻¹³ Sv·m²/GeV, to be compared with a value of (1.65 ± 0.21) × 10⁻¹³ Sv·m²/GeV published by Stevenson et al.

  11. Variation in the Kozak sequence of WNT16 results in an increased translation and is associated with osteoporosis related parameters

    DEFF Research Database (Denmark)

    Hendrickx, Gretl; Boudin, Eveline; Fijałkowski, Igor

    2014-01-01

    on osteoporosis related parameters. Hereto, we performed a WNT16 candidate gene association study in a population of healthy Caucasian men from the Odense Androgen Study (OAS). Using HapMap, five tagSNPs and one multimarker test were selected for genotyping to cover most of the common genetic variation...

  12. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed

  13. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
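    For reference, the workhorse identity underlying any dimensional regularization scheme is the standard Euclidean one-loop integral (a textbook formula, not specific to this paper); divergences appear as poles of the Gamma function as d → 4:

```latex
\int \frac{d^d k}{(2\pi)^d}\,\frac{1}{\left(k^2+\Delta\right)^n}
  = \frac{1}{(4\pi)^{d/2}}\,
    \frac{\Gamma\!\left(n-\frac{d}{2}\right)}{\Gamma(n)}\,
    \Delta^{\,d/2-n}
```

Schemes such as the one above modify how the d-dimensional measure is split and analytically continued, but reduce to this form for covariant integrands.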

  14. Regularization based on steering parameterized Gaussian filters and a Bhattacharyya distance functional

    Science.gov (United States)

    Lopes, Emerson P.

    2001-08-01

    Template regularization embeds the problem of class separability. From the machine vision perspective, this problem is critical when a textural classification procedure is applied to non-stationary pattern mosaic images. These applications often show low accuracy due to disturbance of the classifiers produced by exogenous or endogenous signal regularity perturbations. Natural scene imaging, where the images present a certain degree of homogeneity in terms of texture element size or shape (primitives), shows a variety of behaviors, especially varying the preferential spatial directionality. The space-time image pattern characterization is only solved if classification procedures are designed considering the most robust tools within a parallel and hardware perspective. The results compared in this paper are obtained using a framework based on a multi-resolution, frame and hypothesis approach. Two strategies for applying the bank of Gabor filters are considered: an adaptive strategy using the KL transform and a fixed-configuration strategy. The regularization under discussion is accomplished in the pyramid-building stage of the system. The filters are steered Gaussians controlled by free parameters, which are adjusted in accordance with a feedback process driven by hints obtained from sequence-of-frames interaction functionals post-processed in the training process, including classification of training-set samples as examples. Besides these adjustments, there is continuous input-data-sensitive adaptiveness. The experimental assessments focus on two basic issues: the Bhattacharyya distance as a pattern characterization feature, and the combination of the KL transform for feature selection and adaptation with the regularization of the pattern Bhattacharyya distance functional (BDF) behavior, using the BDF state separability and symmetry as the main indicators of an optimum framework parameter configuration.
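    The Bhattacharyya distance used above as a separability feature has a standard closed form when the class models are multivariate Gaussians; the sketch below assumes such Gaussian class-conditional densities (an assumption for illustration, since the article computes its functional over filter-bank responses).

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians:
    D_B = 1/8 (mu2-mu1)^T S^-1 (mu2-mu1) + 1/2 ln(det S / sqrt(det S1 det S2)),
    with S = (S1 + S2)/2. Larger values indicate better class separability."""
    cov = 0.5 * (np.asarray(cov1) + np.asarray(cov2))
    dm = np.asarray(mu2, float) - np.asarray(mu1, float)
    term1 = 0.125 * dm @ np.linalg.solve(cov, dm)     # mean-separation term
    _, ld = np.linalg.slogdet(cov)                    # log-dets for stability
    _, ld1 = np.linalg.slogdet(cov1)
    _, ld2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (ld - 0.5 * (ld1 + ld2))            # covariance-mismatch term
    return term1 + term2
```

The distance is zero only for identical distributions and grows with mean separation and covariance mismatch, which is why it serves as an indicator of class separability.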

  15. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  16. Variation of intrinsic magnetic parameters of single domain Co-N interstitial nitrides synthesized via hexa-ammine cobalt nitrate route

    Energy Technology Data Exchange (ETDEWEB)

    Ningthoujam, R.S. [Department of Chemistry, Indian Institute of Technology, Kanpur 208016 (India); Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Panda, R.N., E-mail: rnp@bits-goa.ac.in [Chemistry Group, Birla Institute of Technology and Science-Pilani, Goa Campus, Zuari Nagar, Goa 403726 (India); Gajbhiye, N.S. [Department of Chemistry, Indian Institute of Technology, Kanpur 208016 (India)

    2012-05-15

    Highlights: ► Variation of intrinsic magnetic parameters of Co-N. ► Synthesis by hexa-ammine cobalt complex route. ► Tuning of coercivity by variation of size. - Abstract: We report the variation of the Curie temperature (T_c) and coercivity (H_c) of single-domain Co-N interstitial materials synthesized via nitridation of the hexa-ammine cobalt(III) nitrate complex at 673 K. The Co-N materials crystallize in the fcc structure with unit cell parameter a = 3.552 Å. The X-ray diffraction (XRD) peaks are broad, indicating that the materials are nanostructured, with crystallite sizes of 5-14 nm. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) studies confirm the nanocrystalline nature of the materials. TEM images show chain-like clusters, indicating dipolar interactions between the particles. Magnetic studies focus on the existence of giant magnetic Co atoms in the Co-N lattice that are not influenced by thermal relaxation. The values of H_c could be tuned via the particle dimensions. The values of T_c of the nitride materials are masked by the onset of the ferromagnetic-to-superparamagnetic transition at higher temperatures. Thermomagnetic studies show an increasing trend in the Curie temperature T_c with decreasing particle dimension. This result has been explained qualitatively on the basis of the ferromagnetic-to-superparamagnetic transition and finite-size scaling effects.

  17. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  18. Pattern and Variation in the Timing of Aksak Meter: Commentary on Goldberg

    Directory of Open Access Journals (Sweden)

    Rainer Polak

    2016-01-01

    Full Text Available Daniel Goldberg (2015, this issue) explores relations between timing variations, grouping structure, and musical form in the percussive accompaniment of Balkan folk dance music. A chronometric re-analysis of one of the target article's two audio samples finds a regular metric timing pattern to consistently underlie the variations Goldberg uncovered. Read together, the target article and this commentary demonstrate the complex interplay of a regular timing pattern with several levels of nuanced variation to be performed with fluency, flexibility, and accuracy. This might appear commonplace, but here it is observed in the context of an asymmetric rhythmic mode, a non-isochronous beat sequence, and an asymmetric metric hierarchy. This context evidently does not represent a constraint of any sort with respect to rhythmic timing performance, which casts doubt on the deep-seated assumption that metric regularity depends on iso-periodicity and vertical symmetry. This assumption is sometimes explicitly and often implicitly taken as universal; this commentary suggests that, on the contrary, it might well be culturally biased.

  19. Phase-modified CTQW unable to distinguish strongly regular graphs efficiently

    International Nuclear Information System (INIS)

    Mahasinghe, A; Wijerathna, J K; Izaac, J A; Wang, J B

    2015-01-01

    Various quantum walk-based algorithms have been developed, aiming to distinguish non-isomorphic graphs with polynomial scaling, within both the discrete-time quantum walk (DTQW) and continuous-time quantum walk (CTQW) frameworks. Whilst both the single-particle DTQW and CTQW have failed to distinguish non-isomorphic strongly regular graph families (prompting the move to multi-particle graph isomorphism (GI) algorithms), the single-particle DTQW has been successfully modified by the introduction of a phase factor to distinguish a wide range of graphs in polynomial time. In this paper, we prove that an analogous phase modification to the single-particle CTQW does not have the same distinguishing power as its discrete-time counterpart; in particular, it cannot distinguish strongly regular graphs with the same family parameters with the same efficiency. (paper)

  20. Circular geodesic of Bardeen and Ayon-Beato-Garcia regular black-hole and no-horizon spacetimes

    Science.gov (United States)

    Stuchlík, Zdeněk; Schee, Jan

    2015-12-01

    In this paper, we study circular geodesic motion of test particles and photons in the Bardeen and Ayon-Beato-Garcia (ABG) geometry describing spherically symmetric regular black-hole or no-horizon spacetimes. While the Bardeen geometry is not an exact solution of Einstein's equations, the ABG spacetime is related to self-gravitating charged sources governed by Einstein's gravity and nonlinear electrodynamics. Both are characterized by the mass parameter m and the charge parameter g. We demonstrate that, similarly to the Reissner-Nordstrom (RN) naked singularity spacetimes, an antigravity static sphere should exist in all the no-horizon Bardeen and ABG solutions that can be surrounded by a Keplerian accretion disc. However, contrary to the RN naked singularity spacetimes, the ABG no-horizon spacetimes with parameter g/m > 2 can also contain an additional inner Keplerian disc hidden under the static antigravity sphere. Properties of the geodesic structure are reflected by simple observationally relevant optical phenomena. We give silhouettes of the regular black-hole and no-horizon spacetimes, and profiled spectral lines generated by Keplerian rings radiating at a fixed frequency and located in the strong gravity region at or near the marginally stable circular geodesics. We demonstrate that the profiled spectral lines related to the regular black holes are qualitatively similar to those of the Schwarzschild black holes, giving only small quantitative differences. On the other hand, the regular no-horizon spacetimes give clear qualitative signatures of their presence when compared to the Schwarzschild spacetimes. Moreover, it is possible to distinguish the Bardeen and ABG no-horizon spacetimes if the inclination angle to the observer is known.

  1. Analysis of the spatial variation in the parameters of the SWAT model with application in Flanders, Northern Belgium

    Directory of Open Access Journals (Sweden)

    G. Heuvelmans

    2004-01-01

    Full Text Available Operational applications of a hydrological model often require the prediction of stream flow in (future) time periods without stream flow observations or in ungauged catchments. Data for a case-specific optimisation of model parameters are not available for such applications, so parameters have to be derived from other catchments or time periods. It has been demonstrated that for applications of SWAT in Northern Belgium, temporal transfers of the parameters have less influence than spatial transfers on the performance of the model. This study examines the spatial variation in parameter optima in more detail. The aim was to delineate zones wherein model parameters can be transferred without a significant loss of model performance. SWAT was calibrated for 25 catchments that are part of eight larger sub-basins of the Scheldt river basin. Two approaches are discussed for grouping these units in zones with a uniform set of parameters: a single-parameter approach considering each parameter separately and a parameter-set approach evaluating the parameterisation as a whole. For every catchment, the SWAT model was run with the local parameter optima, with the average parameter values for the entire study region (Flanders), with the zones delineated by the single-parameter approach and with the zones obtained by the parameter-set approach. Comparison of the model performances of these four parameterisation strategies indicates that both the single-parameter and the parameter-set zones lead to stream flow predictions that are more accurate than if the entire study region were treated as one single zone. On the other hand, the use of zonal average parameter values results in a considerably worse model fit compared to local parameter optima. Clustering of parameter sets gives a more accurate result than the single-parameter approach and is, therefore, the preferred technique for use in the parameterisation of ungauged sub-catchments as part of the

  2. The Effect of Regular Physical Education in the Transformation Motor Development of Children with Special Needs

    Directory of Open Access Journals (Sweden)

    Danilo Bojanić

    2016-02-01

    Full Text Available The aim of the research is to determine the level of quantitative changes in the motor abilities of pupils with special needs under the influence of the kinetic activity of regular physical education teaching. The survey was conducted on students of the Centre for children and youth with special needs in Mostar, the city of Los Rosales in Mostar and day care facilities for children with special needs in Niksic. The sample was composed of 46 boys, who were involved in regular physical education for a period of one school year. The level of quantitative and qualitative changes in motor skills achieved under the influence of kinesiology operators within regular school physical education classes was estimated by applying appropriate tests of motor skills, selected in accordance with the degree of mental ability and biological age. The manifest variables applied in this experiment were processed using standard descriptive methods in order to determine their distribution functions and basic parameters. Comparing the central and dispersion parameters of the initial and final measurements, it is evident that the applied program of physical education and sport contributed to changing the distributions of these parameters, and that the distribution at the final measurement is closer to the normal distribution of results.

  3. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  4. Thermodynamic Product Relations for Generalized Regular Black Hole

    International Nuclear Information System (INIS)

    Pradhan, Parthapratim

    2016-01-01

    We derive thermodynamic product relations for four-parametric regular black hole (BH) solutions of the Einstein equations coupled with a nonlinear electrodynamics source. The four parameters can be described by the mass (m), charge (q), dipole moment (α), and quadrupole moment (β), respectively. We study the complete thermodynamics of these solutions. We compute different thermodynamic products, that is, the area product, BH temperature product, specific heat product, and Komar energy product. Furthermore, we show that a complicated function of the horizon areas is indeed mass-independent and could turn out to be universal.

  5. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.

  6. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  7. On the regularity of the extinction probability of a branching process in varying and random environments

    International Nuclear Information System (INIS)

    Alili, Smail; Rugh, Hans Henrik

    2008-01-01

    We consider a supercritical branching process in a time-dependent environment ξ. We assume that the offspring distributions depend regularly (C^k or real-analytically) on real parameters λ. We show that the extinction probability q_λ(ξ), given the environment ξ, 'inherits' this regularity whenever the offspring distributions satisfy a condition of contraction type. Our proof makes use of the Poincaré metric on the complex unit disc and a real-analytic implicit function theorem

  8. Employment of single-diode model to elucidate the variations in photovoltaic parameters under different electrical and thermal conditions.

    Directory of Open Access Journals (Sweden)

    Fahmi F Muhammad

    Full Text Available In this research work, numerical simulations are performed to correlate the photovoltaic parameters with various internal and external factors influencing the performance of solar cells. A single-diode modeling approach is utilized for this purpose, and theoretical investigations are compared with reported experimental evidence for organic and inorganic solar cells at various electrical and thermal conditions. Electrical parameters include the parasitic resistances (Rs and Rp) and the ideality factor (n), while the thermal condition is defined by the cell temperature (T). A comprehensive analysis concerning broad variations in the short-circuit current (Isc), open-circuit voltage (Voc), fill factor (FF) and efficiency (η) is presented and discussed. It was generally concluded that there is good agreement between the simulated results and experimental findings. Nevertheless, the controversial consequence of the temperature impact on the performance of organic solar cells necessitates the development of a complementary model capable of properly simulating the temperature impact on the performance of these devices.
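    The single-diode model referred to above is an implicit equation in the current, so each voltage point must be solved numerically. The sketch below uses Newton iteration; all parameter values (Iph, I0, n, Rs, Rp, T) are illustrative placeholders, not values from the paper.

```python
import numpy as np

def single_diode_iv(V, Iph=5.0, I0=1e-9, n=1.5, Rs=0.02, Rp=100.0, T=300.0):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp
    for each voltage in V by Newton iteration (Vt = kT/q is the
    thermal voltage). Parameter values are illustrative only."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q
    I = np.full_like(np.asarray(V, float), Iph)  # start from the photocurrent
    for _ in range(100):
        e = np.exp((V + I * Rs) / (n * Vt))
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rp - I   # residual
        df = -I0 * e * Rs / (n * Vt) - Rs / Rp - 1.0       # d(residual)/dI
        I = I - f / df
    return I
```

From the resulting I-V curve, the usual figures of merit follow directly: Isc ≈ I(0), Voc is the zero crossing, and FF = max(I·V)/(Isc·Voc).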

  9. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    Science.gov (United States)

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. Interestingly, double coherence resonance (DCR) with respect to the combination of synaptic input correlation and noise intensity is finally attained after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.
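    Spiking regularity of the kind propagated through such networks is commonly quantified by the coefficient of variation (CV) of inter-spike intervals. The sketch below integrates a single deterministic FitzHugh-Nagumo unit with standard textbook parameters (not the paper's noisy multilayer network) and measures that CV; the threshold and parameter values are illustrative assumptions.

```python
import numpy as np

def fhn_spike_times(I_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, t_max=500.0):
    """Euler integration of the FitzHugh-Nagumo model
        dv/dt = v - v^3/3 - w + I_ext
        dw/dt = (v + a - b*w) / tau
    Spikes are recorded as upward crossings of v through 1.0."""
    n = int(t_max / dt)
    v, w = -1.0, 1.0
    spikes, prev = [], v
    for i in range(n):
        dv = v - v**3 / 3 - w + I_ext
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        if prev < 1.0 <= v:          # upward threshold crossing
            spikes.append(i * dt)
        prev = v
    return np.array(spikes)

def isi_cv(spikes):
    """Coefficient of variation of inter-spike intervals:
    CV -> 0 for perfectly regular trains, ~1 for Poisson-like firing."""
    isi = np.diff(spikes)
    return isi.std() / isi.mean()
```

For this deterministic oscillator the CV is near zero; adding input noise and feeding the spike trains layer by layer is what produces the resonance effects studied in the paper.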

  10. Efficient multidimensional regularization for Volterra series estimation

    Science.gov (United States)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time-domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time-invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with that of the white-box (physical) models.
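    The linear building block that this Volterra approach generalizes is kernel-regularized FIR estimation: ridge regression on the taps with a prior covariance that encodes smoothness and exponential decay. The sketch below uses a TC ('tuned/correlated')-type kernel; the hyperparameter values and helper name are illustrative, not the paper's multidimensional Volterra kernels.

```python
import numpy as np

def fir_tc_regularized(u, y, n_taps=30, lam=1.0, alpha=0.9, sigma2=0.01):
    """Regularized FIR estimation with a TC-type prior covariance:
    theta = argmin ||y - Phi theta||^2 + sigma2 * theta^T P^{-1} theta,
    where P[i, j] = lam * alpha^max(i, j) encodes exponential decay."""
    N = len(y)
    # Toeplitz regressor matrix of delayed inputs
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[: N - k]
    i = np.arange(n_taps)
    P = lam * alpha ** np.maximum.outer(i, i)   # TC kernel (prior covariance)
    theta = np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P),
                            Phi.T @ y)
    return theta
```

Extending this idea to products of input delays (2nd- and 3rd-degree regressors) with multidimensional decay/smoothness priors yields the regularized Volterra estimator described in the abstract.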

  11. Time series analyses of hydrological parameter variations and their correlations at a coastal area in Busan, South Korea

    Science.gov (United States)

    Chung, Sang Yong; Senapathi, Venkatramanan; Sekar, Selvam; Kim, Tae Hyung

    2018-02-01

    Monitoring and time-series analysis of the hydrological parameters electrical conductivity (EC), water pressure, precipitation and tide were carried out to understand the characteristics of the parameter variations and their correlations at a coastal area in Busan, South Korea. The monitoring data were collected at a sharp interface between freshwater and saline water at a depth of 25 m below ground. Two well-logging profiles showed that seawater intrusion has expanded considerably (progressed inland) and has greatly affected the groundwater quality in a coastal aquifer of tuffaceous sedimentary rock over a 9-year period. According to the time-series analyses, the periodograms of the hydrological parameters present trends very similar to their power spectral densities (PSD). Autocorrelation functions (ACF) and partial autocorrelation functions (PACF) of the hydrological parameters were produced to evaluate their self-correlations. The ACFs of all hydrological parameters showed very good correlation over the entire time lag, but the PACFs revealed that the correlations were good only at time lag 1. Cross-correlation functions (CCF) were used to evaluate the correlations between the hydrological parameters and the characteristics of seawater intrusion in the coastal aquifer system. The CCFs showed that EC had a closer relationship with water pressure and precipitation than with tide. The CCFs of water pressure with tide and precipitation were inversely proportional, and the CCF of water pressure with precipitation was larger than that with tide.
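    The ACF and CCF diagnostics mentioned above are simple sample statistics; a minimal sketch using plain biased estimators (not the authors' processing chain, and with illustrative normalization choices):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function at lags 0..max_lag
    (mean-removed, biased normalization by the series length)."""
    x = np.asarray(x, float) - np.mean(x)
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) / c0
                     for k in range(max_lag + 1)])

def ccf(x, y, max_lag):
    """Sample cross-correlation of x with y at non-negative lags
    (y lagging x), normalized by the standard deviations."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    denom = np.std(x) * np.std(y) * len(x)
    return np.array([np.dot(x[:len(x) - k], y[k:]) / denom
                     for k in range(max_lag + 1)])
```

A CCF peak at a positive lag indicates that one series (e.g. precipitation) leads the other (e.g. EC) by that many sampling intervals, which is how lead/lag relationships between the hydrological parameters are read off.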

  12. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that show very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
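    In its generic least-squares form, Tikhonov regularization trades data misfit against solution norm; the sketch below shows the standard normal-equations solution (an illustrative building block only, since the GRACE processing applies this machinery to spherical-harmonic coefficient systems).

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares:
    x = argmin ||A x - b||^2 + lam^2 ||x||^2,
    solved via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
```

As lam grows, the solution norm shrinks monotonically; choosing lam so that the residual stays within the observation noise level is exactly the "capture all observed signal within the noise level" condition from the abstract.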

  13. Blocky inversion of multichannel elastic impedance for elastic parameters

    Science.gov (United States)

    Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza

    2018-04-01

    Petrophysical description of reservoirs requires proper knowledge of elastic parameters such as P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for the elastic parameters. Mathematically, under some assumptions, the EIs are linearly related to the elastic parameters in the logarithm domain. Thus a linear weighted least-squares inversion is employed to perform this step. The accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of the exact Zoeppritz elastic impedance, and the role of the low-frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
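    The second step above is a linear weighted least-squares solve; a generic sketch is given below. The matrix A, data b and weights w are placeholders for the log-domain EI-to-elastic-parameter relations, whose actual coefficients depend on the angle-dependent EI equations and are not reproduced here.

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Weighted linear least squares:
    x = argmin || W^(1/2) (A x - b) ||^2,
    solved via the weighted normal equations (A^T W A) x = A^T W b."""
    W = np.diag(np.asarray(w, float))
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

The weights let more reliable angle-stacks (better signal-to-noise) dominate the fit; for a consistent system the weighting does not change the exact solution.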

  14. The three-point function in split dimensional regularization in the Coulomb gauge

    CERN Document Server

    Leibbrandt, G

    1998-01-01

    We use a gauge-invariant regularization procedure, called ``split dimensional regularization'', to evaluate the quark self-energy $\\Sigma (p)$ and quark-quark-gluon vertex function $\\Lambda_\\mu (p^\\prime,p)$ in the Coulomb gauge, $\\vec{\\bigtriangledown}\\cdot\\vec{A}^a = 0$. The technique of split dimensional regularization was designed to regulate Coulomb-gauge Feynman integrals in non-Abelian theories. The technique, which is based on two complex regulating parameters, $\\omega$ and $\\sigma$, is shown to generate a well-defined set of Coulomb-gauge integrals. A major component of this project deals with the evaluation of four-propagator and five-propagator Coulomb integrals, some of which are nonlocal. It is further argued that the standard one-loop BRST identity relating $\\Sigma$ and $\\Lambda_\\mu$ should by rights be replaced by a more general BRST identity which contains two additional contributions from ghost vertex diagrams. Despite the appearance of nonlocal Coulomb integrals, both $\\Sigma$ and $\\Lambda_\\...

  15. Dimensional regularization and renormalization of Coulomb gauge quantum electrodynamics

    International Nuclear Information System (INIS)

    Heckathorn, D.

    1979-01-01

    Quantum electrodynamics is renormalized in the Coulomb gauge with covariant counter terms and without momentum-dependent wave-function renormalization constants. It is shown how to dimensionally regularize non-covariant integrals occurring in this gauge, and it is proved that the 'minimal' subtraction prescription excludes non-covariant counter terms. Motivated by the need for a renormalized Coulomb-gauge formalism in certain practical calculations, the author introduces a convenient prescription with physical parameters. The renormalization group equations for the Coulomb gauge are derived. (Auth.)

  16. Stark broadening parameter regularities and interpolation and critical evaluation of data for CP star atmospheres research: Stark line shifts

    Science.gov (United States)

    Dimitrijevic, M. S.; Tankosic, D.

    1998-04-01

    In order to find out whether the regularities and systematic trends found to be apparent among experimental Stark line shifts allow the accurate interpolation of new data and critical evaluation of experimental results, the exceptions to the established regularities are analysed on the basis of critical reviews of experimental data, and the reasons for such exceptions are discussed. We found that such exceptions mostly arise when: (i) the energy gap between atomic energy levels within a supermultiplet is equal to or comparable with the energy gap to the nearest perturbing levels; (ii) the most important perturbing level is embedded between the energy levels of the supermultiplet; (iii) forbidden transitions influence the Stark line shifts.

  17. Statistical regularities in the rank-citation profile of scientists.

    Science.gov (United States)

    Petersen, Alexander M; Stanley, H Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c(i)(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c(i)(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c(i)(r) profiles, our results demonstrate the utility of the β(i) scaling parameter in conjunction with h(i) for quantifying individual publication impact. We show that the total number of citations C(i) tallied from a scientist's N(i) papers scales as [Formula: see text]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
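    For concreteness, a minimal h-index computation (the standard Hirsch definition; the paper's β_i profile fitting is a separate analysis) shows how two very different citation profiles can share the same h:

```python
def h_index(citations):
    """Hirsch h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Two hypothetical profiles with the same h but very different totals C_i,
# illustrating why h alone does not determine the full rank-citation profile.
print(h_index([10, 8, 5, 4, 3]))     # -> 4
print(h_index([100, 50, 20, 4, 1]))  # -> 4
```

    Both profiles have h = 4, yet their total citation counts differ by a factor of about six, which is exactly the degeneracy the β_i scaling parameter is meant to resolve.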

  18. The Social Network of Tracer Variations and O(100) Uncertain Photochemical Parameters in the Community Atmosphere Model

    Science.gov (United States)

    Lucas, D. D.; Labute, M.; Chowdhary, K.; Debusschere, B.; Cameron-Smith, P. J.

    2014-12-01

    Simulating the atmospheric cycles of ozone, methane, and other radiatively important trace gases in global climate models is computationally demanding and requires the use of hundreds of photochemical parameters with uncertain values. Quantitative analysis of the effects of these uncertainties on tracer distributions, radiative forcing, and other model responses is hindered by the "curse of dimensionality." We describe efforts to overcome this curse using ensemble simulations and advanced statistical methods. Uncertainties in 95 photochemical parameters of the trop-MOZART scheme were sampled using a Monte Carlo method and propagated through 10,000 simulations of the single-column version of the Community Atmosphere Model (CAM). The variance of the ensemble was represented as a network with nodes and edges, and the topology and connections in the network were analyzed using lasso regression, Bayesian compressive sensing, and centrality measures from the field of social network theory. Despite the limited sample size for this high-dimensional problem, our methods determined the key sources of variation and co-variation in the ensemble and identified important clusters in the network topology. Our results can be used to better understand the flow of photochemical uncertainty in simulations using CAM and other climate models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC).

  19. Dimensions of the lumbar spinal canal: variations and correlations with somatometric parameters using CT

    International Nuclear Information System (INIS)

    Karantanas, A.H.; Zibis, A.H.; Papaliaga, M.; Georgiou, E.; Rousogiannis, S.

    1998-01-01

    The aim of this study was to investigate the correlation of vertebral dimensions with somatometric parameters in patients without clinical symptoms and radiological signs of central lumbar spinal stenosis. One hundred patients presenting with low back pain or sciatica were studied with CT. In each of the L3, L4 and L5 vertebrae, three slices were taken with the following measurements: 1. Slice through the intervertebral disc: (a) spinal canal area; (b) interarticular diameter; (c) interligamentous diameter. 2. Slice below the vertebral arcus: (a) dural sac area; (b) vertebral body area. 3. Pediculolaminar level: (a) anteroposterior and interpedicular diameters of the spinal canal; (b) spinal canal area; (c) width of the lateral recesses. The Jones-Thomson index was also estimated. The results of the present study showed a statistically significant correlation of height, weight and age with various vertebral indices. The conventional, widely accepted anteroposterior diameter of 11.5 mm for the lumbar spinal canal is independent of somatometric parameters, and it is the only constant measurement for the estimation of lumbar spinal stenosis with a single value. The present study suggests that the dimensions of the lumbar spinal canal vary and correlate with the height, weight and age of the patient. (orig.)

  20. Dental plaque pH variation with regular soft drink, diet soft drink and high energy drink: an in vivo study.

    Science.gov (United States)

    Jawale, Bhushan Arun; Bendgude, Vikas; Mahuli, Amit V; Dave, Bhavana; Kulkarni, Harshal; Mittal, Simpy

    2012-03-01

    A high incidence of dental caries and dental erosion associated with frequent consumption of soft drinks has been reported. The purpose of this study was to evaluate the pH response of dental plaque to a regular, a diet and a high energy drink. Twenty subjects were recruited for this study. All subjects were between the ages of 20 and 25 and had at least four restored tooth surfaces present. The subjects were asked to refrain from brushing for 48 hours prior to the study. At baseline, plaque pH was measured from four separate locations using the harvesting method. Subjects were asked to swish with 15 ml of the respective soft drink for 1 minute. Plaque pH was measured at the four designated tooth sites at 5-, 10- and 20-minute intervals. Subjects then repeated the experiment with the other two soft drinks. The pH was lowest for the regular soft drink (2.65 ± 0.026), followed by the high energy drink (3.39 ± 0.026) and the diet soft drink (3.78 ± 0.006). The maximum drop in plaque pH was seen with the regular soft drink, followed by the high energy drink and the diet soft drink. The regular soft drink therefore poses a greater acid challenge to enamel than the diet and high energy drinks. However, in this clinical trial, the plaque pH associated with none of the drinks reached the critical pH at which enamel demineralization and dissolution are expected.

  1. Effect of variation of geometric parameters on the flow within a synthetic models of lower human airways

    Science.gov (United States)

    Espinosa Moreno, Andres Santiago; Duque Daza, Carlos Alberto

    2017-11-01

    The effects of varying two geometric parameters, the bifurcation angle and the carina rounding radius, during the respiratory inhalation process are studied numerically using two synthetic models of the lower human airways. Laminar flow simulations were performed for six angles and three rounding radii, at Reynolds numbers of 500, 1000, 1500 and 2000. Numerical results showed a direct relationship between the deformation of the velocity profiles (an effect produced by the bifurcation) and the vortical structures observed through the secondary flow patterns. It is observed that the location of the vortices (and their related saddle point) is associated with the displacement of the velocity peak. On the other hand, increasing the angle and the rounding radius appears to increase the pressure drop, which in turn displaces the distribution and peaks of the maximum shear stresses at the carina, that is, at the bifurcation point. Some physiological effects associated with the phenomena produced by these geometric variations are also discussed.

  2. Point interactions of the dipole type defined through a three-parametric power regularization

    International Nuclear Information System (INIS)

    Zolotaryuk, A V

    2010-01-01

    A family of point interactions of the dipole type is studied in one dimension using a regularization by rectangles in the form of a barrier and a well separated by a finite distance. The rectangles and the distance are parametrized by a squeezing parameter ε → 0 with three powers μ, ν and τ describing the squeezing rates for the barrier, the well and the distance, respectively. This parametrization allows us to construct a whole family of point potentials of the dipole type, including other point interactions such as δ-potentials. By varying the power τ, it is possible to obtain in the zero-range limit the following two cases: (i) the limiting δ'-potential is opaque (the conventional result obtained earlier by some authors) or (ii) this potential admits a resonant tunneling (the opposite result obtained recently by other authors). The structure of resonances (if any) also depends on the regularizing sequence. The sets of the {μ, ν, τ}-space where a non-zero (resonant or non-resonant) transmission occurs are found. In all these cases the zero-range limit of the transfer matrix is shown to involve real parameters χ and g that depend on the regularizing sequence. The cases with χ ≠ 1 and g ≠ 0 mean that the corresponding δ'-potential is accompanied by an effective δ-potential.

  3. Entanglement in coined quantum walks on regular graphs

    International Nuclear Information System (INIS)

    Carneiro, Ivens; Loo, Meng; Xu, Xibai; Girerd, Mathieu; Kendon, Viv; Knight, Peter L

    2005-01-01

    Quantum walks, both discrete (coined) and continuous time, form the basis of several recent quantum algorithms. Here we use numerical simulations to study the properties of discrete, coined quantum walks. We investigate the variation in the entanglement between the coin and the position of the particle by calculating the entropy of the reduced density matrix of the coin. We consider both dynamical evolution and asymptotic limits for coins of dimensions from two to eight on regular graphs. For low coin dimensions, quantum walks which spread faster (as measured by the mean square deviation of their distribution from uniform) also exhibit faster convergence towards the asymptotic value of the entanglement between the coin and the particle's position. For high-dimensional coins, the DFT coin operator is more efficient at spreading than the Grover coin. We study the entanglement of the coin on regular finite graphs such as cycles, and also show that on complete bipartite graphs a quantum walk with a Grover coin is always periodic with period four. We generalize the 'glued trees' graph used by Childs et al (2003 Proc. STOC, pp 59-68) to higher branching rate (fan-out) and verify that the scaling with branching rate and with tree depth is polynomial.
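    A compact numerical sketch of the quantity studied here, assuming a Hadamard coin on a cycle graph (one of the simplest cases the paper covers), computes the coin-position entanglement entropy from the coin's reduced density matrix:

```python
import numpy as np

def coin_entropy(n_nodes=8, n_steps=20):
    """Hadamard-coined walk on a cycle; returns the von Neumann entropy
    (base 2) of the coin's reduced density matrix, which measures the
    coin-position entanglement."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros((n_nodes, 2), dtype=complex)
    psi[0, 0] = 1.0                        # walker at node 0, coin state |0>
    for _ in range(n_steps):
        psi = psi @ H.T                    # coin operation at every node
        psi = np.stack([np.roll(psi[:, 0], 1),     # coin |0>: step right
                        np.roll(psi[:, 1], -1)],   # coin |1>: step left
                       axis=1)
    rho = psi.T @ psi.conj()               # 2x2 reduced coin density matrix
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # drop numerically zero eigenvalues
    return float(-(evals * np.log2(evals)).sum())

S = coin_entropy()
print(0.0 <= S <= 1.0 + 1e-9)  # bounded by log2 of the coin dimension
```

    After a single step from this initial state the coin is maximally entangled with the position (entropy exactly 1 bit); longer evolutions approach the asymptotic entanglement values the paper tabulates.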

  4. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  5. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.

  6. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
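    A heavily simplified illustration of the goal (eigenvalue clipping at a prescribed condition-number cap; the paper instead derives the truncation level by maximum likelihood) can be written as:

```python
import numpy as np

def cap_condition_number(S, kappa_max):
    """Floor the eigenvalues of a covariance estimate so its condition
    number is at most kappa_max. This is a simplified stand-in for the
    paper's likelihood-based choice of the truncation level."""
    vals, vecs = np.linalg.eigh(S)
    lo = vals.max() / kappa_max          # floor implied by the largest eigenvalue
    clipped = np.clip(vals, lo, None)
    return vecs @ np.diag(clipped) @ vecs.T

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 30))            # "large p small n": n=12, p=30
S = np.cov(X, rowvar=False)              # singular sample covariance (rank <= 11)
S_reg = cap_condition_number(S, 100.0)
print(np.linalg.cond(S_reg) <= 100.5)    # invertible and well-conditioned now
```

    Clipping keeps the well-estimated leading eigenvalues intact while lifting the zero (or near-zero) ones, so the regularized estimate is invertible even when p exceeds n.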

  7. Bayesian estimation of regularization and atlas building in diffeomorphic image registration.

    Science.gov (United States)

    Zhang, Miaomiao; Singh, Nikhil; Fletcher, P Thomas

    2013-01-01

    This paper presents a generative Bayesian model for diffeomorphic image registration and atlas building. We develop an atlas estimation procedure that simultaneously estimates the parameters controlling the smoothness of the diffeomorphic transformations. To achieve this, we introduce a Monte Carlo Expectation Maximization algorithm, where the expectation step is approximated via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. An added benefit of this stochastic approach is that it can successfully solve difficult registration problems involving large deformations, where direct geodesic optimization fails. Using synthetic data generated from the forward model with known parameters, we demonstrate the ability of our model to successfully recover the atlas and regularization parameters. We also demonstrate the effectiveness of the proposed method in the atlas estimation problem for 3D brain images.

  8. Using Regularization to Infer Cell Line Specificity in Logical Network Models of Signaling Pathways

    Directory of Open Access Journals (Sweden)

    Sébastien De Landtsheer

    2018-05-01

    Understanding the functional properties of cells of different origins is a long-standing challenge of personalized medicine. Especially in cancer, the high heterogeneity observed in patients slows down the development of effective cures. The molecular differences between cell types or between healthy and diseased cellular states are usually determined by the wiring of regulatory networks. Understanding these molecular and cellular differences at the systems level would improve patient stratification and facilitate the design of rational intervention strategies. Models of cellular regulatory networks frequently make weak assumptions about the distribution of model parameters across cell types or patients. These assumptions are usually expressed in the form of regularization of the objective function of the optimization problem. We propose a new method of regularization for network models of signaling pathways based on the local density of the inferred parameter values within the parameter space. Our method reduces the complexity of models by creating groups of cell line-specific parameters which can then be optimized together. We demonstrate the use of our method by recovering the correct topology and inferring accurate values of the parameters of a small synthetic model. To show the value of our method in a realistic setting, we re-analyze a recently published phosphoproteomic dataset from a panel of 14 colon cancer cell lines. We conclude that our method efficiently reduces model complexity and helps recover context-specific regulatory information.

  9. Fast magnetic resonance imaging based on high degree total variation

    Science.gov (United States)

    Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng

    2018-04-01

    To eliminate the artifacts and "staircase effect" of total variation in compressive sensing MRI, a high-degree total variation model is proposed for dynamic MRI reconstruction. The high-degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iteratively weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm. The results show that the high-degree total variation method achieves a better reconstruction than total variation and total generalized variation, obtaining higher reconstruction SNR and better structural similarity.
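    As a toy analogue of the MM approach (first-order 1-D total variation with a basic majorization-minimization loop, not the paper's high-degree model for dynamic MRI), each iteration replaces the absolute value by a weighted quadratic and solves a linear system:

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=100, eps=1e-8):
    """1-D total-variation denoising via majorization-minimization:
    minimize 0.5 * ||x - y||^2 + lam * sum_i |x_{i+1} - x_i|.
    Each MM step majorizes |t| by a quadratic with weight 1/|t_k|."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n difference operator
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)       # majorizer weights at current x
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return x

# Noisy piecewise-constant signal: TV smoothing flattens the plateaus
# while keeping the jump, which is the behavior plain smoothing lacks.
rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.normal(size=40)
x = tv_denoise_mm(y, lam=1.0)
print(np.var(np.diff(x)) < np.var(np.diff(y)))  # fewer spurious oscillations
```

    The "staircase effect" mentioned in the abstract is intrinsic to this first-order penalty; penalizing higher-order differences instead is what the high-degree model changes.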

  10. Study on variation in ship's forward speed under regular waves depending on rudder controller

    Directory of Open Access Journals (Sweden)

    Sung-Soo Kim

    2015-03-01

    The purpose of this research is to compare and analyze, by simulation, the forward speed of ships with different rudder controllers in wavy conditions. The commercial simulation tool AQWA is used to develop a ship simulation with 3 degrees of freedom. The nonlinear hydrodynamic force acting on the hull, the propeller thrust and the rudder force are calculated by an additional subroutine that interfaces with the commercial simulation tool, and a regular wave is used as the source of the external force. The rudder rotational velocity and the autopilot coefficients are varied to produce the different rudder controllers. The forward speed of the ship under each rudder controller is analyzed after the autopilot simulations.

  11. Short-time regularity assessment of fibrillatory waves from the surface ECG in atrial fibrillation

    International Nuclear Information System (INIS)

    Alcaraz, Raúl; Martínez, Arturo; Hornero, Fernando; Rieta, José J

    2012-01-01

    This paper proposes the first non-invasive method for direct and short-time regularity quantification of atrial fibrillatory (f) waves from the surface ECG in atrial fibrillation (AF). Regularity is estimated by computing individual morphological variations among f waves, which are delineated and extracted from the atrial activity (AA) signal, making use of an adaptive signed correlation index. The algorithm was tested on real AF surface recordings in order to discriminate atrial signals with different degrees of organization, providing a notably higher global accuracy (90.3%) than the two non-invasive AF organization estimates defined to date: the dominant atrial frequency (70.5%) and sample entropy (76.1%). Furthermore, owing to its ability to assess AA regularity wave by wave, the proposed method is also able to track the time course of AF organization more precisely than the aforementioned indices. As a consequence, this work opens a new perspective in the non-invasive analysis of AF, such as the individualized study of each f wave, that could improve the understanding of AF mechanisms and become useful for its clinical treatment. (paper)

  12. Substructural Regularization With Data-Sensitive Granularity for Sequence Transfer Learning.

    Science.gov (United States)

    Sun, Shichang; Liu, Hongbo; Meng, Jiana; Chen, C L Philip; Yang, Yu

    2018-06-01

    Sequence transfer learning is of interest in both academia and industry with the emergence of numerous new text domains from Twitter and other social media tools. In this paper, we put forward the data-sensitive granularity for transfer learning, and then a novel substructural regularization transfer learning model (STLM) is proposed to preserve target-domain features at substructural granularity according to the size of the labeled data set. Our model is underpinned by hidden Markov models and regularization theory, where the substructural representation can be integrated as a penalty after measuring the dissimilarity of substructures between the target domain and STLM with relative entropy. STLM can achieve the competing goals of preserving the target-domain substructure and utilizing the observations from both the target and source domains simultaneously. The estimation of STLM is very efficient since an analytical solution can be derived as a necessary and sufficient condition. The relative usability of substructures to act as regularization parameters and the time complexity of STLM are also analyzed and discussed. Comprehensive experiments on part-of-speech tagging with both the Brown and Twitter corpora fully justify that our model can make improvements on all the combinations of source and target domains.

  13. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  14. Diurnal variation of hematology parameters in healthy young males: the Bispebjerg study of diurnal variations

    DEFF Research Database (Denmark)

    Sennels, Henriette P; Jørgensen, Henrik L; Hansen, Anne-Louise S

    2011-01-01

    To evaluate the influence of time of day on the circulating concentrations of 21 hematology parameters.

  15. Singular tachyon kinks from regular profiles

    International Nuclear Information System (INIS)

    Copeland, E.J.; Saffin, P.M.; Steer, D.A.

    2003-01-01

    We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately.

  16. Seasonal variation of meteorological factors on air parameters and ...

    African Journals Online (AJOL)

    The impacts of gas flaring on meteorological factors at Ibeno, Eket, Onna, Esit Eket and Umudike - Nigeria were investigated by measuring air quality parameters. The results show that the mean concentration of air parameters value were below Federal Environmental Protection Agency (FEPA) and United States ...

  17. Partial differential equations and calculus of variations

    CERN Document Server

    Leis, Rolf

    1988-01-01

    This volume contains 18 invited papers by members and guests of the former Sonderforschungsbereich in Bonn (SFB 72) who, over the years, collaborated on the research group "Solution of PDE's and Calculus of Variations". The emphasis is on existence and regularity results, on special equations of mathematical physics and on scattering theory.

  18. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy...

  19. Bayesian estimation of regularization parameters for deformable surface models

    International Nuclear Information System (INIS)

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-01-01

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
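    The evidence computation sketched here, a Gaussian approximation of the posterior whose normalization reduces to a log-determinant, can be written down for the simplest linear-Gaussian analogue (alpha and beta below are prior and noise precisions; this is a textbook stand-in, not the FASTSPECT model):

```python
import numpy as np

def log_evidence(A, b, alpha, beta):
    """Log evidence for b = A x + noise with prior x ~ N(0, alpha^{-1} I)
    and noise ~ N(0, beta^{-1} I). The Gaussian integral over x reduces to
    a determinant of the posterior precision, evaluated via slogdet."""
    n, p = A.shape
    H = beta * A.T @ A + alpha * np.eye(p)        # posterior precision
    x_map = beta * np.linalg.solve(H, A.T @ b)    # MAP estimate
    fit = beta * np.sum((b - A @ x_map) ** 2) + alpha * np.sum(x_map ** 2)
    _, logdet = np.linalg.slogdet(H)
    return 0.5 * (p * np.log(alpha) + n * np.log(beta)
                  - fit - logdet - n * np.log(2 * np.pi))

# Evidence-based hyperparameter choice: a moderate prior strength beats
# both a nearly flat prior and an overwhelmingly strong one.
rng = np.random.default_rng(3)
A = rng.normal(size=(50, 5))
b = A @ rng.normal(size=5) + 0.1 * rng.normal(size=50)
beta = 1.0 / 0.01                                 # known noise precision
evs = {a: log_evidence(A, b, a, beta) for a in (1e-6, 1.0, 1e6)}
print(evs[1.0] > evs[1e-6] and evs[1.0] > evs[1e6])
```

    Maximizing this quantity over the hyperparameters is the same "evidence" criterion the abstract describes, with the log-determinant playing the role of the covariance-matrix determinant evaluated by the Bai et al. algorithm.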

  20. Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction

    International Nuclear Information System (INIS)

    Hintermüller, Michael; Rautenberg, Carlos N; Hahn, Jooyoung

    2014-01-01

    Variable splitting schemes for the function space version of the image reconstruction problem with total variation regularization (TV-problem) in its primal and pre-dual formulations are considered. For the primal splitting formulation, while existence of a solution cannot be guaranteed, it is shown that quasi-minimizers of the penalized problem are asymptotically related to the solution of the original TV-problem. On the other hand, for the pre-dual formulation, a family of parametrized problems is introduced and a parameter dependent contraction of an associated fixed point iteration is established. Moreover, the theory is validated by numerical tests. Additionally, the augmented Lagrangian approach is studied, details on an implementation on a staggered grid are provided and numerical tests are shown. (paper)

  1. Homotopic non-local regularized reconstruction from sparse positron emission tomography measurements

    International Nuclear Information System (INIS)

    Wong, Alexander; Liu, Chenyi; Wang, Xiao Yu; Fieguth, Paul; Bie, Hongxia

    2015-01-01

    Positron emission tomography scanners collect measurements of a patient’s in vivo radiotracer distribution. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide (tracer), which is introduced into the body on a biologically active molecule, and the tomograms must be reconstructed from projections. The reconstruction of tomograms from the acquired PET data is an inverse problem that requires regularization. The use of tightly packed discrete detector rings, although it improves the signal-to-noise ratio, is a major contributor to the high cost of positron emission tomography systems. Thus a sparse reconstruction, capable of overcoming the noise effect while allowing for a reduced number of detectors, would have a great deal to offer. In this study, we introduce and investigate the potential of a homotopic non-local regularization reconstruction framework for effectively reconstructing positron emission tomograms from such sparse measurements. Results obtained using the proposed approach are compared with traditional filtered back-projection as well as expectation maximization reconstruction with total variation regularization. A new reconstruction method was developed for the purpose of improving the quality of positron emission tomography reconstruction from sparse measurements. We illustrate that promising reconstruction performance can be achieved by the proposed approach even at low sampling fractions, which allows for the use of significantly fewer detectors and has the potential to reduce scanner costs.
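The expectation-maximization baseline mentioned in the record can be sketched for a toy discrete emission model. This is plain MLEM, not the homotopic non-local scheme of the paper, and the system matrix and counts below are illustrative:

```python
import numpy as np

def mlem(A, y, n_iter=1000):
    """Maximum-likelihood EM for Poisson emission data y ~ Poisson(A @ x).
    Multiplicative update: x <- x / (A.T @ 1) * A.T @ (y / (A @ x))."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)     # forward projection, guarded
        x = x / np.maximum(sens, 1e-12) * (A.T @ (y / proj))
    return x

# tiny consistent system: 4 detector bins viewing 3 voxels
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true                              # noise-free counts
x_hat = mlem(A, y)
```

On consistent, noise-free data the multiplicative update drives the forward projection of the estimate toward the measured counts while keeping the activity nonnegative.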

  2. Study on the effect of hydrogen addition on the variation of plasma parameters of argon-oxygen magnetron glow discharge for synthesis of TiO2 films

    Directory of Open Access Journals (Sweden)

    Partha Saikia

    2016-04-01

    We report the effect of hydrogen addition on the plasma parameters of argon-oxygen magnetron glow discharge plasma in the synthesis of H-doped TiO2 films. The parameters of the hydrogen-added Ar/O2 plasma influence the properties and the structural phases of the deposited TiO2 film. Therefore, the variation of plasma parameters such as electron temperature (Te), electron density (ne), ion density (ni), degree of ionization of Ar and degree of dissociation of H2 as a function of hydrogen content in the discharge is studied. A Langmuir probe and optical emission spectroscopy are used to characterize the plasma. On the basis of the different reactions in the gas phase of the magnetron discharge, the variation of plasma parameters and sputtering rate is explained. It is observed that the electron and heavy ion densities decline with gradual addition of hydrogen in the discharge. Hydrogen addition significantly changes the degree of ionization of Ar, which influences the structural phases of the TiO2 film.

  3. Volume variation of Gruneisen parameters of fcc transition metals

    Indian Academy of Sciences (India)

    Unknown

    average discrepancy between the values of γ measured by various methods for 23 metals. Experimentally only the total Gruneisen parameter can be measured. The total Gruneisen parameter is the sum of lattice, electronic and probably magnetic contributions. The latter term is present in palladium (White and Pawlok 1970) ...

  4. Dimensions of the lumbar spinal canal: variations and correlations with somatometric parameters using CT

    Energy Technology Data Exchange (ETDEWEB)

    Karantanas, A.H. [Department of CT-MRI, Larissa General Hospital (Greece); Zibis, A.H.; Papaliaga, M.; Georgiou, E.; Rousogiannis, S. [Larissa Medical School, University of Thessaly, Larissa (Greece)

    1998-12-01

    The aim of this study was to investigate the correlation of vertebral dimensions with somatometric parameters in patients without clinical symptoms and radiological signs of central lumbar spinal stenosis. One hundred patients presenting with low back pain or sciatica were studied with CT. In each of the L3, L4 and L5 vertebrae, three slices were taken with the following measurements: 1. Slice through the intervertebral disc: (a) spinal canal area; (b) interarticular diameter; (c) interligamentous diameter. 2. Slice below the vertebral arcus: (a) dural sac area; (b) vertebral body area. 3. Pediculolaminar level: (a) anteroposterior diameter and interpedicular diameter of the spinal canal; (b) spinal canal area; (c) width of the lateral recesses. The Jones-Thomson index was also estimated. The results of the present study showed that there is a statistically significant correlation of height, weight and age with various vertebral indices. The conventional, widely accepted, anteroposterior diameter of 11.5 mm of the lumbar spinal canal is independent of somatometric parameters, and it is the only constant measurement for the estimation of lumbar spinal stenosis with a single value. The present study suggests that there are variations of the dimensions of the lumbar spinal canal and correlations with the height, weight and age of the patient. (orig.) With 1 fig., 6 tabs., 24 refs.

  5. Regularized Biot–Savart Laws for Modeling Magnetic Flux Ropes

    Science.gov (United States)

    Titov, Viacheslav S.; Downs, Cooper; Mikić, Zoran; Török, Tibor; Linker, Jon A.; Caplan, Ronald M.

    2018-01-01

    Many existing models assume that magnetic flux ropes play a key role in solar flares and coronal mass ejections (CMEs). It is therefore important to develop efficient methods for constructing flux-rope configurations constrained by observed magnetic data and the morphology of the pre-eruptive source region. For this purpose, we have derived and implemented a compact analytical form that represents the magnetic field of a thin flux rope with an axis of arbitrary shape and circular cross-sections. This form implies that the flux rope carries axial current I and axial flux F, so that the respective magnetic field is the curl of the sum of axial and azimuthal vector potentials proportional to I and F, respectively. We expressed the vector potentials in terms of modified Biot–Savart laws, whose kernels are regularized at the axis in such a way that, when the axis is straight, these laws define a cylindrical force-free flux rope with a parabolic profile for the axial current density. For the cases we have studied so far, we determined the shape of the rope axis by following the polarity inversion line of the eruptions’ source region, using observed magnetograms. The height variation along the axis and other flux-rope parameters are estimated by means of potential-field extrapolations. Using this heuristic approach, we were able to construct pre-eruption configurations for the 2009 February 13 and 2011 October 1 CME events. These applications demonstrate the flexibility and efficiency of our new method for energizing pre-eruptive configurations in simulations of CMEs.
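The unregularized Biot–Savart law underlying such constructions can be checked numerically for a straight wire, where a closed form exists. This sketch omits the kernel regularization at the axis that is the point of the paper, and the discretization parameters are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(points, current, field_pt):
    """B at field_pt from a current-carrying polyline, summing
    mu0*I/(4*pi) * dl x r / |r|^3 over straight segments (midpoint rule)."""
    B = np.zeros(3)
    for a, b in zip(points[:-1], points[1:]):
        dl = b - a
        r = field_pt - 0.5 * (a + b)        # from segment midpoint
        B += MU0 * current / (4 * np.pi) * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return B

# straight wire along z from -L to L, observation point at distance rho = 1
L_half, I, rho = 50.0, 2.0, 1.0
zs = np.linspace(-L_half, L_half, 4001)
pts = np.stack([np.zeros_like(zs), np.zeros_like(zs), zs], axis=1)
B = biot_savart(pts, I, np.array([rho, 0.0, 0.0]))
# finite-wire formula: B_y = mu0*I/(4*pi*rho) * 2L / sqrt(L^2 + rho^2)
B_exact = MU0 * I / (4 * np.pi * rho) * 2 * L_half / np.hypot(L_half, rho)
```

A regularized kernel, as in the record, would additionally smooth the 1/|r|^3 singularity so that the field stays finite on the axis itself.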

  6. Utilitarian cycling in Belgium: a cross-sectional study in a sample of regular cyclists.

    OpenAIRE

    de Geus, B.; Degraeuwe, B.; Vandenbulcke, G.; INT PANIS, Luc; Thomas, I.; Aertsens, Joris; De Weerdt, Y.; Torfs, R.; Meeusen, R.

    2014-01-01

    Background: For an accurate estimation of health benefits and hazards of utilitarian cycling, a prospective collection of bicycle usage data (exposure) is fundamental. Individual and environmental correlates are necessary to guide health promotion and traffic safety issues. Firstly, this study aims to report on utilitarian bicycle usage in Belgium, using a prospective data collection in regular adult commuter cyclists. Secondly, the association is explored between the individual variation in ...

  7. Estimation of G-renewal process parameters as an ill-posed inverse problem

    International Nuclear Information System (INIS)

    Krivtsov, V.; Yevkin, O.

    2013-01-01

    Statistical estimation of G-renewal process parameters is an important estimation problem, which has been considered by many authors. We view this problem from the standpoint of a mathematically ill-posed, inverse problem (the solution is not unique and/or is sensitive to statistical error) and propose a regularization approach specifically suited to the G-renewal process. Regardless of the estimation method, the respective objective function usually involves parameters of the underlying lifetime distribution and, simultaneously, the restoration parameter. In this paper, we propose to regularize the problem by decoupling the estimation of the aforementioned parameters. Using a simulation study, we show that the resulting estimation/extrapolation accuracy of the proposed method is considerably higher than that of the existing methods.

  8. DESIGN OF STRUCTURAL ELEMENTS IN THE EVENT OF THE PRE-SET RELIABILITY, REGULAR LOAD AND BEARING CAPACITY DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Tamrazyan Ashot Georgievich

    2012-10-01

    Accurate and adequate description of external influences and of the bearing capacity of the structural material requires the employment of probability theory methods. In this regard, a characteristic that describes the probability of failure-free operation is required. The characteristic of reliability means that the maximum stress caused by the action of the load will not exceed the bearing capacity. In this paper, the author presents a solution to the problem of calculation of structures, namely, the identification of the reliability of pre-set design parameters, in particular, cross-sectional dimensions. If the load distribution pattern is available, the properties of the distribution functions make it possible to find the pattern of distribution of maximum stresses over the structure. Similarly, we can proceed to the design of structures of pre-set rigidity, reliability and stability in the case of regular load distribution. We consider a design element (a monolithic concrete slab) whose maximum stress S depends linearly on the load q; within a pre-set period of time, the number of load occurrences follows the Poisson law. The analysis demonstrates that the variability of the bearing capacity produces a stronger effect on the relative sizes of cross sections of a slab than the variability of loads. It is therefore particularly important to reduce the coefficient of variation of the load capacity. One of the methods contemplates the truncation of the bearing capacity distribution by pre-culling the construction material.
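The claim that bearing-capacity variability matters more than load variability can be illustrated with the classical load-resistance interference formula for independent normal variables. This is a simplified stand-in for the paper's model, and all numbers are hypothetical:

```python
import math

def failure_probability(mu_load, cov_load, mu_cap, cov_cap):
    """P(capacity < load) for independent normal load Q and capacity R:
    P = Phi(-beta), with reliability index
    beta = (mu_R - mu_Q) / sqrt(sigma_R^2 + sigma_Q^2)."""
    sigma_q = cov_load * mu_load
    sigma_r = cov_cap * mu_cap
    beta = (mu_cap - mu_load) / math.hypot(sigma_r, sigma_q)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal tail

base          = failure_probability(100.0, 0.10, 150.0, 0.10)
more_load_var = failure_probability(100.0, 0.15, 150.0, 0.10)
more_cap_var  = failure_probability(100.0, 0.10, 150.0, 0.15)
```

Because the mean capacity exceeds the mean load, the same increase in coefficient of variation produces a larger standard deviation on the capacity side, so `more_cap_var` exceeds `more_load_var`, in line with the record's conclusion.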

  9. Gravitational Quasinormal Modes of Regular Phantom Black Hole

    Directory of Open Access Journals (Sweden)

    Jin Li

    2017-01-01

    We investigate the gravitational quasinormal modes (QNMs) for a type of regular black hole (BH) known as a phantom BH, which is a static self-gravitating solution of a minimally coupled phantom scalar field with a potential. The studies are carried out for three different spacetimes: asymptotically flat, de Sitter (dS), and anti-de Sitter (AdS). In order to consider the standard odd parity and even parity of gravitational perturbations, the corresponding master equations are derived. The QNMs are discussed by evaluating the temporal evolution of the perturbation field which, in turn, provides direct information on the stability of the BH spacetime. It is found that in asymptotically flat, dS, and AdS spacetimes the gravitational perturbations have similar characteristics for both odd and even parities. The decay rate of the perturbation is strongly dependent on the scale parameter b, which measures the coupling strength between the phantom scalar field and gravity. Furthermore, through the analysis of Hawking radiation, it is shown that the thermodynamics of such a regular phantom BH is also influenced by b. The obtained results might shed some light on the quantum interpretation of QNM perturbation.

  10. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    Science.gov (United States)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

    Typical Tsallis statistical mechanics quantifiers, such as the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a numerable set of rational numbers of the q-line. These poles are dealt with via dimensional regularization resources. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.

  11. Effects of regular physical activity on anthropometric and functional parameters in young and old women

    Directory of Open Access Journals (Sweden)

    Sebastião Gobbi

    2001-12-01

    The aim of the present study was to verify the strength levels and the arm muscle cross-sectional area (AMB) of young and old women who practice physical activity regularly. Thirty female subjects were selected and distributed into two groups: young (G1) and old (G2). Maximal voluntary strength of the elbow flexor muscles was evaluated by the one-repetition maximum (1RM) test (“biceps curl”), and the AMB was estimated from the measures of arm circumference (CB) and triceps skinfold (DCTr), which were then used with the equation proposed by Frisancho (1984): AMB (cm²) = [(CB − π·DCTr)² / 4π] − 6.5, in both the dominant (MD) and non-dominant (MND) arms. Strength (kg) and AMB (cm²) were analyzed by ANOVA at a significance level of 5%. The strength level of G2 was significantly (p < 0.01) lower than that of G1, in both the MD and MND arms. Regarding AMB, the MND of G2 was larger than that of G1, which was not the case for the MD. From the analysis of the results, it was concluded that, despite aging, the regular practice of physical activity may prevent the loss of muscle mass (MM).
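The Frisancho arm muscle area formula used in the record, restored to its standard form with π, can be checked with hypothetical measurements:

```python
import math

def arm_muscle_area(cb_cm, dctr_cm):
    """Frisancho (1984) arm muscle area (cm^2) from arm circumference CB
    and triceps skinfold DCTr: AMB = (CB - pi*DCTr)^2 / (4*pi) - 6.5."""
    return (cb_cm - math.pi * dctr_cm) ** 2 / (4 * math.pi) - 6.5

# hypothetical measurements: CB = 30 cm, DCTr = 1.8 cm
area = arm_muscle_area(30.0, 1.8)
```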

  12. Dose domain regularization of MLC leaf patterns for highly complex IMRT plans

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Dan; Yu, Victoria Y.; Ruan, Dan; Cao, Minsong; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States); O’Connor, Daniel [Department of Mathematics, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-04-15

    Purpose: The advent of automated beam orientation and fluence optimization enables more complex intensity modulated radiation therapy (IMRT) planning using an increasing number of fields to exploit the expanded solution space. This has created a challenge in converting complex fluences to robust multileaf collimator (MLC) segments for delivery. A novel method to regularize the fluence map and simplify MLC segments is introduced to maximize delivery efficiency, accuracy, and plan quality. Methods: In this work, we implemented a novel approach to regularize optimized fluences in the dose domain. The treatment planning problem was formulated in an optimization framework to minimize the segmentation-induced dose distribution degradation subject to a total variation regularization to encourage piecewise smoothness in fluence maps. The optimization problem was solved using a first-order primal-dual algorithm known as the Chambolle-Pock algorithm. Plans for 2 GBM, 2 head and neck, and 2 lung patients were created using 20 automatically selected and optimized noncoplanar beams. The fluence was first regularized using Chambolle-Pock and then stratified into equal steps, and the MLC segments were calculated using a previously described level reducing method. Isolated apertures with sizes smaller than preset thresholds of 1–3 bixels, which are square units of an IMRT fluence map from MLC discretization, were removed from the MLC segments. Performance of the dose domain regularized (DDR) fluences was compared to direct stratification and direct MLC segmentation (DMS) of the fluences using level reduction without dose domain fluence regularization. Results: For all six cases, the DDR method increased the average planning target volume dose homogeneity (D95/D5) from 0.814 to 0.878 while maintaining equivalent dose to organs at risk (OARs). Regularized fluences were more robust to MLC sequencing, particularly to the stratification and small aperture removal. The maximum and

  13. Harmonic R-matrices for scattering amplitudes and spectral regularization

    Energy Technology Data Exchange (ETDEWEB)

    Ferro, Livia; Plefka, Jan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Lukowski, Tomasz [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Univ. Berlin (Germany). IRIS Adlershof; Meneghelli, Carlo [Hamburg Univ. (Germany). Fachbereich 11 - Mathematik; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Staudacher, Matthias [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany)

    2012-12-15

    Planar N=4 super Yang-Mills appears to be integrable. While this allows one to find the theory's exact spectrum, integrability has hitherto been of no direct use for scattering amplitudes. To remedy this, we deform all scattering amplitudes by a spectral parameter. The deformed tree-level four-point function turns out to be essentially the one-loop R-matrix of the integrable N=4 spin chain satisfying the Yang-Baxter equation. Deformed on-shell three-point functions yield novel three-leg R-matrices satisfying bootstrap equations. Finally, we supply initial evidence that the spectral parameter might find its use as a novel symmetry-respecting regulator replacing dimensional regularization. Its physical meaning is a local deformation of particle helicity, a fact which might be useful for a much larger class of non-integrable four-dimensional field theories.

  14. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.

    2014-01-19

    The objective of this paper is to discuss existence, uniqueness and regularity issues of minimizers of one-dimensional calculus of variations problems in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  15. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon

    2014-01-01

    The objective of this paper is to discuss existence, uniqueness and regularity issues of minimizers of one-dimensional calculus of variations problems in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  16. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Background Unlike alphabetic languages, Chinese uses a logographic script. However, for many characters the phonetic radical has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement) in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  17. Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2012-05-01

    Denoising is the problem of removing noise from an image. The most commonly studied case is with additive white Gaussian noise (AWGN), where the observed noisy image f is related to the underlying true image u by f = u + η, and η is at each point in space independently and identically distributed as a zero-mean Gaussian random variable. Total variation (TV) regularization is a technique that was originally developed for AWGN image denoising by Rudin, Osher, and Fatemi. The TV regularization technique has since been applied to a multitude of other imaging problems; see for example Chan and Shen's book. We focus here on the split Bregman algorithm of Goldstein and Osher for TV-regularized denoising.
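A compact sketch of the split Bregman iteration for the anisotropic ROF model, with the u-subproblem solved exactly in Fourier space under periodic boundary conditions. The anisotropic variant and all parameter values are illustrative choices, not those of the referenced implementation:

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundaries."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def shrink(v, t):
    """Soft-thresholding, the proximal map of the absolute value."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise(f, mu=20.0, lam=10.0, n_iter=60):
    """Split Bregman for the anisotropic ROF model
    min_u |grad u|_1 + (mu/2)||u - f||^2, periodic boundaries."""
    m, n = f.shape
    # eigenvalues of the negative periodic Laplacian
    K = (4.0
         - 2.0 * np.cos(2 * np.pi * np.fft.fftfreq(m))[:, None]
         - 2.0 * np.cos(2 * np.pi * np.fft.fftfreq(n))[None, :])
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    u = f.copy()
    for _ in range(n_iter):
        # u-subproblem: (mu - lam*Laplacian) u = mu*f - lam*div(d - b)
        rhs = mu * f - lam * div(dx - bx, dy - by)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu + lam * K)))
        ux, uy = grad(u)
        dx = shrink(ux + bx, 1.0 / lam)      # d-subproblem (shrinkage)
        dy = shrink(uy + by, 1.0 / lam)
        bx, by = bx + ux - dx, by + uy - dy  # Bregman variable updates
    return u
```

The splitting makes each subproblem cheap: the u-update is a single linear solve (diagonal in Fourier space), and the d-update is a pointwise shrinkage.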

  18. Reconstruction of constitutive parameters in isotropic linear elasticity from noisy full-field measurements

    International Nuclear Information System (INIS)

    Bal, Guillaume; Bellis, Cédric; Imperiale, Sébastien; Monard, François

    2014-01-01

    Within the framework of linear elasticity we assume the availability of internal full-field measurements of the continuum deformations of a non-homogeneous isotropic solid. The aim is the quantitative reconstruction of the associated moduli. A simple gradient system for the sought constitutive parameters is derived algebraically from the momentum equation, whose coefficients are expressed in terms of the measured displacement fields and their spatial derivatives. Direct integration of this system is discussed to finally demonstrate the inexpediency of such an approach when dealing with noisy data. Upon using polluted measurements, an alternative variational formulation is deployed to invert for the physical parameters. Analysis of this latter inversion procedure provides existence and uniqueness results while the reconstruction stability with respect to the measurements is investigated. As the inversion procedure requires differentiating the measurements twice, a numerical differentiation scheme based on an ad hoc regularization then allows an optimally stable reconstruction of the sought moduli. Numerical results are included to illustrate and assess the performance of the overall approach. (paper)

  19. More on zeta-function regularization of high-temperature expansions

    International Nuclear Information System (INIS)

    Actor, A.

    1987-01-01

    A recent paper using the Riemann ζ-function to regularize the (divergent) coefficients occurring in the high-temperature expansions of one-loop thermodynamic potentials is extended. This method proves to be a powerful tool for converting Dirichlet-type series ∑_m a_m(x_i)/m^s into power series in the dimensionless parameters x_i. The coefficients occurring in the power series are (proportional to) ζ-functions evaluated away from their poles; this is where the regularization occurs. High-temperature expansions are just one example of this highly nontrivial rearrangement of Dirichlet series into power series form. We discuss in considerable detail series in which a_m(x_i) is a product of trigonometric, algebraic and Bessel function factors. The ζ-function method is carefully explained, and a large number of new formulae are provided. The means to generalize these formulae are also provided. Previous results on thermodynamic potentials are generalized to include a nonzero constant term in the gauge potential (time component) which can be used to probe the electric sector of temperature gauge theories. (author)

  20. Exploring natural variation of photosynthetic, primary metabolism and growth parameters in a large panel of Capsicum chinense accessions.

    Science.gov (United States)

    Rosado-Souza, Laise; Scossa, Federico; Chaves, Izabel S; Kleessen, Sabrina; Salvador, Luiz F D; Milagre, Jocimar C; Finger, Fernando; Bhering, Leonardo L; Sulpice, Ronan; Araújo, Wagner L; Nikoloski, Zoran; Fernie, Alisdair R; Nunes-Nesi, Adriano

    2015-09-01

    Collectively, the results presented improve upon the utility of an important genetic resource and attest to a complex genetic basis for differences in both leaf metabolism and fruit morphology between natural populations. Diversity of accessions within the same species provides an alternative method to identify physiological and metabolic traits that have large effects on growth regulation, biomass and fruit production. Here, we investigated physiological and metabolic traits as well as parameters related to plant growth and fruit production of 49 phenotypically diverse pepper accessions of Capsicum chinense grown ex situ under controlled conditions. Although single-trait analysis identified up to seven distinct groups of accessions, working with the whole data set by multivariate analyses allowed the separation of the 49 accessions in three clusters. Using all 23 measured parameters and data from the geographic origin for these accessions, positive correlations between the combined phenotypes and geographic origin were observed, supporting a robust pattern of isolation-by-distance. In addition, we found that fruit set was positively correlated with photosynthesis-related parameters, which, however, do not explain alone the differences in accession susceptibility to fruit abortion. Our results demonstrated that, although the accessions belong to the same species, they exhibit considerable natural intraspecific variation with respect to physiological and metabolic parameters, presenting diverse adaptation mechanisms and being a highly interesting source of information for plant breeders. This study also represents the first study combining photosynthetic, primary metabolism and growth parameters for Capsicum to date.

  1. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examine the role of several cognitive functions including certain dimensions of executive functions (planning, inhibition, and shifting), binding, short-term memory, and retrospective episodic memory to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  2. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  3. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc, are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by a used potential function. Many potential functions are suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist. Most are only applicable to particular choices of potential functions, however. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the used potential function. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
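The Huber potential singled out above penalizes small differences quadratically and large ones only linearly, which smooths noise while sparing edges. A minimal 1D denoising sketch minimized by plain gradient descent, not the spectral gradient solvers compared in the paper; parameters are illustrative:

```python
import numpy as np

def huber_grad(t, delta):
    """Derivative of the Huber potential: equal to t for |t| <= delta,
    saturating at magnitude delta beyond, so large jumps (edges) are
    pulled on less than by a pure quadratic."""
    return np.clip(t, -delta, delta)

def huber_denoise_1d(f, alpha=1.0, delta=0.05, step=0.2, n_iter=500):
    """Gradient descent on 0.5*||u - f||^2 + alpha * sum_i huber(u[i+1] - u[i])."""
    u = f.copy()
    for _ in range(n_iter):
        g = huber_grad(np.diff(u), delta)    # rho'(u[i+1] - u[i])
        reg = np.zeros_like(u)
        reg[:-1] -= g                        # gradient contribution via u[i]
        reg[1:] += g                         # gradient contribution via u[i+1]
        u = u - step * ((u - f) + alpha * reg)
    return u
```

With `step` below 2/(1 + 4*alpha) the iteration is stable; the saturation level `alpha * delta` caps how strongly any sample is pulled toward its neighbors.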

  4. Noninvasive technique for measurement of heartbeat regularity in zebrafish (Danio rerio embryos

    Directory of Open Access Journals (Sweden)

    Cheng Shuk

    2009-02-01

    Abstract. Background: Zebrafish (Danio rerio), due to its optical accessibility and similarity to humans, has emerged as a model organism for cardiac research. Although various methods have been developed to assess cardiac function in zebrafish embryos, a method to assess heartbeat regularity in blood vessels has been lacking. Heartbeat regularity is an important parameter of cardiac function and is associated with cardiotoxicity in humans. Using a stereomicroscope and a digital video camera, we have developed a simple, noninvasive method to measure the heart rate and heartbeat regularity in peripheral blood vessels. Anesthetized embryos were mounted laterally in agarose on a slide, the caudal blood circulation of the zebrafish embryo was video-recorded under the stereomicroscope, and the data were analyzed by custom-made software. The heart rate was determined by digital motion analysis and power spectral analysis through extraction of frequency characteristics of the cardiac rhythm. The heartbeat regularity, defined as the rhythmicity index, was determined by short-time Fourier transform analysis. Results: The heart rate measured by this noninvasive method in zebrafish embryos at 52 hours post-fertilization was similar to that determined by direct visual counting of ventricle beating (p > 0.05). In addition, the method was validated with a known cardiotoxic drug, terfenadine, which affects heartbeat regularity in humans and induces bradycardia and atrioventricular blockage in zebrafish. A significant decrease in heart rate was found by our method in treated embryos. Conclusion: The data support and validate this rapid, simple, noninvasive method, which includes video image analysis and frequency analysis. This method is capable of measuring the heart rate and heartbeat regularity simultaneously via the analysis of caudal blood flow in zebrafish embryos. With the advantages of rapid sample preparation procedures, automatic image analysis and data analysis, this
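The power-spectral step of the method above can be sketched with a simulated intensity trace: the dominant frequency of the caudal blood-flow signal gives the heart rate. The frame rate, recording length, and simulated 2.5 Hz beat are illustrative assumptions, not values from the paper.

```python
import numpy as np

fs = 30.0                      # frames per second of the hypothetical video
t = np.arange(0, 20, 1 / fs)   # 20 s recording
hr_hz = 2.5                    # simulated heartbeat: 2.5 Hz = 150 bpm
# simulated pixel-intensity trace of caudal blood flow, plus noise
signal = (np.sin(2 * np.pi * hr_hz * t)
          + 0.3 * np.random.default_rng(1).standard_normal(t.size))

# power spectrum; restrict to a physiologically plausible band (1-5 Hz)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
band = (freqs >= 1.0) & (freqs <= 5.0)
dominant = freqs[band][np.argmax(power[band])]
bpm = 60.0 * dominant          # heart rate in beats per minute
```

A short-time Fourier transform would apply the same spectral estimate on sliding windows, so that variation of the dominant peak across windows quantifies heartbeat (ir)regularity.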

  5. Can Simple Soil Parameters Explain Field-Scale Variations in Glyphosate-, Bromoxyniloctanoate-, Diflufenican-, and Bentazone Mineralization?

    DEFF Research Database (Denmark)

    Norgaard, Trine; de Jonge, Lis Wollesen; Møldrup, Per

    2015-01-01

    The large spatial heterogeneity in soil physico-chemical and microbial parameters challenges our ability to predict and model pesticide leaching from agricultural land. Microbial mineralization of pesticides is an important process with respect to pesticide leaching, since mineralization is the major process for the complete degradation of pesticides without generation of metabolites. The aim of our study was to determine field-scale variation in the potential for mineralization of the herbicides glyphosate, bromoxyniloctanoate, diflufenican, and bentazone and to investigate whether … The mineralization potentials for glyphosate and bentazone were compared with 9 years' leaching data from two horizontal wells 3.5 m below the field. The field-scale leaching patterns, however, could not be explained by the pesticide mineralization data. Instead, field-scale pesticide leaching may have been governed …

  6. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.; Larsen, Jan

    2006-01-01

    A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior. Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore …

  7. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
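The semiconvergence that makes Krylov iteration counts act as a regularization parameter can be illustrated with plain conjugate gradients on a symmetric ill-posed test problem. This is a generic sketch (CG on a Hilbert matrix with a noisy right-hand side), not the MINRES/MR-II implementation studied in the paper.

```python
import numpy as np

n = 20
# Hilbert matrix: symmetric and severely ill-conditioned
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

def cg_errors(A, b, x_true, iters):
    """Plain CG, tracking the error per iteration; stopping early lets the
    Krylov projection itself act as the regularizer."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    errs = []
    for _ in range(iters):
        Ap = A @ p
        pAp = p @ Ap
        if not np.isfinite(pAp) or pAp <= 0.0:   # numerical breakdown guard
            break
        alpha = (r @ r) / pAp
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        errs.append(np.linalg.norm(x - x_true))
    return np.array(errs)

errs = cg_errors(A, b, x_true, 30)
best = errs.min()                                # error dips, then grows again
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # un-regularized solve amplifies noise
err_naive = np.linalg.norm(x_naive - x_true)
```

The error history shows the classic dip-then-grow pattern: the best early iterate is far more accurate than the naive least-squares solution, which is dominated by amplified noise.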

  8. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  9. Variation in soil physical, chemical and microbial parameters under different land uses in bagrot valley, gilgit, pakistan

    International Nuclear Information System (INIS)

    Ali, S.

    2017-01-01

    Soil degradation due to unsustainable land use is a global problem and the biggest challenge for sustainability in mountain areas, due to its ecological and socio-economic impacts. The study aims to evaluate the variation in the physical, chemical and microbial parameters of soil across various land uses in the Bagrot valley, Central Karakoram National Park (CKNP), Gilgit-Baltistan. Soil samples from 0-20 cm were collected from three land uses: arable land, pasture, and adjacently located forest. The variables investigated were soil bulk density, total porosity, saturation percentage, sand, silt, clay, pH, electrical conductivity, CaCO/sub 3/, organic matter, TN, available P, K, Fe, Mn, Cu and Zn, and microbial parameters (16S rRNA and ITS copy numbers and fungal-to-bacterial ratio). A significant variation in all parameters was found across the land uses (ANOVA, p < 0.01). The highest bulk density, sand, pH, EC and CaCO/sub 3/ were found in arable land, with the lowest values in forest. In contrast, soil under forest showed a higher total porosity, percent saturation, clay, OM, macro- and micronutrients, microbial abundance and fungal-to-bacterial ratio than the other land uses. The differences in soil parameters across the land uses indicated detrimental impacts of agricultural activities on soil health. Soil pH and organic matter are the main controlling factors for microbial indicators as well as physical and chemical parameters. The results suggest that restoration of natural vegetation in degraded land and a decrease in the intensity of land use could improve soil properties in the study area, as well as in other similar mountainous regions. (author)

  10. Short-time variations of the ground water level

    International Nuclear Information System (INIS)

    Nilsson, Lars Y.

    1977-09-01

    Investigations have demonstrated that the ground water level of aquifers in the Swedish bedrock shows short-time variations without changes in their water content. The ground water level is affected by, among other things: regular tidal movements occurring in the "solid" crust of the earth; variations in the atmospheric pressure; and strong earthquakes occurring in different parts of the world. These effects prove that the systems of fissures in the bedrock are not stable and that the ground water flow is influenced by both water- and air-filled fissures.

  11. SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction

    International Nuclear Information System (INIS)

    Lu, W; Yan, H; Gu, X; Jiang, S; Jia, X; Bai, T; Zhou, L

    2014-01-01

    Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low dose cone-beam CT (CBCT) which controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selection. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific: data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under optimally selected parameters; the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate optimal parameters are specific to both the task type and dose level, providing guidance for selecting parameters in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01)
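The CNR figure of merit used above is simple to compute from two ROIs. The sketch below uses synthetic pixel samples where a heavier regularization weight is assumed (for illustration only) to suppress noise while preserving the mean contrast, which is the trade-off the abstract describes.

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()

rng = np.random.default_rng(3)
# hypothetical pixel values: same contrast, different noise levels
weak_reg = (rng.normal(100.0, 10.0, 1000), rng.normal(80.0, 10.0, 1000))
strong_reg = (rng.normal(100.0, 2.0, 1000), rng.normal(80.0, 2.0, 1000))

cnr_weak = cnr(*weak_reg)      # noisy reconstruction: low CNR
cnr_strong = cnr(*strong_reg)  # smoothed reconstruction: high CNR
```

In practice the stronger regularization would also blur the point structure and lower the MTF, which is why the CNR-optimal and resolution-optimal parameters differ.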

  12. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1). This paper develops multiple sub-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications of sparse image recovery and obtain good results in comparison with related work.
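The iteration skeleton behind such thresholding algorithms can be shown with the L1 case, where the thresholding operator is the familiar soft shrinkage; the paper replaces this operator with analytic L1/2 and L2/3 thresholding rules inside the same gradient-then-threshold loop. The problem sizes and parameters below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # L1 shrinkage; the non-convex Lp variants swap in a different rule here
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, iters):
    """Iterative soft thresholding for min 0.5||Ax - b||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 200)) / np.sqrt(50)   # compressed-sensing matrix
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]            # 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.01, iters=2000)
```

A sub-dictionary scheme in the spirit of SAITA would split the coefficient vector into groups and apply a separately weighted threshold to each group inside the same loop.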

  13. Finite Element Quadrature of Regularized Discontinuous and Singular Level Set Functions in 3D Problems

    Directory of Open Access Journals (Sweden)

    Nicola Ponara

    2012-11-01

    Regularized Heaviside and Dirac delta functions are used in several fields of computational physics and mechanics; hence the issue of the quadrature of integrals of discontinuous and singular functions arises. In order to avoid ad-hoc quadrature procedures, regularization of the discontinuous and the singular fields is often carried out. In particular, weight functions of the signed distance with respect to the discontinuity interface are exploited. Tornberg and Engquist (Journal of Scientific Computing, 2003, 19: 527-552) proved that the use of compact support weight functions is not suitable because it leads to errors that do not vanish for decreasing mesh size. They proposed the adoption of non-compact support weight functions. In the present contribution, the relationship between the Fourier transform of the weight functions and the accuracy of the regularization procedure is exploited. The proposed regularized approach was implemented in the eXtended Finite Element Method. As a three-dimensional example, we study a slender solid characterized by an inclined interface across which the displacement is discontinuous. The accuracy is evaluated for varying positions of the discontinuity interface with respect to the underlying mesh. A procedure for the choice of the regularization parameters is proposed.
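A minimal 1D sketch of the idea: a non-compact-support regularized Heaviside (here a tanh profile, one common choice, not necessarily the paper's) and its matching Dirac delta, whose quadrature stays accurate regardless of where the interface sits relative to the grid.

```python
import numpy as np

def heaviside_reg(x, eps):
    """Smoothed Heaviside with non-compact (tanh) support; eps is the
    regularization width."""
    return 0.5 * (1.0 + np.tanh(x / eps))

def delta_reg(x, eps):
    """Matching regularized Dirac delta: the derivative of heaviside_reg."""
    return 0.5 / (eps * np.cosh(x / eps) ** 2)

xs = np.linspace(-1.0, 1.0, 2001)
dx = xs[1] - xs[0]
integrals = []
for shift in (0.0, 0.013, 0.027):   # interface offset relative to the mesh
    f = delta_reg(xs - shift, 0.05)
    # trapezoid rule: the integral of the delta should be ~1 for every shift
    integrals.append(float(np.sum(0.5 * (f[1:] + f[:-1])) * dx))
```

With a compact-support weight the same test would show an O(1) error that oscillates with the shift instead of vanishing under mesh refinement.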

  14. Elastic-plastic stresses in a thin rotating disk with shaft having density variation parameter under steady-state temperature

    Directory of Open Access Journals (Sweden)

    Pankaj Thakur

    2014-01-01

    Steady thermal stresses in a rotating disc with shaft having a density variation parameter subjected to thermal load have been derived by using Seth's transition theory. Neither the yield criterion nor the associated flow rule is assumed here. Results are depicted graphically. It has been seen that a compressible material requires a higher percentage increase in angular speed to become fully plastic as compared to a rotating disc made of incompressible material. Circumferential stresses are maximal at the outer surface of the rotating disc. The introduction of the thermal effect decreases the values of the radial and circumferential stresses at the inner and outer surfaces in the fully plastic state.

  15. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  16. Spatial and temporal variations of small-scale plasma turbulence parameters in the equatorial electrojet: HF and VHF radar observational results

    Directory of Open Access Journals (Sweden)

    G. Manju

    2005-06-01

    The spatial and temporal variations of various parameters associated with plasma wave turbulence in the equatorial electrojet (EEJ) at the magnetic equatorial location of Trivandrum (8.5° N, 77° E; dip 0.5° N) are studied for the first time, using co-located HF (18 MHz) and VHF (54.95 MHz) coherent backscatter radar observations (daytime) in the altitude region of 95-110 km, mostly on magnetically quiet days. The derived turbulence parameters are the mean electron density irregularity strength (δn/n), the anomalous electron collision frequency (νe*) and the corrected east-west electron drift velocity (Vey). The validity of the derived parameters is confirmed using radar data at two different frequencies and comparing with in-situ measurements. The behaviour of δn/n in relation to the backscattered power during weak and strong EEJ conditions is also examined to understand the growth and evolution of turbulence in the electrojet.


  18. A robust probabilistic approach for variational inversion in shallow water acoustic tomography

    International Nuclear Information System (INIS)

    Berrada, M; Badran, F; Crépon, M; Thiria, S; Hermand, J-P

    2009-01-01

    This paper presents a variational methodology for inverting shallow water acoustic tomography (SWAT) measurements. The aim is to determine the vertical profile of the sound speed c(z), knowing the acoustic pressures generated by a frequency source and collected by a sparse vertical hydrophone array (VRA). A variational approach that minimizes a cost function measuring the distance between the observations and their modeled equivalents is used. A regularization term, in the form of a quadratic restoring term to a background, is also added. To avoid inverting the variance-covariance matrix associated with this weighted quadratic background term, this work proposes to model the sound speed vector using probabilistic principal component analysis (PPCA). The PPCA introduces an optimal reduced number of non-correlated latent variables η, which determine a new control vector and a new regularization term, expressed as ηᵀη. The PPCA represents a rigorous formalism for the use of a priori information and allows an efficient implementation of the variational inverse method.
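The latent-variable reparameterization can be sketched with plain PCA of a synthetic profile ensemble: writing c = mean + W η turns the covariance-weighted background penalty into a simple ηᵀη term. The ensemble construction, depth grid, and mode shapes below are illustrative assumptions, not the paper's ocean model.

```python
import numpy as np

rng = np.random.default_rng(5)
depth = np.linspace(0.0, 100.0, 60)
# synthetic ensemble of sound-speed profiles: background plus two modes
mean_c = 1500.0 - 0.02 * depth
modes = np.stack([np.sin(np.pi * depth / 100.0),
                  np.cos(np.pi * depth / 200.0)])
ensemble = mean_c + (rng.standard_normal((200, 2)) * [3.0, 1.5]) @ modes

# PCA of the ensemble: c ~= mean + eta @ W with a few latent variables eta
anomaly = ensemble - ensemble.mean(axis=0)
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
k = 2
W = (s[:k, None] * Vt[:k]) / np.sqrt(len(ensemble))  # scaled principal modes
ratio = float((s[:k] ** 2).sum() / (s ** 2).sum())   # variance captured

def reconstruct(eta):
    return ensemble.mean(axis=0) + eta @ W

# the latent prior replaces the covariance-weighted background term
eta = np.array([1.0, -0.5])
c_candidate = reconstruct(eta)
penalty = float(eta @ eta)     # the new regularization term, eta^T eta
```

Because the latent variables are uncorrelated with unit variance by construction, no covariance matrix has to be inverted when evaluating the background penalty.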

  19. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  20. Regularities of radiorace formation in yeasts. Comm.8. The role played by heterozygosis of diploid yeasts in radiorace formation

    International Nuclear Information System (INIS)

    Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Kondrat'eva, V.I.

    1976-01-01

    Two strains of diploid yeasts, namely the highly homozygous 5x3B Saccharomyces cerevisiae and the naturally heterozygous Mergi 139-B Saccharomyces ellipsoideus, have been used to study the regularities of formation of new races under the action of ionizing radiation. It has been shown that the degree of heterozygosis of the two strains does not substantially affect either the quantitative regularities of radiorace formation or the qualitative variations in the newly formed races. The differences between the strains in yielding new races after γ-irradiation with doses similar in biological effectiveness may be explained by the different extrapolation numbers of their survival curves.

  1. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared among the groups the subjective values of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  2. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    Science.gov (United States)

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier transform traction cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the maximum-curvature point in the traction versus γ plot as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblast (MEF) cells and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
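The maximum-curvature criterion for γ can be sketched on a generic L2-regularized inverse problem: sweep γ, record the solution magnitude, and pick the corner of the log-log curve. The Hilbert-matrix forward operator and noise level are stand-in assumptions; the paper applies the same idea to the traction-versus-γ curve of the elasticity kernel.

```python
import numpy as np

n = 10
# ill-conditioned stand-in forward operator with a known solution
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(6)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

gammas = np.logspace(-12, 2, 57)
norms, errs = [], []
for g in gammas:
    # L2 (Tikhonov) regularized solution for this gamma
    x_g = np.linalg.solve(A.T @ A + g * np.eye(n), A.T @ b)
    norms.append(np.linalg.norm(x_g))
    errs.append(np.linalg.norm(x_g - x_true))
norms, errs = np.array(norms), np.array(errs)

# maximum-curvature point of the log-log (gamma, solution magnitude) curve
lg, ln = np.log10(gammas), np.log10(norms)
d1 = np.gradient(ln, lg)
d2 = np.gradient(d1, lg)
curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
i_star = 1 + int(np.argmax(curvature[1:-1]))   # ignore the endpoints
gamma_star = gammas[i_star]
```

Too small a γ lets noise dominate the reconstruction; too large a γ over-smooths it, so the corner of the curve is a natural compromise.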

  3. A Semismooth Newton Method for Nonlinear Parameter Identification Problems with Impulsive Noise

    KAUST Repository

    Clason, Christian; Jin, Bangti

    2012-01-01

    -order condition. The convergence of the solution to the approximating problem as the smoothing parameter goes to zero is shown. A strategy for adaptively selecting the regularization parameter based on a balancing principle is suggested. The efficiency

  4. Optimization of process parameter variations on leakage current in silicon-on-insulator vertical double gate MOSFET device

    Directory of Open Access Journals (Sweden)

    K.E. Kaharudin

    2015-12-01

    This paper presents a study of optimizing input process parameters for leakage current (IOFF) in a silicon-on-insulator (SOI) vertical double-gate metal-oxide-semiconductor field-effect transistor (DG-MOSFET) by using the L36 Taguchi method. The performance of the SOI vertical DG-MOSFET device is evaluated in terms of its lowest leakage current (IOFF) value. An orthogonal array, main effects, signal-to-noise ratio (SNR) and analysis of variance (ANOVA) are utilized in order to analyze the effect of input process parameter variation on leakage current (IOFF). Based on the results, the minimum leakage current (IOFF) of the SOI vertical DG-MOSFET is observed to be 0.009 nA/µm (9 pA/µm) while keeping the drive current (ION) value at 434 µA/µm. Both the drive current (ION) and leakage current (IOFF) values yield a high ION/IOFF ratio (48.22 × 10^6) for low power consumption applications. Meanwhile, polysilicon doping tilt angle and polysilicon doping energy are recognized as the most dominant factors, with contributing factor effect percentages of 59% and 25%, respectively.
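The Taguchi analysis above ranks factor levels by their signal-to-noise ratio; for a response that should be minimized, such as leakage current, the smaller-the-better SNR applies. The replicate values below are hypothetical, chosen only to show the computation.

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    SN = -10 * log10(mean(y^2)); a larger SN means a smaller response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# hypothetical leakage currents (nA/um) for two factor levels, two replicates
level_a = [0.009, 0.011]    # low-leakage level
level_b = [0.500, 0.700]    # high-leakage level
sn_a = sn_smaller_better(level_a)
sn_b = sn_smaller_better(level_b)
```

In a full L36 study the SNR is averaged per factor level across the orthogonal array, and ANOVA then apportions the contribution percentages quoted in the abstract.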

  5. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization term. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
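The projection idea can be sketched with projected gradient descent: instead of adding a penalty to the objective, each iterate is projected onto a convex set of admissible solutions. The nonnegativity constraint and problem sizes below are illustrative assumptions, far simpler than the fluid-motion estimation in the paper.

```python
import numpy as np

def projected_gradient(A, b, project, step, iters=500):
    """Gradient descent on 0.5||Ax - b||^2 where every iterate is projected
    onto a convex set of regularized candidates, replacing a penalty term."""
    x = project(np.zeros(A.shape[1]))
    for _ in range(iters):
        x = project(x - step * (A.T @ (A @ x - b)))
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((30, 15))
x_true = np.abs(rng.standard_normal(15))        # admissible: nonnegative
b = A @ x_true + 0.01 * rng.standard_normal(30)

project_nonneg = lambda x: np.maximum(x, 0.0)   # projection onto the convex set
step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step: 1 / ||A||^2
x_hat = projected_gradient(A, b, project_nonneg, step)
```

Because the projection is applied once per iterate, the per-iteration cost stays close to that of the unregularized problem, which is where the reported speed-up comes from.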

  6. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  7. Influence of whitening and regular dentifrices on orthodontic clear ligature color stability.

    Science.gov (United States)

    Oliveira, Adauê S; Kaizer, Marina R; Salgado, Vinícius E; Soldati, Dener C; Silva, Roberta C; Moraes, Rafael R

    2015-01-01

    This study evaluated the effect of brushing orthodontic clear ligatures with a whitening dentifrice containing a blue pigment (Close Up White Now, Unilever, London, UK) on their color stability when exposed to a staining agent. Ligatures from 3M Unitek (Monrovia, CA, USA) and Morelli (Sorocaba, SP, Brazil) were tested. Baseline color measurements were performed; nonstained groups (control) were stored in distilled water, whereas test groups were exposed for 1 hour daily to red wine. Specimens were brushed daily using a regular or a whitening dentifrice. Color measurements were repeated after 7, 14, 21, and 28 days using a spectrophotometer based on the CIE L*a*b* system. Decreased luminosity (CIE L*), increased red discoloration (CIE a* axis), and increased yellow discoloration (CIE b* axis) were generally observed for ligatures exposed to the staining agent. Color variation was generally lower in specimens brushed with the regular dentifrice, but ligatures brushed with the whitening dentifrice were generally less red and less yellow than those brushed with the regular dentifrice. The whitening dentifrice led to a blue discoloration trend, with visually detectable differences particularly apparent according to storage condition and ligature brand. The whitening dentifrice containing blue pigment did not improve the ligature color stability, but it decreased yellow discoloration and increased blue coloration. The use of a whitening dentifrice containing blue pigment during orthodontic treatment might decrease the yellow discoloration of elastic ligatures. © 2015 Wiley Periodicals, Inc.
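The color variation reported above is the Euclidean distance in CIE L*a*b* space (the CIE76 ΔE). The L*a*b* triplets below are hypothetical, chosen only so that the sketch mirrors the reported pattern: the regular dentifrice gives a smaller overall ΔE, while the whitening dentifrice ends up less yellow (lower b*).

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference between two CIE L*a*b* measurements."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

baseline = (70.0, 2.0, 10.0)   # hypothetical ligature color at day 0
regular = (68.0, 4.0, 13.0)    # stained, brushed with regular dentifrice
whitening = (65.0, 2.0, 5.0)   # whitening dentifrice: less red/yellow, bluer

dE_regular = delta_e(baseline, regular)
dE_whitening = delta_e(baseline, whitening)
```

The example shows why overall ΔE and perceived yellowing can move in opposite directions: the blue pigment lowers b* but shifts L* and the hue, increasing the total distance.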

  8. Influence of the volume ratio of solid phase on carrying capacity of regular porous structure

    Directory of Open Access Journals (Sweden)

    Monkova Katarina

    2017-01-01

    Direct metal laser sintering is a widespread technology today. Its main advantage is the ability to produce parts with very complex geometry, which could be produced only with great difficulty by classical conventional methods. A special category of such components are parts with a porous structure, which can give the product an extraordinary combination of properties. The article deals with some aspects that influence the manufacturing of regular porous structures, in spite of the fact that the input technological parameters for the various samples were the same. The main goal of the presented research has been to investigate the influence of the volume ratio of the solid phase on the carrying capacity of a regular porous structure. The tests indicated that a unit of regular porous structure with a lower volume ratio is able to carry a greater load to failure than a unit with a higher volume ratio.

  9. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    Science.gov (United States)

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes, and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model achieves 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
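The single-target single-hit model parameterizes optical density as a saturating exponential of dose. The sketch below synthesizes a calibration curve with hypothetical parameter values and recovers the slope by a log-linear fit, assuming background and saturation are fixed (as when few dose points are available); the numbers are illustrative, not the paper's fitted values.

```python
import numpy as np

def film_response(dose, background, saturation, slope):
    """Single-target single-hit model: OD = bg + sat * (1 - exp(-slope * D))."""
    return background + saturation * (1.0 - np.exp(-slope * dose))

# synthesize a calibration curve with hypothetical parameter values
bg, sat, k = 0.2, 3.0, 0.008                 # background OD, saturation OD, slope (1/cGy)
dose = np.array([16.0, 32.0, 48.0, 64.0, 96.0, 128.0])   # cGy
od = film_response(dose, bg, sat, k)

# with bg and sat held fixed, the model linearizes and the slope follows
# from a least-squares fit through the origin
y = -np.log(1.0 - (od - bg) / sat)
k_fit = float((dose @ y) / (dose @ dose))
```

Fitting only the slope from one to three dose points, while reusing background and saturation from a reference curve, is what makes the reduced-point calibration schemes workable.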

  10. Solar ultraviolet irradiance variations: a review

    International Nuclear Information System (INIS)

    Lean, J.

    1987-01-01

    Despite the geophysical importance of solar ultraviolet radiation, specific aspects of its temporal variations have not yet been adequately determined experimentally, nor are the mechanisms for the variability completely understood. Satellite observations have verified the reality of solar ultraviolet irradiance variations over time scales of days and months, and model calculations have confirmed the association of these short-term variations with the evolution and rotation of regions of enhanced magnetic activity on the solar disc. However, neither rocket nor satellite measurements have yet been made with sufficient accuracy and regularity to establish unequivocally the nature of the variability over the longer time of the 11-year solar cycle. The comparative importance for the long-term variations of local regions of enhanced magnetic activity and global scale activity perturbations is still being investigated. Solar ultraviolet irradiance variations over both short and long time scales are reviewed, with emphasis on their connection to solar magnetic activity. Correlations with ground-based measures of solar variability are examined because of the importance of the ground-based observations as historical proxies of ultraviolet irradiance variations. Current problems in understanding solar ultraviolet irradiance variations are discussed, and the measurements planned for solar cycle 22, which may resolve these problems, are briefly described.

  11. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    Full Text Available A Block Matrix-based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JTJ) has been partitioned into several sub-block matrices and the highest eigenvalue of each sub-block matrix has been chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with circular inhomogeneity and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170, J Electr Bioimp, vol. 2, pp. 33-47, 2011
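
    A minimal sketch of the block-partitioning idea described above, assuming equal-sized diagonal sub-blocks of JTJ and a stand-in random Jacobian and residual; the actual MoBIIR forward model, mesh, and node ordering are not reproduced here.

    ```python
    import numpy as np

    def bmmr_regularization(J, n_blocks):
        """Per-block regularization: the largest eigenvalue of each diagonal
        sub-block of JtJ regularizes the nodes belonging to that sub-block."""
        JtJ = J.T @ J
        n = JtJ.shape[0]
        bounds = np.linspace(0, n, n_blocks + 1).astype(int)
        lam = np.empty(n)
        for b0, b1 in zip(bounds[:-1], bounds[1:]):
            sub = JtJ[b0:b1, b0:b1]
            lam[b0:b1] = np.linalg.eigvalsh(sub).max()
        return JtJ, lam

    rng = np.random.default_rng(1)
    J = rng.normal(size=(40, 12))   # stand-in sensitivity (Jacobian) matrix
    r = rng.normal(size=40)         # stand-in boundary-data residual
    JtJ, lam = bmmr_regularization(J, n_blocks=3)
    # One regularized Gauss-Newton-style conductivity update.
    update = np.linalg.solve(JtJ + np.diag(lam), J.T @ r)
    ```

    Replacing `np.diag(lam)` with a single scalar multiple of the identity recovers the single-parameter (STR-like) case the paper compares against.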

  12. Stark width regularities within spectral series of the lithium isoelectronic sequence

    Science.gov (United States)

    Tapalaga, Irinel; Trklja, Nora; Dojčinović, Ivan P.; Purić, Jagoš

    2018-03-01

    Stark width regularities within spectral series of the lithium isoelectronic sequence have been studied in an approach that includes both neutrals and ions. The influence of environmental conditions and certain atomic parameters on the Stark widths of spectral lines has been investigated. This study gives a simple model for the calculation of Stark broadening data for spectral lines within the lithium isoelectronic sequence. The proposed model requires fewer parameters than any other model. The obtained relations were used for predictions of Stark widths for transitions that have not yet been measured or calculated. In the framework of the present research, three algorithms for fast data processing have been made and they enable quality control and provide verification of the theoretically calculated results.

  13. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
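
    The L2 "diversification pressure" can be illustrated on the simpler minimum-variance problem with a budget constraint. The paper uses expected shortfall as the risk measure; the closed-form ridge solution below is a hedged stand-in, and all numbers are synthetic.

    ```python
    import numpy as np

    def regularized_min_variance(cov, eta):
        """Minimum-variance weights under the budget constraint sum(w) = 1,
        with an L2 penalty eta * ||w||^2 acting as diversification pressure."""
        n = cov.shape[0]
        w = np.linalg.solve(cov + eta * np.eye(n), np.ones(n))
        return w / w.sum()

    rng = np.random.default_rng(2)
    returns = rng.normal(size=(250, 10))     # 250 return observations, 10 assets
    cov = np.cov(returns, rowvar=False)      # noisy sample covariance
    w_raw = regularized_min_variance(cov, eta=0.0)   # unregularized
    w_reg = regularized_min_variance(cov, eta=10.0)  # strong L2 penalty
    # The penalty pulls the weights toward the diversified 1/n portfolio.
    ```

    Larger `eta` trades some in-sample optimality for stability under sample fluctuations, which is the trade-off the abstract describes.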

  15. Trace formulae and spectral statistics for discrete Laplacians on regular graphs (I)

    Energy Technology Data Exchange (ETDEWEB)

    Oren, Idan; Godel, Amit; Smilansky, Uzy [Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100 (Israel)], E-mail: idan.oren@weizmann.ac.il, E-mail: amit.godel@weizmann.ac.il, E-mail: uzy.smilansky@weizmann.ac.il

    2009-10-16

    Trace formulae for d-regular graphs are derived and used to express the spectral density in terms of the periodic walks on the graphs under consideration. The trace formulae depend on a parameter w which can be tuned continuously to assign different weights to different periodic orbit contributions. At the special value w = 1, the only periodic orbits which contribute are the non-back-scattering orbits, and the smooth part in the trace formula coincides with the Kesten-McKay expression. As w deviates from unity, non-vanishing weights are assigned to the periodic walks with backscatter, and the smooth part is modified in a consistent way. The trace formulae presented here are the tools to be used in the second paper in this sequence, for showing the connection between the spectral properties of d-regular graphs and the theory of random matrices.

  16. MR angiography of stenosis and aneurysm models in the pulsatile flow: variation with imaging parameters and concentration of contrast media

    International Nuclear Information System (INIS)

    Park, Kyung Joo; Park, Jae Hyung; Lee, Hak Jong; Won, Hyung Jin; Lee, Dong Hyuk; Min, Byung Goo; Chang, Kee Hyun

    1997-01-01

    The image quality of magnetic resonance angiography (MRA) varies according to the imaging techniques applied and the parameters affected by blood flow patterns, as well as by the shape of the blood vessels. This study was designed to assess the influence on signal intensity and its distribution of the geometry of these vessels, the imaging parameters, and the concentration of contrast media in MRA of stenosis and aneurysm models. MRA was performed in stenosis and aneurysm models made of glass tubes, using pulsatile flow with viscosity and flow profile similar to those of blood. Slice and maximum intensity projection (MIP) images were obtained using various imaging techniques and parameters; there was variation in repetition time, flip angle, imaging planes, and concentrations of contrast media. On slice images of three-dimensional (3D) time-of-flight (TOF) techniques, flow signal intensity was measured at five locations in the models, and contrast ratio was calculated as the difference between flow signal intensity (SI) and background signal intensity (SIb) divided by background signal intensity, or (SI-SIb)/SIb. MIP images obtained by various techniques and using various parameters were also analyzed, with emphasis in the stenosis model on the demonstrated degree of stenosis and the severity of signal void and image distortion, and in the aneurysm model on the degree of visualization, distortion of contour and distribution of signals. In 3D TOF, the shortest TR (36 msec) and the largest FA (50°) resulted in the highest contrast ratio, but larger flip angles did not effectively demonstrate the peripheral part of the aneurysm. Loss of signal was most prominent in images of the stenosis model obtained with planes parallel or oblique to the flow direction. The two-dimensional TOF technique also caused signal void in stenosis, but precisely demonstrated the aneurysm, with dense opacification of the peripheral part. The phase contrast technique showed some

  17. Analysis of the energetic metabolism in cyclic Bedouin goats (Capra hircus): Nychthemeral and seasonal variations of some haematochemical parameters in relation with body and ambient temperatures.

    Science.gov (United States)

    Malek, Mouna; Amirat, Zaina; Khammar, Farida; Khaldoun, Mounira

    2016-08-01

    Several studies have examined changes in some haematochemical parameters as a function of the different physiological status (cyclic, pregnant and lactating) of goats, but no relevant literature has exhaustively investigated these variations from anestrous to estrous stages in cyclic goats. In this paper, we report nychthemeral and seasonal variations in ambient and body temperatures, and in some haematochemical parameters (glycemia, cholesterolemia, triglyceridemia, creatininemia and uremia) measured during summer, winter and spring, in seven (7) experimental cyclic female Bedouin goats (Capra hircus) living in the Béni-Abbès region (Algerian Sahara desert). Cosinor rhythmometry procedure was used to determine the rhythmic parameters of ambient temperature and haematochemical parameters. To determine the effect of time of day on the rhythmicity of the studied parameters, as well as their seasonality, repeated measure analysis of variance (ANOVA) was applied. The results showed that in spite of the nychthemeral profile presented by the ambient temperature for each season, the body temperature remained in a narrow range, thus indicating a successful thermoregulation. The rhythmometry analysis showed a circadian rhythmicity of ambient temperature and haematochemical parameters with diurnal acrophases. A statistically significant effect of the time of day was shown on all studied haematochemical parameters, except on creatininemia. It was also found that only uremia, cholesterolemia and triglyceridemia followed the seasonal sexual activity of the studied ruminant. This study demonstrated the good physiological adaptation developed by this breed in response to the harsh climatic conditions of its natural environment.

  18. Regularization of Instantaneous Frequency Attribute Computations

    Science.gov (United States)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal; (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
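
    Method (1) reduces to a regularized division: the instantaneous frequency is Im(a*·a′)/(2π|a|²), and stability comes from damping the denominator where the envelope |a| is small. The sketch below uses a simple additive ε in place of the roughness-penalized inversion described in the abstract; all names and the ε choice are assumptions.

    ```python
    import numpy as np

    def analytic_signal(x):
        """Analytic signal via the FFT (one-sided spectrum doubling)."""
        n = x.size
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            h[n // 2] = 1.0
        return np.fft.ifft(X * h)

    def instantaneous_frequency(x, fs, eps=1e-8):
        """Stabilized instantaneous frequency Im(conj(a)*a')/(2*pi*(|a|^2 + eps)).
        eps regularizes the division where the envelope |a| is small."""
        a = analytic_signal(x)
        da = np.gradient(a) * fs              # time derivative of a(t)
        return (np.conj(a) * da).imag / (2 * np.pi * (np.abs(a) ** 2 + eps))

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t)            # 50 Hz test tone
    f_inst = instantaneous_frequency(x, fs)   # approx. 50 Hz away from the edges
    ```

    For a pure tone the estimate sits near the tone frequency (with a small central-difference bias); on noisy data the choice of stabilization is what the L-curve and GCV criteria in the abstract are selecting.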

  19. Routes to chaos in continuous mechanical systems: Part 2. Modelling transitions from regular to chaotic dynamics

    International Nuclear Information System (INIS)

    Krysko, A.V.; Awrejcewicz, J.; Papkova, I.V.; Krysko, V.A.

    2012-01-01

    In the second part of the paper both classical and novel scenarios of transition from regular to chaotic dynamics of dissipative continuous mechanical systems are studied. A detailed analysis allowed us to detect the already known classical scenarios of transition from periodic to chaotic dynamics, and in particular the Feigenbaum scenario. The Feigenbaum constant was computed for all continuous mechanical objects studied in the first part of the paper. In addition, we illustrate and discuss different and novel scenarios of transition of the analysed systems from regular to chaotic dynamics, and we show that the type of scenario depends essentially on excitation parameters.

  20. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.

  1. Nonlinear Eigenvalue Problems in Elliptic Variational Inequalities: a local study

    International Nuclear Information System (INIS)

    Conrad, F.; Brauner, C.; Issard-Roch, F.; Nicolaenko, B.

    1985-01-01

    The authors consider a class of Nonlinear Eigenvalue Problems (N.L.E.P.) associated with Elliptic Variational Inequalities (E.V.I.). First the authors introduce the main tools for a local study of branches of solutions; the authors extend the linearization process required in the case of equations. Next the authors prove the existence of arcs of solutions close to regular vs singular points, and determine their local behavior up to the first order. Finally, the authors discuss the connection between their regularity condition and some stability concept. 37 references, 6 figures

  2. Solving the uncalibrated photometric stereo problem using total variation

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2013-01-01

    In this paper we propose a new method to solve the problem of uncalibrated photometric stereo, making very weak assumptions on the properties of the scene to be reconstructed. Our goal is to solve the generalized bas-relief ambiguity (GBR) by performing a total variation regularization of both...

  3. Technical Note: Regularization performances with the error consistency method in the case of retrieved atmospheric profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2007-01-01

    Full Text Available The retrieval of concentration vertical profiles of atmospheric constituents from spectroscopic measurements is often an ill-conditioned problem and regularization methods are frequently used to improve its stability. Recently a new method that provides a good compromise between precision and vertical resolution was proposed to determine analytically the value of the regularization parameter. This method is applied for the first time to real measurements with its implementation in the operational retrieval code of the satellite limb-emission measurements of the MIPAS instrument and its performances are quantitatively analyzed. The adopted regularization improves the stability of the retrieval providing smooth profiles without major degradation of the vertical resolution. In the analyzed measurements the retrieval procedure provides a vertical resolution that, in the troposphere and low stratosphere, is smaller than the vertical field of view of the instrument.

  4. Density variation of parotid glands during IMRT for head–neck cancer: Correlation with treatment and anatomical parameters

    International Nuclear Information System (INIS)

    Fiorino, Claudio; Rizzo, Giovanna; Scalco, Elisa; Broggi, Sara; Belli, Maria Luisa; Dell’Oca, Italo; Dinapoli, Nicola; Ricchetti, Francesco; Rodriguez, Aldo Mejia; Di Muzio, Nadia; Calandrino, Riccardo; Sanguineti, Giuseppe; Valentini, Vincenzo; Cattaneo, Giovanni Mauro

    2012-01-01

    Purpose: Measuring parotid density changes in patients treated with IMRT for head–neck cancer (HNC) and assessing correlation with treatment-related parameters. Patients and materials: Data of 84 patients treated with IMRT for different HNC were pooled from three institutions. Parotid deformation and average Hounsfield number changes (ΔHU) were evaluated through MVCT (with Helical Tomotherapy) or diagnostic kVCT images taken at the treatment start/end. Parotids were delineated in the first image and propagated to the last using a previously validated algorithm based on elastic registration. The correlation between ΔHU and several treatment-related parameters was tested; then, logistic uni- and multi-variate analyses taking “large” ΔHU as end-point were carried out. Due to the better image quality, analyses were repeated considering only kVCT data. Results: ΔHU was negative in 116/168 parotids (69%; for kVCT patients: 72/92, 78%). The average ΔHU was significantly different from zero (−7.3, 0.20–0.25 HU/fraction, p m ean), and with neck thickness variation; these correlations were much stronger for kVCT data. Logistic analyses considering ΔHUmean < 0.68) and initial neck thickness to be the most predictive variables (p < 0.0005, AUC = 0.683; AUC = 0.776 for kVCT); the odds ratio of large vs moderate/small parotid deformation was 3.8 and 8.0 for the whole and the kVCT population respectively. Conclusions: Parotid density reduced in most patients during IMRT and this phenomenon was highly correlated with parotid deformation. The individual assessment of density changes was highly reliable only with diagnostic kVCT. Density changes should be considered as an additional objective measurement of early parotid radiation-induced modifications; further research is warranted.

  5. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  6. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  7. Information operator approach and iterative regularization methods for atmospheric remote sensing

    International Nuclear Information System (INIS)

    Doicu, A.; Hilgers, S.; Bargen, A. von; Rozanov, A.; Eichmann, K.-U.; Savigny, C. von; Burrows, J.P.

    2007-01-01

    In this study, we present the main features of the information operator approach for solving linear inverse problems arising in atmospheric remote sensing. This method is superior to the stochastic version of the Tikhonov regularization (or the optimal estimation method) due to its capability to filter out the noise-dominated components of the solution generated by an inappropriate choice of the regularization parameter. We extend this approach to iterative methods for nonlinear ill-posed problems and derive the truncated versions of the Gauss-Newton and Levenberg-Marquardt methods. Although the paper mostly focuses on discussing the mathematical details of the inverse method, retrieval results have been provided, which exemplify the performances of the methods. These results correspond to the NO2 retrieval from SCIAMACHY limb scatter measurements and have been obtained by using the retrieval processors developed at the German Aerospace Center Oberpfaffenhofen and the Institute of Environmental Physics of the University of Bremen.

  8. NSVZ scheme with the higher derivative regularization for N=1 SQED

    International Nuclear Information System (INIS)

    Kataev, A.L.; Stepanyantz, K.V.

    2013-01-01

    The exact NSVZ relation between a β-function of N=1 SQED and an anomalous dimension of the matter superfields is studied within the Slavnov higher derivative regularization approach. It is shown that if the renormalization group functions are defined in terms of the bare coupling constant, this relation is always valid. In the renormalized theory the NSVZ relation is obtained in the momentum subtraction scheme supplemented by a special finite renormalization. Unlike the dimensional reduction, the higher derivative regularization allows one to fix this finite renormalization. This is done by imposing the conditions Z_3(α, μ=Λ) = 1 and Z(α, μ=Λ) = 1 on the renormalization constants of N=1 SQED, where Λ is a parameter in the higher derivative term. The results are verified by the explicit three-loop calculation. In this approximation we relate the DR-bar scheme and the NSVZ scheme defined within the higher derivative approach by the finite renormalization.

  9. Stability of the Regular Hayward Thin-Shell Wormholes

    Directory of Open Access Journals (Sweden)

    M. Sharif

    2016-01-01

    Full Text Available The aim of this paper is to construct regular Hayward thin-shell wormholes and analyze their stability. We adopt Israel formalism to calculate surface stresses of the shell and check the null and weak energy conditions for the constructed wormholes. It is found that the stress-energy tensor components violate the null and weak energy conditions leading to the presence of exotic matter at the throat. We analyze the attractive and repulsive characteristics of wormholes corresponding to ar>0 and ar<0, respectively. We also explore stability conditions for the existence of traversable thin-shell wormholes with arbitrarily small amount of fluid describing cosmic expansion. We find that the space-time has nonphysical regions which give rise to event horizon for 0parameter l=0.9. It is concluded that the Hayward and Van der Waals quintessence parameters increase the stability of thin-shell wormholes.

  10. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and the feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
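
    The classical Tikhonov building block that the method extends can be sketched as follows. The improved DFS-SAV penalty and the paper's two-step parameter strategy are not reproduced; a plain discrepancy-principle scan on a synthetic linear system stands in for them, and all values are assumptions.

    ```python
    import numpy as np

    def tikhonov(A, b, lam):
        """Classical Tikhonov solution x = (A^T A + lam*I)^(-1) A^T b."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    def discrepancy_choice(A, b, noise_norm, lams):
        """Pick the largest lambda whose residual stays at the noise level."""
        for lam in sorted(lams, reverse=True):
            if np.linalg.norm(A @ tikhonov(A, b, lam) - b) <= noise_norm:
                return lam
        return min(lams)

    rng = np.random.default_rng(3)
    A = rng.normal(size=(60, 20)) / np.sqrt(60)   # stand-in response matrix
    x_true = np.zeros(20)
    x_true[3], x_true[7] = 1.0, -0.5              # stand-in force history
    noise = 0.01 * rng.normal(size=60)
    b = A @ x_true + noise
    lam = discrepancy_choice(A, b, np.linalg.norm(noise),
                             np.logspace(-6, 1, 30))
    x_hat = tikhonov(A, b, lam)
    ```

    The paper's contribution replaces the plain ||x||² penalty with one tailored to the stable-average structure of the force signal; the solve-and-select skeleton stays the same.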

  11. Increase in winter haze over eastern China in recent decades: Roles of variations in meteorological parameters and anthropogenic emissions

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, Richland Washington USA; Liao, Hong [School of Environmental Science and Engineering, Nanjing University of Information Science and Technology, Nanjing China; Joint International Research Laboratory of Climate and Environment Change, Nanjing University of Information Science and Technology, Nanjing China; Lou, Sijia [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, Richland Washington USA

    2016-11-05

    The increase in winter haze over eastern China in recent decades due to variations in meteorological parameters and anthropogenic emissions was quantified using observed atmospheric visibility from the National Climatic Data Center Global Summary of Day database for 1980–2014 and simulated PM2.5 concentrations for 1985–2005 from the Goddard Earth Observing System (GEOS) chemical transport model (GEOS-Chem). Observed winter haze days averaged over eastern China (105–122.5°E, 20–45°N) increased from 21 d in 1980 to 42 d in 2014, and from 22 to 30 d between 1985 and 2005. The GEOS-Chem model captured the increasing trend of winter PM2.5 concentrations for 1985–2005, with concentrations averaged over eastern China increasing from 16.1 μg m⁻³ in 1985 to 38.4 μg m⁻³ in 2005. Considering variations in both anthropogenic emissions and meteorological parameters, the model simulated an increase in winter surface-layer PM2.5 concentrations of 10.5 (±6.2) μg m⁻³ decade⁻¹ over eastern China. The increasing trend was only 1.8 (±1.5) μg m⁻³ decade⁻¹ when variations in meteorological parameters alone were considered. Among the meteorological parameters, the weakening of winds by −0.09 m s⁻¹ decade⁻¹ over 1985–2005 was found to be the dominant factor leading to the decadal increase in winter aerosol concentrations and haze days over eastern China during recent decades.

  12. Robust regularized singular value decomposition with application to mortality data

    KAUST Repository

    Zhang, Lingsong

    2013-09-01

    We develop a robust regularized singular value decomposition (RobRSVD) method for analyzing two-way functional data. The research is motivated by the application of modeling human mortality as a smooth two-way function of age group and year. The RobRSVD is formulated as a penalized loss minimization problem where a robust loss function is used to measure the reconstruction error of a low-rank matrix approximation of the data, and an appropriately defined two-way roughness penalty function is used to ensure smoothness along each of the two functional domains. By viewing the minimization problem as two conditional regularized robust regressions, we develop a fast iterative reweighted least squares algorithm to implement the method. Our implementation naturally incorporates missing values. Furthermore, our formulation allows rigorous derivation of leave-one-row/column-out cross-validation and generalized cross-validation criteria, which enable computationally efficient data-driven penalty parameter selection. The advantages of the new robust method over nonrobust ones are shown via extensive simulation studies and the mortality rate application.

  13. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  14. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications of sparse image recovery and obtain good results by comparison with related work.
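
    The thresholding family can be illustrated with its p = 1 member, iterative soft thresholding. The L1/2 and L2/3 variants discussed above replace the shrinkage step with Lp thresholding operators and add per-sub-dictionary weights, both of which are omitted in this hedged sketch; the problem sizes are synthetic.

    ```python
    import numpy as np

    def ista(A, b, lam, n_iter=500):
        """Iterative soft thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1
        (the p = 1 member of the iterative thresholding family)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - A.T @ (A @ x - b) / L      # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(4)
    A = rng.normal(size=(40, 80)) / np.sqrt(40)   # underdetermined dictionary
    x_true = np.zeros(80)
    x_true[[5, 20, 60]] = [1.0, -1.0, 0.5]        # 3-sparse ground truth
    b = A @ x_true
    x_hat = ista(A, b, lam=0.01)                  # recovers the sparse support
    ```

    Swapping the `np.sign(...) * np.maximum(...)` line for an Lp proximal operator (with p < 1) gives the non-convex variants, at the cost of losing the convex convergence guarantees.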

  15. Stochastic methods of data modeling: application to the reconstruction of non-regular data

    International Nuclear Information System (INIS)

    Buslig, Leticia

    2014-01-01

    This research thesis addresses two issues related to IRSN studies. The first one deals with the mapping of measurement data (the IRSN must regularly monitor the radioactivity level in France and, for this purpose, uses a network of sensors distributed across the French territory). The objective is to predict, by means of a reconstruction model based on these observations, maps which will be used to inform the population. The second application deals with accounting for uncertainties in complex computation codes (the IRSN must perform safety studies to assess the risks of loss of integrity of a nuclear reactor under hypothetical accidents, and for this purpose uses codes which simulate the physical phenomena occurring within an installation). Some input parameters are not precisely known, and the author therefore tries to assess the impact of these uncertainties on simulated values. She notably aims at determining whether variations of the input parameters may push the system towards a behaviour very different from that obtained with reference parameter values, or even towards a state in which safety conditions are not met. The precise objective of this second part is to build a reconstruction model which is not costly in terms of computation time and to perform simulations in relevant areas (strong-gradient areas, threshold-overrun areas, and so on). Two issues are then important: the choice of the approximation model and the construction of the experimental design. The model is based on a kriging-type stochastic approach, and an important part of the work addresses the development of new numerical techniques for experimental design. The first part proposes a generic criterion for adaptive design, reports its analysis, and describes its implementation. In the second part, an alternative to error-variance addition is developed. Methodological developments are tested on analytic functions, and then applied to the cases of measurement mapping and
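
    The kriging-type reconstruction mentioned above can be illustrated with a minimal Gaussian-process predictor. The squared-exponential covariance, the unit length-scale, and the sin test function are illustrative assumptions, not the model actually used in the thesis; the predictive variance it returns is what adaptive experiment-planning criteria typically exploit to decide where to sample next.

```python
import numpy as np

def kriging_predict(X_train, y_train, X_test, length=1.0, noise=1e-8):
    """Simple kriging (GP regression) with a squared-exponential covariance."""
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))   # training covariance
    Ks = k(X_test, X_train)                                  # cross-covariance
    mean = Ks @ np.linalg.solve(K, y_train)                  # predictive mean
    # predictive variance: prior variance (1.0) minus explained variance
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

X = np.array([0.0, 1.0, 2.0, 3.0])                           # observation sites
y = np.sin(X)
m, v = kriging_predict(X, y, np.array([1.5, 10.0]))
# v is small between observations (1.5) and near the prior far from them (10.0)
```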

  16. The method of extraction of subspectra with appreciably different values of hyperfine interaction parameters from Moessbauer spectra

    International Nuclear Information System (INIS)

    Nemtsova, O.M.

    2006-01-01

    The task in processing Moessbauer spectra of complex locally inhomogeneous or multi-phase systems is to reveal subspectral contributions whose hyperfine interaction (HFI) parameters differ appreciably. A universal method of processing such spectra is suggested, which allows extraction of the probability density distribution (PDD) of the HFI parameters corresponding to subspectra with essentially different parameter values. The method is based on Tikhonov's regularization, with a separate regularization-parameter value selected for each subspectrum. Its universality is demonstrated on examples of processing real spectra with different sets of subspectral contributions.
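
    The subspectrum-specific parameter selection is not reproduced here, but the Tikhonov machinery underlying it can be sketched on a generic ill-posed linear system with a single regularization parameter. The Hilbert-like matrix, the noise level, and the value of alpha are illustrative assumptions.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min_x ||Ax - b||^2 + alpha^2 * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Ill-conditioned test problem: a Hilbert-like matrix with slightly noisy data
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)          # noise is amplified by cond(A) ~ 1e10
x_reg = tikhonov_solve(A, b, alpha=1e-3) # regularization damps the amplification
```

    In the paper's setting, each subspectrum gets its own alpha, which is what lets contributions with very different HFI-parameter scales be resolved simultaneously.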

  17. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, criteria for correct regularization in perturbative QFT calculations are given, together with a prescription for separating the regularized formulas into finite and infinite parts. (author)

  18. Classification of coefficients of variation in experiments with commercial layers

    Directory of Open Access Journals (Sweden)

    DE Faria Filho

    2010-12-01

    Full Text Available This study aimed at determining a specific classification of coefficients of variation in experiments with commercial layers. Coefficients of variation were collected from papers published in Brazilian journals between 2000 and 2009 for performance, internal egg quality, and eggshell quality parameters. The coefficients of variation of each parameter were classified as low, intermediate, high, and very high according to the ratio between the median and the pseudo-sigma. It was concluded that the parameters used in experiments with commercial layers have a specific classification of coefficients of variation, and that this must be considered to evaluate experimental accuracy.
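
    A classification of this kind can be sketched as follows. The pseudo-sigma is taken as IQR/1.349 (a robust estimate of the standard deviation), and the cutoffs at the median ± 1 and ± 2 pseudo-sigmas are one common convention, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def classify_cv(cvs):
    """Label coefficients of variation as low/intermediate/high/very high
    using the median and the pseudo-sigma (IQR / 1.349)."""
    cvs = np.asarray(cvs, dtype=float)
    med = np.median(cvs)
    q1, q3 = np.percentile(cvs, [25, 75])
    ps = (q3 - q1) / 1.349                     # pseudo-sigma: robust sigma estimate
    def label(cv):
        if cv <= med - ps:
            return "low"
        if cv <= med + ps:
            return "intermediate"
        if cv <= med + 2 * ps:
            return "high"
        return "very high"
    return [label(cv) for cv in cvs]

# Hypothetical CVs (%) collected from published experiments with one parameter
labels = classify_cv([8, 9, 10, 11, 12, 25])
```

    Because the median and IQR are computed per parameter, each performance or egg-quality trait ends up with its own cutoffs, which is the point of a parameter-specific classification.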

  19. Variation of inflammatory parameters after sibutramine treatment compared to placebo in type 2 diabetic patients.

    Science.gov (United States)

    Derosa, G; Maffioli, P; Ferrari, I; Palumbo, I; Randazzo, S; D'Angelo, A; Cicero, A F G

    2011-10-01

    The efficacy of sibutramine has been demonstrated in randomized trials in obese/overweight patients, including those with type 2 diabetes mellitus (T2DM). Our objective was to evaluate the effects of 1-year treatment with sibutramine compared to placebo on body weight, glycaemic control, lipid profile, and inflammatory parameters in type 2 diabetic patients. Two hundred and forty-six patients with uncontrolled T2DM [glycated haemoglobin (HbA(1c)) > 8.0%] on therapy with different oral hypoglycaemic agents or insulin were randomized to take 10 mg of sibutramine or placebo for 12 months. At baseline and after 3, 6, 9, and 12 months we evaluated the following parameters: body weight, body mass index (BMI), HbA(1c), fasting plasma glucose (FPG), post-prandial plasma glucose (PPG), fasting plasma insulin (FPI), homeostasis model assessment insulin resistance index (HOMA-IR), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), triglycerides (Tg), leptin, tumour necrosis factor-α (TNF-α), adiponectin (ADN), vaspin, and high-sensitivity C-reactive protein (Hs-CRP). We observed a decrease in body weight after 9 and 12 months in the group treated with sibutramine, but not in the control group. Regarding the glycaemic and lipid profile, although there were differences over time within each group, we did not observe any significant differences between the two groups. Both placebo and sibutramine gave a similar improvement of HOMA-IR, leptin, TNF-α, ADN, and Hs-CRP. No vaspin variations were observed in either group. Sibutramine resulted in a decrease in body weight at 9 and 12 months that was not observed with placebo. Although there were differences over time within each group, there were no significant differences between groups for any other parameter that we measured. © 2010 The Authors. JCPT © 2010 Blackwell Publishing Ltd.

  20. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual unit, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly also applies to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher-dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can thus have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
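
    The O-U result can be checked numerically with a small Euler-Maruyama simulation of two diffusively coupled O-U processes; the noise levels, coupling strength, and step size below are illustrative assumptions. For these parameters the stationary variance of the less noisy unit x is s1²/2 = 0.5 when uncoupled and (s1² + s2²)/8 · (1 + 1/(1 + 2c)) ≈ 0.37 when coupled to the noisier unit y, so coupling regularizes x despite y being noisier.

```python
import numpy as np

def simulate_coupled_ou(c, s1=1.0, s2=1.2, dt=0.01, steps=200_000, seed=1):
    """Euler-Maruyama for dx = (-x + c*(y - x)) dt + s1 dW1,
                          dy = (-y + c*(x - y)) dt + s2 dW2."""
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal((steps, 2)) * np.sqrt(dt)   # Brownian increments
    x = y = 0.0
    xs = np.empty(steps)
    for i in range(steps):
        x, y = (x + (-x + c * (y - x)) * dt + s1 * dW[i, 0],
                y + (-y + c * (x - y)) * dt + s2 * dW[i, 1])
        xs[i] = x
    return xs[1000:]                                     # drop the initial transient

var_uncoupled = simulate_coupled_ou(c=0.0).var()   # near the analytic s1**2 / 2 = 0.5
var_coupled = simulate_coupled_ou(c=2.0).var()     # coupling lowers x's variance
```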