WorldWideScience

Sample records for constrained total-variation minimization

  1. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of the total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration.

  2. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to exploit the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data studies are qualitatively and quantitatively evaluated to validate the accuracy, efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
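
    A generalized p-shrinkage mapping of the kind referenced above has a simple closed form in the literature (e.g., Chartrand's p-shrinkage); the NumPy sketch below illustrates one common form of that operator, not the paper's exact implementation, and the function name p_shrink is ours.

```python
import numpy as np

def p_shrink(x, lam, p):
    """Element-wise generalized p-shrinkage in one common form:
    sign(x) * max(|x| - lam^(2-p) * |x|^(p-1), 0).
    For p = 1 this reduces to ordinary soft-thresholding."""
    out = np.zeros_like(x, dtype=float)
    nz = np.abs(x) > 0                       # avoid 0**(p-1) for p < 1
    mag = np.abs(x[nz])
    out[nz] = np.sign(x[nz]) * np.maximum(
        mag - lam ** (2.0 - p) * mag ** (p - 1.0), 0.0)
    return out

x = np.linspace(-3.0, 3.0, 7)
print(p_shrink(x, lam=1.0, p=1.0))  # soft-thresholding
print(p_shrink(x, lam=1.0, p=0.5))  # sparser: closer to hard thresholding
```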

  3. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem

    2014-04-18

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges on concentration-compactness. In the second part, we will treat orthogonally constrained problems for another class of integrands using the density matrices method. © 2014 Springer Basel.

  4. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization

    International Nuclear Information System (INIS)

    Sidky, Emil Y; Pan Xiaochuan

    2008-01-01

    An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data are within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS), and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories.
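
    As a concrete illustration of the alternation the abstract describes, here is a heavily simplified small-scale sketch in NumPy: a gradient step toward data consistency plus non-negativity clipping (the POCS part), followed by a few TV steepest-descent steps whose length adapts to the size of the POCS update. The toy system matrix, phantom, and step parameters are our assumptions, not the authors' implementation.

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of the smoothed isotropic TV (forward differences)."""
    ux = np.diff(u, axis=0, append=u[-1:, :])
    uy = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    gx = np.vstack([np.zeros((1, u.shape[1])), px[:-1, :]]) - px
    gy = np.hstack([np.zeros((u.shape[0], 1)), py[:, :-1]]) - py
    return gx + gy

rng = np.random.default_rng(0)
n = 16
x_true = np.zeros((n, n)); x_true[4:12, 4:12] = 1.0   # piecewise-constant phantom
A = rng.standard_normal((120, n * n)) / n              # toy "projection" matrix
b = A @ x_true.ravel()
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # safe data-step length

u = np.zeros((n, n))
for _ in range(100):
    u_prev = u.copy()
    # "POCS" part: move toward data consistency, then enforce non-negativity
    x = u.ravel() + step * (A.T @ (b - A @ u.ravel()))
    u = np.clip(x.reshape(n, n), 0.0, None)
    dp = np.linalg.norm(u - u_prev)
    # TV steepest descent, step length tied to the size of the data update
    for _ in range(5):
        g = tv_gradient(u)
        u -= 0.2 * dp * g / (np.linalg.norm(g) + 1e-12)

print("relative error:", np.linalg.norm(u - x_true) / np.linalg.norm(x_true))
```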

  5. Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization

    DEFF Research Database (Denmark)

    Rose, Sean; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    Purpose: The authors develop and investigate iterative image reconstruction algorithms based on data-discrepancy minimization with a total-variation (TV) constraint. The various algorithms are derived with different data-discrepancy measures reflecting the maximum likelihood (ML) principle. Methods: An incremental algorithm framework is developed for this purpose. The instances of the incremental algorithms are derived for solving optimization problems including a data fidelity objective function combined with a constraint on the image TV. For the data fidelity term, the authors compare the application of different ML-motivated measures. Simulations demonstrate the iterative algorithms and the resulting image statistical properties for low-dose CT data acquired with sparse projection view angle sampling. Of particular interest is to quantify the improvement of image statistical properties by use of the ML data fidelity term.

  6. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt at addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  7. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane

    2010-01-01

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint.

  8. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
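
    To make the splitting idea concrete, here is a minimal sketch assuming NumPy, scikit-image's denoise_tv_chambolle as the unmodified off-the-shelf TV solver, and an ADMM-style split u = v in which the box constraint reduces to a simple clip; the parameter values and the mapping of mu + rho onto the solver's weight convention are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 0.8
f = clean + 0.15 * rng.standard_normal(clean.shape)    # noisy observation

# min_u TV(u) + (mu/2)||u - f||^2  subject to  0 <= u <= 1,
# split as u = v with v carrying the box constraint (ADMM-style).
mu, rho = 8.0, 8.0
v = np.clip(f, 0.0, 1.0)
w = np.zeros_like(f)                                   # scaled dual variable
for _ in range(20):
    c = (mu * f + rho * (v - w)) / (mu + rho)
    # u-step: an existing TV denoiser does all the TV work unmodified; the
    # weight plays the role of 1/(mu + rho) up to the solver's convention
    u = denoise_tv_chambolle(c, weight=1.0 / (mu + rho))
    v = np.clip(u + w, 0.0, 1.0)                       # box projection
    w += u - v                                         # dual update

print("v range:", v.min(), v.max(),
      " rmse:", np.sqrt(np.mean((v - clean) ** 2)))
```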

  9. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  10. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy)]; Dall'Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy)]; Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy)]; Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)]

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  11. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  12. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  13. Iterative CT reconstruction via minimizing adaptively reweighted total variation.

    Science.gov (United States)

    Zhu, Lei; Niu, Tianye; Petrongolo, Michael

    2014-01-01

    Iterative reconstruction via total variation (TV) minimization has demonstrated great success in accurate CT imaging from under-sampled projections. When projections are further reduced, over-smoothing artifacts appear in the current reconstruction, especially around structure boundaries. We propose a practical algorithm to improve TV-minimization based CT reconstruction on very few projection data. Based on the theory of compressed sensing, the L0-norm approach is more desirable for further reducing the projection views. To overcome the computational difficulty of the nonconvex optimization of the L0 norm, we implement an adaptive weighting scheme to approximate the solution via a series of TV minimizations for practical use in CT reconstruction. The weights on TV are initialized as uniform and are automatically adjusted based on the gradient of the reconstructed image from the previous iteration. The iteration stops when a small difference between the weighted TV values is observed on two consecutive reconstructed images. We evaluate the proposed algorithm on both a digital phantom and a physical phantom. Using 20 equiangular projections, our method reduces the reconstruction errors of conventional TV minimization by a factor of more than 5, with improved spatial resolution. By adaptively reweighting TV in iterative CT reconstruction, we successfully further reduce the number of projections required for the same or better image quality.
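
    The reweighting loop described above can be sketched in a few lines. The stand-in below denoises rather than reconstructs (no projection operator), uses weights proportional to 1/(|∇u| + δ) recomputed from the previous iterate (normalized here to lie in (0, 1]), and stops on a small change in the weighted TV; the plain subgradient inner solver and all parameter values are our illustrative assumptions.

```python
import numpy as np

def grads(u):
    ux = np.diff(u, axis=0, append=u[-1:, :])
    uy = np.diff(u, axis=1, append=u[:, -1:])
    return ux, uy

def wtv_gradient(u, w, eps=1e-8):
    """Gradient of the smoothed weighted TV, sum_ij w_ij |grad u|_ij."""
    ux, uy = grads(u)
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = w * ux / mag, w * uy / mag
    gx = np.vstack([np.zeros((1, u.shape[1])), px[:-1, :]]) - px
    gy = np.hstack([np.zeros((u.shape[0], 1)), py[:, :-1]]) - py
    return gx + gy

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
f = clean + 0.1 * rng.standard_normal(clean.shape)

u, alpha, delta = f.copy(), 0.3, 0.05
wtv_old = np.inf
for outer in range(20):
    ux, uy = grads(u)
    w = delta / (np.sqrt(ux**2 + uy**2) + delta)   # reweight from previous iterate
    for _ in range(50):                            # inner: min 0.5||u-f||^2 + alpha*WTV
        u = u - 0.1 * ((u - f) + alpha * wtv_gradient(u, w))
    ux, uy = grads(u)
    wtv = np.sum(w * np.sqrt(ux**2 + uy**2))
    if abs(wtv - wtv_old) < 1e-3 * wtv:            # stop on small weighted-TV change
        break
    wtv_old = wtv
print("stopped after", outer + 1, "outer iterations")
```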

  14. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem; Markowich, Peter A.; Trabelsi, Saber

    2014-01-01

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges on concentration-compactness.

  15. The numerical solution of total variation minimization problems in image processing

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R.; Oman, M.E. [Montana State Univ., Bozeman, MT (United States)]

    1994-12-31

    Consider the minimization of penalized least squares functionals of the form: $f(u) = \tfrac{1}{2}\|Au - z\|^2 + \alpha \int_\Omega |\nabla u|\,dx$. Here $A$ is a bounded linear operator, $z$ represents data, $\|\cdot\|$ is a Hilbert space norm, $\alpha$ is a positive parameter, $\int_\Omega |\nabla u|\,dx$ represents the total variation (TV) of a function $u \in BV(\Omega)$, the class of functions of bounded variation on a bounded region $\Omega$, and $|\cdot|$ denotes the Euclidean norm. In image processing, $u$ represents an image which is to be recovered from noisy data $z$. Certain "blurring processes" may be represented by the action of an operator $A$ on the image $u$.
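
    In discrete form the functional above is straightforward to evaluate; the following NumPy fragment spells it out, with a simple neighborhood-averaging blur standing in for the abstract operator $A$ (an illustrative assumption).

```python
import numpy as np

def blur(u):
    """A simple symmetric local average, standing in for the operator A."""
    return (u + np.roll(u, 1, 0) + np.roll(u, -1, 0)
              + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 5.0

def total_variation(u):
    """Discrete isotropic TV: sum of Euclidean norms of forward differences."""
    ux = np.diff(u, axis=0, append=u[-1:, :])
    uy = np.diff(u, axis=1, append=u[:, -1:])
    return np.sum(np.sqrt(ux**2 + uy**2))

def objective(u, z, alpha):
    """f(u) = 0.5 * ||A u - z||^2 + alpha * TV(u)."""
    r = blur(u) - z
    return 0.5 * np.sum(r**2) + alpha * total_variation(u)

rng = np.random.default_rng(0)
u = np.zeros((32, 32)); u[8:24, 8:24] = 1.0
z = blur(u) + 0.05 * rng.standard_normal(u.shape)   # blurred, noisy data
print(objective(u, z, alpha=0.1))
```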

  16. A Simply Constrained Optimization Reformulation of KKT Systems Arising from Variational Inequalities

    International Nuclear Information System (INIS)

    Facchinei, F.; Fischer, A.; Kanzow, C.; Peng, J.-M.

    1999-01-01

    The Karush-Kuhn-Tucker (KKT) conditions can be regarded as optimality conditions for both variational inequalities and constrained optimization problems. In order to overcome some drawbacks of recently proposed reformulations of KKT systems, we propose casting KKT systems as a minimization problem with nonnegativity constraints on some of the variables. We prove that, under fairly mild assumptions, every stationary point of this constrained minimization problem is a solution of the KKT conditions. Based on this reformulation, a new algorithm for the solution of the KKT conditions is suggested and shown to have some strong global and local convergence properties.

  17. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    Science.gov (United States)

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption of the TV model, the reconstructed images often suffer from over-smoothness on the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV'. To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.

  18. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data at high and low tube voltages are converted into a set of basis functions which can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From far fewer views than are conventionally used, material-specific images are reconstructed by use of the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We have reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by use of contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can be realized via the proposed method by greatly reducing the number of projections.

  19. Total Variation Based Parameter-Free Model for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Sciacchitano, Federica; Dong, Yiqiu; Andersen, Martin Skovgaard

    2017-01-01

    We propose a new two-phase method for reconstruction of blurred images corrupted by impulse noise. In the first phase, we use a noise detector to identify the pixels that are contaminated by noise, and then, in the second phase, we reconstruct the noisy pixels by solving an equality constrained total variation minimization problem that preserves the exact values of the noise-free pixels. For images that are only corrupted by impulse noise (i.e., not blurred) we apply the semismooth Newton's method to a reduced problem, and if the images are also blurred, we solve the equality constrained reconstruction problem using a first-order primal-dual algorithm. The proposed model improves the computational efficiency (in the denoising case) and has the advantage of being regularization parameter-free. Our numerical results suggest that the method is competitive in terms of its restoration capabilities.
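
    A bare-bones version of the two-phase idea, with extreme-value detection followed by TV descent that updates only the flagged pixels so the noise-free pixels keep their exact values, can be sketched as follows; the naive detector and plain subgradient solver are stand-ins for the paper's semismooth Newton and primal-dual machinery.

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of the smoothed isotropic TV (forward differences)."""
    ux = np.diff(u, axis=0, append=u[-1:, :])
    uy = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    gx = np.vstack([np.zeros((1, u.shape[1])), px[:-1, :]]) - px
    gy = np.hstack([np.zeros((u.shape[0], 1)), py[:, :-1]]) - py
    return gx + gy

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.2); clean[16:48, 16:48] = 0.7
f = clean.copy()
hit = rng.random(f.shape) < 0.3                   # 30% impulse noise
f[hit] = rng.choice([0.0, 1.0], size=hit.sum())

# Phase 1: detect impulse-corrupted pixels (extreme intensities).
mask = (f == 0.0) | (f == 1.0)
# Phase 2: decrease TV over the flagged pixels only; the equality
# constraint on noise-free pixels holds because they are never updated.
u = f.copy()
for _ in range(300):
    u[mask] -= 0.05 * tv_gradient(u)[mask]

print("MAE noisy:", np.abs(f - clean).mean(),
      " restored:", np.abs(u - clean).mean())
```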

  20. Joint reconstruction of dynamic PET activity and kinetic parametric images using total variation constrained dictionary sparse coding

    Science.gov (United States)

    Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng

    2017-05-01

    Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates dictionary sparse coding (DSC) into a total variation minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.

  1. NUFFT-Based Iterative Image Reconstruction via Alternating Direction Total Variation Minimization for Sparse-View CT

    Directory of Open Access Journals (Sweden)

    Bin Yan

    2015-01-01

    Sparse-view imaging is a promising scanning method which can reduce the radiation dose in X-ray computed tomography (CT). The reconstruction algorithm for a sparse-view imaging system is of significant importance. Spatial-domain iterative algorithms for CT image reconstruction suffer from low operational efficiency and high computational requirements. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform is presented in this study, along with advanced total variation (TV) regularization for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and a rapid convergence property. Numerical simulations and real data experiments are performed on a parallel beam CT. Experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique using the TV method and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging.

  2. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function $f(x): \mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x) = f(x) + g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x): D \subset \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f: \mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \bar{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0 \le g_k(x) \le G_{k-1}(x) - G_{k-1}(x^{k-1})$, for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \to \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
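
    As a tiny worked instance of the SUMMA framework, the classical log-barrier method (one of the particular cases listed above) applied to $f(x) = (x-2)^2$ with the constraint $x \le 1$ admits a closed-form minimizer at every step, and $f(x^k)$ decreases monotonically to the constrained optimum $f(1) = 1$, as the analysis predicts. The quadratic objective and the schedule $\mu_k = 1/k$ are our toy choices.

```python
import numpy as np

f = lambda x: (x - 2.0) ** 2            # objective; constrained optimum at x = 1

for k in range(1, 9):
    mu = 1.0 / k                         # decreasing barrier weight
    # minimize G_k(x) = f(x) - mu * log(1 - x) on x < 1: setting
    # G_k'(x) = 2(x - 2) + mu / (1 - x) = 0 yields a quadratic whose root is
    x_k = (6.0 - np.sqrt(4.0 + 8.0 * mu)) / 4.0
    print(f"k={k}  x_k={x_k:.5f}  f(x_k)={f(x_k):.5f}")
# f(x_k) decreases monotonically toward f(1) = 1
```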

  3. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    Variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but the processing speed remains a bottleneck due to the computational load of recent iterative algorithms. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variation regularization term. The algorithm is based on variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm: finite convergence for some variables and q-linear convergence for the rest. Experiments show that this model has a strong texture-preserving property in restoring color images. Both the theoretical computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison with the widely used fixed point algorithm.

  4. The Use of Trust Regions in Kohn-Sham Total Energy Minimization

    International Nuclear Information System (INIS)

    Yang, Chao; Meza, Juan C.; Wang, Lin-wang

    2006-01-01

    The Self Consistent Field (SCF) iteration, widely used for computing the ground state energy and the corresponding single particle wave functions associated with a many-electron atomistic system, is viewed in this paper as an optimization procedure that minimizes the Kohn-Sham total energy indirectly by minimizing a sequence of quadratic surrogate functions. We point out the similarity and difference between the total energy and the surrogate, and show how the SCF iteration can fail when the minimizer of the surrogate produces an increase in the KS total energy. A trust region technique is introduced as a way to restrict the update of the wave functions within a small neighborhood of an approximate solution at which the gradient of the total energy agrees with that of the surrogate. The use of trust regions in SCF is not new. However, it has been observed that directly applying a trust region based SCF (TRSCF) to the Kohn-Sham total energy often leads to slow convergence. We propose to use TRSCF within a direct constrained minimization (DCM) algorithm we developed previously. The key ingredients of the DCM algorithm involve projecting the total energy function into a sequence of subspaces of small dimensions and seeking the minimizer of the total energy function within each subspace. The minimizer of a subspace energy function, which is computed by TRSCF, not only provides a search direction along which the KS total energy function decreases but also gives an optimal 'step-length' that yields a sufficient decrease in total energy. A numerical example is provided to demonstrate that the combination of TRSCF and DCM is more efficient than SCF.

  5. On the notion of Jacobi fields in constrained calculus of variations

    Directory of Open Access Journals (Sweden)

    Massa Enrico

    2016-12-01

    In variational calculus, the minimality of a given functional under arbitrary deformations with fixed end-points is established through an analysis of the so-called second variation. In this paper, the argument is examined in the context of constrained variational calculus, assuming piecewise differentiable extremals, commonly referred to as extremaloids. The approach relies on the existence of a fully covariant representation of the second variation of the action functional, based on a family of local gauge transformations of the original Lagrangian and on a set of scalar attributes of the extremaloid, called the corners' strengths [16]. In discussing the positivity of the second variation, a relevant role is played by the Jacobi fields, defined as infinitesimal generators of 1-parameter groups of diffeomorphisms preserving the extremaloids. Along a piecewise differentiable extremal, these fields are generally discontinuous across the corners. A thorough analysis of this point is presented. An alternative characterization of the Jacobi fields as solutions of a suitable accessory variational problem is established.

  6. Fourier-based reconstruction via alternating direction total variation minimization in linear scan CT

    International Nuclear Information System (INIS)

    Cai, Ailong; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2015-01-01

    In this study, we consider a novel form of computed tomography (CT), that is, linear scan CT (LCT), which applies a straight-line trajectory. Furthermore, an iterative algorithm is proposed for pseudo-polar Fourier reconstruction through total variation minimization (PPF-TVM). Considering that the sampled Fourier data are distributed in pseudo-polar coordinates, the reconstruction model minimizes the TV of the image subject to the constraint that the estimated 2D Fourier data for the image are consistent with the 1D Fourier transform of the projection data. PPF-TVM employs the alternating direction method (ADM) to develop a robust and efficient iteration scheme, which ensures stable convergence provided that appropriate parameter values are given. In the ADM scheme, PPF-TVM applies the pseudo-polar fast Fourier transform and its adjoint to iterate back and forth between the image and frequency domains. Thus, there is no interpolation in the Fourier domain, which makes the algorithm both fast and accurate. PPF-TVM is particularly useful for limited angle reconstruction in LCT and it appears to be robust against artifacts. The PPF-TVM algorithm was tested with the FORBILD head phantom and real data in comparisons with state-of-the-art algorithms. Simulation studies and real data verification suggest that PPF-TVM can reconstruct higher accuracy images with lower time consumption.

  7. A Comparative Study for Orthogonal Subspace Projection and Constrained Energy Minimization

    National Research Council Canada - National Science Library

    Du, Qian; Ren, Hsuan; Chang, Chein-I

    2003-01-01

    Two approaches are compared: orthogonal subspace projection (OSP) and constrained energy minimization (CEM). It is shown that they are closely related and essentially equivalent provided that the noise is white with large SNR.

  8. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  9. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers.

  10. Salt-and-pepper noise removal using modified mean filter and total variation minimization

    Science.gov (United States)

    Aghajarian, Mickael; McInroy, John E.; Wright, Cameron H. G.

    2018-01-01

    The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images that are corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or maximum intensity, so the proposed method restores the image by processing the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (less than or equal to 50% noise density), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixel values are replaced by a weighted average of the MMF and the total variation of the corrupted pixels, which is minimized using convex optimization. Two fuzzy systems are used to determine the averaging weights. To evaluate the performance of the algorithm, several test images with different noise levels are restored, and the results are quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining the edges and fine details of the original image.
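
    The modified mean filter itself is not spelled out in the abstract, but its flavor can be sketched: detect the extreme-valued pixels and replace each by the mean of its non-noisy neighbors. The fuzzy weighting and TV stages are omitted here, and everything below is a simplified stand-in rather than the authors' MMF.

```python
import numpy as np

def restore_extremes(f, lo=0, hi=255):
    """Replace pixels at the extreme intensities by the mean of their
    non-noisy 3x3 neighbors -- a simple stand-in for the paper's
    modified mean filter."""
    mask = (f == lo) | (f == hi)                  # salt-and-pepper candidates
    fp = np.pad(f.astype(float), 1, mode="edge")
    cp = np.pad((~mask).astype(float), 1, mode="edge")
    s = np.zeros(f.shape); c = np.zeros(f.shape)
    h, w = f.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            s += fp[1+di:1+di+h, 1+dj:1+dj+w] * cp[1+di:1+di+h, 1+dj:1+dj+w]
            c += cp[1+di:1+di+h, 1+dj:1+dj+w]
    out = f.astype(float)
    fix = mask & (c > 0)                          # need at least one clean neighbor
    out[fix] = s[fix] / c[fix]
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(40, 215, 64), (64, 1))   # smooth test ramp
f = clean.astype(int)
hit = rng.random(f.shape) < 0.4                      # 40% noise density
f[hit] = rng.choice([0, 255], size=hit.sum())
restored = restore_extremes(f)
print("MAE noisy:", np.abs(f - clean).mean(),
      " restored:", np.abs(restored - clean).mean())
```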

  11. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    Science.gov (United States)

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-07

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) images can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
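
    The exponential weighting described above can be written down directly. The sketch below evaluates an AwTV with weights $w = \exp(-(\Delta u/\delta)^2)$ on each forward difference, so strong edges receive small weights and are penalized less than noise; the exact weight placement follows our reading of the abstract, and δ is a tuning assumption.

```python
import numpy as np

def awtv(u, delta=0.05):
    """Adaptive-weighted TV: differences across strong edges get small
    exponential weights, so edges are penalized less than noise."""
    ux = np.diff(u, axis=0, append=u[-1:, :])
    uy = np.diff(u, axis=1, append=u[:, -1:])
    wx = np.exp(-(ux / delta) ** 2)
    wy = np.exp(-(uy / delta) ** 2)
    return np.sum(np.sqrt(wx * ux**2 + wy * uy**2))

u = np.zeros((32, 32)); u[:, 16:] = 1.0               # a single sharp edge
print("TV  :", np.sum(np.abs(np.diff(u, axis=1))))    # ~32: edge fully penalized
print("AwTV:", awtv(u))                               # ~0: edge almost free
```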

  12. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
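
    Chambolle's projection algorithm is short enough to state in full. The NumPy sketch below solves the grayscale ROF problem min_u ||u - f||^2/(2λ) + TV(u) by the fixed-point iteration on the dual field; the step τ ≤ 1/8 guarantees convergence, and the demo parameter values are illustrative.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_denoise(f, lam=0.25, tau=0.125, n_iter=200):
    """Chambolle's fixed-point iteration on the dual field p; the denoised
    image is u = f - lam * div(p)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        mag = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * mag)
        py = (py + tau * gy) / (1.0 + tau * mag)
    return f - lam * div(px, py)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
out = chambolle_denoise(noisy)
print("rmse noisy:", np.sqrt(np.mean((noisy - clean) ** 2)),
      " denoised:", np.sqrt(np.mean((out - clean) ** 2)))
```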

  13. Omnigradient Based Total Variation Minimization for Enhanced Defocus Deblurring of Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Yongle Li

    2014-01-01

    We propose a new method of image restoration for catadioptric defocus blur using omni-total-variation (Omni-TV) minimization based on the omnigradient. Catadioptric omnidirectional imaging systems usually consist of conventional cameras and curved mirrors for capturing a 360° field of view. The problem of catadioptric omnidirectional imaging defocus blur, which is caused by the lens aperture and mirror curvature, becomes more severe when high resolution sensors and large apertures are used. In an omnidirectional image, two points near each other may not be close to one another in the 3D scene, so traditional gradient computation cannot be directly applied to omnidirectional image processing. Thus, an omnigradient computing method combined with the characteristics of catadioptric omnidirectional imaging is proposed. Following this, Omni-TV minimization is used as the constraint for deconvolution regularization, leading to the restoration of defocus blur in an omnidirectional image to obtain fully sharp omnidirectional images. The proposed method is important for improving catadioptric omnidirectional imaging quality and promoting applications in related fields like omnidirectional video and image processing.

  14. Limited data tomographic image reconstruction via dual formulation of total variation minimization

    Science.gov (United States)

    Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong

    2011-03-01

    X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows the visualization of the cross-sectional information of the breast. The DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on the statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, which results in fast and robust estimation for low dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our findings.

  15. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  16. Tuber size variation and organ preformation constrain growth responses of a spring geophyte.

    Science.gov (United States)

    Werger, Marinus J A; Huber, Heidrun

    2006-03-01

    Functional responses to environmental variation depend not only on the genetic potential of a species to express different trait values, but can also be limited by characteristics such as the timing of organ (pre-)formation, aboveground longevity or the presence of a storage organ. In this experiment we tested to what degree variation in tuber size and organ preformation constrain the responsiveness to environmental quality, and whether responsiveness is modified by the availability of stored resources, by exposing the spring geophyte Bunium bulbocastanum to different light and nutrient regimes. Growth and biomass partitioning were affected by initial tuber size and resource availability. On average, tuber weight amounted to 60%, but never less than 30%, of the total plant biomass. Initial tuber size, considered an estimate of the total carbon pool available at the onset of treatments, affected plant growth and reproduction throughout the experiment but had little effect on the responsiveness of plants to the treatments. The responsiveness was partly constrained by organ preformation: in the second year, variation of leaf number was considerably larger than in the first year of the treatments. The results indicate that a spring geophyte with organ preformation has only limited possibilities to respond to short-term fluctuations of the environment, as all leaves and the inflorescence are preformed in the previous growing season, and resources stored in tubers are predominantly used for survival during dormancy rather than invested in plastic adjustments to environmental quality. Such spring geophytes have only limited possibilities to buffer environmental variation. This explains their restriction to habitats characterized by predictable changes of the environmental conditions.

  17. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo; Schönlieb, Carola-Bibiane

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces.

  18. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\prime/p$ and the level $l$.

  19. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

    In this article we introduce a novel method for image denoising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses the fractional gradient. The minimization is performed on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with a couple of PDE-based methods, where we get significantly better denoising results, especially in oscillatory regions.

  20. Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method

    Directory of Open Access Journals (Sweden)

    Andreas Langer

    2018-01-01

    In this paper, we investigate the usefulness of adding a box-constraint to the minimization of functionals consisting of a data-fidelity term and a total variation regularization term. In particular, we show that in certain applications an additional box-constraint does not affect the solution at all, i.e., the solution is the same whether a box-constraint is used or not. For applications where a box-constraint may have influence on the solution, on the other hand, we investigate how much it affects the quality of the restoration, especially when the regularization parameter, which weights the importance of the data term and the regularizer, is chosen suitably. In particular, for such applications, we consider the case of a squared $L^2$ data-fidelity term. For computing a minimizer of the respective box-constrained optimization problems, a primal-dual semi-smooth Newton method is presented, which guarantees superlinear convergence.

  1. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    Science.gov (United States)

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.

  2. A constrained variational calculation for beta-stable matter

    International Nuclear Information System (INIS)

    Howes, C.; Bishop, R.F.; Irvine, J.M.

    1978-01-01

    A method of lowest-order constrained variation previously applied by the authors to asymmetric nuclear matter is extended to include electrons and muons, making the nucleon fluid electrically neutral and stable against beta decay. The equilibrium composition of a nucleon fluid is calculated as a function of baryon number density, and an equation of state for beta-stable matter is deduced for the Reid soft-core interaction.

  3. A Volume Constrained Variational Problem with Lower-Order Terms

    International Nuclear Information System (INIS)

    Morini, M.; Rieger, M.O.

    2003-01-01

    We study a one-dimensional variational problem with two or more level set constraints. The existence of global and local minimizers turns out to be dependent on the regularity of the energy density. A complete characterization of local minimizers and the underlying energy landscape is provided. The Γ-limit when the phases exhaust the whole domain is computed.

  4. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    Science.gov (United States)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment projecting the BP model onto the intersection of the total variation norm and box constraints demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and invert the complex salt velocity layer by layer.
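
    For readers unfamiliar with the primal-dual hybrid gradient (PDHG) method underlying the paper's adaptive variant, here is the basic, non-adaptive PDHG iteration on a toy TV-denoising problem (not the seismic workflow, and without the TV-ball projection or adaptivity); the step sizes satisfy the standard condition σ τ ||∇||² < 1.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def pdhg_tv(f, lam=0.2, n_iter=300):
    """Basic primal-dual hybrid gradient for min_u lam*TV(u) + 0.5||u - f||^2."""
    sigma = tau = 0.99 / np.sqrt(8.0)          # ||grad||^2 <= 8 in 2D
    u = f.copy(); u_bar = f.copy()
    qx = np.zeros_like(f); qy = np.zeros_like(f)
    for _ in range(n_iter):
        # dual ascent + projection onto the ball |q| <= lam (prox of F*)
        gx, gy = grad(u_bar)
        qx += sigma * gx; qy += sigma * gy
        scale = np.maximum(1.0, np.sqrt(qx**2 + qy**2) / lam)
        qx /= scale; qy /= scale
        # primal descent: prox of the quadratic data term
        u_old = u
        u = (u + tau * div(qx, qy) + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u - u_old                # over-relaxation
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print("rmse:", np.sqrt(np.mean((pdhg_tv(noisy) - clean) ** 2)))
```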

  5. A Total Variation Model Based on the Strictly Convex Modification for Image Denoising

    Directory of Open Access Journals (Sweden)

    Boying Wu

    2014-01-01

    We propose a strictly convex functional in which the regular term consists of a total variation term and an adaptive logarithm-based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system are also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made with the more classical methods of the traditional total variation (TV), the Perona-Malik (PM), and the more recent D-α-PM method. A further distinction from the other methods is that the manually tuned parameters of the proposed algorithm are reduced to essentially one.

  6. Variational method for the minimization of entropy generation in solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Smit, Sjoerd; Kessels, W. M. M., E-mail: w.m.m.kessels@tue.nl [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2015-04-07

    In this work, a method is presented to extend traditional solar cell simulation tools to make it possible to calculate the most efficient design of practical solar cells. The method is based on the theory of nonequilibrium thermodynamics, which is used to derive an expression for the local entropy generation rate in the solar cell, making it possible to quantify all free energy losses on the same scale. The framework of non-equilibrium thermodynamics can therefore be combined with the calculus of variations and existing solar cell models to minimize the total entropy generation rate in the cell to find the most optimal design. The variational method is illustrated by applying it to a homojunction solar cell. The optimization results in a set of differential algebraic equations, which determine the optimal shape of the doping profile for given recombination and transport models.

  7. On the Support of Minimizers of Causal Variational Principles

    Science.gov (United States)

    Finster, Felix; Schiefeneder, Daniela

    2013-11-01

    A class of causal variational principles on a compact manifold is introduced and analyzed both numerically and analytically. It is proved under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed and explicit analysis of the minimizers. On the sphere, we get a connection to packing problems and the Tammes distribution. Moreover, the minimal action is estimated from above and below.

  8. Minimizing total tardiness in a software developing company

    Directory of Open Access Journals (Sweden)

    Ícaro Ludwig

    2013-03-01

    Small companies in service sectors, such as software developers, usually rely on manual task scheduling. Manual scheduling yields satisfactory results for small task lists, but leads to managerial difficulties as a growing number of tasks increases task delays. This paper aims at using scheduling tools to minimize such delays. For that it proposes two heuristics for task scheduling based on the following steps: (i) define an initial order for tasks, (ii) distribute each task to development teams, and (iii) schedule the tasks in each development team so as to minimize total tardiness. The proposed approach reduced the total tardiness on simulated and real data, simplified the process of scheduling and provided better tracking of the development process.
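
    The three steps translate almost line for line into code. The sketch below uses earliest-due-date ordering and greedy assignment to the least-loaded team as plausible stand-ins, since the abstract does not specify the heuristics' internals.

```python
# Each task: (name, processing_time, due_date)
tasks = [("a", 4, 6), ("b", 2, 4), ("c", 6, 14), ("d", 3, 5), ("e", 5, 9)]
n_teams = 2

# (i) initial order: earliest due date first (an assumed, common rule)
ordered = sorted(tasks, key=lambda t: t[2])

# (ii) distribute: each task goes to the team that frees up first
teams = [[] for _ in range(n_teams)]
loads = [0] * n_teams
for task in ordered:
    i = loads.index(min(loads))
    teams[i].append(task)
    loads[i] += task[1]

# (iii) schedule within each team and accumulate total tardiness
total_tardiness = 0
for team in teams:
    t = 0
    for name, proc, due in team:
        t += proc
        total_tardiness += max(0, t - due)

print("assignment:", teams)
print("total tardiness:", total_tardiness)
```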

  9. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit to the strength of the symmetron coupling to gravity ($\log \beta^2 < -0.9$) when this is the only free parameter, and unable to constrain the model when the symmetry-breaking scale factor $a_{SSB}$ is also free to vary.

  10. Early failure mechanisms of constrained tripolar acetabular sockets used in revision total hip arthroplasty.

    Science.gov (United States)

    Cooke, Christopher C; Hozack, William; Lavernia, Carlos; Sharkey, Peter; Shastri, Shani; Rothman, Richard H

    2003-10-01

    Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), Girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method of treatment during revision total hip arthroplasty but should be used in select cases only.

  11. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [KARMANOS CANCER INSTIT.]; Duric, Neb [KARMANOS CANCER INSTIT.]

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate the applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
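
    For contrast with the TV solvers sketched under later records, Tikhonov-regularized least squares has a closed-form solution via the normal equations; the smoothness penalty is what blurs the sharp edges the abstract mentions. A minimal sketch on synthetic data (the operator, model, and noise level are made up):

```python
import numpy as np

# Tikhonov-regularized least squares: x = argmin ||Ax - b||^2 + lam ||Lx||^2,
# solved in closed form via the normal equations. A, b and lam are synthetic;
# L is a first-difference operator penalizing roughness (smooth solutions,
# blurred edges -- the behavior the abstract contrasts with TV).
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(80, n))                       # stand-in forward operator
x_true = np.repeat([0.0, 1.0, 0.3], [20, 15, 15])  # piecewise-constant model
b = A @ x_true + 0.05 * rng.normal(size=80)

L = np.diff(np.eye(n), axis=0)                     # first differences
lam = 5.0
x_tik = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
print(np.linalg.norm(x_tik - x_true))              # reconstruction error
```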

  12. Constrained minimization in C++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of possible ways was used. The widely known program FUMILI was reimplemented in the C++ language. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany)
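
    The slack-variable and quadratic-penalty conversion described above can be sketched in a few lines. This is a toy illustration, not the FUMILI/C++ code: the objective, the constraint function φ, and the bound a are hypothetical, and scipy's bound handling enforces the simple inequality t ≥ a.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the slack-variable + quadratic-penalty trick described above
# (toy problem): enforce phi(theta) >= a by introducing a slack t with
# phi(theta) = t and t >= a, then penalizing the equality quadratically.

def phi(theta):                      # hypothetical constraint function
    return theta[0] + theta[1]

a = 1.0

def objective(theta):                # hypothetical fit objective
    return (theta[0] - 0.2)**2 + (theta[1] - 0.1)**2

def penalized(z, mu):
    theta, t = z[:2], z[2]
    return objective(theta) + mu * (phi(theta) - t)**2   # quadratic penalty on phi = t

z0, mu = np.array([0.0, 0.0, a]), 1.0
for _ in range(6):                   # increase the penalty weight gradually
    res = minimize(penalized, z0, args=(mu,),
                   bounds=[(None, None), (None, None), (a, None)])  # bound t >= a
    z0, mu = res.x, mu * 10
print(z0[:2], phi(z0[:2]))           # converges toward phi(theta) = 1
```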

  13. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.

  14. Minimizers with discontinuous velocities for the electromagnetic variational method

    International Nuclear Information System (INIS)

    De Luca, Jayme

    2010-01-01

    The electromagnetic two-body problem has neutral differential delay equations of motion that, for generic boundary data, can have solutions with discontinuous derivatives. If one wants to use these neutral differential delay equations with arbitrary boundary data, solutions with discontinuous derivatives must be expected and allowed. Surprisingly, Wheeler-Feynman electrodynamics has a boundary value variational method for which minimizer trajectories with discontinuous derivatives are also expected, as we show here. The variational method defines continuous trajectories with piecewise defined velocities and accelerations, and electromagnetic fields defined by the Euler-Lagrange equations on trajectory points. Here we use the piecewise defined minimizers with the Liénard-Wiechert formulas to define generalized electromagnetic fields almost everywhere (except on sets of points of zero measure where the advanced/retarded velocities and/or accelerations are discontinuous). Along with this generalization we formulate the generalized absorber hypothesis that the far fields vanish asymptotically almost everywhere, and show that localized orbits with far fields vanishing almost everywhere must have discontinuous velocities on sewing chains of breaking points. We give the general solution for localized orbits with vanishing far fields by solving a (linear) neutral differential delay equation for these far fields. We discuss the physics of orbits with discontinuous derivatives, stressing the differences to the variational methods of classical mechanics and the existence of a spinorial four-current associated with the generalized variational electrodynamics.

  15. Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities

    International Nuclear Information System (INIS)

    Flemming, Jens; Hofmann, Bernd

    2011-01-01

    In this paper, we highlight the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions already known in Hilbert spaces to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances.

  16. A constrained Hartree-Fock-Bogoliubov equation derived from the double variational method

    International Nuclear Information System (INIS)

    Onishi, Naoki; Horibata, Takatoshi.

    1980-01-01

    The double variational method is applied to the intrinsic state of the generalized BCS wave function. A constrained Hartree-Fock-Bogoliubov equation is derived explicitly in the form of an eigenvalue equation. A method of obtaining approximate overlap and energy overlap integrals is proposed. This will help development of numerical calculations of the angular momentum projection method, especially for general intrinsic wave functions without any symmetry restrictions. (author)

  17. Design of a minimally constraining, passively supported gait training exoskeleton: ALEX II.

    Science.gov (United States)

    Winfree, Kyle N; Stegall, Paul; Agrawal, Sunil K

    2011-01-01

    This paper discusses the design of a new, minimally constraining, passively supported gait training exoskeleton known as ALEX II. This device builds on the success and extends the features of the ALEX I device developed at the University of Delaware. Both ALEX (Active Leg EXoskeleton) devices have been designed to supply a controllable torque to a subject's hip and knee joint. The current control strategy makes use of an assist-as-needed algorithm. Following a brief review of previous work motivating this redesign, we discuss the key mechanical features of the new ALEX device. A short investigation was conducted to evaluate the effectiveness of the control strategy and impact of the exoskeleton on the gait of six healthy subjects. This paper concludes with a comparison between the subjects' gait both in and out of the exoskeleton. © 2011 IEEE

  18. Total Variation Depth for Functional Data

    KAUST Repository

    Huang, Huang

    2016-11-15

    There has been extensive work on data depth-based methods for robust multivariate data analysis. Recent developments have moved to infinite-dimensional objects such as functional data. In this work, we propose a new notion of depth, the total variation depth, for functional data. As a measure of depth, its properties are studied theoretically, and the associated outlier detection performance is investigated through simulations. Compared to magnitude outliers, shape outliers are often masked among the rest of samples and harder to identify. We show that the proposed total variation depth has many desirable features and is well suited for outlier detection. In particular, we propose to decompose the total variation depth into two components that are associated with shape and magnitude outlyingness, respectively. This decomposition allows us to develop an effective procedure for outlier detection and useful visualization tools, while naturally accounting for the correlation in functional data. Finally, the proposed methodology is demonstrated using real datasets of curves, images, and video frames.

  19. 3D first-arrival traveltime tomography with modified total variation regularization

    Science.gov (United States)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.

  20. Dynamic re-weighted total variation technique and statistic iterative reconstruction method for x-ray CT metal artifact reduction

    Science.gov (United States)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which affect the doctor's diagnosis of disease. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with the statistic iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides, DRWTV can suppress artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset, which are a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, which are SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.

  1. Fast magnetic resonance imaging based on high degree total variation

    Science.gov (United States)

    Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng

    2018-04-01

    In order to eliminate the artifacts and the "staircase effect" of total variation in compressive sensing MRI, a high degree total variation model is proposed for dynamic MRI reconstruction. The high degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iteratively weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm for MRI. The results show that the high degree total variation method has a better reconstruction effect than total variation and total generalized variation, obtaining higher reconstruction SNR and better structural similarity.

  2. Total variation-based neutron computed tomography

    Science.gov (United States)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
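
    The split Bregman structure the abstract exploits is easiest to see in its simplest instance, 1D TV denoising. A minimal sketch follows (toy data; the CT problem replaces the identity data term with the projection operator and, as the abstract notes, solves the linear subproblem only very inexactly):

```python
import numpy as np

# Split Bregman for 1D TV denoising: min_u |Du|_1 + (mu/2)||u - f||^2.
# Splitting d = Du decouples the shrinkage step from a linear solve.
rng = np.random.default_rng(1)
n = 200
u_true = np.repeat([0.0, 1.0, 0.4], [70, 60, 70])
f = u_true + 0.1 * rng.normal(size=n)

D = np.diff(np.eye(n), axis=0)            # forward-difference operator
mu, lam = 10.0, 5.0
u = f.copy()
d = np.zeros(n - 1)
b = np.zeros(n - 1)
Alin = mu * np.eye(n) + lam * D.T @ D     # constant system matrix
for _ in range(50):
    u = np.linalg.solve(Alin, mu * f + lam * D.T @ (d - b))           # u-subproblem
    Du = D @ u
    d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - 1.0 / lam, 0.0) # shrinkage
    b = b + Du - d                         # Bregman variable update
print(np.linalg.norm(u - u_true))
```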

  3. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon

    2014-01-19

    The objective of this paper is to discuss existence, uniqueness and regularity issues of minimizers of a one-dimensional calculus of variations problem in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  5. Asymptotic Behaviour of Total Generalised Variation

    KAUST Repository

    Papafitsoros, Konstantinos; Valkonen, Tuomo

    2015-01-01

    © Springer International Publishing Switzerland 2015. The recently introduced second order total generalised variation functional TGV²_{β,α} has been a successful regulariser for image processing purposes. Its definition involves two positive parameters α and β whose values determine the amount and the quality of the regularisation. In this paper we report on the behaviour of TGV²_{β,α} in the cases where the parameters α, β, as well as their ratio β/α, become very large or very small. Among others, we prove that for sufficiently symmetric two dimensional data and large ratio β/α, TGV²_{β,α} regularisation coincides with total variation (TV) regularisation.

  6. Strike type variation among Tarahumara Indians in minimal sandals versus conventional running shoes

    Directory of Open Access Journals (Sweden)

    Daniel E. Lieberman

    2014-06-01

    Conclusion: These data reinforce earlier studies showing that there is variation in foot strike patterns among minimally shod runners, but also support the hypothesis that foot stiffness and important aspects of running form, including foot strike, differ between runners who grow up using minimal versus modern, conventional footwear.

  7. Triple Hierarchical Variational Inequalities with Constraints of Mixed Equilibria, Variational Inequalities, Convex Minimization, and Hierarchical Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    We introduce and analyze a hybrid iterative algorithm by virtue of Korpelevich's extragradient method, the viscosity approximation method, the hybrid steepest-descent method, and the averaged mapping approach to the gradient-projection algorithm. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inequality problems (VIPs), the solution set of a general system of variational inequalities (GSVI), and the set of minimizers of a convex minimization problem (CMP), which is precisely the unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solve a hierarchical fixed point problem with constraints of finitely many GMEPs, finitely many VIPs, the GSVI, and the CMP. The results obtained in this paper improve and extend the corresponding results announced by many others.

  8. On the uniqueness of minimizers for a class of variational problems with Polyconvex integrand

    KAUST Repository

    Awi, Romeo

    2017-02-05

    We prove existence and uniqueness of minimizers for a family of energy functionals that arises in Elasticity and involves polyconvex integrands over a certain subset of displacement maps. This work extends previous results by Awi and Gangbo to a larger class of integrands. First, we study these variational problems over displacements for which the determinant is positive. Second, we consider a limit case in which the functionals are degenerate. In that case, the set of admissible displacements reduces to that of incompressible displacements which are measure preserving maps. Finally, we establish that the minimizer over the set of incompressible maps may be obtained as a limit of minimizers corresponding to a sequence of minimization problems over general displacements provided we have enough regularity on the dual problems. We point out that these results defy the direct methods of the calculus of variations.

  9. Hybrid Iterative Scheme for Triple Hierarchical Variational Inequalities with Mixed Equilibrium, Variational Inclusion, and Minimization Constraints

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    We introduce and analyze a hybrid iterative algorithm by combining Korpelevich's extragradient method, the hybrid steepest-descent method, and the averaged mapping approach to the gradient-projection algorithm. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of finitely many nonexpansive mappings, the solution set of a generalized mixed equilibrium problem (GMEP), the solution set of finitely many variational inclusions, and the solution set of a convex minimization problem (CMP), which is also the unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solving a hierarchical variational inequality problem with constraints of the GMEP, the CMP, and finitely many variational inclusions.

  10. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    International Nuclear Information System (INIS)

    Jin Zhao; Zhang Han-Ming; Yan Bin; Li Lei; Wang Lin-Yuan; Cai Ai-Long

    2016-01-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image space. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation.
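
    The back-and-forth iteration between Fourier and image space can be sketched with plain numpy, using a uniform FFT and a binary sampling mask as stand-ins for the NUFFT and the selective matrix (numpy has no NUFFT, and the phantom and sampling rate below are made up):

```python
import numpy as np

# Iterating between Fourier and image space: a uniform-FFT stand-in for the
# NUFFT, with a binary mask playing the role of the selective matrix.
rng = np.random.default_rng(2)
n = 64
img = np.zeros((n, n)); img[20:44, 24:40] = 1.0        # simple phantom
mask = rng.random((n, n)) < 0.3                        # keep ~30% of Fourier samples
y = mask * np.fft.fft2(img)                            # measured sparse data

x = np.zeros((n, n))
step = 0.9
for _ in range(100):
    resid = mask * np.fft.fft2(x) - y                  # Fourier-domain residual
    x = x - step * np.real(np.fft.ifft2(resid))        # adjoint step back to image space
    x = np.clip(x, 0, None)                            # simple nonnegativity prior
print(np.linalg.norm(x - img) / np.linalg.norm(img))   # relative error
```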

  11. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution, so the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
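
    The Vogel-Oman fixed point iteration ("lagged diffusivity") is compact enough to sketch for 1D TV denoising; β below is the usual smoothing parameter that makes the TV term differentiable, and the data are synthetic. The dynamic-MRI problem of the abstract uses the same iteration with a Fourier data term.

```python
import numpy as np

# Vogel-Oman fixed-point ("lagged diffusivity") iteration for 1D TV denoising:
# min_u (1/2)||u - f||^2 + lam * sum_i sqrt((Du)_i^2 + beta). Each sweep
# freezes the diffusivity weights and solves the linearized Euler-Lagrange
# equation (I + lam * D^T W D) u = f.
rng = np.random.default_rng(3)
n = 200
u_true = np.repeat([0.0, 1.0, 0.4], [70, 60, 70])
f = u_true + 0.1 * rng.normal(size=n)

D = np.diff(np.eye(n), axis=0)
lam, beta = 0.5, 1e-6
u = f.copy()
for _ in range(30):
    w = 1.0 / np.sqrt((D @ u) ** 2 + beta)       # lagged diffusivity weights
    A = np.eye(n) + lam * D.T @ (w[:, None] * D)
    u = np.linalg.solve(A, f)                    # linear solve per sweep
print(np.linalg.norm(u - u_true))
```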

  12. Lowest-order constrained variational method for simple many-fermion systems

    International Nuclear Information System (INIS)

    Alexandrov, I.; Moszkowski, S.A.; Wong, C.W.

    1975-01-01

    The authors study the potential energy of many-fermion systems calculated by the lowest-order constrained variational (LOCV) method of Pandharipande. Two simple two-body interactions are used. For a simple hard-core potential in a dilute Fermi gas, they find that the Huang-Yang exclusion correction can be used to determine a healing distance. The result is close to the older Pandharipande prescription for the healing distance. For a hard core plus attractive exponential potential, the LOCV result agrees closely with the lowest-order separation method of Moszkowski and Scott. They find that the LOCV result has a shallow minimum as a function of the healing distance at the Moszkowski-Scott separation distance. The significance of the absence of a Brueckner dispersion correction in the LOCV result is discussed. (Auth.)

  13. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.

  14. Fractional action-like variational problems in holonomic, non-holonomic and semi-holonomic constrained and dissipative dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    EL-Nabulsi, Ahmad Rami [Department of Nuclear and Energy Engineering, Cheju National University, Ara-dong 1, Jeju 690-756 (Korea, Republic of)], E-mail: nabulsiahmadrami@yahoo.fr

    2009-10-15

    We communicate through this work the fractional calculus of variations and its corresponding Euler-Lagrange equations in 1D constrained holonomic, non-holonomic, and semi-holonomic dissipative dynamical systems. The laws obtained are then extended to the 2D state space. Some interesting consequences are revealed.

  15. Variational Approach to the Orbital Stability of Standing Waves of the Gross-Pitaevskii Equation

    KAUST Repository

    Hadj Selem, Fouad; Hajaiej, Hichem; Markowich, Peter A.; Trabelsi, Saber

    2014-01-01

    This paper is concerned with the mathematical analysis of a mass-subcritical nonlinear Schrödinger equation arising from fiber optic applications. We show the existence and symmetry of minimizers of the associated constrained variational problem.

  16. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: conductivity changes along selected left and right vertical lines, plotted for the ground truth (GT) and for the TV and TGV reconstructions, together with reconstructions from the GREIT algorithm.

  17. Security-Constrained Unit Commitment in AC Microgrids Considering Stochastic Price-Based Demand Response and Renewable Generation

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad

    2018-01-01

    In this paper, a stochastic model for scheduling of AC security-constrained unit commitment associated with demand response (DR) actions is developed in an islanded residential microgrid. The proposed model maximizes the expected profit of the microgrid operator and minimizes the total customers...

  18. Analysis of the Spatial Variation of Network-Constrained Phenomena Represented by a Link Attribute Using a Hierarchical Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-02-01

    The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute, in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs) for leisure activities. Then, the methods named local indicators of network-constrained clusters (LINCS) are applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of the random variability resulting from small-sample estimation. The effects of spatial dependence and the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.

  19. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    Science.gov (United States)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and on the total variation and a Besov seminorm for the regularization term. To fully exploit the Besov seminorm, an implementation using the curvelet transform and the L¹ norm enforcing sparsity is proposed. It allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To solve the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms, both visually and quantitatively, the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.
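
    The Chambolle-Pock algorithm mentioned above can be sketched on the simplest TV problem, 1D ROF denoising; the EPRI problem adds the tomographic forward operator and the curvelet term, so this only shows the primal-dual skeleton (toy data, made-up parameters):

```python
import numpy as np

# Chambolle-Pock primal-dual iteration for the ROF model
# min_u lam*TV(u) + (1/2)||u - f||^2, in 1D for brevity. The dual variable p
# lives on the gradient; convergence requires sigma*tau*||D||^2 < 1
# (||D||^2 <= 4 in 1D).
rng = np.random.default_rng(4)
n = 200
u_true = np.repeat([0.0, 1.0, 0.4], [70, 60, 70])
f = u_true + 0.1 * rng.normal(size=n)

D = np.diff(np.eye(n), axis=0)
lam = 0.3
sigma = tau = 0.45
u = f.copy(); u_bar = u.copy(); p = np.zeros(n - 1)
for _ in range(300):
    p = p + sigma * (D @ u_bar)
    p = p / np.maximum(1.0, np.abs(p) / lam)            # project onto |p_i| <= lam
    u_new = (u - tau * (D.T @ p) + tau * f) / (1.0 + tau)  # proximal data step
    u_bar = 2 * u_new - u                               # over-relaxation
    u = u_new
print(np.linalg.norm(u - u_true))
```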

  20. The nonholonomic variational principle

    Energy Technology Data Exchange (ETDEWEB)

    Krupkova, Olga [Department of Algebra and Geometry, Faculty of Science, Palacky University, Tomkova 40, 779 00 Olomouc (Czech Republic); Department of Mathematics, La Trobe University, Bundoora, Victoria 3086 (Australia)], E-mail: krupkova@inf.upol.cz

    2009-05-08

    A variational principle for mechanical systems and fields subject to nonholonomic constraints is found, providing Chetaev-reduced equations as equations for extremals. Investigating nonholonomic variations of the Chetaev type and their properties, we develop foundations of the calculus of variations on constraint manifolds, modelled as fibred submanifolds in jet bundles. This setting is appropriate to study general first-order 'nonlinear nonintegrable constraints' that locally are given by a system of first-order ordinary or partial differential equations. We obtain an invariant constrained first variation formula and constrained Euler-Lagrange equations both in intrinsic and coordinate forms, and show that the equations are the same as Chetaev equations 'without Lagrange multipliers', introduced recently by other methods. We pay attention to two possible settings: first, when the constrained system arises from an unconstrained Lagrangian system defined in a neighbourhood of the constraint, and second, more generally, when an 'internal' constrained system on the constraint manifold is given. In the latter case a corresponding unconstrained system need not be a Lagrangian, nor even exist. We also study in detail an important particular case: nonholonomic constraints that can be alternatively modelled by means of (co)distributions in the total space of the fibred manifold; in nonholonomic mechanics this happens whenever constraints affine in velocities are considered. It becomes clear that (and why) if the distribution is completely integrable (= the constraints are semiholonomic), the principle of virtual displacements holds and can be used to obtain the constrained first variational formula by a more or less standard procedure, traditionally used when unconstrained or holonomic systems are concerned. If, however, the constraint is nonintegrable, no significant simplifications are available. Among others, some properties of nonholonomic...

  1. The trivector approach for minimally invasive total knee arthroplasty: a technical note.

    Science.gov (United States)

    Benazzo, Francesco; Rossi, Stefano Marco Paolo

    2012-09-01

    One of the main criticisms of minimally invasive approaches in total knee arthroplasty has been their poor adaptability in cases of major deformity or stiffness of the knee joint. When they are used in such cases, excessive soft-tissue tension is needed to provide appropriate joint exposure. Here, we describe the "mini trivector approach," which has become our standard approach for total knee replacement because it permits us to enlarge the indication for minimally or less invasive total knee replacement to many knees where quad sparing, a subvastus approach, or a mini quad or mini midvastus snip may not be sufficient to achieve correct exposure. It consists of a limited double snip of the VMO and the quadriceps tendon that reduces tension on the extensor mechanism and allows easier verticalization of the patella as well as good joint exposure.

  2. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
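
    A loose sketch of the idea, not the patented method: when minimizing ||Ax − b||² over many "rays" (rows of A), the line-search error along the current conjugate direction is evaluated on a random subset of rows only. All sizes and data below are made up.

```python
import numpy as np

# Conjugate-gradient-style descent where the line-search error is computed
# on a random subset of rays (rows of A) -- a rough illustration of using an
# approximate error in a constrained conjugate gradient minimization.
rng = np.random.default_rng(5)
m, n = 2000, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true

x = np.zeros(n)
g = A.T @ (A @ x - b)                 # full gradient (could itself be subsampled)
d = -g
for _ in range(40):
    rows = rng.choice(m, size=200, replace=False)    # subset of rays
    Ad, r = A[rows] @ d, A[rows] @ x - b[rows]
    alpha = -(r @ Ad) / (Ad @ Ad)     # exact minimizer of the *approximate* error
    x = x + alpha * d
    g_new = A.T @ (A @ x - b)
    beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves conjugacy factor
    d = -g_new + beta * d
    g = g_new
print(np.linalg.norm(x - x_true))
```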

  3. Ground-state densities from the Rayleigh-Ritz variation principle and from density-functional theory.

    Science.gov (United States)

    Kvaal, Simen; Helgaker, Trygve

    2015-11-14

    The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh-Ritz variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg-Kohn variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the Hohenberg-Kohn variation principle, necessary and sufficient conditions on such functionals are established to ensure that the Rayleigh-Ritz ground-state densities and the Hohenberg-Kohn ground-state densities are identical. We apply the results to molecular systems in the Born-Oppenheimer approximation. For any given potential v ∈ L^{3/2}(ℝ³) + L^∞(ℝ³), we establish a one-to-one correspondence between the mixed ground-state densities of the Rayleigh-Ritz variation principle and the mixed ground-state densities of the Hohenberg-Kohn variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the Rayleigh-Ritz variation principle and the pure ground-state densities obtained using the Hohenberg-Kohn variation principle with the Levy-Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.
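
    Schematically, and in the notation of the abstract, the two variation principles being compared are the Rayleigh-Ritz minimization over wave functions and the Hohenberg-Kohn minimization over densities with an admissible functional F:

```latex
% Schematic statement of the two variation principles compared in the abstract:
% Rayleigh-Ritz minimizes over normalized wave functions psi, while
% Hohenberg-Kohn minimizes over densities rho with an admissible functional F.
\begin{align}
  E_0(v) &= \inf_{\psi} \,\langle \psi \,|\, H(v) \,|\, \psi \rangle, \\
  E_0(v) &= \inf_{\rho} \Big( F[\rho] + \int v(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r} \Big).
\end{align}
```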

  4. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
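
    The Dix conversion used in the second experiment is worth writing out; the traveltimes and RMS velocities below are made up, not from the paper.

```python
import numpy as np

# Dix formula: interval velocity of the layer between two-way times t1 < t2
# with RMS velocities v1, v2:  v_int = sqrt((t2*v2^2 - t1*v1^2) / (t2 - t1)).
t = np.array([0.5, 1.0, 1.6, 2.3])                   # two-way traveltimes (s)
v_rms = np.array([1500.0, 1800.0, 2100.0, 2400.0])   # RMS velocities (m/s)

num = np.diff(t * v_rms**2)
v_int = np.sqrt(num / np.diff(t))
print(v_int)   # interval velocity of each layer between consecutive samples
```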

  5. Biological variation of total prostate-specific antigen

    DEFF Research Database (Denmark)

    Söletormos, Georg; Semjonow, Axel; Sibley, Paul E C

    2005-01-01

    BACKGROUND: The objectives of this study were to determine whether a single result for total prostate-specific antigen (tPSA) can be used confidently to guide the need for prostate biopsy and by how much serial tPSA measurements must differ to be significant. tPSA measurements include both analytical and biological components of variation. The European Group on Tumor Markers conducted a literature survey to determine both the magnitude and impact of biological variation on single, the mean of replicate, and serial tPSA measurements. METHODS: The survey yielded 27 studies addressing the topic, and estimates for the biological variation of tPSA could be derived from 12 of these studies. RESULTS: The mean biological variation was 20% in the concentration range 0.1-20 microg/L for men over 50 years. The biological variation means that the one-sided 95% confidence interval (CI) of the dispersion...

  6. Superresolution Interferometric Imaging with Sparse Modeling Using Total Squared Variation: Application to Imaging the Black Hole Shadow

    Science.gov (United States)

    Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki

    2018-05-01

    We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of the proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and that the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the masses of the supermassive black holes in Sgr A* and also another primary target, M87.

  7. Use of a constrained tripolar acetabular liner to treat intraoperative instability and postoperative dislocation after total hip arthroplasty: a review of our experience.

    Science.gov (United States)

    Callaghan, John J; O'Rourke, Michael R; Goetz, Devon D; Lewallen, David G; Johnston, Richard C; Capello, William N

    2004-12-01

    Constrained acetabular components have been used to treat certain cases of intraoperative instability and postoperative dislocation after total hip arthroplasty. We report our experience with a tripolar constrained component used in these situations since 1988. The outcomes of the cases where this component was used were analyzed for component failure, component loosening, and osteolysis. At an average 10-year followup, the component failure rate was 6% (6 of 101 hips in 5 patients), with failures among cases treated for intraoperative instability (2 cases) or postoperative dislocation (4 cases). For cases where the constrained liner was cemented into a fixed cementless acetabular shell, the failure rate was 7% (2 of 31 hips in 2 patients) at a 3.9-year average followup. Use of a constrained liner was not associated with an increased osteolysis or aseptic loosening rate. This tripolar constrained acetabular liner provided total hip arthroplasty construct stability in most cases in which it was used for intraoperative instability or postoperative dislocation.

  8. Evaluating terrestrial water storage variations from regionally constrained GRACE mascon data and hydrological models over Southern Africa – Preliminary results

    DEFF Research Database (Denmark)

    Krogh, Pernille Engelbredt; Andersen, Ole Baltazar; Michailovsky, Claire Irene B.

    2010-01-01

    In this paper we explore an experimental set of regionally constrained mascon blocks over Southern Africa, where a system of 1.25° × 1.5° and 1.5° × 1.5° blocks has been designed. The blocks are divided into hydrological regions based on drainage patterns of the largest river basins, and are constrained ... Malawi with water level from altimetry. Results show that weak constraints across regions, in addition to intra-regional constraints, are necessary to reach reasonable mass variations.

  9. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with an over-constrained bandwidth and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to be less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  10. Topology Optimization for Minimizing the Resonant Response of Plates with Constrained Layer Damping Treatment

    Directory of Open Access Journals (Sweden)

    Zhanpeng Fang

    2015-01-01

    A topology optimization method is proposed to minimize the resonant response of plates with constrained layer damping (CLD) treatment under specified broadband harmonic excitations. The topology optimization problem is formulated with the square of the displacement resonant response in the frequency domain at a specified point as the objective function. Two sensitivity analysis methods are investigated and discussed. The derivative of the modal damping ratio is not considered in the conventional sensitivity analysis method; an improved sensitivity analysis method considering this derivative is developed to improve the computational accuracy of the sensitivity. The evolutionary structural optimization (ESO) method is used to search for the optimal layout of CLD material on plates. Numerical examples and experimental results show that the optimal layout of CLD treatment on the plate from the proposed topology optimization, using either the conventional or the improved sensitivity analysis, can reduce the displacement resonant response. However, the optimization method using the improved sensitivity analysis produces a higher modal damping ratio and a smaller displacement resonant response than that using the conventional sensitivity analysis.

  11. Uniform discretizations: a quantization procedure for totally constrained systems including gravity

    Energy Technology Data Exchange (ETDEWEB)

    Campiglia, Miguel [Instituto de Fisica, Facultad de Ciencias, Igua 4225, esq. Mataojo, Montevideo (Uruguay); Di Bartolo, Cayetano [Departamento de Fisica, Universidad Simon BolIvar, Aptdo. 89000, Caracas 1080-A (Venezuela); Gambini, Rodolfo [Instituto de Fisica, Facultad de Ciencias, Igua 4225, esq. Mataojo, Montevideo (Uruguay); Pullin, Jorge [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States)

    2007-05-15

    We present a new method for the quantization of totally constrained systems, including general relativity. The method consists in constructing discretized theories that have a well defined and controlled continuum limit. The discrete theories are constraint-free and can be readily quantized. This provides a framework in which one can introduce a relational notion of time and that nevertheless approximates the theory of interest in a well defined fashion. The method is equivalent to the group averaging procedure for many systems where the latter makes sense and provides a generalization otherwise. In the continuum limit it can be shown to contain, under certain assumptions, the 'master constraint' of the 'Phoenix project'. It also provides a correspondence principle with the classical theory that does not require considering the semiclassical limit.

  12. An algorithm for total variation regularized photoacoustic imaging

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Görner, Torsten; Kunis, Stefan

    2014-01-01

    Recovery of image data from photoacoustic measurements asks for the inversion of the spherical mean value operator. In contrast to direct inversion methods for specific geometries, we consider a semismooth Newton scheme to solve a total variation regularized least squares problem. During the iteration, each matrix vector multiplication is realized in an efficient way using a recently proposed spectral discretization of the spherical mean value operator. All theoretical results are illustrated by numerical experiments.

  13. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work recently on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that the convergence is insensitive to the values of the regularization and reconstruction parameters.

  14. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
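
    A generic illustration of why exactness matters (not the authors' construction): with an ℓ1-type penalty a finite weight already recovers the constrained minimizer, whereas a quadratic penalty only approaches it as the weight grows.

```python
from scipy.optimize import minimize_scalar

# Generic illustration (not the authors' construction): minimize f(x) = x^2
# subject to x >= 1. The l1 ("exact") penalty recovers the constrained
# minimizer x* = 1 at a finite weight rho > 2; the quadratic penalty gives
# x = rho/(1 + rho), reaching x* = 1 only in the limit rho -> infinity.
for rho in [1.0, 4.0, 16.0]:
    l1 = minimize_scalar(lambda x: x**2 + rho * max(0.0, 1.0 - x)).x
    quad = minimize_scalar(lambda x: x**2 + rho * max(0.0, 1.0 - x)**2).x
    print(f"rho={rho:5.1f}  l1 penalty -> {l1:.4f}   quadratic penalty -> {quad:.4f}")
```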

  15. A variational proof of Thomson's theorem

    Energy Technology Data Exchange (ETDEWEB)

    Fiolhais, Miguel C.N., E-mail: miguel.fiolhais@cern.ch [Department of Physics, City College of the City University of New York, 160 Convent Avenue, New York, NY 10031 (United States); Department of Physics, New York City College of Technology, 300 Jay Street, Brooklyn, NY 11201 (United States); LIP, Department of Physics, University of Coimbra, 3004-516 Coimbra (Portugal); Essén, Hanno [Department of Mechanics, Royal Institute of Technology (KTH), Stockholm SE-10044 (Sweden); Gouveia, Tomé M. [Cavendish Laboratory, 19 JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom)

    2016-08-12

    Thomson's theorem of electrostatics, which states that the electric charge on a set of conductors distributes itself on the conductor surfaces to minimize the electrostatic energy, is reviewed in this letter. The proof of Thomson's theorem, based on a variational principle, is derived for a set of normal charged conductors, with and without the presence of external electric fields produced by fixed charge distributions. In this novel approach, the variations are performed on both the charge densities and the electric potentials, by means of a local Lagrange multiplier associated with Poisson's equation, constraining the two variables.

  16. Speckle Noise Reduction via Nonconvex High Total Variation Approach

    Directory of Open Access Journals (Sweden)

    Yulian Wu

    2015-01-01

    We address the problem of speckle noise removal. The classical total variation is extensively used in this field to solve this problem, but the method suffers from staircase-like artifacts and the loss of image details. In order to resolve these problems, a nonconvex total generalized variation (TGV) regularization is used to preserve both edges and details of the images. The TGV regularization, which is able to remove the staircase effect, has a strong theoretical guarantee by means of its high order smoothness. Our method combines the merits of both the TGV method and the nonconvex variational method and avoids their main drawbacks. Furthermore, we develop an efficient algorithm for solving the nonconvex TGV-based optimization problem. We experimentally demonstrate the excellent performance of the technique, both visually and quantitatively.

  17. Convex Minimization with Constraints of Systems of Variational Inequalities, Mixed Equilibrium, Variational Inequality, and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We introduce and analyze an iterative algorithm based on the hybrid shrinking projection method for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, subject to the constraints of several problems: finitely many generalized mixed equilibrium problems, finitely many variational inequalities, the general system of variational inequalities, and the fixed point problem of an asymptotically strict pseudocontractive mapping in the intermediate sense in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. We also propose another iterative algorithm based on the hybrid shrinking projection method for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.

  18. An Improved Variational Method for Hyperspectral Image Pansharpening with the Constraint of Spectral Difference Minimization

    Science.gov (United States)

    Huang, Z.; Chen, Q.; Shen, Y.; Chen, Q.; Liu, X.

    2017-09-01

    Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may lead to spectral distortion that noticeably affects the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of the spectral angle mapper, which means that for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. The gradient descent method is adopted to find the optimal solution of the modified energy function, from which the pansharpened image can be reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization is able to preserve the original spectral information well in HS images and to reduce spectral distortion effectively. Compared to the original variational method, our method performs better in both visual and quantitative evaluation, and achieves a good trade-off between spatial and spectral information.
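
    A heavily simplified sketch of this kind of energy descent, assuming a toy two-band "HS" cube and a synthetic "PAN" image: a data term ties the band mean to the panchromatic intensity, while a quadratic stand-in for the spectral fidelity term keeps band differences close to those of the original HS image. The functional, weights and sizes are illustrative, not the authors' exact model.

      import numpy as np

      rng = np.random.default_rng(1)
      hs = rng.random((2, 16, 16))           # toy upsampled HS image (2 bands)
      pan = rng.random((16, 16))             # toy high-resolution PAN image

      u = hs.copy()
      lam = 5.0                              # weight of the spectral term
      for _ in range(500):
          # data term: mean of fused bands should match the PAN intensity
          resid = u.mean(axis=0) - pan
          grad = np.broadcast_to(resid / u.shape[0], u.shape).copy()
          # spectral term: keep band differences close to those of the HS image
          diff = (u[0] - u[1]) - (hs[0] - hs[1])
          grad[0] += lam * diff
          grad[1] -= lam * diff
          u -= 0.1 * grad
      print(float(np.abs(u.mean(axis=0) - pan).mean()))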

  19. Global distribution of total ozone and lower stratospheric temperature variations

    Directory of Open Access Journals (Sweden)

    W. Steinbrecht

    2003-01-01

    Full Text Available This study gives an overview of interannual variations of total ozone and 50 hPa temperature. It is based on newer and longer records from the 1979 to 2001 Total Ozone Mapping Spectrometer (TOMS) and Solar Backscatter Ultraviolet (SBUV) instruments, and on US National Center for Environmental Prediction (NCEP) reanalyses. Multiple linear least squares regression is used to attribute variations to various natural and anthropogenic explanatory variables. Usually, maps of total ozone and 50 hPa temperature variations look very similar, reflecting a very close coupling between the two. As a rule of thumb, a 10 Dobson Unit (DU) change in total ozone corresponds to a 1 K change of 50 hPa temperature. Large variations come from the linear trend term, up to -30 DU or -1.5 K/decade; from terms related to polar vortex strength, up to 50 DU or 5 K (typical, minimum to maximum); from tropospheric meteorology, up to 30 DU or 3 K; and from the Quasi-Biennial Oscillation (QBO), up to 25 DU or 2.5 K. The 11-year solar cycle, up to 25 DU or 2.5 K, and El Niño/Southern Oscillation (ENSO), up to 10 DU or 1 K, contribute smaller variations. Stratospheric aerosol after the 1991 Pinatubo eruption led to warming of up to 3 K at low latitudes and to ozone depletion of up to 40 DU at high latitudes. Variations attributed to the QBO, polar vortex strength, and to a lesser degree ENSO, exhibit an inverse correlation between low latitudes and higher latitudes. Variations related to the solar cycle or 400 hPa temperature, however, have the same sign over most of the globe. Variations are usually zonally symmetric at low and mid-latitudes, but asymmetric at high latitudes. There, the position and strength of the stratospheric anti-cyclones over the Aleutians and south of Australia appear to vary with the phases of the solar cycle, QBO or ENSO.
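
    The attribution machinery here is ordinary multiple linear regression. The following toy sketch, with entirely synthetic proxy series, shows the pattern: build a design matrix of explanatory variables and read the fitted coefficients as DU-per-unit contributions (recovering a trend near -30 DU/decade by construction).

      import numpy as np

      rng = np.random.default_rng(42)
      n = 240                                   # 20 years of monthly data
      t = np.arange(n) / 12.0
      qbo = np.sin(2 * np.pi * t / 2.3)         # ~28-month oscillation proxy
      solar = np.sin(2 * np.pi * t / 11.0)      # 11-year solar cycle proxy
      ozone = 300 - 3.0 * t + 12 * qbo + 8 * solar + rng.normal(0, 5, n)

      X = np.column_stack([np.ones(n), t, qbo, solar])
      coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
      print("trend, DU/decade:", round(10 * coef[1], 1))   # close to -30 by construction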

  20. Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction

    International Nuclear Information System (INIS)

    Hintermüller, Michael; Rautenberg, Carlos N; Hahn, Jooyoung

    2014-01-01

    Variable splitting schemes for the function space version of the image reconstruction problem with total variation regularization (TV-problem) in its primal and pre-dual formulations are considered. For the primal splitting formulation, while existence of a solution cannot be guaranteed, it is shown that quasi-minimizers of the penalized problem are asymptotically related to the solution of the original TV-problem. On the other hand, for the pre-dual formulation, a family of parametrized problems is introduced and a parameter dependent contraction of an associated fixed point iteration is established. Moreover, the theory is validated by numerical tests. Additionally, the augmented Lagrangian approach is studied, details on an implementation on a staggered grid are provided and numerical tests are shown. (paper)

  1. A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags

    Directory of Open Access Journals (Sweden)

    Angela Hsiang-Ling Chen

    2016-09-01

    Full Text Available Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improve outcomes. Based on assumptions and simplifications, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or propose solution procedures that are not easily applicable, or even feasible, in real-life situations. In this study, we solve a more true-to-life and complex model, the multi-mode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified by relying only on information that is available for every project, regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the lower bound obtained was equal to the best-known optimum.

  2. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.
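
    For reference, the split Bregman mechanics on the simplest relevant model, 1-D TV denoising min_u 0.5*||u - f||^2 + lam*||Du||_1, look as follows; this is a generic sketch with assumed sizes and weights, not the paper's joint sinogram/image model.

      import numpy as np

      def split_bregman_tv(f, lam=0.5, mu=2.0, iters=60):
          # min_u 0.5*||u - f||^2 + lam*||D u||_1 via split Bregman
          n = f.size
          D = np.diff(np.eye(n), axis=0)             # forward-difference operator
          A = np.eye(n) + mu * D.T @ D               # matrix of the u-subproblem
          d = np.zeros(n - 1)
          b = np.zeros(n - 1)
          u = f.copy()
          for _ in range(iters):
              u = np.linalg.solve(A, f + mu * D.T @ (d - b))          # quadratic update
              w = D @ u + b
              d = np.sign(w) * np.maximum(np.abs(w) - lam / mu, 0.0)  # shrinkage
              b = w - d                                               # Bregman update
          return u

      rng = np.random.default_rng(0)
      f = np.repeat([0.0, 1.0, 0.3], 30) + 0.1 * rng.standard_normal(90)
      print(np.round(split_bregman_tv(f)[::15], 2))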

  3. Permanent magnet design for magnetic heat pumps using total cost minimization

    Science.gov (United States)

    Teyber, R.; Trevizoli, P. V.; Christiaanse, T. V.; Govindappa, P.; Niknia, I.; Rowe, A.

    2017-11-01

    The active magnetic regenerator (AMR) is an attractive technology for efficient heat pumps and cooling systems. The costs associated with a permanent magnet for near room temperature applications are a central issue which must be solved for broad market implementation. To address this problem, we present a permanent magnet topology optimization to minimize the total cost of cooling using a thermoeconomic cost-rate balance coupled with an AMR model. A genetic algorithm identifies cost-minimizing magnet topologies. For a fixed temperature span of 15 K and 4.2 kg of gadolinium, the optimal magnet configuration provides 3.3 kW of cooling power with a second law efficiency (ηII) of 0.33 using 16.3 kg of permanent magnet material.

  4. Resource Constrained Project Scheduling Subject to Due Dates: Preemption Permitted with Penalty

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2014-01-01

    Full Text Available Extensive research has been carried out on the resource constrained project scheduling problem. However, few studies have considered problems in which a setup cost must be incurred if activities are preempted. In this research, we investigate the resource constrained project scheduling problem with the objective of minimizing the total project cost, considering earliness-tardiness and preemption penalties. A mixed integer programming formulation is proposed for the problem. The resulting problem is NP-hard, so we try to obtain a satisfactory solution using a simulated annealing (SA) algorithm. The efficiency of the proposed algorithm is tested on 150 randomly generated examples. Statistical comparison in terms of computational times and objective function values indicates that the proposed algorithm is efficient and effective.
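
    A generic simulated annealing loop of the kind employed above can be sketched on a toy single-machine sequencing problem (minimizing total tardiness by swapping jobs); the instance, move set and cooling schedule are illustrative assumptions, not the paper's MIP model.

      import numpy as np

      rng = np.random.default_rng(7)
      dur = rng.integers(1, 10, 8)              # job durations
      due = rng.integers(5, 40, 8)              # job due dates

      def cost(perm):
          finish = np.cumsum(dur[perm])
          return int(np.maximum(finish - due[perm], 0).sum())   # total tardiness

      cur = rng.permutation(8)
      best, best_c = cur.copy(), cost(cur)
      T = 10.0
      for _ in range(2000):
          i, j = rng.integers(0, 8, 2)
          cand = cur.copy()
          cand[i], cand[j] = cand[j], cand[i]                   # swap move
          delta = cost(cand) - cost(cur)
          if delta <= 0 or rng.random() < np.exp(-delta / T):   # Metropolis rule
              cur = cand
              if cost(cur) < best_c:
                  best, best_c = cur.copy(), cost(cur)
          T *= 0.998                                            # geometric cooling
      print(best, best_c)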

  5. Combined First and Second Order Total Variation Inpainting using Split Bregman

    KAUST Repository

    Papafitsoros, Konstantinos

    2013-07-12

    In this article we discuss the implementation of the combined first and second order total variation inpainting that was introduced by Papafitsoros and Schönlieb. We describe the algorithm we use (split Bregman) in detail, and we give some examples that indicate the difference between pure first and pure second order total variation inpainting.

  6. Combined First and Second Order Total Variation Inpainting using Split Bregman

    KAUST Repository

    Papafitsoros, Konstantinos; Schoenlieb, Carola Bibiane; Sengul, Bati

    2013-01-01

    In this article we discuss the implementation of the combined first and second order total variation inpainting that was introduced by Papafitsoros and Schönlieb. We describe the algorithm we use (split Bregman) in detail, and we give some examples that indicate the difference between pure first and pure second order total variation inpainting.

  7. Evolution in totally constrained models: Schrödinger vs. Heisenberg pictures

    Science.gov (United States)

    Olmedo, Javier

    2016-06-01

    We study the relation between two evolution pictures that are currently considered for totally constrained theories. Both descriptions are based on Rovelli’s evolving constants approach, where one identifies a (possibly local) degree of freedom of the system as an internal time. This method is well understood classically in several situations. The purpose of this paper is to further analyze this approach at the quantum level. Concretely, we will compare the (Schrödinger-like) picture where the physical states evolve in time with the (Heisenberg-like) picture in which one defines parametrized observables (or evolving constants of the motion). We will show that in the particular situations considered in this paper (the parametrized relativistic particle and a spatially flat homogeneous and isotropic spacetime coupled to a massless scalar field) both descriptions are equivalent. We will finally comment on possible issues and on the genericness of the equivalence between both pictures.

  8. Bulk diffusion in a kinetically constrained lattice gas

    Science.gov (United States)

    Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone

    2018-03-01

    In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.

  9. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be considered as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect for object images with edges and to construct a fast reconstruction scheme, the total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well-posedness of this optimization problem is studied. The minimizer is sought approximately using an iteration process for a series of optimization problems with the Bregman distance as a penalty term. This iterative reconstruction scheme is essentially a new regularizing scheme, with the coupling parameter in the cost functional and the iteration stopping time as two regularizing parameters. We give a choice strategy for the regularizing parameters in terms of the noise level of the measurement data, which yields an optimal error estimate on the iterative solution. The sequence of optimization problems is solved by alternating iteration with an explicit exact solution, so the amount of computation is much reduced. Numerical implementations are given to support our theoretical analysis on the convergence rate and to show significant reconstruction improvements. (paper)

  10. Beam’s-eye-view dosimetrics (BEVD) guided rotational station parameter optimized radiation therapy (SPORT) planning based on reweighted total-variation minimization

    Science.gov (United States)

    Kim, Hojin; Li, Ruijiang; Lee, Rena; Xing, Lei

    2015-03-01

    Conventional VMAT optimizes aperture shapes and weights at uniformly sampled stations, a generalization of the concept of a control point. Recently, rotational station parameter optimized radiation therapy (SPORT) has been proposed to improve plan quality by inserting beams in the regions that demand additional intensity modulation, thus yielding non-uniform beam sampling. This work presents a new rotational SPORT planning strategy based on reweighted total-variation (TV) minimization (min.), using beam’s-eye-view dosimetrics (BEVD) guided beam selection. The convex-programming-based reweighted TV min. yields a simplified fluence map, which facilitates single-aperture selection at each station for single-arc delivery. For rotational arc treatment planning and non-uniform beam angle settings, the mathematical model is modified by an additional penalty term describing fluence-map similarity and by determination of appropriate angular weighting factors. The proposed algorithm with the additional penalty term achieves more efficient and deliverable plans than the conventional VMAT and SPORT planning schemes, reducing the dose delivery time by about 5 to 10 s in three clinical cases (one prostate and two head-and-neck (HN) cases with a single and multiple targets). The BEVD guided beam selection provides an effective and easily computed methodology for selecting angles for denser, non-uniform angular sampling in SPORT planning. Our BEVD guided SPORT treatment schemes improve dose sparing to the femoral heads in the prostate case and to the brainstem, parotid glands and oral cavity in the two HN cases, where the mean dose reduction for those organs ranges from 0.5 to 2.5 Gy. They also increase the conformation number, assessing dose conformity to the target, from 0.84, 0.75 and 0.74 to 0.86, 0.79 and 0.80 in the prostate and two HN cases, while preserving delivery efficiency relative to conventional single-arc VMAT plans.
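
    The reweighting idea can be sketched on a toy 1-D "fluence" profile: each outer pass solves a smoothed weighted-TV denoising problem, then resets each weight to the inverse of the current gradient magnitude, which drives the profile toward a piecewise-constant (few-segment) form. Sizes, weights and the 1-D setting are assumptions for illustration only, not the planning system.

      import numpy as np

      rng = np.random.default_rng(4)
      f = np.repeat([0.2, 1.0, 0.5], 20) + 0.05 * rng.standard_normal(60)

      u, beta = f.copy(), 0.2
      w = np.ones(f.size - 1)
      for _ in range(4):                            # outer reweighting passes
          for _ in range(500):                      # inner smoothed weighted-TV descent
              d = np.diff(u)
              r = beta * w * d / np.sqrt(d**2 + 1e-2)
              u -= 0.01 * ((u - f) - np.diff(r, prepend=0.0, append=0.0))
          w = 1.0 / (np.abs(np.diff(u)) + 0.1)      # small gradients get large weights
      print(np.round(u[::10], 2))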

  11. Beam’s-eye-view dosimetrics (BEVD) guided rotational station parameter optimized radiation therapy (SPORT) planning based on reweighted total-variation minimization

    International Nuclear Information System (INIS)

    Kim, Hojin; Li, Ruijiang; Xing, Lei; Lee, Rena

    2015-01-01

    Conventional VMAT optimizes aperture shapes and weights at uniformly sampled stations, a generalization of the concept of a control point. Recently, rotational station parameter optimized radiation therapy (SPORT) has been proposed to improve plan quality by inserting beams in the regions that demand additional intensity modulation, thus yielding non-uniform beam sampling. This work presents a new rotational SPORT planning strategy based on reweighted total-variation (TV) minimization (min.), using beam’s-eye-view dosimetrics (BEVD) guided beam selection. The convex-programming-based reweighted TV min. yields a simplified fluence map, which facilitates single-aperture selection at each station for single-arc delivery. For rotational arc treatment planning and non-uniform beam angle settings, the mathematical model is modified by an additional penalty term describing fluence-map similarity and by determination of appropriate angular weighting factors. The proposed algorithm with the additional penalty term achieves more efficient and deliverable plans than the conventional VMAT and SPORT planning schemes, reducing the dose delivery time by about 5 to 10 s in three clinical cases (one prostate and two head-and-neck (HN) cases with a single and multiple targets). The BEVD guided beam selection provides an effective and easily computed methodology for selecting angles for denser, non-uniform angular sampling in SPORT planning. Our BEVD guided SPORT treatment schemes improve dose sparing to the femoral heads in the prostate case and to the brainstem, parotid glands and oral cavity in the two HN cases, where the mean dose reduction for those organs ranges from 0.5 to 2.5 Gy. They also increase the conformation number, assessing dose conformity to the target, from 0.84, 0.75 and 0.74 to 0.86, 0.79 and 0.80 in the prostate and two HN cases, while preserving delivery efficiency relative to conventional single-arc VMAT plans.

  12. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  13. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin; Guermond, Jean-Luc; Popov, Bojan

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced.

  14. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, Templates for First-Order Conic Solvers (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the

  15. Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation.

    Science.gov (United States)

    Gao, Shan; Ye, Qixiang; Xing, Junliang; Kuijper, Arjan; Han, Zhenjun; Jiao, Jianbin; Ji, Xiangyang

    2017-12-01

    Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to make group division more accurately in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of targets' spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy-variance in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for the strong minimum in energy variation, we design the discrete group-tracklet jump moves embedded in the gradient descent method, which ensures that the moves reduce the energy variation of group and trajectory alternately in the varying topology dimension. Experimental results on both RGB and RGB-D data sets show the superiority of our proposed model for multiple person tracking in crowd scenes.

  16. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    International Nuclear Information System (INIS)

    Choudhury, Sayantan

    2014-01-01

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for local type non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence by using a multi-parameter scan I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for the speed of sound 0.02 ≤ c_s ≤ 1, and tensor-to-scalar ratio 10^−22 ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints, I fix the lower as well as the upper bound of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09.

  17. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sayantan [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 108 (India)

    2014-04-15

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for local type non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence by using a multi-parameter scan I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for the speed of sound 0.02 ≤ c_s ≤ 1, and tensor-to-scalar ratio 10^−22 ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints, I fix the lower as well as the upper bound of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09.

  18. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  19. Variational minimization of atomic and molecular ground-state energies via the two-particle reduced density matrix

    International Nuclear Information System (INIS)

    Mazziotti, David A.

    2002-01-01

    Atomic and molecular ground-state energies are variationally determined by constraining the two-particle reduced density matrix (2-RDM) to satisfy positivity conditions. Because each positivity condition corresponds to correcting the ground-state energies for a class of Hamiltonians with two-particle interactions, these conditions collectively provide a new approach to many-body theory that, unlike perturbation theory, can capture significantly correlated phenomena including the multireference effects of potential-energy surfaces. The D, Q, and G conditions for the 2-RDM are extended through generalized lifting operators inspired by the formal solution of N-representability. These lifted conditions agree with the hierarchy of positivity conditions presented by Mazziotti and Erdahl [Phys. Rev. A 63, 042113 (2001)]. The connection between positivity and the formal solution explains how constraining higher RDMs to be positive semidefinite improves the N-representability of the 2-RDM and suggests using pieces of higher positivity conditions that computationally scale like the D condition. With the D, Q, and G conditions as well as pieces of higher positivity conditions, the electronic energies for Be, LiH, H2O, and BH are computed through a primal-dual interior-point algorithm for positive semidefinite programming. The variational method produces potential-energy surfaces that are highly accurate even far from the equilibrium geometry, where single-reference perturbation-based methods often fail to produce realistic energies.
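
    A toy Python illustration of the variational principle underlying this approach (without the hard N-representability conditions): minimizing tr(H rho) over density matrices rho that are positive semidefinite with tr rho = 1 is a semidefinite program whose optimum is the lowest eigenvalue of H, attained by the projector onto the corresponding eigenvector. The random 4x4 "Hamiltonian" below is an assumption for demonstration only.

      import numpy as np

      rng = np.random.default_rng(3)
      A = rng.standard_normal((4, 4))
      H = (A + A.T) / 2                    # random symmetric "Hamiltonian"

      w, V = np.linalg.eigh(H)             # eigenvalues in ascending order
      rho = np.outer(V[:, 0], V[:, 0])     # optimal density matrix (PSD, trace 1)
      print(float(np.trace(H @ rho)), float(w[0]))   # equal: the variational minimum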

  20. Constraining non-minimally coupled tachyon fields by the Noether symmetry

    International Nuclear Information System (INIS)

    De Souza, Rudinei C; Kremer, Gilberto M

    2009-01-01

    A model for a homogeneous and isotropic Universe whose gravitational sources are a pressureless matter field and a tachyon field non-minimally coupled to the gravitational field is analyzed. The Noether symmetry is used to find expressions for the potential density and for the coupling function, and it is shown that both must be exponential functions of the tachyon field. Two cosmological solutions are investigated: (i) for the early Universe whose only source of gravitational field is a non-minimally coupled tachyon field which behaves as an inflaton and leads to an exponential accelerated expansion and (ii) for the late Universe whose gravitational sources are a pressureless matter field and a non-minimally coupled tachyon field which plays the role of dark energy and is responsible for the decelerated-accelerated transition period.

  1. Total Generalized Variation for Manifold-valued Data

    OpenAIRE

    Bredies, K.; Holler, M.; Storath, M.; Weinmann, A.

    2017-01-01

    In this paper we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present two possible concrete instances that fulfill the proposed axioms. We provide well-posedness results and present algorithms for a numerical realization of these generalizations to the manifold setup. Further, we provide experimental results for syntheti...

  2. Fractional-Order Total Variation Image Restoration Based on Primal-Dual Algorithm

    OpenAIRE

    Chen, Dali; Chen, YangQuan; Xue, Dingyu

    2013-01-01

    This paper proposes a fractional-order total variation image denoising algorithm based on the primal-dual method, which provides a much more elegant and effective way of treating problems of algorithm implementation, ill-posed inversion, convergence rate, and the blocky effect. The fractional-order total variation model is introduced by generalizing the first-order model, and the corresponding saddle-point and dual formulations are constructed in theory. In order to guarantee O(1/N^2) conv...

  3. NOISE REMOVAL IN COLOR IMAGES WITH THE TOTAL VARIATION METHOD

    Directory of Open Access Journals (Sweden)

    Anny Yuniarti

    2006-01-01

    Full Text Available Multimedia has become a dominant technology, and exchanging information in the form of images is now commonplace. Images of good quality are essential for presenting information, while images corrupted by noise are less useful, so a method is needed to improve image quality. This study uses the total variation method for noise removal, applied to nonlinear color models, namely Chromaticity-Brightness (CB) and Hue-Saturation-Value (HSV). The total variation filter is data-dependent: its coefficients are obtained by processing the image data with a fixed formulation, so the filter mask of each pixel has a different combination of coefficients. The method uses an iterative process to solve the underlying nonlinear equation. Experiments were conducted on 30 images with various types of noise (Gaussian, salt-and-pepper, and speckle), with comparisons against the median filter and the mean filter. The experiments show that the total variation method produces better images than the

  4. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  5. Minimizing the Fluid Used to Induce Fracturing

    Science.gov (United States)

    Boyle, E. J.

    2015-12-01

    Injecting less fluid to induce fracturing means less fluid must be produced back before gas is produced. One method is to inject as fast as possible until the desired fracture length is obtained. Presented here is an alternative injection strategy derived by applying optimal control theory to the macroscopic mass balance. The picture is that the fracture is constant in aperture, fluid is injected at a controlled rate at the near end, and the fracture unzips at the far end until the desired length is obtained. The velocity of the fluid is governed by Darcy's law, with larger permeability for flow along the fracture length. Fracture growth is monitored through micro-seismicity. Since the fluid is assumed to be incompressible, the rate at which fluid is injected is balanced by the rate of fracture growth and the rate of loss to the bounding rock. Minimizing injected fluid loss to the bounding rock is therefore the same as minimizing total injected fluid. How to change the injection rate so as to minimize the total injected fluid is a problem in optimal control. For a given total length, the variation of the injection rate is determined by variations in the overall time needed to obtain the desired fracture length, the length at any time, and the rate at which the fracture is growing at that time. Optimal control theory leads to a boundary condition and an ordinary differential equation in time whose solution is an injection protocol that minimizes the fluid used under the stated assumptions. The method is to monitor the rate at which the square of the fracture length is growing and adjust the injection rate proportionately.

  6. Analysis of Geomagnetic Field Variations during Total Solar Eclipses Using INTERMAGNET Data

    Science.gov (United States)

    KIM, J. H.; Chang, H. Y.

    2017-12-01

    We investigate variations of the geomagnetic field observed by INTERMAGNET geomagnetic observatories over which the totality path passed during a solar eclipse. We compare results acquired by 6 geomagnetic observatories during 4 total solar eclipses (11 August 1999, 1 August 2008, 11 July 2010, and 20 March 2015) in terms of geomagnetic and solar ecliptic parameters. These are the only total solar eclipses during which the umbra of the moon swept over an INTERMAGNET geomagnetic observatory while variations of the geomagnetic field were simultaneously recorded. We have confirmed the finding of previous studies that an increase in BY and decreases in BX, BZ and F are conspicuous. Interestingly, we have noted that variations of the geomagnetic field components observed during the total solar eclipse at Isla de Pascua Mataveri (Easter Island) in Chile (IPM), in the southern hemisphere, show on the contrary a distinct decrease in BY and increases in BX and BZ. We have found, however, that variations of BX, BY, BZ and F observed at Hornsund in Norway (HRN) seem to be dominated by other geomagnetic activity. In addition, we have attempted to detect signatures of the eclipse's influence on the temporal behavior of the geomagnetic field signal by employing the wavelet analysis technique. Finally, we conclude by pointing out that, despite this apparent success, a more sophisticated and reliable algorithm is required before quantitative comparisons can be made.

  7. Adaptive Proximal Point Algorithms for Total Variation Image Restoration

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2015-02-01

    Full Text Available Image restoration is a fundamental problem in various areas of imaging science. This paper presents a class of adaptive proximal point algorithms (APPA) with a contraction strategy for total variation image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessarily symmetric. In fact, there is an inner extrapolation in the prediction step, which is followed by a correction step for contraction, and the inner extrapolation is implemented by an adaptive scheme. Using the framework of the contraction method, a global convergence result and a convergence rate of O(1/N are established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with state-of-the-art algorithms demonstrate that the proposed methods are comparable and promising.

  8. Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2016-01-01

    Full Text Available We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on the properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
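
    A miniature version of the model, assuming a quadratic per-leg fuel law (fuel ~ distance * speed^2, which follows from a cubic fuel rate) and one arrival window per port, can be handled directly with an off-the-shelf NLP solver; distances, windows and the fuel law below are illustrative assumptions, not the paper's exact formulation or its specialized algorithm.

      import numpy as np
      from scipy.optimize import minimize

      dist = np.array([200.0, 300.0])          # nautical miles per leg
      win = [(15.0, 20.0), (40.0, 45.0)]       # (earliest, latest) arrival per port

      def fuel(v):
          return float(np.sum(dist * v**2))    # toy fuel law: dist * speed^2

      def cons(v):
          arr = np.cumsum(dist / v)            # arrival time at each port
          return np.concatenate([[a - e, l - a] for (e, l), a in zip(win, arr)])

      res = minimize(fuel, x0=np.full(2, 12.0), bounds=[(8.0, 25.0)] * 2,
                     constraints={"type": "ineq", "fun": cons}, method="SLSQP")
      # optimal speeds sit on the latest-arrival boundary (slow steaming)
      print(np.round(res.x, 2), np.round(np.cumsum(dist / res.x), 2))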

  9. Process optimized minimally invasive total hip replacement

    Directory of Open Access Journals (Sweden)

    Philipp Gebel

    2012-02-01

    Full Text Available The purpose of this study was to analyse a new concept of using the minimally invasive direct anterior approach (DAA) in total hip replacement (THR) in combination with a leg positioner (Rotex-Table) and a modified retractor system (Condor). We retrospectively evaluated the first 100 primary THRs operated with the new concept between 2009 and 2010, regarding operative data and radiological and clinical outcome (HOOS). All surgeries were performed with a standardized operation technique including navigation. The average age of the patients was 68 years (37 to 92 years), with a mean BMI of 26.5 (17 to 43). The mean time of surgery was 80 min (55 to 130 min). Blood loss averaged 511.5 mL (200 to 1000 mL). No intra-operative complications occurred. The postoperative complication rate was 6%. The HOOS increased from 43 points pre-operatively to 90 (max 100) points 3 months after surgery. The radiological analysis showed an average cup inclination of 43° and a leg length discrepancy within +/- 5 mm in 99%. The presented technique led to excellent clinical results, showed low complication rates and allowed correct implant positions while saving manpower.

  10. Constrained variational calculus for higher order classical field theories

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Cedric M; De Leon, Manuel; De Diego, David MartIn, E-mail: cedricmc@icmat.e, E-mail: mdeleon@icmat.e, E-mail: david.martin@icmat.e [Instituto de Ciencias Matematicas, CSIC-UAM-UC3M-UCM, Serrano 123, 28006 Madrid (Spain)

    2010-11-12

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  11. Constrained variational calculus for higher order classical field theories

    International Nuclear Information System (INIS)

    Campos, Cedric M; De Leon, Manuel; De Diego, David MartIn

    2010-01-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  12. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches. The book begins with four concre

  13. Comparison of femoral neck fracture healing and affected limb pain after anterolateral-approach minimally invasive total hip replacement and hemiarthroplasty

    Directory of Open Access Journals (Sweden)

    Xiao-Dong Cao

    2017-04-01

    Full Text Available Objective: To study the differences in femoral neck fracture healing and affected limb pain after anterolateral-approach minimally invasive total hip replacement and hemiarthroplasty. Methods: A total of 92 patients with femoral neck fracture who received hip replacement in our hospital between May 2013 and December 2015 were selected and randomly divided into a total hip group and a half hip group; the total hip group received anterolateral-approach minimally invasive total hip replacement, the half hip group received anterolateral-approach minimally invasive hemiarthroplasty, and 1 month after operation, serum was collected to detect the levels of bone metabolism markers, osteocyte cytokines, SP and CGRP. Results: 1 month after operation, serum PINP, PICP, BMP, TGF-β, FGF, IGF-I and IGF-II levels of the total hip group were significantly higher than those of the half hip group, while TRAP5b and CatK levels were significantly lower; the day after operation, serum levels of the pain mediators SP and CGRP were not significantly different between the two groups; 36 h after operation, serum SP and CGRP levels of the total hip group were significantly lower than those of the half hip group. Conclusion: Bone metabolism after anterolateral-approach minimally invasive total hip replacement is better than that after hemiarthroplasty, and the degree of pain is less than that after hemiarthroplasty.

  14. On the Total Variation Distance of Semi-Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2015-01-01

    Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...

  15. Influence of Minimally Invasive Total Hip Replacement on Hip Reaction Forces and Their Orientations

    NARCIS (Netherlands)

    Weber, Tim; Al-Munajjed, Amir A.; Verkerke, Gijsbertus Jacob; Dendorfer, Sebastian; Renkawitz, Tobias

    2014-01-01

    Minimally invasive surgery (MIS) is becoming increasingly popular. Supporters claim that the main advantages of MIS total hip replacement (THR) are less pain and a faster rehabilitation and recovery. Critics claim that safety and efficacy of MIS are yet to be determined. We focused on a

  16. The stability of the femoral component of a minimal invasive total hip replacement system.

    NARCIS (Netherlands)

    Willems, M.M.M.; Kooloos, J.G.M.; Gibbons, P.; Minderhoud, N.; Weernink, T.; Verdonschot, N.J.J.

    2006-01-01

    In this study, the initial stability of the femoral component of a minimal invasive total hip replacement was biomechanically evaluated during simulated normal walking and chair rising. A 20 mm diameter canal was created in the femoral necks of five fresh frozen human cadaver bones and the femoral

  17. Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging.

    Directory of Open Access Journals (Sweden)

    Xingjian Yu

    Full Text Available In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data, with durations ranging from 10 seconds to minutes, acquired under some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition; besides, they cannot provide separate information about the structure and about the variation of shape and rate of metabolism, which plays a major role in improving contrast visualization for diagnostic applications. This paper presents a novel low-rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component while variations between successive frames are modelled as a sparse component. The resulting nuclear-norm and l1-norm related minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets.
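
    The low-rank-plus-sparse splitting at the heart of such models can be sketched with a generic alternating proximal scheme: singular-value thresholding for the background and entrywise soft thresholding for the sparse part. This is a robust-PCA-style toy on a synthetic matrix, not the paper's linearized alternating direction solver for sinogram data.

      import numpy as np

      rng = np.random.default_rng(0)
      L_true = np.outer(rng.random(30), rng.random(20))   # rank-1 "background"
      S_true = np.zeros((30, 20))
      S_true[rng.random((30, 20)) < 0.05] = 2.0           # sparse "variation"
      M = L_true + S_true

      lam = 1.0 / np.sqrt(30)
      L, S = np.zeros_like(M), np.zeros_like(M)
      for _ in range(50):
          # low-rank update: singular-value thresholding of the residual
          U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
          L = (U * np.maximum(s - 1.0, 0.0)) @ Vt
          # sparse update: entrywise soft thresholding of the residual
          R = M - L
          S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
      print(np.linalg.matrix_rank(L, tol=1e-3), int((np.abs(S) > 1e-3).sum()))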

  18. Solving the uncalibrated photometric stereo problem using total variation

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2013-01-01

    In this paper we propose a new method to solve the problem of uncalibrated photometric stereo, making very weak assumptions on the properties of the scene to be reconstructed. Our goal is to solve the generalized bas-relief ambiguity (GBR) by performing a total variation regularization of both...

  19. Total Variation Regularization for Functions with Values in a Manifold

    KAUST Repository

    Lellmann, Jan

    2013-12-01

    While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories. © 2013 IEEE.

  20. Total Variation Regularization for Functions with Values in a Manifold

    KAUST Repository

    Lellmann, Jan; Strekalovskiy, Evgeny; Koetter, Sabrina; Cremers, Daniel

    2013-01-01

    While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories. © 2013 IEEE.

  1. Variational Approach to the Orbital Stability of Standing Waves of the Gross-Pitaevskii Equation

    KAUST Repository

    Hadj Selem, Fouad

    2014-08-26

    This paper is concerned with the mathematical analysis of a mass-subcritical nonlinear Schrödinger equation arising from fiber optic applications. We show the existence and symmetry of minimizers of the associated constrained variational problem. We also prove the orbital stability of such solutions, referred to as standing waves, and characterize the associated orbit. In the last section, we illustrate our results with a few numerical simulations. © 2014 Springer Basel.

  2. Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation

    Science.gov (United States)

    2012-05-01

    Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise. While

  3. Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison

    Science.gov (United States)

    Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.

    1996-11-01

    Constrained energy minimization (CEM) has been applied to the mapping of the quantitative areal distribution of the mineral alunite in an approximately 1.8 km2 area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping which requires only the spectrum of the mineral to be mapped. A priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single scattering albedo (SSA) spectra. The radiance data were acquired by the 210 channel, 0.4 micrometers to 2.5 micrometers airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid as surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures and thus spectral mixing analyses must take account of nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The occurrence of alunite is similar though not identical to mineral maps produced with both the SSA and AR spectra. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
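
    The CEM filter itself has a standard closed form: choose weights w minimizing the average output energy w^T R w subject to the unit-response constraint w^T d = 1 on the target spectrum d, giving w = R^{-1} d / (d^T R^{-1} d). The synthetic spectra below are assumptions for illustration; only the filter formula is the standard one.

      import numpy as np

      rng = np.random.default_rng(5)
      bands, pixels = 50, 2000
      X = rng.random((pixels, bands))          # synthetic background spectra
      d = rng.random(bands)                    # assumed target (e.g. alunite) spectrum
      X[:40] = 0.3 * X[:40] + 0.7 * d          # 2% of pixels contain the target

      R = X.T @ X / pixels                     # sample correlation matrix
      Rinv_d = np.linalg.solve(R, d)
      w = Rinv_d / (d @ Rinv_d)                # CEM weights: w = R^-1 d / (d' R^-1 d)
      scores = X @ w                           # near 1 on target-rich pixels
      print(round(scores[:40].mean(), 2), round(scores[40:].mean(), 2))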

  4. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    Science.gov (United States)

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring gradients in the corresponding visible one to the result. Gradient transfer suffers from the problems of low dynamic range and detail loss because it ignores the intensity from the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem to a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
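
    The l1-l1-TV structure can be sketched as follows, with u the infrared image supplying intensity and v the visible image supplying gradients; the additional visible-intensity term that distinguishes the new algorithm from plain gradient transfer is omitted, so this is a schematic rather than the paper's exact model:

      \min_{x} \; \| x - u \|_{1} \;+\; \lambda \, \| \nabla x - \nabla v \|_{1} .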

  5. Microbiological variation amongst fresh and minimally processed vegetables from retail establishers - a public health study in Pakistan

    Directory of Open Access Journals (Sweden)

    Sair, A.T.

    2017-07-01

    Fresh and minimally processed ready-to-eat vegetables are very attractive eatables amongst consumers as convenient, healthy and readily available foods, especially in the South Asian states. They provide numerous nutrients, phytochemicals, and vitamins but also harbor extensive quantities of potentially pathogenic bacteria. The aim of this study was to determine microbiological variation amongst fresh vegetables that were commercially available to the public at numerous retail establishments in Pakistan in order to present an overview of the quality of fresh produce. A total of 133 samples, collected from local distributors and retailers, were tested for aerobic mesophilic and psychrotrophic, coliform, and yeast and mould counts. Standard plating techniques were used to analyze all samples. Mesophilic counts ranged from 3.1 to 10.3 log CFU/g, with the lowest and highest counts observed in onions and fresh-cut vegetables. Psychrotrophic counts were as high as mesophilic counts. Maximum coliform counts were found in fresh-cut vegetables, with 100% of samples falling over 6 log CFU/g. These results were consistent with yeasts and moulds as well. In our study, Escherichia coli was determined as an indicator organism for the 133 samples of fresh and minimally processed vegetables. Fresh-cut vegetables showed the highest incidence of presumptive E. coli (69.9%). The results showed a poor quality of fresh vegetables in Pakistan and point to the need for good hygiene practices and food safety awareness amongst local distributors and food handlers at retail establishments.

  6. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.
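
    On a 1D toy problem the idea can be posed directly as a linear program: an L1 data fit plus the total variation of the gradient, i.e., an L1 penalty on second differences. This is a hedged miniature of the approach, not the paper's interior-point solver or its 2D formulation:

      import numpy as np
      from scipy.optimize import linprog

      def l1_tv_of_gradient_fit(f, lam=1.0):
          # Minimize sum|u - f| + lam * sum|second differences of u| as an LP.
          # Variables z = [u (n), r (n), s (m)]; r, s bound the absolute values.
          n = len(f)
          m = n - 2
          D2 = np.zeros((m, n))
          for i in range(m):
              D2[i, i:i + 3] = [1.0, -2.0, 1.0]     # second-difference operator
          I = np.eye(n)
          c = np.concatenate([np.zeros(n), np.ones(n), lam * np.ones(m)])
          A = np.block([[ I,  -I, np.zeros((n, m))],     #  u - f <= r
                        [-I,  -I, np.zeros((n, m))],     # -u + f <= r
                        [ D2, np.zeros((m, n)), -np.eye(m)],   #  D2 u <= s
                        [-D2, np.zeros((m, n)), -np.eye(m)]])  # -D2 u <= s
          b = np.concatenate([f, -f, np.zeros(2 * m)])
          bounds = [(None, None)] * n + [(0, None)] * (n + m)
          res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
          return res.x[:n]

      # Noisy piecewise-linear "terrain profile" (synthetic).
      f = np.r_[np.linspace(0, 1, 15), np.linspace(1, 0, 15)]
      f += 0.05 * np.random.default_rng(1).standard_normal(30)
      u = l1_tv_of_gradient_fit(f, lam=2.0)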

  7. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.

  8. Global Electric Circuit Implications of Total Current Measurements over Electrified Clouds

    Science.gov (United States)

    Mach, Douglas M.; Blakeslee, Richard J.; Bateman, Monte G.

    2009-01-01

    We determined total conduction (Wilson) currents and flash rates for 850 overflights of electrified clouds spanning regions including the Southeastern United States, the Western Atlantic Ocean, the Gulf of Mexico, Central America and adjacent oceans, Central Brazil, and the South Pacific. The overflights include storms over land and ocean, with and without lightning, and with positive and negative Wilson currents. We combined these individual storm overflight statistics with global diurnal lightning variation data from the Lightning Imaging Sensor (LIS) and Optical Transient Detector (OTD) to estimate the thunderstorm and electrified shower cloud contributions to the diurnal variation in the global electric circuit. The contributions to the global electric circuit from lightning producing clouds are estimated by taking the mean current per flash derived from the overflight data for land and ocean overflights and combining it with the global lightning rates (for land and ocean) and their diurnal variation derived from the LIS/OTD data. We estimate the contribution of non-lightning producing electrified clouds by assuming several different diurnal variations and total non-electrified storm counts to produce estimates of the total storm currents (lightning and non-lightning producing storms). The storm counts and diurnal variations are constrained so that the resultant total current diurnal variation equals the diurnal variation in the fair-weather electric field (±15%). These assumptions, combined with the airborne and satellite data, suggest that the total mean current in the global electric circuit ranges from 2.0 to 2.7 kA, which is greater than estimates made by others using other methods.

  9. Application of the control variate technique to estimation of total sensitivity indices

    International Nuclear Information System (INIS)

    Kucherenko, S.; Delpuech, B.; Iooss, B.; Tarantola, S.

    2015-01-01

    Global sensitivity analysis is widely used in many areas of science, biology, sociology and policy planning. The variance-based method also known as Sobol' sensitivity indices has become the method of choice among practitioners due to its efficiency and ease of interpretation. For complex practical problems, estimation of Sobol' sensitivity indices generally requires a large number of function evaluations to achieve reasonable convergence. To improve the efficiency of the Monte Carlo estimates of the Sobol' total sensitivity indices, we apply the control variate reduction technique and develop a new formula for evaluation of total sensitivity indices. Presented results using well-known test functions show the efficiency of the developed technique. - Highlights: • We analyse the efficiency of the Monte Carlo estimates of Sobol' sensitivity indices. • The control variate technique is applied for estimation of total sensitivity indices. • We develop a new formula for evaluation of Sobol' total sensitivity indices. • We present test results demonstrating the high efficiency of the developed formula.
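
    The control variate idea itself fits in a few lines: subtract a correlated quantity with known mean, scaled by the regression coefficient. The sketch below is the generic estimator, not the paper's specific formula for total sensitivity indices:

      import numpy as np

      def control_variate_mean(f_samples, g_samples, g_mean):
          # E[f] estimated as mean(f) - beta * (mean(g) - E[g]), where
          # beta = Cov(f, g) / Var(g) minimizes the estimator variance.
          C = np.cov(f_samples, g_samples)
          beta = C[0, 1] / C[1, 1]
          return f_samples.mean() - beta * (g_samples.mean() - g_mean)

      rng = np.random.default_rng(0)
      x = rng.random(100_000)
      # Control g(x) = x with known mean 1/2; true value of E[exp(x)] is e - 1.
      est = control_variate_mean(np.exp(x), x, 0.5)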

  10. Variation in Use of Blood Transfusion in Primary Total Hip and Knee Arthroplasties.

    Science.gov (United States)

    Menendez, Mariano E; Lu, Na; Huybrechts, Krista F; Ring, David; Barnes, C Lowry; Ladha, Karim; Bateman, Brian T

    2016-12-01

    There is growing clinical and policy emphasis on minimizing transfusion use in elective joint arthroplasty, but little is known about the degree to which transfusion rates vary across US hospitals. This study aimed to assess hospital-level variation in use of allogeneic blood transfusion in patients undergoing elective joint arthroplasty and to characterize the extent to which variability is attributable to differences in patient and hospital characteristics. The study population included 228,316 patients undergoing total knee arthroplasty (TKA) at 922 hospitals and 88,081 patients undergoing total hip arthroplasty (THA) at 606 hospitals from January 1, 2009 to December 31, 2011 in the Nationwide Inpatient Sample database, a 20% stratified sample of US community hospitals. The median hospital transfusion rates were 11.0% (interquartile range, 3.5%-18.5%) in TKA and 15.9% (interquartile range, 5.4%-26.2%) in THA. After fully adjusting for patient- and hospital-related factors using mixed-effects logistic regression models, the average predicted probability of blood transfusion use in TKA was 6.3%, with 95% of the hospitals having a predicted probability between 0.37% and 55%. For THA, the average predicted probability of blood transfusion use was 9.5%, with 95% of the hospitals having a predicted probability between 0.57% and 66%. Hospital transfusion rates were inversely associated with hospital procedure volume and directly associated with length of stay. The use of blood transfusion in elective joint arthroplasty varied widely across US hospitals, largely independent of patient case-mix and hospital characteristics. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Towards Uniform Accelerometry Analysis: A Standardization Methodology to Minimize Measurement Bias Due to Systematic Accelerometer Wear-Time Variation

    Directory of Open Access Journals (Sweden)

    Tarun R. Katapally, Nazeem Muhajarine

    2014-06-01

    Accelerometers are predominantly used to objectively measure the entire range of activity intensities – sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants' data being commonly considered 'valid' only if wear-time is at least 10 hours/day. However, even within 'valid' data, there could be systematic wear-time variation. To explore this variation, accelerometer data of the Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed, where case-specific observed wear-time is controlled to an analyst-specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardized descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time's influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and …
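
    The standardization step lends itself to a short sketch: interpolating a case's epoch series onto an analyst-specified wear-time. The function below is one illustrative reading of that step; the epoch length, the synthetic data, and the 10-hour control period are assumptions, not the study's protocol:

      import numpy as np

      def standardize_to_controlled_weartime(epoch_counts, controlled_minutes):
          # Resample one day's activity counts (one value per worn minute)
          # onto a fixed, analyst-specified wear-time via linear interpolation.
          observed = np.arange(len(epoch_counts))
          target = np.linspace(0, len(epoch_counts) - 1, controlled_minutes)
          return np.interp(target, observed, epoch_counts)

      day = np.random.default_rng(2).integers(0, 3000, size=612)   # 10.2 h worn
      standard_day = standardize_to_controlled_weartime(day, 600)  # control to 10 h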

  12. A variation reduction allocation model for quality improvement to minimize investment and quality costs by considering suppliers’ learning curve

    Science.gov (United States)

    Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.

    2016-02-01

    Quality improvement must be performed in a company to maintain its product competitiveness in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components in the assembly process of a final product. Hence quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate the variance reduction is developed. Variation reduction is an important term in quality improvement for both the manufacturer and the suppliers. To improve the quality of suppliers' components, the manufacturer must invest an amount of its financial resources in the suppliers' learning processes. The objective function of the model is to minimize the total cost, which consists of the investment cost and the internal and external quality costs. The learning curve determines how the suppliers' employees respond to the learning processes in reducing the variance of the component.
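
    As a loose illustration of the trade-off such a model encodes, the toy objective below combines a learning-curve-discounted investment term with variance-driven quality costs; every functional form and constant here is hypothetical, not the paper's model:

      import numpy as np
      from scipy.optimize import minimize

      def total_cost(sigma, sigma0=4.0, invest_rate=50.0, learning_b=0.3,
                     internal_k=20.0, external_k=35.0):
          # Hypothetical cost: investment grows with the variance reduction
          # achieved, discounted by a Wright-style learning exponent, while
          # internal/external quality costs rise with the remaining variance.
          reduction = np.maximum(sigma0 - sigma, 0.0)
          investment = invest_rate * reduction ** (1.0 - learning_b)
          quality = (internal_k + external_k) * sigma ** 2
          return investment.sum() + quality.sum()

      # Allocate variance levels to two suppliers (illustrative bounds).
      res = minimize(total_cost, x0=np.array([3.0, 3.0]), bounds=[(0.1, 4.0)] * 2)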

  13. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    Science.gov (United States)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is only sensitive to shear wave speed but not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computational resources, and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.
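
    For orientation only: one classical relation used in receiver-side polarization work ties the apparent P-wave incidence (polarization) angle \bar{\imath}_P and the ray parameter p to an apparent near-surface shear wave speed; whether the paper uses exactly this expression is an assumption:

      \bar{v}_s \;=\; \frac{\sin\left(\bar{\imath}_P / 2\right)}{p} .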

  14. Scheduling a maintenance activity under skills constraints to minimize total weighted tardiness and late tasks

    Directory of Open Access Journals (Sweden)

    Djalal Hedjazi

    2015-04-01

    Skill management is a key factor in improving the effectiveness of industrial companies, notably their maintenance services. The problem considered in this paper concerns the scheduling of maintenance tasks under resource (maintenance team) constraints. This problem is generally known as unrelated parallel machine scheduling. We consider the problem with both objectives of minimizing total weighted tardiness (TWT) and the number of tardy tasks. Our interest is focused particularly on solving this problem under skill constraints, where each resource has a skill level. We propose a new efficient heuristic to obtain an approximate solution for this NP-hard problem and demonstrate its effectiveness through computational experiments. This heuristic is designed for implementation in a static maintenance scheduling problem (with unequal release dates, processing times and resource skills), while minimizing the objective functions aforementioned.
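
    A minimal dispatch-rule sketch of this kind of heuristic: tasks are ordered by due date with a weight tie-break, and each is assigned to the earliest-free team whose skill level qualifies. This is a generic list-scheduling baseline, not the paper's heuristic:

      import heapq

      def schedule(tasks, teams):
          # tasks: (weight, due, processing_time, skill_needed);
          # teams: list of skill levels. A team may run a task if its skill
          # level >= skill_needed; we assume every task has a qualified team.
          order = sorted(tasks, key=lambda t: (t[1], -t[0]))   # EDD, weight tie-break
          free = [(0.0, i) for i in range(len(teams))]         # (free time, team id)
          heapq.heapify(free)
          twt = 0.0
          for w, due, proc, need in order:
              parked = []
              while teams[free[0][1]] < need:                  # skip unqualified teams
                  parked.append(heapq.heappop(free))
              t, i = heapq.heappop(free)                       # earliest qualified team
              for p in parked:
                  heapq.heappush(free, p)
              finish = t + proc
              heapq.heappush(free, (finish, i))
              twt += w * max(0.0, finish - due)                # weighted tardiness
          return twt

      tasks = [(3, 10, 4, 2), (1, 5, 3, 1), (2, 8, 6, 3), (5, 12, 2, 1)]
      print(schedule(tasks, teams=[1, 2, 3]))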

  15. Coherent states in constrained systems

    International Nuclear Information System (INIS)

    Nakamura, M.; Kojima, K.

    2001-01-01

    When quantizing constrained systems, quantum corrections often arise due to the non-commutativity in the re-ordering of constraint operators in products of operators. For bosonic second-class constraints, furthermore, the quantum corrections caused by the uncertainty principle should be taken into account. In order to treat these corrections simultaneously, an alternative projection technique for operators is proposed, introducing the available minimal-uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections were investigated.

  16. Minimizing Total Completion Time For Preemptive Scheduling With Release Dates And Deadline Constraints

    Directory of Open Access Journals (Sweden)

    He Cheng

    2014-02-01

    It is known that the single machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which enable us to reduce the range of the enumeration algorithm.

  17. Solid hydrogen and deuterium. I. Ground-state energy calculated by a lowest order constrained-variation method

    International Nuclear Information System (INIS)

    Pettersen, G.; Oestgaard, E.

    1988-01-01

    The ground-state energy of solid hydrogen and deuterium is calculated by means of a modified variational lowest order constrained-variation (LOCV) method. Both fcc and hcp H₂ and D₂ are considered, and the calculations are done for five different two-body potentials. For solid H₂ we obtain theoretical results for the ground-state binding energy per particle from -74.9 K at an equilibrium particle density of 0.700 σ⁻³ or a molar volume of 22.3 cm³/mole to -91.3 K at a particle density of 0.725 σ⁻³ or a molar volume of 21.5 cm³/mole, where σ = 2.958 Å. The corresponding experimental result is -92.3 K at a particle density of 0.688 σ⁻³ or a molar volume of 22.7 cm³/mole. For solid D₂ we obtain theoretical results for the ground-state binding energy per particle from -125.7 K at an equilibrium particle density of 0.830 σ⁻³ or a molar volume of 18.8 cm³/mole to -140.1 K at a particle density of 0.843 σ⁻³ or a molar volume of 18.5 cm³/mole. The corresponding experimental result is -137.9 K at a particle density of 0.797 σ⁻³ or a molar volume of 19.6 cm³/mole.

  18. Minimal modification to tribimaximal mixing

    International Nuclear Information System (INIS)

    He Xiaogang; Zee, A.

    2011-01-01

    We explore some ways of minimally modifying the neutrino mixing matrix from tribimaximal, characterized by introducing at most one mixing angle and a CP-violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group-theoretic considerations, is a simple case with the elements V_α2 of the second column in the mixing matrix equal to 1/√3. Modifications keeping one of the columns or one of the rows unchanged from tribimaximal mixing all belong to the class of minimal modification. Some of the cases have interesting experimentally testable consequences. In particular, the T2K and MINOS collaborations have recently reported indications of a nonzero θ_13. For the cases we consider, the new data sharply constrain the CP-violating phase angle δ, with δ close to 0 (in some cases) and π disfavored.

  19. Calculus of variations

    CERN Document Server

    Elsgolc, Lev D

    2007-01-01

    This concise text offers both professionals and students an introduction to the fundamentals and standard methods of the calculus of variations. In addition to surveys of problems with fixed and movable boundaries, it explores highly practical direct methods for the solution of variational problems. Topics include the method of variation in problems with fixed boundaries; variational problems with movable boundaries and other problems; sufficiency conditions for an extremum; variational problems of constrained extrema; and direct methods of solving variational problems. Each chapter features nu…

  20. Steady-state metabolite concentrations reflect a balance between maximizing enzyme efficiency and minimizing total metabolite load.

    Directory of Open Access Journals (Sweden)

    Naama Tepper

    Steady-state metabolite concentrations in a microorganism typically span several orders of magnitude. The underlying principles governing these concentrations remain poorly understood. Here, we hypothesize that the observed variation can be explained in terms of a compromise between factors that favor minimizing metabolite pool sizes (e.g., limited solvent capacity) and the need to effectively utilize existing enzymes. The latter requires adequate thermodynamic driving force in metabolic reactions so that forward flux substantially exceeds reverse flux. To test this hypothesis, we developed a method, metabolic tug-of-war (mTOW), which computes steady-state metabolite concentrations in microorganisms on a genome scale. mTOW is shown to explain up to 55% of the observed variation in measured metabolite concentrations in E. coli and C. acetobutylicum across various growth media. Our approach, based strictly on first thermodynamic principles, is the first method that successfully predicts high-throughput metabolite concentration data in bacteria across conditions.
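
    The driving-force argument rests on the standard flux-force relationship, by which the forward-to-reverse flux ratio of a reaction is set by its Gibbs energy:

      \frac{J^{+}}{J^{-}} \;=\; \exp\!\left(-\frac{\Delta_r G'}{RT}\right),

    so forward flux substantially exceeds reverse flux only when \Delta_r G' is sufficiently negative.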

  1. Measuring total health inequality: adding individual variation to group-level differences

    Directory of Open Access Journals (Sweden)

    Gakidou Emmanuela

    2002-08-01

    Background: Studies have revealed large variations in average health status across social, economic, and other groups. No study exists on the distribution of the risk of ill-health across individuals, either within groups or across all people in a society, and as such a crucial piece of total health inequality has been overlooked. Some of the reason for this neglect has been that the risk of death, which forms the basis for most measures, is impossible to observe directly and difficult to estimate. Methods: We develop a measure of total health inequality – encompassing all inequalities among people in a society, including variation between and within groups – by adapting a beta-binomial regression model. We apply it to children under age two in 50 low- and middle-income countries. Our method has been adopted by the World Health Organization and is being implemented in surveys around the world; preliminary estimates have appeared in the World Health Report (2000). Results: Countries with similar average child mortality differ considerably in total health inequality. Liberia and Mozambique have the largest inequalities in child survival, while Colombia, the Philippines and Kazakhstan have the lowest levels among the countries measured. Conclusions: Total health inequality estimates should be routinely reported alongside average levels of health in populations and groups, as they reveal important policy-related information not otherwise knowable. This approach enables meaningful comparisons of inequality across countries and future analyses of the determinants of inequality.
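
    A minimal sketch of fitting a beta-binomial model by maximum likelihood is shown below; the synthetic births/deaths data and the parameterization are illustrative, not the survey data or the paper's exact regression:

      import numpy as np
      from scipy.special import betaln
      from scipy.optimize import minimize

      def neg_loglik(params, deaths, births):
          # Beta-binomial: deaths ~ BetaBin(births, alpha, beta). The
          # overdispersion captures individual variation in the underlying
          # risk of death. The log binomial coefficient is constant in the
          # parameters and is dropped.
          a, b = np.exp(params)                 # keep alpha, beta positive
          ll = betaln(deaths + a, births - deaths + b) - betaln(a, b)
          return -np.sum(ll)

      rng = np.random.default_rng(3)
      births = rng.integers(5, 30, size=200)
      p = rng.beta(2.0, 60.0, size=200)         # heterogeneous mortality risk
      deaths = rng.binomial(births, p)
      fit = minimize(neg_loglik, x0=[0.0, 3.0], args=(deaths, births))
      alpha, beta = np.exp(fit.x)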

  2. Total joint arthroplasty: practice variation of physiotherapy across the continuum of care in Alberta

    Directory of Open Access Journals (Sweden)

    C. Allyson Jones

    2016-11-01

    Background: Comprehensive and timely rehabilitation for total joint arthroplasty (TJA) is needed to maximize recovery from this elective surgical procedure for hip and knee arthritis. Administrative data do not capture the variation of treatment for rehabilitation across the continuum of care for TJA, so we conducted a survey for physiotherapists to report practice for TJA across the continuum of care. The primary objective was to describe the reported practice of physiotherapy for TJA across the continuum of care within the context of a provincial TJA clinical pathway and highlight possible gaps in care. Methods: A cross-sectional on-line survey was accessible to licensed physiotherapists in Alberta, Canada for 11 weeks. Physiotherapists who treated at least five patients with TJA annually were asked to complete the survey. The survey consisted of 58 questions grouped into pre-operative, acute care and post-acute rehabilitation. Variation of practice was described in terms of number, duration and type of visits along with goals of care and program delivery methods. Results: Of the 80 respondents, 26 (33%) stated they worked in small centres or rural settings in Alberta, with the remaining respondents working in two large urban sites. The primary treatment goal differed for each phase across the continuum of care, in that the pre-operative phase was directed at improving muscle strength, functional activities were commonly reported for acute care, and the post-acute phase was directed at improving joint range-of-motion. Proportionally, more physiotherapists from rural areas treated patients in out-patient hospital departments (59%), whereas a higher proportion of urban physiotherapists saw patients in private clinics (48%). Across the continuum of care, treatment was primarily delivered on an individual basis rather than in a group format. Conclusions: Variation of practice reported with pre- and post-operative care in the community will stimulate …

  3. Total joint arthroplasty: practice variation of physiotherapy across the continuum of care in Alberta.

    Science.gov (United States)

    Jones, C Allyson; Martin, Ruben San; Westby, Marie D; Beaupre, Lauren A

    2016-11-04

    Comprehensive and timely rehabilitation for total joint arthroplasty (TJA) is needed to maximize recovery from this elective surgical procedure for hip and knee arthritis. Administrative data do not capture the variation of treatment for rehabilitation across the continuum of care for TJA, so we conducted a survey for physiotherapists to report practice for TJA across the continuum of care. The primary objective was to describe the reported practice of physiotherapy for TJA across the continuum of care within the context of a provincial TJA clinical pathway and highlight possible gaps in care. A cross-sectional on-line survey was accessible to licensed physiotherapists in Alberta, Canada for 11 weeks. Physiotherapists who treated at least five patients with TJA annually were asked to complete the survey. The survey consisted of 58 questions grouped into pre-operative, acute care and post-acute rehabilitation. Variation of practice was described in terms of number, duration and type of visits along with goals of care and program delivery methods. Of the 80 respondents, 26 (33%) stated they worked in small centres or rural settings in Alberta, with the remaining respondents working in two large urban sites. The primary treatment goal differed for each phase across the continuum of care, in that the pre-operative phase was directed at improving muscle strength, functional activities were commonly reported for acute care, and the post-acute phase was directed at improving joint range-of-motion. Proportionally, more physiotherapists from rural areas treated patients in out-patient hospital departments (59%), whereas a higher proportion of urban physiotherapists saw patients in private clinics (48%). Across the continuum of care, treatment was primarily delivered on an individual basis rather than in a group format. Variation of practice reported with pre- and post-operative care in the community will stimulate dialogue within the profession as to what is the minimal …

  4. Bregmanized Domain Decomposition for Image Restoration

    KAUST Repository

    Langer, Andreas

    2012-05-22

    Computational problems of large-scale data are gaining attention recently due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.
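
    For orientation, the Split Bregman strategy the authors build on can be shown on a single-domain 1D TV denoising problem; the sketch below omits the domain decomposition and the outer Bregmanized operator splitting entirely:

      import numpy as np

      def tv_denoise_split_bregman(f, mu=10.0, lam=1.0, iters=100):
          # Solve min_u mu/2 ||u - f||^2 + ||Du||_1 by Split Bregman:
          # split d = Du, then alternate a linear u-update, soft shrinkage
          # on d, and a Bregman update of the splitting variable b.
          n = len(f)
          D = np.diff(np.eye(n), axis=0)            # forward differences
          A = mu * np.eye(n) + lam * D.T @ D        # u-update system matrix
          u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
          for _ in range(iters):
              u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
              s = D @ u + b
              d = np.sign(s) * np.maximum(np.abs(s) - 1.0 / lam, 0.0)  # shrinkage
              b = s - d                             # Bregman variable update
          return u

      f = np.r_[np.zeros(50), np.ones(50)]
      f += 0.1 * np.random.default_rng(4).standard_normal(100)
      u = tv_denoise_split_bregman(f)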

  5. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    … a first-order model to model higher-order constraints by the use of an alphabet extension. We present an iterative method that, based on a set of conditional probabilities, can help in choosing the large number of parameters of the model in order to obtain a stationary model. Explicit results are given for the No Isolated Bits constraint. Finally, we present a variation of the bit-stuffing encoding scheme that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower bounds are very tight for the run-length limited fields. Explicit bounds are given for the diamond constrained field as well.

  6. Quasicanonical structure of optimal control in constrained discrete systems

    Science.gov (United States)

    Sieniutycz, S.

    2003-06-01

    This paper considers discrete processes governed by difference rather than differential equations for the state transformation. The basic question asked is whether and when Hamiltonian canonical structures are possible in optimal discrete systems. Considering constrained discrete control, general optimization algorithms are derived that constitute suitable theoretical and computational tools when evaluating extremum properties of constrained physical models. The mathematical basis of the general theory is the Bellman method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. Two relatively unknown, powerful optimization algorithms are obtained: an unconventional discrete formalism of optimization based on a Hamiltonian for multistage systems with unconstrained intervals of holdup time, and the time-interval-constrained extension of the formalism. These results are general; namely, one arrives at the discrete canonical Hamilton equations, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory along with all basic results of variational calculus. A vast spectrum of applications of the theory is briefly discussed.

  7. Decision-Based Marginal Total Variation Diffusion for Impulsive Noise Removal in Color Images

    Directory of Open Access Journals (Sweden)

    Hongyao Deng

    2017-01-01

    Impulsive noise removal for color images usually employs the vector median filter, the switching median filter, the total variation L1 method, and variants. These approaches, however, often introduce excessive smoothing and can result in extensive visual feature blurring, and thus are suitable only for images with low-density noise. A marginal method to reduce impulsive noise is proposed in this paper that overcomes this limitation, based on the following facts: (i) each channel in a color image is contaminated independently, and contaminative components are independent and identically distributed; (ii) in a natural image the gradients of different components of a pixel are similar to one another. This method divides components into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the components are divided into corrupted and noise-free components; if the image is corrupted by random-valued impulses, the components are divided into corrupted, noise-free, and possibly corrupted components. Components falling into different categories are processed differently. If a component is corrupted, modified total variation diffusion is applied; if it is possibly corrupted, scaled total variation diffusion is applied; otherwise, the component is left unchanged. Simulation results demonstrate its effectiveness.
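
    A minimal sketch of the decision step for the salt-and-pepper case: each channel is examined independently and only components flagged as corrupted (values 0 or 255) are replaced. Here a median filter stands in for the paper's modified TV diffusion, so this is an illustration of the decision logic only:

      import numpy as np
      from scipy.ndimage import median_filter

      def decision_based_restore(img):
          # Channel-wise decision rule: replace only salt-and-pepper pixels,
          # leave noise-free components untouched.
          out = img.copy()
          for c in range(img.shape[2]):                  # channels are independent
              chan = img[..., c]
              corrupted = (chan == 0) | (chan == 255)    # decision: extreme values
              smoothed = median_filter(chan, size=3)     # stand-in for TV diffusion
              out[..., c][corrupted] = smoothed[corrupted]
          return out

      # Synthetic test image with ~5% salt-and-pepper corruption.
      rgb = np.random.default_rng(5).integers(1, 255, (64, 64, 3)).astype(np.uint8)
      mask = np.random.default_rng(6).random(rgb.shape) < 0.05
      rgb[mask] = np.where(np.random.default_rng(7).random(rgb.shape)[mask] < 0.5, 0, 255)
      restored = decision_based_restore(rgb)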

  8. Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization

    International Nuclear Information System (INIS)

    Zhang Xiaomeng; Wang Jing; Xing Lei

    2011-01-01

    Purpose: The streak artifacts caused by metal implants have long been recognized as a problem that limits various applications of CT imaging. In this work, the authors propose an iterative metal artifact reduction algorithm based on constrained optimization. Methods: After the shape and location of metal objects in the image domain are determined automatically by the binary metal identification algorithm and the segmentation of "metal shadows" in the projection domain is done, constrained optimization is used for image reconstruction. It minimizes a predefined function that reflects a priori knowledge of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available metal-shadow-excluded projection data, with image non-negativity enforced. The minimization problem is solved through the alternation of projection-onto-convex-sets and steepest gradient descent of the objective function. The constrained optimization algorithm is evaluated with a penalized smoothness objective. Results: The study shows that the proposed method is capable of significantly reducing metal artifacts, suppressing noise, and improving soft-tissue visibility. It outperforms FBP-type, ART, and EM methods and yields artifact-free images. Conclusions: Constrained optimization is an effective way to deal with CT reconstruction with embedded metal objects. Although the method is presented in the context of metal artifacts, it is applicable to general "missing data" image reconstruction problems.

  9. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
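
    The Barzilai-Borwein ingredient of GPBB is easy to illustrate in isolation. The sketch below applies BB-stepped gradient descent to a smoothed 1D TV denoising objective; the smoothing, the test signal, and the absence of projection and nonmonotone line search are all simplifications relative to the paper's methods:

      import numpy as np

      def grad(u, b, lam=1.0, eps=1e-3):
          # Gradient of 0.5||u-b||^2 + lam * sum sqrt((Du)^2 + eps) in 1D.
          du = np.diff(u)
          w = du / np.sqrt(du**2 + eps)
          g = u - b
          g[:-1] -= lam * w
          g[1:] += lam * w
          return g

      def bb_gradient_method(b, iters=200):
          u = b.copy()
          g = grad(u, b)
          alpha = 1e-2                               # initial step size
          for _ in range(iters):
              u_new = u - alpha * g
              g_new = grad(u_new, b)
              s, y = u_new - u, g_new - g
              alpha = s @ s / max(s @ y, 1e-12)      # BB1 step size
              u, g = u_new, g_new
          return u

      b = np.r_[np.zeros(40), np.ones(40)]
      b += 0.1 * np.random.default_rng(8).standard_normal(80)
      u = bb_gradient_method(b)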

  10. Variational calculus with constraints on general algebroids

    Energy Technology Data Exchange (ETDEWEB)

    Grabowska, Katarzyna [Physics Department, Division of Mathematical Methods in Physics, University of Warsaw, Hoza 69, 00-681 Warszawa (Poland); Grabowski, Janusz [Institute of Mathematics, Polish Academy of Sciences, Sniadeckich 8, PO Box 21, 00-956 Warszawa (Poland)], E-mail: konieczn@fuw.edu.pl, E-mail: jagrab@impan.gov.pl

    2008-05-02

    Variational calculus on a vector bundle E equipped with a structure of a general algebroid is developed, together with the corresponding analogs of Euler-Lagrange equations. Constrained systems are introduced in the variational and geometrical settings. The constrained Euler-Lagrange equations are derived for analogs of holonomic, vakonomic and nonholonomic constraints. This general model covers the majority of first-order Lagrangian systems which are present in the literature and reduces to the standard variational calculus and the Euler-Lagrange equations in classical mechanics for E = TM.

  11. Variational calculus with constraints on general algebroids

    International Nuclear Information System (INIS)

    Grabowska, Katarzyna; Grabowski, Janusz

    2008-01-01

    Variational calculus on a vector bundle E equipped with a structure of a general algebroid is developed, together with the corresponding analogs of Euler-Lagrange equations. Constrained systems are introduced in the variational and geometrical settings. The constrained Euler-Lagrange equations are derived for analogs of holonomic, vakonomic and nonholonomic constraints. This general model covers the majority of first-order Lagrangian systems which are present in the literature and reduces to the standard variational calculus and the Euler-Lagrange equations in classical mechanics for E = TM

  12. Sparseness- and continuity-constrained seismic imaging

    Science.gov (United States)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR …) by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam, carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grant 22R81254.]

  13. The variation of the fine-structure constant from disformal couplings

    Energy Technology Data Exchange (ETDEWEB)

    De Bruck, Carsten van; Mifsud, Jurgen [Consortium for Fundamental Physics, School of Mathematics and Statistics, University of Sheffield, Hounsfield Road, Sheffield S3 7RH (United Kingdom); Nunes, Nelson J., E-mail: c.vandebruck@sheffield.ac.uk, E-mail: jmifsud1@sheffield.ac.uk, E-mail: njnunes@fc.ul.pt [Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências da Universidade de Lisboa, Campo Grande, PT1749-016 Lisboa (Portugal)

    2015-12-01

    We study a theory in which the electromagnetic field is disformally coupled to a scalar field, in addition to a usual non-minimal electromagnetic coupling. We show that disformal couplings modify the expression for the fine-structure constant, α. As a result, the theory we consider can explain the non-zero reported variation in the evolution of α by considering disformal couplings alone. We also find that if matter and photons are coupled in the same way to the scalar field, disformal couplings themselves do not lead to a variation of the fine-structure constant. A number of scenarios are discussed consistent with the current astrophysical, geochemical, laboratory and cosmic microwave background radiation constraints on the cosmological evolution of α. The models presented are also consistent with the current type Ia supernovae constraints on the effective dark energy equation of state. We find that the Oklo bound in particular puts strong constraints on the model parameters. From our numerical results, we find that the introduction of a non-minimal electromagnetic coupling enhances the cosmological variation in α. Better-constrained data are expected to be reported by ALMA and by the forthcoming generation of high-resolution ultra-stable spectrographs such as PEPSI, ESPRESSO, and ELT-HIRES. Furthermore, an expected increase in the sensitivity of molecular and nuclear clocks will put a more stringent constraint on the theory.
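
    For orientation, a disformal coupling is usually written in the form below (Bekenstein's ansatz); the precise coupling functions C, D and the electromagnetic action used in the paper are not reproduced here:

      \tilde{g}_{\mu\nu} \;=\; C(\phi)\, g_{\mu\nu} \;+\; D(\phi)\, \partial_\mu \phi \, \partial_\nu \phi ,

    so that when the electromagnetic sector propagates on \tilde{g}_{\mu\nu}, the effective fine-structure constant acquires a dependence on \phi and hence on cosmological time.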

  14. The variation of the fine-structure constant from disformal couplings

    International Nuclear Information System (INIS)

    De Bruck, Carsten van; Mifsud, Jurgen; Nunes, Nelson J.

    2015-01-01

    We study a theory in which the electromagnetic field is disformally coupled to a scalar field, in addition to a usual non-minimal electromagnetic coupling. We show that disformal couplings modify the expression for the fine-structure constant, α. As a result, the theory we consider can explain the non-zero reported variation in the evolution of α by considering disformal couplings alone. We also find that if matter and photons are coupled in the same way to the scalar field, disformal couplings themselves do not lead to a variation of the fine-structure constant. A number of scenarios are discussed consistent with the current astrophysical, geochemical, laboratory and cosmic microwave background radiation constraints on the cosmological evolution of α. The models presented are also consistent with the current type Ia supernovae constraints on the effective dark energy equation of state. We find that the Oklo bound in particular puts strong constraints on the model parameters. From our numerical results, we find that the introduction of a non-minimal electromagnetic coupling enhances the cosmological variation in α. Better-constrained data are expected to be reported by ALMA and by the forthcoming generation of high-resolution ultra-stable spectrographs such as PEPSI, ESPRESSO, and ELT-HIRES. Furthermore, an expected increase in the sensitivity of molecular and nuclear clocks will put a more stringent constraint on the theory

  15. Variation in age and physical status prior to total knee and hip replacement surgery

    DEFF Research Database (Denmark)

    Ackerman, Ilana N; Dieppe, Paul A; March, Lyn M

    2009-01-01

    OBJECTIVE: To investigate whether variation exists in the preoperative age, pain, stiffness, and physical function of people undergoing total knee replacement (TKR) and total hip replacement (THR) at several centers in Australia and Europe. METHODS: Individual Western Ontario and McMaster Universities …

  16. Total abdominal hysterectomy versus minimally invasive hysterectomy: a systematic review and meta-analysis

    International Nuclear Information System (INIS)

    Aragon Palmero, Felipe Jorge; Exposito Exposito, Moises

    2011-01-01

    INTRODUCTION. Three types of hysterectomy are currently used: the abdominal hysterectomy, the vaginal hysterectomy, and the minimally invasive hysterectomy (MIH). The objective of the present research was to compare MIH and total abdominal hysterectomy (TAH) in women presenting with benign uterine diseases. METHODS. A systematic review and meta-analysis were carried out using the following databases: MEDLINE, EBSCO HOST and The Cochrane Central Register of Controlled Trials. Only controlled and randomized studies were selected. The data of all studies were combined, and the relative risk (RR) with a 95% CI, computed with the Mantel-Haenszel method, was used as the effect measure for dichotomous variables. For the analysis of continuous variables, the mean difference was used. In all comparisons the results were obtained with both fixed-effect and random-effects models. RESULTS. A total of 53 intraoperative complications were registered in the MIH group versus 17 in the TAH group (RR: 1.78; 95% CI: 1.04-3.05). Postoperative complications evolved similarly in both groups, without statistically significant differences. Blood loss, hospital stay, and the patients' return to usual and work activities were lower in the laparoscopy group; however, the operative time is longer when compared with TAH (mean difference: 37.36; 95% CI: 34.36-39.93). CONCLUSIONS. Both techniques have advantages and disadvantages. The indication of MIH must be individualized according to the clinical situation of each patient, and it should not be performed in centers without a properly trained surgical staff experienced in advanced minimally invasive surgery. (author)

  17. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Science.gov (United States)

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295

  18. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Directory of Open Access Journals (Sweden)

    Shih-Wei Lin

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
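
    An iterated greedy heuristic alternates destruction (remove a few ships) and construction (greedily reinsert each at its cheapest position). The sketch below does this for a heavily simplified discrete DBAP; the instance data and acceptance rule are illustrative, not the paper's benchmark setup:

      import random

      def total_service_time(assignment, ships):
          # ships: id -> (arrival, handling); assignment: berth -> ordered ship ids.
          total = 0.0
          for order in assignment.values():
              t = 0.0
              for sid in order:
                  a, h = ships[sid]
                  t = max(t, a) + h          # berth becomes free at completion
                  total += t - a             # service time = waiting + handling
          return total

      def iterated_greedy(ships, n_berths, iters=500, k=2, seed=0):
          rng = random.Random(seed)
          best = {b: [] for b in range(n_berths)}
          for sid in sorted(ships, key=lambda s: ships[s][0]):   # seed by arrival
              best[rng.randrange(n_berths)].append(sid)

          def reinsert(sol, sid):
              # Construction: try every berth/position, keep the cheapest.
              cand = []
              for b, order in sol.items():
                  for pos in range(len(order) + 1):
                      order.insert(pos, sid)
                      cand.append((total_service_time(sol, ships), b, pos))
                      order.pop(pos)
              _, b, pos = min(cand)
              sol[b].insert(pos, sid)

          cur = {b: list(o) for b, o in best.items()}
          for _ in range(iters):
              removed = rng.sample(list(ships), k)               # destruction
              for b in cur:
                  cur[b] = [s for s in cur[b] if s not in removed]
              for sid in removed:
                  reinsert(cur, sid)
              if total_service_time(cur, ships) < total_service_time(best, ships):
                  best = {b: list(o) for b, o in cur.items()}
              else:
                  cur = {b: list(o) for b, o in best.items()}    # restart from best
          return best

      ships = {i: (random.Random(i).uniform(0, 10),
                   random.Random(i + 99).uniform(1, 4)) for i in range(8)}
      sol = iterated_greedy(ships, n_berths=2)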

  19. Screening test recommendations for methicillin-resistant Staphylococcus aureus surveillance practices: A cost-minimization analysis.

    Science.gov (United States)

    Whittington, Melanie D; Curtis, Donna J; Atherly, Adam J; Bradley, Cathy J; Lindrooth, Richard C; Campbell, Jonathan D

    2017-07-01

    To mitigate methicillin-resistant Staphylococcus aureus (MRSA) infections, intensive care units (ICUs) conduct surveillance through screening patients upon admission followed by adhering to isolation precautions. Two surveillance approaches commonly implemented are universal preemptive isolation and targeted isolation of only MRSA-positive patients. Decision analysis was used to calculate the total cost of universal preemptive isolation and targeted isolation. The screening test used as part of the surveillance practice was varied to identify which screening test minimized inappropriate and total costs. A probabilistic sensitivity analysis was conducted to evaluate the range of total costs resulting from variation in inputs. The total cost of the universal preemptive isolation surveillance practice was minimized when a polymerase chain reaction screening test was used ($82.51 per patient). Costs were $207.60 more per patient when a conventional culture was used due to the longer turnaround time and thus higher isolation costs. The total cost of the targeted isolation surveillance practice was minimized when chromogenic agar 24-hour testing was used ($8.54 per patient). Costs were $22.41 more per patient when polymerase chain reaction was used. For ICUs that preemptively isolate all patients, the use of a polymerase chain reaction screening test is recommended because it can minimize total costs by reducing inappropriate isolation costs. For ICUs that only isolate MRSA-positive patients, the use of chromogenic agar 24-hour testing is recommended to minimize total costs. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
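
    The comparison reduces to an expected-cost calculation per patient. The skeleton below shows the shape of such a calculation; every number and the simplified isolation logic are hypothetical illustrations, not the study's inputs:

      def expected_cost_per_patient(prevalence, test_cost, turnaround_days,
                                    isolation_cost_per_day, preemptive):
          # Hypothetical cost-minimization skeleton. Under universal preemptive
          # isolation, every patient is isolated while the result is pending,
          # so a faster (even pricier) test cuts inappropriate isolation cost;
          # under targeted isolation, only eventual positives incur it.
          if preemptive:
              isolation = isolation_cost_per_day * turnaround_days
          else:
              isolation = prevalence * isolation_cost_per_day * turnaround_days
          return test_cost + isolation

      # Made-up parameters: 8% colonization, PCR fast/expensive, agar slow/cheap.
      pcr  = expected_cost_per_patient(0.08, 30.0, 0.5, 100.0, preemptive=True)
      agar = expected_cost_per_patient(0.08, 6.0, 1.0, 100.0, preemptive=False)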

  20. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    According to the continuous smoothness of the variations of ionospheric electron density (IED) among neighbouring voxels, Gauss weighted function is introduced to constrain the tomography system in the new method. It can resolve the dependence on the initial values for those voxels without any GPS rays traversing them ...

  1. Elemental Spatiotemporal Variations of Total Suspended Particles in Jeddah City

    OpenAIRE

    Kadi, Mohammad W.

    2014-01-01

    Elements associated with total suspended particulate matter (TSP) in Jeddah city were determined. Using high-volume samplers, TSP samples were simultaneously collected over a one-year period from seven sampling sites. Samples were analyzed for Al, Ba, Ca, Cu, Mg, Fe, Mn, Zn, Ti, V, Cr, Co, Ni, As, and Sr. Results revealed great dependence of element contents on spatial and temporal variations. Two sites characterized by busy roads, workshops, heavy population, and heavy trucking have high lev...

  2. TOKMINA, Toroidal Magnetic Field Minimization for Tokamak Fusion Reactor. TOKMINA-2, Total Power for Tokamak Fusion Reactor

    International Nuclear Information System (INIS)

    Hatch, A.J.

    1975-01-01

    1 - Description of problem or function: TOKMINA finds the minimum magnetic field, Bm, required at the toroidal coil of a Tokamak-type fusion reactor when the input is beta (ratio of plasma pressure to magnetic pressure), q (Kruskal-Shafranov plasma stability factor), and y (ratio of plasma radius to vacuum wall radius: rp/rw) and arrays of PT (total thermal power from both d-t and tritium breeding reactions), Pw (wall loading or power flux) and TB (thickness of blanket), following the method of Golovin, et al. TOKMINA2 finds the total power, PT, of such a fusion reactor, given a specified magnetic field, Bm, at the toroidal coil. 2 - Method of solution: TOKMINA: the aspect ratio (a) is minimized, giving a minimum value for Bm. TOKMINA2: a search is made for PT; the value of PT for which the minimized Bm matches the required value within 50 gauss is chosen. 3 - Restrictions on the complexity of the problem: Input arrays are presently dimensioned at 20. This restriction can be overcome by changing a dimension card

  3. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range … based linear prediction for modeling and coding of speech.

  4. Configuration mixing calculations with basis states obtained from constrained variational methods

    International Nuclear Information System (INIS)

    Miller, H.G.; Schroeder, H.P.

    1982-01-01

    Configuration mixing calculations have been performed in ²⁰Ne using basis states which are energetically the lowest-lying solutions of the constrained Hartree-Fock equations with an angular momentum constraint of the form ⟨J²⟩ = J(J + 1). For J = 6, very good agreement with the lower-lying 6⁺ states in an exact eigenvalue spectrum has been obtained with relatively few PAV-K mixed CHF basis states. (orig.)

  5. Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time

    Science.gov (United States)

    Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.

    2018-03-01

    A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of Machining, Assembly and Differentiation Stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.

  6. Variation in the cost of care for primary total knee arthroplasties.

    Science.gov (United States)

    Haas, Derek A; Kaplan, Robert S

    2017-03-01

    The study examined the cost variation across 29 high-volume US hospitals and their affiliated orthopaedic surgeons for delivering a primary total knee arthroplasty without major complicating conditions. The hospitals had similar patient demographics, and more than 80% of them had statistically-similar Medicare risk-adjusted readmission and complication rates. Hospital and physician personnel costs were calculated using time-driven activity-based costing. Consumable supply costs, such as the prosthetic implant, were calculated using purchase prices, and postacute care costs were measured using either internal costs or external claims as reported by each hospital. Despite having similar patient demographics and readmission and complication rates, the average cost of care for total knee arthroplasty across the hospitals varied by a factor of about 2 to 1. Even after adjusting for differences in internal labor cost rates, the hospital at the 90th percentile of cost spent about twice as much as the one at the 10th percentile of cost. The large variation in costs among sites suggests major and multiple opportunities to transfer knowledge about process and productivity improvements that lower costs while simultaneously maintaining or improving outcomes.

  7. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L · ‖ on R n under the constraint of a bounded I-divergence D(b, H · ) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
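
    For concreteness, the following Python sketch (function name ours; the tolerance check is illustrative) evaluates the discrete I-divergence D(b, Hu) = Σ [b log(b/Hu) - b + Hu] that the constrained formulation keeps bounded:

        import numpy as np

        def i_divergence(b, Hu, eps=1e-12):
            # discrete Kullback-Leibler / I-divergence D(b, Hu), with 0*log(0) = 0
            Hu = np.maximum(Hu, eps)
            log_term = np.where(b > 0, b * np.log(np.maximum(b, eps) / Hu), 0.0)
            return float(np.sum(log_term - b + Hu))

        # conceptually, a candidate restoration u is feasible for the constrained
        # problem when i_divergence(b, H @ u) <= tau for the chosen tolerance tau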

  8. Variation in Differential and Total Cross Sections Due to Different Radial Wave Functions

    Science.gov (United States)

    Williamson, W., Jr.; Greene, T.

    1976-01-01

    Three sets of analytical wave functions are used to calculate the Na (3s→3p) transition differential and total electron excitation cross sections by Born approximations. Results show expected large variations in values. (Author/CP)

  9. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  10. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  11. Image Restoration Based on the Hybrid Total-Variation-Type Model

    OpenAIRE

    Shi, Baoli; Pang, Zhi-Feng; Yang, Yu-Fei

    2012-01-01

    We propose a hybrid total-variation-type model for the image restoration problem based on combining advantages of the ROF model with the LLT model. Since the two ${L}^{1}$-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method called the proximal point method (PPM) is proposed.

  12. A Primal-Dual Approach for a Total Variation Wasserstein Flow

    KAUST Repository

    Benning, Martin; Calatroni, Luca; Düring, Bertram; Schönlieb, Carola-Bibiane

    2013-01-01

    We consider a nonlinear fourth-order diffusion equation that arises in denoising of image densities. We propose an implicit time-stepping scheme that employs a primal-dual method for computing the subgradient of the total variation seminorm. The constraint on the dual variable is relaxed by adding a penalty term, depending on a parameter that determines the weight of the penalisation. The paper is furnished with some numerical examples showing the denoising properties of the model considered. © 2013 Springer-Verlag.

  13. Enhancing the efficiency of constrained dual-hop variable-gain AF relaying under Nakagami-m fading

    KAUST Repository

    Zafar, Ammar

    2014-07-01

    This paper studies power allocation for performance constrained dual-hop variable-gain amplify-and-forward (AF) relay networks in Nakagami-m fading. In this context, the performance constraint is formulated as a constraint on the end-to-end signal-to-noise ratio (SNR) and the overall power consumed is minimized while maintaining this constraint. This problem is considered under two different assumptions of the available channel state information (CSI) at the relays, namely full CSI at the relays and partial CSI at the relays. In addition to the power minimization problem, we also consider the end-to-end SNR maximization problem under a total power constraint for the partial CSI case. We provide closed-form solutions for all the problems which are easy to implement except in two cases, namely selective relaying with partial CSI for power minimization and SNR maximization, where we give the solution in the form of a one-variable equation which can be solved efficiently. Numerical results are then provided to characterize the performance of the proposed power allocation algorithms considering the effects of channel parameters and CSI availability. © 2014 IEEE.

  14. Regional variation in acute care length of stay after orthopaedic surgery: total joint replacement surgery and hip fracture surgery.

    Science.gov (United States)

    Fitzgerald, John D; Weng, Haoling H; Soohoo, Nelson F; Ettner, Susan L

    2013-01-01

    To examine change in regional variations in acute care length of stay (LOS) after orthopaedic surgery following expiration of the New York (NY) State exemption to the Prospective Payment System and implementation of the Medicare Short Stay Transfer Policy. Time series analyses were conducted to evaluate change in LOS across regions after the policy implementations. Small area analyses were conducted to examine residual variation in LOS. The dataset included a 100% sample of fee-for-service Medicare patients undergoing surgical repair for hip fracture or elective joint replacement surgery between 1996 and 2001. Data files from the Centers for Medicare and Medicaid Services 1996-2001 Medicare Provider Analysis and Review file, the 1999 Provider of Service file, and data from the 2000 United States Census were used for analysis. In 1996, LOS in NY after orthopaedic procedures was much longer than in the remainder of the country. After the policy changes, LOS fell. However, significant residual variation in LOS persisted. This residual variation was likely explained in part by differences in regional managed care market penetration, patient management practices, and unmeasured characteristics associated with hospital location. NY hospitals responded to changes in reimbursement policy, reducing variation in LOS. However, even after 5 years of financial pressure to constrain costs, other factors still have a strong impact on the delivery of patient care.

  15. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
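
    A minimal sketch of the described method for a quadratic (least-squares) loss, in line with the analysis; the stepsize, momentum and iteration count are illustrative choices, not values from the paper:

        import numpy as np

        def stochastic_heavy_ball(A, b, gamma=1e-3, beta=0.9, n_iter=10000, seed=0):
            # SGD with a fixed stepsize plus a heavy ball momentum term,
            # applied to min_x (1/2n) * sum_i (a_i^T x - b_i)^2
            n, d = A.shape
            rng = np.random.default_rng(seed)
            x = np.zeros(d)
            x_prev = x.copy()
            for _ in range(n_iter):
                i = rng.integers(n)
                g = (A[i] @ x - b[i]) * A[i]    # stochastic gradient at sampled row i
                x, x_prev = x - gamma * g + beta * (x - x_prev), x
            return x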

  16. Despeckling PolSAR Images Based on Relative Total Variation Model

    Science.gov (United States)

    Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.

    2018-04-01

    The relative total variation (RTV) algorithm, which can effectively separate structure information from texture in an image, is employed to extract the main structures of the image. However, applying the RTV directly to polarimetric SAR (PolSAR) image filtering will not preserve polarimetric information. A new RTV approach based on the complex Wishart distribution is therefore proposed, taking the polarimetric properties of PolSAR into account. The proposed polarization RTV (PolRTV) algorithm can be used for PolSAR image filtering. The L-band Airborne SAR (AIRSAR) San Francisco data is used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.
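
    For orientation, the following sketch computes the scalar-image RTV measure (windowed total variation over windowed inherent variation) on which this family of methods builds; the Wishart-based PolRTV extension to polarimetric covariance data is not reproduced here, and the window size and eps are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def rtv_measure(S, sigma=3.0, eps=1e-3):
            # forward differences of the image
            dx = np.diff(S, axis=1, append=S[:, -1:])
            dy = np.diff(S, axis=0, append=S[-1:, :])
            # windowed total variation (D) and windowed inherent variation (L)
            Dx, Dy = gaussian_filter(np.abs(dx), sigma), gaussian_filter(np.abs(dy), sigma)
            Lx, Ly = np.abs(gaussian_filter(dx, sigma)), np.abs(gaussian_filter(dy, sigma))
            # texture gives large D but small L; structure keeps the ratio moderate
            return Dx / (Lx + eps) + Dy / (Ly + eps)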

  17. What are the important manoeuvres for beginners to minimize surgical time in primary total knee arthroplasty?

    Science.gov (United States)

    Harato, Kengo; Maeno, Shinichi; Tanikawa, Hidenori; Kaneda, Kazuya; Morishige, Yutaro; Nomoto, So; Niki, Yasuo

    2016-08-01

    It was hypothesized that the surgical time of beginners would be much longer than that of experts. Our purpose was to investigate and clarify the important manoeuvres for beginners to minimize surgical time in primary total knee arthroplasty (TKA) as a multicentre study. A total of 300 knees in 248 patients (average age 74.6 years) were enrolled. All TKAs were done using the same instruments and the same measured resection technique at 14 facilities by 25 orthopaedic surgeons. Surgeons were divided into three groups (four experts, nine medium-volume surgeons and 12 beginners). The surgical technique was divided into five phases. Detailed surgical time and the ratio of the time in each phase to overall surgical time were recorded and compared among the groups in each phase. A total of 62, 119, and 119 TKAs were done by beginners, medium-volume surgeons, and experts, respectively. Significant differences in surgical time among the groups were seen in each phase. Concerning the ratio of the time, experts and medium-volume surgeons seemed cautious in fixation of the permanent component compared to other phases. Interestingly, even in ratio, beginners and medium-volume surgeons took more time in exposure of soft tissue compared to experts (0.14 in beginners, 0.13 in medium-volume surgeons, 0.11 in experts, P < 0.05), and beginners took more time in exposure and closure of soft tissue compared to experts. Improvement in basic technique is essential to minimize surgical time among beginners. First of all, surgical instructors should teach basic techniques in primary TKA for beginners. Therapeutic studies, Level IV.

  18. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as the total variation (TV) regularization and the L1 regularization. In order to better reconstruct the piecewise constant and sparse coefficient distributions, TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method on the spatial space and the finite element method on the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method which is equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computation efficiency. By comparing with other imaging reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.

  19. A study of the one dimensional total generalised variation regularisation problem

    KAUST Repository

    Papafitsoros, Konstantinos

    2015-03-01

    © 2015 American Institute of Mathematical Sciences. In this paper we study the one dimensional second order total generalised variation regularisation (TGV) problem with L2 data fitting term. We examine the properties of this model and we calculate exact solutions using simple piecewise affine functions as data terms. We investigate how these solutions behave with respect to the TGV parameters and we verify our results using numerical experiments.

  20. A study of the one dimensional total generalised variation regularisation problem

    KAUST Repository

    Papafitsoros, Konstantinos; Bredies, Kristian

    2015-01-01

    © 2015 American Institute of Mathematical Sciences. In this paper we study the one dimensional second order total generalised variation regularisation (TGV) problem with L2 data fitting term. We examine the properties of this model and we calculate exact solutions using simple piecewise affine functions as data terms. We investigate how these solutions behave with respect to the TGV parameters and we verify our results using numerical experiments.

  1. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.

  2. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    Hamel, D.; Mensah, S.; Boisvert, J.

    1984-03-01

    The concept of pole placement plays an important role in linear, multi-variable, control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical engineering constraints such as gain limitation and controller structure to be introduced right into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function on the closed-loop system which must be minimized. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated by designing controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure for the overall performance of a control system.
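
    A toy rendering of this formulation, with a hypothetical plant, target poles and gain limits: the desired pole configuration becomes a cost, and explicit gain bounds delimit the admissible region over which it is minimized.

        import numpy as np
        from scipy.optimize import minimize

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # hypothetical plant
        B = np.array([[0.0], [1.0]])
        desired = np.sort_complex(np.array([-4.0 + 0j, -5.0 + 0j]))

        def cost(k):
            # distance between achieved closed-loop poles and the desired configuration
            poles = np.sort_complex(np.linalg.eigvals(A - B @ k.reshape(1, 2)))
            return float(np.sum(np.abs(poles - desired) ** 2))

        # gain limits (the explicit constraints) define the admissible feedback region
        res = minimize(cost, x0=np.zeros(2), bounds=[(-10.0, 10.0), (-10.0, 10.0)])
        K = res.x.reshape(1, 2)

    When a gain limit binds, the minimizer lands on the boundary of the admissible region, yielding the closest achievable pole configuration rather than exact placement.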

  3. Multivariable controller for discrete stochastic amplitude-constrained systems

    Directory of Open Access Journals (Sweden)

    Hannu T. Toivonen

    1983-04-01

    A sub-optimal multivariable controller for discrete stochastic amplitude-constrained systems is presented. In the approach the regulator structure is restricted to the class of linear saturated feedback laws. The stationary covariances of the controlled system are evaluated by approximating the stationary probability distribution of the state by a gaussian distribution. An algorithm for minimizing a quadratic loss function is given, and examples are presented to illustrate the performance of the sub-optimal controller.

  4. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    It describes my work as a summer student at CERN. The report discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and 2 of these are constrained and correlated.

  5. Thromboprophylaxis after minimally invasive total knee arthroplasty: A comparison of rivaroxaban and enoxaparin

    Directory of Open Access Journals (Sweden)

    Shih-Hsiang Yen

    2014-08-01

    Background: Total knee arthroplasty (TKA) carries a substantial rate of venous thromboembolism (VTE). The blood-saving effect of tranexamic acid (TEA) in TKA using enoxaparin for thromboprophylaxis is well known. However, the routine use of chemoprophylaxis in TKA remains controversial because of postoperative bleeding complications. Therefore, the purpose of this study was to retrospectively compare the incidence of VTE, postoperative blood loss, and wound-related complications in minimally invasive (MIS) TKA patients who received rivaroxaban or enoxaparin prophylaxis. Methods: A total of 113 patients who underwent primary unilateral MIS-TKA between 2009 and 2012 were studied. Of these, 61 patients (study group) received rivaroxaban prophylaxis between 2011 and 2012, and a control group of 52 patients received enoxaparin prophylaxis between 2009 and 2010. All patients received one intraoperative injection of TEA (10 mg/kg). We compared the changes in hemoglobin (Hb) level, postoperative drainage amount, total blood loss, transfusion rate, and incidence of postoperative wound complications and VTE between the two groups. Results: No differences in postoperative Hb levels, blood drainage amount, total blood loss, and transfusion rate were observed between the two groups. No deep-vein thrombosis of the leg or pulmonary embolism was noted in either group. There were no major wound complications, including hematoma and infection requiring surgical intervention for open irrigation or debridement. Conclusions: Our retrospective study demonstrated a low rate of VTE in MIS-TKA patients who received rivaroxaban or enoxaparin when TEA was used for bleeding prophylaxis. No increased perioperative bleeding or postoperative wound-related complications were observed in the rivaroxaban group compared with the enoxaparin group.

  6. Minimal Flavour Violation and Beyond

    CERN Document Server

    Isidori, Gino

    2012-01-01

    We review the formulation of the Minimal Flavour Violation (MFV) hypothesis in the quark sector, as well as some "variations on a theme" based on smaller flavour symmetry groups and/or less minimal breaking terms. We also review how these hypotheses can be tested in B decays and by means of other flavour-physics observables. The phenomenological consequences of MFV are discussed both in general terms, employing a general effective theory approach, and in the specific context of the Minimal Supersymmetric extension of the SM.

  7. Geometric Measure Theory and Minimal Surfaces

    CERN Document Server

    Bombieri, Enrico

    2011-01-01

    W.K. ALLARD: On the first variation of area and generalized mean curvature.- F.J. ALMGREN Jr.: Geometric measure theory and elliptic variational problems.- E. GIUSTI: Minimal surfaces with obstacles.- J. GUCKENHEIMER: Singularities in soap-bubble-like and soap-film-like surfaces.- D. KINDERLEHRER: The analyticity of the coincidence set in variational inequalities.- M. MIRANDA: Boundaries of Caccioppoli sets in the calculus of variations.- L. PICCININI: De Giorgi's measure and thin obstacles.

  8. Geometric Total Variation for Texture Deformation

    DEFF Research Database (Denmark)

    Bespalov, Dmitriy; Dahl, Anders Lindbjerg; Shokoufandeh, Ali

    2010-01-01

    In this work we propose a novel variational method that we intend to use for estimating non-rigid texture deformation. The method is able to capture variation in grayscale images with respect to the geometry of their features. Our experimental evaluations demonstrate that accounting for the geometry … of features in texture images leads to significant improvements in localization of these features when textures undergo geometrical transformations. Accurate localization of features in the presence of unknown deformations is a crucial property for texture characterization methods, and we intend to exploit …

  9. Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2012-05-01

    Denoising is the problem of removing noise from an image. The most commonly studied case is with additive white Gaussian noise (AWGN), where the observed noisy image f is related to the underlying true image u by f = u + η, and η is at each point in space independently and identically distributed as a zero-mean Gaussian random variable. Total variation (TV) regularization is a technique that was originally developed for AWGN image denoising by Rudin, Osher, and Fatemi. The TV regularization technique has since been applied to a multitude of other imaging problems; see for example Chan and Shen's book. We focus here on the split Bregman algorithm of Goldstein and Osher for TV-regularized denoising.
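
    A minimal sketch of the Goldstein-Osher split Bregman iteration for the anisotropic ROF model, assuming periodic boundaries so that the u-subproblem can be solved exactly with an FFT (a Gauss-Seidel sweep is the more common choice); parameter values are illustrative:

        import numpy as np

        def split_bregman_tv(f, mu=20.0, lam=10.0, n_iter=60):
            # anisotropic ROF: min_u |u_x|_1 + |u_y|_1 + (mu/2) * ||u - f||^2
            shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
            grad = lambda v, ax: np.roll(v, -1, axis=ax) - v    # forward differences
            div = lambda px, py: ((px - np.roll(px, 1, axis=0))
                                  + (py - np.roll(py, 1, axis=1)))
            n1, n2 = f.shape
            # Fourier symbol of the periodic Laplacian for the exact u-update
            L = (2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(n1))[:, None]
                 + 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(n2))[None, :] - 4.0)
            denom = mu - lam * L                                # > 0 since L <= 0
            u = f.copy()
            dx, dy = np.zeros_like(f), np.zeros_like(f)
            bx, by = np.zeros_like(f), np.zeros_like(f)
            for _ in range(n_iter):
                rhs = mu * f - lam * div(dx - bx, dy - by)      # u-subproblem
                u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
                ux, uy = grad(u, 0), grad(u, 1)
                dx = shrink(ux + bx, 1.0 / lam)                 # d-subproblem
                dy = shrink(uy + by, 1.0 / lam)
                bx, by = bx + ux - dx, by + uy - dy             # Bregman update
            return u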

  10. An analysis of cross-sectional variations in total household energy requirements in India using micro survey data

    International Nuclear Information System (INIS)

    Pachauri, Shonali

    2004-01-01

    Using micro level household survey data from India, we analyse the variation in the pattern and quantum of household energy requirements, both direct and indirect, and the factors causing such variation. An econometric analysis using household survey data from India for the year 1993-1994 reveals that household socio-economic, demographic, geographic, family and dwelling attributes influence the total household energy requirements. There are also large variations in the pattern of energy requirements across households belonging to different expenditure classes. Results from the econometric estimation show that total household expenditure or income level is the most important explanatory variable causing variation in energy requirements across households. In addition, the size of the household dwelling and the age of the head of the household are related to higher household energy requirements. In contrast, the number of members in the household and literacy of the head are associated with lower household energy requirements

  11. An analysis of cross-sectional variations in total household energy requirements in India using micro survey data

    Energy Technology Data Exchange (ETDEWEB)

    Pachauri, Shonali E-mail: shonali.pachauri@cepe.mavt.ethz.ch

    2004-10-01

    Using micro level household survey data from India, we analyse the variation in the pattern and quantum of household energy requirements, both direct and indirect, and the factors causing such variation. An econometric analysis using household survey data from India for the year 1993-1994 reveals that household socio-economic, demographic, geographic, family and dwelling attributes influence the total household energy requirements. There are also large variations in the pattern of energy requirements across households belonging to different expenditure classes. Results from the econometric estimation show that total household expenditure or income level is the most important explanatory variable causing variation in energy requirements across households. In addition, the size of the household dwelling and the age of the head of the household are related to higher household energy requirements. In contrast, the number of members in the household and literacy of the head are associated with lower household energy requirements.

  12. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  13. Total variation superiorized conjugate gradient method for image reconstruction

    Science.gov (United States)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
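
    For reference, a bare conjugate gradient solver for the least-squares problem (CG on the normal equations) is sketched below; the superiorization described in the record interlaces such iterations with small, summable TV-reducing perturbations of x, after which the residual vectors must be refreshed, and that bookkeeping is not shown:

        import numpy as np

        def cgnr(A, b, n_iter=50):
            # conjugate gradient applied to the normal equations A^T A x = A^T b
            x = np.zeros(A.shape[1])
            r = A.T @ (b - A @ x)          # residual of the normal equations
            p = r.copy()
            rs = r @ r
            for _ in range(n_iter):
                Ap = A.T @ (A @ p)
                alpha = rs / (p @ Ap)
                x = x + alpha * p
                r = r - alpha * Ap
                rs_new = r @ r
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x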

  14. Bayesian Image Restoration Using a Large-Scale Total Patch Variation Prior

    Directory of Open Access Journals (Sweden)

    Yang Chen

    2011-01-01

    Edge-preserving Bayesian restorations using nonquadratic priors are often inefficient in restoring continuous variations and tend to produce block artifacts around edges in ill-posed inverse image restorations. To overcome this, we have proposed a spatial adaptive (SA) prior with improved performance. However, this SA prior restoration suffers from high computational cost and an unguaranteed convergence problem. Concerning these issues, this paper proposes a Large-scale Total Patch Variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability, which is in a mixture prior form of one patch similarity prior and one weight entropy prior. A joint MAP estimation is thus built to ensure the iteration monotonicity. The intensive calculation of patch distances is greatly alleviated by parallelization on the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration.

  15. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  16. Single machine total completion time minimization scheduling with a time-dependent learning effect and deteriorating jobs

    Science.gov (United States)

    Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping

    2012-05-01

    In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the job processing time is defined by a function of its starting time and total normal processing time of jobs in front of it in the sequence. The objective is to determine an optimal schedule so as to minimize the total completion time. This problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to job normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.

  17. Solid hydrogen and deuterium. II. Pressure and compressibility calculated by a lowest-order constrained-variation method

    International Nuclear Information System (INIS)

    Pettersen, G.; Ostgaard, E.

    1988-01-01

    The pressure and the compressibility of solid H₂ and D₂ are obtained from ground-state energies calculated by means of a modified variational lowest-order constrained-variation (LOCV) method. Both fcc and hcp structures are considered, but results are given for the fcc structure only. The pressure and the compressibility are calculated or estimated from the dependence of the ground-state energy on density or molar volume, generally in a density region of 0.65σ⁻³ to 1.3σ⁻³, corresponding to a molar volume of 12-24 cm³/mole, where σ = 2.958 Å, and the calculations are done for five different two-body potentials. Theoretical results for the pressure are 340-460 atm for solid H₂ at a particle density of 0.82σ⁻³ or a molar volume of 19 cm³/mole, and 370-490 atm for solid D₂ at a particle density of 0.92σ⁻³ or a molar volume of 17 cm³/mole. The corresponding experimental results are 650 and 700 atm, respectively. Theoretical results for the compressibility are 210×10⁻⁶ to 260×10⁻⁶ atm⁻¹ for solid H₂ at a particle density of 0.82σ⁻³ or a molar volume of 19 cm³/mole, and 150×10⁻⁶ to 180×10⁻⁶ atm⁻¹ for solid D₂ at a particle density of 0.92σ⁻³ or a molar volume of 17 cm³/mole. The corresponding experimental results are 180×10⁻⁶ and 140×10⁻⁶ atm⁻¹, respectively. The agreement with experimental results is better for higher densities.

  18. The inverse problem of the calculus of variations for discrete systems

    Science.gov (United States)

    Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David

    2018-05-01

    We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non variational but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.

  19. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, by breaking through the Nyquist sampling theorem, provides a strong theoretical foundation for sampling and compressing image signals simultaneously. In imaging procedures based on compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analysed under different coding modes to verify the stability of the algorithm. Typical reconstruction algorithms are also compared and analysed under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that the reconstruction algorithm has great advantages compared with traditional classical TV-based algorithms and can quickly and accurately recover the target image at low measurement rates.

  20. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    Directory of Open Access Journals (Sweden)

    Jian Weng

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies of children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation that includes the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.

  1. Adaptive Second-Order Total Variation: An Approach Aware of Slope Discontinuities

    KAUST Repository

    Lenzen, Frank; Becker, Florian; Lellmann, Jan

    2013-01-01

    Total variation (TV) regularization, originally introduced by Rudin, Osher and Fatemi in the context of image denoising, has become widely used in the field of inverse problems. Two major directions of modifications of the original approach were proposed later on. The first concerns adaptive variants of TV regularization, the second focuses on higher-order TV models. In the present paper, we combine the ideas of both directions by proposing adaptive second-order TV models, including one anisotropic model. Experiments demonstrate that introducing adaptivity results in an improvement of the reconstruction error. © 2013 Springer-Verlag.

  2. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve a low dose in computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to the excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV) under a reduced-tube-current scanning protocol has been demonstrated to deliver significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization to retain the image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can not only ensure a higher signal to noise ratio (SNR) of the reconstructed image, but also improve its resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
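
    A loose, hypothetical rendering of the median-prior idea as a single gradient-style update; the record's actual method alternates exact subproblem minimizations, and proj_grad, dt and beta here are placeholders rather than the paper's quantities:

        import numpy as np
        from scipy.ndimage import median_filter

        def tv_mp_step(u, proj_grad, dt=0.1, beta=0.5, m_size=3, eps=1e-8):
            # auxiliary image m: each voxel's own local median, as in the TV_MP prior
            m = median_filter(u, size=m_size)
            # smoothed gradient of the TV term: -div(grad u / |grad u|)
            ux, uy = np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u
            mag = np.sqrt(ux**2 + uy**2 + eps)
            px, py = ux / mag, uy / mag
            tv_grad = -((px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1)))
            # data-fidelity gradient + TV gradient + pull toward the local median
            return u - dt * (proj_grad + tv_grad + beta * (u - m))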

  3. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2014-01-01

    We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal does not only increase the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.

  4. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    Science.gov (United States)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
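
    The iteratively reweighted least-squares core of such a TV inversion can be sketched for a small dense problem as follows (the randomized GSVD machinery for large problems is omitted; names and parameter values are illustrative):

        import numpy as np

        def irls_tv(G, d, L, alpha=1e-2, eps=1e-6, n_iter=10):
            # approximate min_m ||G m - d||^2 + alpha * ||L m||_1 by reweighted LS
            m = np.linalg.lstsq(G, d, rcond=None)[0]
            for _ in range(n_iter):
                w = 1.0 / np.sqrt((L @ m) ** 2 + eps ** 2)   # TV weights from current model
                m = np.linalg.solve(G.T @ G + alpha * (L.T @ (w[:, None] * L)),
                                    G.T @ d)
            return m

    Here L is a discrete-derivative operator; sharp jumps in m receive small penalties through w, which is how the total variation weighting preserves discontinuities.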

  5. CT reconstruction from few views with anisotropic edge-guided total variation

    International Nuclear Information System (INIS)

    Rong, Junyan; Liu, Wenlei; Gao, Peng; Liao, Qimei; Jiao, Chun; Ma, Jianhua; Lu, Hongbing

    2016-01-01

    To overcome the oversmoothing drawback in the edge areas when reconstructing few-view CT with total variation (TV) minimization, in this paper we propose an anisotropic edge-guided TV minimization framework for few-view CT reconstruction. In the framework, anisotropic TV is summed with a pre-weighted image gradient and then used as the objective function for minimization. It includes the edge-guided TV minimization (EGTV) and edge-guided adaptive-weighted TV minimization (EGAwTV) algorithms. For the EGTV algorithm, the weights of the TV discretization term are updated by anisotropic edge information detected from the image, whereas the weights for EGAwTV are determined based on edge information and local image-intensity gradients. To solve the minimization problem of the proposed algorithm, a similar TV-based minimization implementation is developed to address the raw-data fidelity and other constraints. The evaluation results using both computer simulations with the Shepp-Logan phantom and experimental data from a physical phantom demonstrate that the proposed algorithms exhibit noticeable gains in spatial resolution compared with the conventional TV and other modified TV algorithms.

  6. Low-lying excited states by constrained DFT

    Science.gov (United States)

    Ramos, Pablo; Pavanello, Michele

    2018-04-01

    Exploiting the machinery of Constrained Density Functional Theory (CDFT), we propose a variational method for calculating low-lying excited states of molecular systems. We dub this method eXcited CDFT (XCDFT). Excited states are obtained by self-consistently constraining a user-defined population of electrons, Nc, in the virtual space of a reference set of occupied orbitals. By imposing this population to be Nc = 1.0, we computed the first excited state of 15 molecules from a test set. Our results show that XCDFT achieves an accuracy in the predicted excitation energy only slightly worse than linear-response time-dependent DFT (TDDFT), but without incurring into problems of variational collapse typical of the more commonly adopted ΔSCF method. In addition, we selected a few challenging processes to test the limits of applicability of XCDFT. We find that in contrast to TDDFT, XCDFT is capable of reproducing energy surfaces featuring conical intersections (azobenzene and H3) with correct topology and correct overall energetics also away from the intersection. Venturing to condensed-phase systems, XCDFT reproduces the TDDFT solvatochromic shift of benzaldehyde when it is embedded by a cluster of water molecules. Thus, we find XCDFT to be a competitive method among single-reference methods for computations of excited states in terms of time to solution, rate of convergence, and accuracy of the result.

  7. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  8. Image Restoration Based on the Hybrid Total-Variation-Type Model

    Directory of Open Access Journals (Sweden)

    Baoli Shi

    2012-01-01

    We propose a hybrid total-variation-type model for the image restoration problem based on combining advantages of the ROF model with the LLT model. Since the two L1-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method called the proximal point method (PPM) is proposed and the convergence of the proposed method is proved. Some numerical results demonstrate the viability and efficiency of the proposed model and methods.

  9. Exciting times: Towards a totally minimally invasive paediatric urology service

    OpenAIRE

    Lazarus, John

    2011-01-01

    Following on from the first paediatric laparoscopic nephrectomy in 1992, the growth of minimally invasive ablative and reconstructive procedures in paediatric urology has been dramatic. This article reviews the literature related to laparoscopic dismembered pyeloplasty, optimising posterior urethral valve ablation and intravesical laparoscopic ureteric reimplantation.

  10. A RSSI-based parameter tracking strategy for constrained position localization

    Science.gov (United States)

    Du, Jinze; Diouris, Jean-François; Wang, Yide

    2017-12-01

    In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate the channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained on a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show the good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
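
    A minimal sketch of the LMS ingredient, assuming the standard log-distance path-loss model RSSI = p0 - 10 n log10(d); the trilateration and grid-snapping steps of the proposed strategy are not shown:

        import numpy as np

        def lms_track(rssi_samples, distances, mu=0.01, p0=-40.0, n=2.0):
            # track the channel parameters theta = (p0, n) from (RSSI, distance) pairs
            theta = np.array([p0, n])
            for r, dist in zip(rssi_samples, distances):
                phi = np.array([1.0, -10.0 * np.log10(dist)])   # model regressor
                e = r - phi @ theta                             # prediction error
                theta = theta + mu * e * phi                    # LMS update
            return theta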

  11. Identification of different geologic units using fuzzy constrained resistivity tomography

    Science.gov (United States)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with the geologic units interpreted from borehole information.
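
    The fuzzy c-means membership update at the heart of this constraint can be sketched as follows, assuming one-dimensional resistivity values per model cell and a fixed fuzzification exponent:

        import numpy as np

        def fcm_memberships(x, centers, fuzz=2.0, eps=1e-12):
            # u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(fuzz-1)); each row sums to one
            d = np.abs(x[:, None] - centers[None, :]) + eps
            ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (fuzz - 1.0))
            return 1.0 / ratio.sum(axis=2)

    During the iterative inversion, each model cell can then be nudged toward the center of the cluster in which it has the highest membership, which is the assignment step the record describes.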

  12. A Different View of Solar Spectral Irradiance Variations: Modeling Total Energy over Six-Month Intervals.

    Science.gov (United States)

    Woods, Thomas N; Snow, Martin; Harder, Jerald; Chapman, Gary; Cookson, Angela

    A different approach to studying solar spectral irradiance (SSI) variations, without the need for long-term (multi-year) instrument degradation corrections, is examining the total energy of the irradiance variation during 6-month periods. This duration is selected because a solar active region typically appears suddenly and then takes 5 to 7 months to decay and disperse back into the quiet-Sun network. The solar outburst energy, which is defined as the irradiance integrated over the 6-month period and thus includes the energy from all phases of active region evolution, could be considered the primary cause for the irradiance variations. Because solar cycle variation is the consequence of multiple active region outbursts, understanding the energy spectral variation may provide a reasonable estimate of the variations for the 11-year solar activity cycle. The moderate-term (6-month) variations from the Solar Radiation and Climate Experiment (SORCE) instruments can be decomposed into positive (in-phase with solar cycle) and negative (out-of-phase) contributions by modeling the variations using the San Fernando Observatory (SFO) facular excess and sunspot deficit proxies, respectively. These excess and deficit variations are fit over 6-month intervals every 2 months over the mission, and these fitted variations are then integrated over time for the 6-month energy. The dominant component indicates which wavelengths are in-phase and which are out-of-phase with solar activity. The results from this study indicate out-of-phase variations for the 1400 - 1600 nm range, with all other wavelengths having in-phase variations.

  13. Stringent tests of constrained Minimal Flavor Violation through ΔF=2 transitions

    International Nuclear Information System (INIS)

    Buras, Andrzej J.; Girrbach, Jennifer

    2013-01-01

    New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box diagram function that in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP-asymmetry S_ψKS and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF=2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_Bs√(B_Bs) and F_Bd√(B_Bd). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, allows then, together with ΔM_s/ΔM_d and S_ψKS, to determine the range of the angle γ in the unitarity triangle independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows to determine the favorite CMFV values of |V_cb|, |V_ub|, F_Bs√(B_Bs) and F_Bd√(B_Bd) as functions of S(v) and γ. The |V_cb|^4 dependence of ε_K allows to determine |V_cb| for a given S(v) and γ with a higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations between F_Bs√(B_Bs) and F_Bd√(B_Bd), |V_cb|, |V_ub| and γ that should be tested in the coming years. Typically F_Bs√(B_Bs) and F_Bd√(B_Bd) have to be lower than their present lattice values, while |V_cb| has to

  14. Glycemic Variation in Tumor Patients with Total Parenteral Nutrition

    Directory of Open Access Journals (Sweden)

    Jin-Cheng Yang

    2015-01-01

    Full Text Available Background: Hyperglycemia is associated with poor clinical outcomes and mortality in several patients. However, studies evaluating hyperglycemia variation in tumor patients receiving total parenteral nutrition (TPN) are scarce. The aim of this study was to assess the relationship between glycemia and tumor kind under TPN by monitoring glycemic variation in tumor patients. Methods: This retrospective clinical trial selected 312 patients with various cancer types, whose only nutrition treatment was TPN during the monitoring period. All patients had blood glucose (BG) values assessed at least six times daily during the TPN infusion. The glycemic variation before and after TPN was set as the indicator to evaluate the factors influencing BG. Results: The clinical trial lasted 7.5 ± 3.0 days, adjusted for age, gender, family cancer history and blood types. There were six cancer types: hepatic carcinoma (HC, 21.8%), rectal carcinoma (17.3%), colon carcinoma (CC, 14.7%), gastric carcinoma (29.8%), pancreatic carcinoma (11.5%), and duodenal carcinoma (DC, 4.8%). The patients were divided into diabetes and nondiabetes groups. No statistical differences in TPN glucose content between the diabetes and nondiabetes groups were found; however, the effect of tumor type on BG values was obvious. With increasing BG values, DC, HC and CC were more represented than other tumor types, in this sequence, in diabetic individuals as well as in the nondiabetic group. BG was more easily influenced in the nondiabetes group. Other factors did not impact BG values, including gender, body mass index, and TPN infusion duration time. Conclusions: When tumor patients are treated with TPN, BG levels should be monitored according to the different types of tumors, besides differentiating diabetes and nondiabetes patients. Special BG control is needed for DC, HC and CC in both diabetic and nondiabetic patients. If BG overtly increases, positive measures are needed to control BG

  15. A METHOD OF THE MINIMIZING OF THE TOTAL ACQUISITIONS COST WITH THE INCREASING VARIABLE DEMAND

    Directory of Open Access Journals (Sweden)

    ELEONORA IONELA FOCȘAN

    2015-12-01

    Full Text Available Over time, mankind has tried to find different ways of reducing costs. This subject, which we face ever more often nowadays, has been studied in detail without arriving at a general and efficient model for cost reduction. Cost reduction brings a number of benefits to an entity, the most important being: increased revenue and hence profit, increased productivity, a higher level of services/products offered to clients, and, last but not least, mitigation of the risk of economic deficit. Therefore, each entity searches for different ways to obtain the most benefits, so that the company can succeed in a competitive market. This article supports companies by presenting a new way of minimizing the total cost of acquisitions: it states some hypotheses about increasing variable demand, proves them, and develops formulas for reducing costs. The hypotheses presented in the model described below can be fully exploited to obtain new models for reducing the total cost, according to the purchasing practices of the entities that adopt it.

  16. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
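
    Since MSM generalizes the classical Gauss-Seidel iteration, the linear-system special case is easy to sketch; the NumPy fragment below solves Ax = b by coordinate sweeps, which is what the matrix-splitting view reduces to when the composite term vanishes. It illustrates the classical method only, not the authors' MSM.

        import numpy as np

        def gauss_seidel(A, b, n_iter=200, tol=1e-10):
            """Classical Gauss-Seidel: split A = L + D + U and iterate
            x <- (L + D)^{-1} (b - U x), one coordinate at a time."""
            x = np.zeros_like(b, dtype=float)
            for _ in range(n_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
                if np.linalg.norm(x - x_old) < tol:
                    break
            return x

        # Diagonally dominant test system:
        A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(gauss_seidel(A, b), np.linalg.solve(A, b))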

  17. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  18. Integrated batch production and maintenance scheduling for multiple items processed on a deteriorating machine to minimize total production and maintenance costs with due date constraint

    Directory of Open Access Journals (Sweden)

    Zahedi Zahedi

    2016-04-01

    Full Text Available This paper discusses an integrated model of batch production and maintenance scheduling on a deteriorating machine producing multiple items to be delivered at a common due date. The model describes the trade-off between total inventory cost and maintenance cost as the production run length increases. The production run length is a time bucket between two consecutive preventive maintenance activities. The objective function of the model is to minimize the total cost, consisting of in-process and completed-part inventory costs, setup cost, preventive and corrective maintenance costs, and rework cost. The problem is to determine the optimal production run length and to schedule the batches obtained from it in order to minimize total cost.

  19. Spiral phyllotaxis underlies constrained variation in Anemone (Ranunculaceae) tepal arrangement.

    Science.gov (United States)

    Kitazawa, Miho S; Fujimoto, Koichi

    2018-05-01

    Stabilization and variation of floral structures are indispensable for plant reproduction and evolution; however, the developmental mechanism regulating their structural robustness is largely unknown. To investigate this mechanism, we examined the positional arrangement (aestivation) of excessively produced perianth organs (tepals) of six- and seven-tepaled (lobed) flowers in six Anemone species (Ranunculaceae). We found that the tepal arrangements occurring in nature varied intraspecifically between spiral and whorled arrangements. Moreover, among the studied species, variation was commonly limited to three types, including whorls, despite five geometrically possible arrangements in six-tepaled flowers, and to two types among six possibilities in seven-tepaled flowers. A spiral arrangement, on the other hand, was unique to five-tepaled flowers. A spiral phyllotaxis model with stochasticity on initiating excessive primordia accounted for these limited variations in arrangement in cases where the divergence angle between preexisting primordia was less than 144°. Moreover, interspecific differences in the frequency of the observed arrangements were explained by changes in the model parameters that represent meristematic growth and differential organ growth. These findings suggest that the phyllotaxis parameters are responsible not only for intraspecific stability but also for interspecific differences of floral structure. The decreasing number of arrangements from six-tepaled to seven-tepaled Anemone flowers demonstrates that stabilization occurs as development proceeds to increase the component (organ) number, in contrast to the intuition that variation will be larger due to the increasing number of possible states (arrangements).
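
    The geometric core of the model, primordia placed at successive multiples of a divergence angle with stochastic jitter on initiation, can be sketched as follows; the divergence angle, noise level and tepal count are illustrative assumptions, not the parameters fitted in the paper.

        import numpy as np

        def tepal_angles(n_tepals, divergence=137.5, jitter_deg=5.0, seed=1):
            """Angular positions (deg) of primordia initiated along a
            phyllotactic spiral with stochastic noise on initiation."""
            rng = np.random.default_rng(seed)
            k = np.arange(n_tepals)
            angles = (k * divergence + rng.normal(0.0, jitter_deg, n_tepals)) % 360.0
            return np.sort(angles)

        # Gaps between neighbours hint at whorled vs. spiral arrangement:
        a = tepal_angles(6)
        gaps = np.diff(np.append(a, a[0] + 360.0))
        print(a.round(1), gaps.round(1))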

  20. Likelihood analysis of the next-to-minimal supergravity motivated model

    International Nuclear Information System (INIS)

    Balazs, Csaba; Carter, Daniel

    2009-01-01

    In anticipation of data from the Large Hadron Collider (LHC) and the potential discovery of supersymmetry, we calculate the odds of the next-to-minimal version of the popular supergravity motivated model (NmSuGra) being discovered at the LHC to be 4:3 (57%). We also demonstrate that viable regions of the NmSuGra parameter space outside the LHC reach can be covered by upgraded versions of dark matter direct detection experiments, such as super-CDMS, at 99% confidence level. Due to the similarities of the models, we expect very similar results for the constrained minimal supersymmetric standard model (CMSSM).

  1. A chance-constrained stochastic approach to intermodal container routing problems.

    Science.gov (United States)

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of the stochastic variables and chance constraints on the optimal solution and total cost.
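
    For normally distributed leg times, a chance constraint of the form P(arrival <= due date) >= p has a standard deterministic equivalent, mean + z_p * std <= due date. The sketch below checks a hypothetical three-leg route this way; it illustrates the constraint transformation only, not the paper's hybrid heuristic.

        from scipy.stats import norm

        def on_time_ok(leg_means, leg_vars, due, p=0.95):
            """Deterministic equivalent of P(total time <= due) >= p
            for independent normally distributed leg times."""
            mu = sum(leg_means)
            sigma = sum(leg_vars) ** 0.5
            return mu + norm.ppf(p) * sigma <= due

        # Hypothetical sea-rail route with three legs (hours):
        print(on_time_ok([40.0, 12.0, 8.0], [16.0, 4.0, 1.0], due=72.0))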

  2. Radar correlated imaging for extended target by the combination of negative exponential restraint and total variation

    Science.gov (United States)

    Qian, Tingting; Wang, Lianlian; Lu, Guanghua

    2017-07-01

    Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging and has recently attracted widespread attention. Conventional RCI methods neglect the structural information of complex extended targets, which keeps the quality of the recovered result from being truly satisfactory; thus a novel combination of negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The well-established alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm efficiently achieves high-resolution imaging of extended targets.

  3. Variations on minimal gauge-mediated supersymmetry breaking

    International Nuclear Information System (INIS)

    Dine, M.; Nir, Y.; Shirman, Y.

    1997-01-01

    We study various modifications to the minimal models of gauge-mediated supersymmetry breaking. We argue that, under reasonable assumptions, the structure of the messenger sector is rather restricted. We investigate the effects of possible mixing between messenger and ordinary squark and slepton fields and, in particular, violation of universality. We show that acceptable values for the μ and B parameters can naturally arise from discrete, possibly horizontal, symmetries. We claim that in models where the supersymmetry-breaking parameters A and B vanish at the tree level, tanβ could be large without fine-tuning. We explain how the supersymmetric CP problem is solved in such models. copyright 1997 The American Physical Society

  4. Seismic random noise attenuation using shearlet and total generalized variation

    International Nuclear Information System (INIS)

    Kong, Dehui; Peng, Zhenming

    2015-01-01

    Seismic denoising from a corrupted observation is an important part of seismic data processing, as it improves the signal-to-noise ratio (SNR) and resolution. In this paper, we present an effective denoising method to attenuate seismic random noise. The method takes advantage of shearlet and total generalized variation (TGV) regularization. The different regularity levels of TGV improve the quality of the final result by suppressing the Gibbs artifacts caused by the shearlet. The problem is formulated as mixed constraints in a convex optimization, and a Bregman algorithm is proposed to solve the proposed model. Extensive experiments on one synthetic dataset and two post-stack field datasets are carried out to compare performance. The results demonstrate that the proposed method is more effective and preserves structure better. (paper)
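
    Although the record combines shearlets with TGV and a Bregman solver, the effect of a total-variation penalty on a noisy section can be previewed with an off-the-shelf first-order TV denoiser; scikit-image's Chambolle implementation is used below purely as a simpler stand-in, on synthetic data.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        # Synthetic "seismic" section: smooth events plus random noise.
        x = np.linspace(0.0, 1.0, 128)
        section = np.sin(8 * np.pi * x)[None, :] * np.ones((64, 1))
        noisy = section + 0.5 * np.random.randn(*section.shape)

        # weight trades data fidelity against total variation of the result.
        denoised = denoise_tv_chambolle(noisy, weight=0.2)
        print(float(np.abs(noisy - section).mean()),
              float(np.abs(denoised - section).mean()))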

  5. Seismic random noise attenuation using shearlet and total generalized variation

    Science.gov (United States)

    Kong, Dehui; Peng, Zhenming

    2015-12-01

    Seismic denoising from a corrupted observation is an important part of seismic data processing, as it improves the signal-to-noise ratio (SNR) and resolution. In this paper, we present an effective denoising method to attenuate seismic random noise. The method takes advantage of shearlet and total generalized variation (TGV) regularization. The different regularity levels of TGV improve the quality of the final result by suppressing the Gibbs artifacts caused by the shearlet. The problem is formulated as mixed constraints in a convex optimization, and a Bregman algorithm is proposed to solve the proposed model. Extensive experiments on one synthetic dataset and two post-stack field datasets are carried out to compare performance. The results demonstrate that the proposed method is more effective and preserves structure better.

  6. Exploring the Metabolic and Perceptual Correlates of Self-Selected Walking Speed under Constrained and Un-Constrained Conditions

    Directory of Open Access Journals (Sweden)

    David T Godsiff, Shelly Coe, Charlotte Elsworth-Edelsten, Johnny Collett, Ken Howells, Martyn Morris, Helen Dawes

    2018-03-01

    Full Text Available Mechanisms underpinning self-selected walking speed (SSWS) are poorly understood. The present study investigated the extent to which SSWS is related to metabolism, energy cost, and/or perceptual parameters during both normal and artificially constrained walking. Fourteen participants with no pathology affecting gait were tested under standard conditions. Subjects walked on a motorized treadmill at speeds derived from their SSWS as a continuous protocol. RPE scores (CR10) and expired air to calculate energy cost (J.kg-1.m-1) and carbohydrate (CHO) oxidation rate (J.kg-1.min-1) were collected during minutes 3-4 at each speed. Eight individuals were re-tested under the same conditions within one week with a hip and knee brace to immobilize their right leg. Deflections in RPE scores (CR10) and CHO oxidation rate (J.kg-1.min-1) were not related to SSWS (five and three people had deflections in the defined range of SSWS in constrained and unconstrained conditions, respectively) (p > 0.05). Constrained walking elicited a higher energy cost (J.kg-1.m-1) and slower SSWS (p < 0.05). SSWS did not occur at a minimum energy cost (J.kg-1.m-1) in either condition; however, the size of the minimum energy cost to SSWS disparity was the same (Froude number Fr = 0.09) in both conditions (p = 0.36). Perceptions of exertion can modify walking patterns, and therefore SSWS and metabolism/energy cost are not directly related. Strategies which minimize perceived exertion may enable faster walking in people with altered gait, as our findings indicate they should self-optimize to the same extent under different conditions.

  7. Towards Constraining Glacial Isostatic Adjustment in Greenland Using ICESat and GPS Observations

    DEFF Research Database (Denmark)

    Nielsen, Karina; Sørensen, Louise Sandberg; Khan, Shfaqat Abbas

    2014-01-01

    Constraining glacial isostatic adjustment (GIA) i.e. the Earth’s viscoelastic response to past ice changes, is an important task, because GIA is a significant correction in gravity-based ice sheet mass balance estimates. Here, we investigate how temporal variations in the observed and modeled cru...

  8. Null Space Integration Method for Constrained Multibody Systems with No Constraint Violation

    International Nuclear Information System (INIS)

    Terze, Zdravko; Lefeber, Dirk; Muftic, Osman

    2001-01-01

    A method for integrating equations of motion of constrained multibody systems with no constraint violation is presented. A mathematical model, shaped as a differential-algebraic system of index 1, is transformed into a system of ordinary differential equations using the null-space projection method. The equations of motion are set in a non-minimal form. During integration, violations of the constraints are corrected by solving the constraint equations at the position and velocity levels, utilizing the metric of the system's configuration space and a projective criterion for the coordinate partitioning method. The method is applied to the dynamic simulation of a 3D constrained biomechanical system. The simulation results are evaluated by comparing them to the values of characteristic parameters obtained by kinematic analysis of the analyzed motion based on measured kinematics data.
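
    The position-level correction step, solving the constraint equations after each integration step, can be sketched for a single holonomic constraint. The toy planar pendulum below only illustrates the projection idea under simplified assumptions, not the full null-space method of the paper.

        import numpy as np

        def project_to_constraint(q, L, n_newton=5):
            """Correct a position q = (x, y) back onto the constraint
            g(q) = x^2 + y^2 - L^2 = 0 by Newton iteration along grad g."""
            q = np.asarray(q, dtype=float)
            for _ in range(n_newton):
                g = q @ q - L * L
                J = 2.0 * q                    # gradient of g
                q = q - J * (g / (J @ J))      # minimal-norm Newton correction
            return q

        # After an unconstrained integration step the point has drifted:
        q_drifted = np.array([0.72, 0.75])     # should satisfy x^2 + y^2 = 1
        q_fixed = project_to_constraint(q_drifted, L=1.0)
        print(q_fixed, q_fixed @ q_fixed)      # back on the unit circle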

  9. Opinions among Danish knee surgeons about indications to perform total knee replacement showed considerable variation

    DEFF Research Database (Denmark)

    Troelsen, Anders; Schrøder, Henrik; Husted, Henrik

    2012-01-01

    During the past decade, the incidence of primary total knee replacement (TKA) surgery in Denmark has approximately doubled. This increase could be due to weakened indications to perform TKA surgery. We aimed to investigate variation in opinions about indications to perform TKA among Danish knee...

  10. The thermodynamic approach to boron chemical vapour deposition based on a computer minimization of the total Gibbs free energy

    International Nuclear Information System (INIS)

    Naslain, R.; Thebault, J.; Hagenmuller, P.; Bernard, C.

    1979-01-01

    A thermodynamic approach based on the minimization of the total Gibbs free energy of the system is used to study the chemical vapour deposition (CVD) of boron from BCl3-H2 or BBr3-H2 mixtures on various types of substrates (at 1000 < T < 1900 K and 1 atm). In this approach it is assumed that states close to equilibrium are reached in the boron CVD apparatus. (Auth.)
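
    The computational idea, minimizing total Gibbs free energy subject to elemental balance, can be sketched with SciPy for a hypothetical two-species ideal mixture; the standard chemical potentials and the single balance constraint below are invented for illustration, and the real B-Cl-H system involves many more species.

        import numpy as np
        from scipy.optimize import minimize

        R, T = 8.314, 1500.0           # J/(mol K), K
        g0 = np.array([-50e3, -20e3])  # hypothetical standard potentials (J/mol)

        def gibbs(n):
            """Total Gibbs free energy of an ideal mixture with mole numbers n."""
            n = np.maximum(n, 1e-12)   # keep the logarithm well-defined
            return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

        # Elemental balance: both species carry one unit of the same element,
        # so the total moles of that element are conserved (= 1 here).
        cons = {"type": "eq", "fun": lambda n: n.sum() - 1.0}
        res = minimize(gibbs, x0=[0.5, 0.5], bounds=[(0, None)] * 2,
                       constraints=cons, method="SLSQP")
        print(res.x)                   # equilibrium mole numbers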

  11. A penalty method for PDE-constrained optimization in inverse problems

    International Nuclear Information System (INIS)

    Leeuwen, T van; Herrmann, F J

    2016-01-01

    Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE for several right-hand sides. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method that aims to combine the advantages of both approaches. Our method is based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method indeed reduces some of the nonlinearity of the problem and is less sensitive to the initial iterate. (paper)
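
    The quadratic-penalty idea carries over to a small linear toy problem: replace the hard constraint A(m)u = q with a penalty term and eliminate u in closed form at each parameter update. The operators and data below are random stand-ins for a discretized PDE, offered as a sketch of the formulation rather than the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 20
        P = np.eye(n)[:5]                     # sample 5 of 20 state entries
        q = rng.standard_normal(n)

        def A(m):
            # Toy "PDE" operator: identity plus a parameter on the diagonal.
            return np.eye(n) + np.diag(np.full(n, m))

        m_true = 0.7
        u_true = np.linalg.solve(A(m_true), q)
        d = P @ u_true                         # partial measurements

        lam = 1e2                              # penalty parameter
        def penalty_objective(m):
            # Eliminate u: minimize ||P u - d||^2 + lam ||A(m) u - q||^2 over u.
            Am = A(m)
            lhs = P.T @ P + lam * Am.T @ Am
            rhs = P.T @ d + lam * Am.T @ q
            u = np.linalg.solve(lhs, rhs)
            return np.sum((P @ u - d) ** 2) + lam * np.sum((Am @ u - q) ** 2)

        ms = np.linspace(0.0, 1.5, 61)
        print(ms[np.argmin([penalty_objective(m) for m in ms])])  # near 0.7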

  12. Design and Modelling of Sustainable Bioethanol Supply Chain by Minimizing the Total Ecological Footprint in Life Cycle Perspective

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Manzardo, Alessandro; Toniolo, Sara

    2013-01-01

    The purpose of this paper is to develop a model for designing the most sustainable bioethanol supply chain. Taking into consideration the possibility of multiple feedstocks, multiple transportation modes, multiple alternative technologies, multiple transport patterns and multiple waste disposal manners in bioethanol systems, this study developed a model for designing the most sustainable bioethanol supply chain by minimizing the total ecological footprint under some prerequisite constraints, including satisfying the goals of the stakeholders, the limitation of resources and energy, and the capacity...

  13. Stringent tests of constrained Minimal Flavor Violation through {Delta}F=2 transitions

    Energy Technology Data Exchange (ETDEWEB)

    Buras, Andrzej J. [TUM-IAS, Garching (Germany); Girrbach, Jennifer [TUM, Physik Department, Garching (Germany)

    2013-09-15

    New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box diagram function that in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP-asymmetry S_ψKS and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF=2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_Bs√(B_Bs) and F_Bd√(B_Bd). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, allows then, together with ΔM_s/ΔM_d and S_ψKS, to determine the range of the angle γ in the unitarity triangle independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows to determine the favorite CMFV values of |V_cb|, |V_ub|, F_Bs√(B_Bs) and F_Bd√(B_Bd) as functions of S(v) and γ. The |V_cb|^4 dependence of ε_K allows to determine |V_cb| for a given S(v) and γ with a higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations

  14. Chance-constrained programming approach to natural-gas curtailment decisions

    Energy Technology Data Exchange (ETDEWEB)

    Guldmann, J M

    1981-10-01

    This paper presents a modeling methodology for the determination of optimal-curtailment decisions by a gas-distribution utility during a chronic gas-shortage situation. Based on the end-use priority approach, a linear-programming model is formulated, that reallocates the available gas supply among the utility's customers while minimizing fuel switching, unemployment, and utility operating costs. This model is then transformed into a chance-constrained program in order to account for the weather-related variability of the gas requirements. The methodology is applied to the East Ohio Gas Company. 16 references, 2 figures, 3 tables.

  15. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well-known capacity-constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set of customers. In the VRPTW customers must be serviced within a given time period - a so-called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization... of Jørnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time- and capacity-constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed...

  16. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is a growing interest from both the regulatory authorities and the nuclear industry in stimulating the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter into such optimization problems. This paper presents an approach for using PRA models in conducting the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or burden is to be minimized while the risk or performance is constrained to be at a given level, or vice versa. The paper begins with the problem formulation, where the objective function and the constraints that apply in the constrained optimization of TIs based on risk and cost models at the system level are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA in order to allow optimizing TIs under constraints. Also, a case study is performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms for the constrained optimization of TIs in particular, although a great benefit is also expected from using this approach to solve other engineering optimization problems. However, care must be taken in using genetic algorithms in constrained optimization problems, as is concluded in this paper.
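
    A steady-state GA differs from a generational one in replacing only one (or a few) individuals per step. The sketch below minimizes a toy cost over test intervals subject to a risk ceiling handled with a death penalty; the cost and risk functions are placeholders, not the PRA models of the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        LOW, HIGH, RISK_MAX = 1.0, 100.0, 0.02

        def cost(ti):  return float(np.sum(100.0 / ti))  # frequent tests cost more
        def risk(ti):  return float(np.sum(1e-4 * ti))   # long intervals raise risk

        def fitness(ti):
            # Death penalty: an individual violating the risk ceiling is discarded.
            return cost(ti) if risk(ti) <= RISK_MAX else np.inf

        pop = rng.uniform(LOW, HIGH, size=(30, 4))       # 4 test intervals each
        fit = np.array([fitness(p) for p in pop])

        for _ in range(5000):                            # steady-state loop
            i, j = rng.choice(len(pop), 2, replace=False)
            winner = i if fit[i] < fit[j] else j         # binary tournament
            child = np.clip(pop[winner] + rng.normal(0.0, 2.0, 4), LOW, HIGH)
            fc = fitness(child)
            worst = int(np.argmax(fit))                  # replace the current worst
            if fc < fit[worst]:
                pop[worst], fit[worst] = child, fc

        print(pop[int(np.argmin(fit))].round(1), round(float(fit.min()), 3))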

  17. An Equivalent Emission Minimization Strategy for Causal Optimal Control of Diesel Engines

    Directory of Open Access Journals (Sweden)

    Stephan Zentner

    2014-02-01

    Full Text Available One of the main challenges during the development of operating strategies for modern diesel engines is the reduction of CO2 emissions while complying with ever more stringent limits for pollutant emissions. The inherent trade-off between the emissions of CO2 and pollutants renders a simultaneous reduction difficult. Therefore, an optimal operating strategy is sought that yields minimal CO2 emissions while holding the cumulative pollutant emissions at the allowed level. Such an operating strategy can be obtained offline by solving a constrained optimal control problem. However, the final-value constraint on the cumulated pollutant emissions prevents this approach from being adopted for causal control. This paper proposes a framework for causal optimal control of diesel engines. The optimization problem can be solved online when the constrained minimization of the CO2 emissions is reformulated as an unconstrained minimization of the CO2 emissions and the weighted pollutant emissions (i.e., equivalent emissions). However, the weighting factors are not known a priori. A method for the online calculation of these weighting factors is proposed. It is based on the Hamilton-Jacobi-Bellman (HJB) equation and a physically motivated approximation of the optimal cost-to-go. A case study shows that the causal control strategy defined by the online calculation of the equivalence factor and the minimization of the equivalent emissions is only slightly inferior to the non-causal offline optimization, while being applicable to online control.
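
    The reformulation replaces the constrained problem (minimize CO2 subject to a cumulative pollutant budget) with an unconstrained one, minimize CO2 + s * pollutant. The sketch below adapts the equivalence factor s online with a simple integral rule so that the cumulative pollutant tracks its budget; the one-line engine trade-off and the adaptation gain are toy assumptions, not the HJB-based rule of the paper.

        import numpy as np

        # Toy trade-off at each time step: picking a control u in [0, 1]
        # gives CO2 = 1 - 0.5*u and NOx = u (leaner = less CO2, more NOx).
        def best_u(s):
            # Minimize the equivalent emissions (1 - 0.5*u) + s*u over a grid.
            u = np.linspace(0.0, 1.0, 101)
            return u[np.argmin((1.0 - 0.5 * u) + s * u)]

        steps, nox_budget = 1000, 300.0
        target_rate = nox_budget / steps
        s, k_i = 0.5, 0.01               # equivalence factor and integral gain
        cum_nox = 0.0
        for t in range(steps):
            u = best_u(s)
            cum_nox += u
            # Integral adaptation: raise s when NOx runs above budget pace.
            s = max(0.0, s + k_i * (cum_nox - target_rate * (t + 1)))
        print(cum_nox, s)                # cumulative NOx near its budget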

  18. Generalized monotonicity from global minimization in fourth-order ODEs

    NARCIS (Netherlands)

    M.A. Peletier (Mark)

    2000-01-01

    textabstractWe consider solutions of the stationary Extended Fisher-Kolmogorov equation with general potential that are global minimizers of an associated variational problem. We present results that relate the global minimization property to a generalized concept of monotonicity of the solutions.

  19. Blind image fusion for hyperspectral imaging with the directional total variation

    Science.gov (United States)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.

  20. The Effect of Bionanocomposite Coatings on the Quality of Minimally Processed Mango

    Directory of Open Access Journals (Sweden)

    Ata Aditya Wardana

    2017-04-01

    Full Text Available Abstract Minimally processed mango is a perishable product due to high respiration, transpiration and microbial decay. Edible coating is one of the alternative methods to maintain the quality of minimally processed mango. The objective of this study was to evaluate the effects of a bionanocomposite edible coating made from tapioca and ZnO nanoparticles (NP-ZnO) on the quality of minimally processed mango cv. Arumanis, stored for 12 days at 8°C. Combinations of tapioca and NP-ZnO (0, 1, and 2% by weight of tapioca) were used to coat the minimally processed mango. The results showed that the bionanocomposite edible coatings were able to maintain the quality of minimally processed mango during the storage period. The bionanocomposite from tapioca + NP-ZnO (2% by weight of tapioca) was the most effective in limiting changes in weight, firmness, browning index, total acidity, total soluble solids, respiration, and microbial counts. Thus, the use of a bionanocomposite edible coating might provide an alternative method to maintain the storage quality of minimally processed mango.

  1. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

    Full Text Available This paper tackles the single machine scheduling problem with sequence-dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic method to solve the model. The proposed approach uses a genetic algorithm to solve the problem in a reasonable amount of time. Because of the high sensitivity of the GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.
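
    The objective being minimized is easy to state in code: given a job sequence, processing times, due dates, weights and sequence-dependent setup times, total weighted tardiness accumulates as below. The GA of the paper searches over such sequences; the data here are invented, and precedence checking is omitted.

        def total_weighted_tardiness(seq, proc, due, weight, setup):
            """Objective for single-machine scheduling with sequence-dependent
            setups: sum_j w_j * max(0, C_j - d_j), where setup[i][j] is the
            setup time when job j follows job i."""
            t, prev, twt = 0.0, None, 0.0
            for j in seq:
                if prev is not None:
                    t += setup[prev][j]
                t += proc[j]                      # completion time C_j
                twt += weight[j] * max(0.0, t - due[j])
                prev = j
            return twt

        proc   = [4.0, 3.0, 6.0]
        due    = [5.0, 11.0, 10.0]
        weight = [2.0, 1.0, 3.0]
        setup  = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
        print(total_weighted_tardiness([0, 2, 1], proc, due, weight, setup))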

  2. Tree dimension in verification of constrained Horn clauses

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick; Ganty, Pierre

    2018-01-01

    In this paper, we show how the notion of tree dimension can be used in the verification of constrained Horn clauses (CHCs). The dimension of a tree is a numerical measure of its branching complexity and the concept here applies to Horn clause derivation trees. Derivation trees of dimension zero c...... algorithms using these constructions to decompose a CHC verification problem. One variation of this decomposition considers derivations of successively increasing dimension. The paper includes descriptions of implementations and experimental results....

  3. Providing intraosseous anesthesia with minimal invasion.

    Science.gov (United States)

    Giffin, K M

    1994-08-01

    A new variation of intraosseous anesthesia--crestal anesthesia--that is rapid, site-specific and minimally invasive is presented. The technique uses alveolar crest nutrient canals for anesthetic delivery without penetrating either bone or periodontal ligament.

  4. Coupling failure between stem and femoral component in a constrained revision total knee arthroplasty.

    LENUS (Irish Health Repository)

    Butt, Ahsan Javed

    2013-02-01

    Knee revision using constrained implants is associated with greater stresses on the implant and interface surfaces. The present report describes a case of failure of the screw coupling between the stem and the femoral component. The cause of the failure is surmised, with an outline of the treatment in this case of extensive femoral bone loss. Revision implant stability was augmented with the use of a cemented femoral stem, screw fixation and the metaphyseal sleeve of an S-ROM modular hip system (DePuy International Ltd).

  5. Closed-form expressions for flip angle variation that maximize total signal in T1-weighted rapid gradient echo MRI.

    Science.gov (United States)

    Drobnitzky, Matthias; Klose, Uwe

    2017-03-01

    Magnetization-prepared rapid gradient-echo (MPRAGE) sequences are commonly employed for T1-weighted structural brain imaging. Following a contrast preparation radiofrequency (RF) pulse, the data acquisition proceeds under nonequilibrium conditions of the relaxing longitudinal magnetization. Variation of the flip angle can be used to maximize the total available signal. Simulated annealing or greedy algorithms have so far been published to solve this problem numerically, with signal-to-noise ratios optimized for clinical imaging scenarios by adhering to a predefined shape of the signal evolution. We propose an unconstrained optimization of the MPRAGE experiment that employs techniques from resource allocation theory. A new dynamic programming solution is introduced that yields closed-form expressions for optimal flip angle variation. Flip angle series are proposed that maximize total transverse magnetization (Mxy) for a range of physiologic T1 values. A 3D MPRAGE sequence is modified to allow for a controlled variation of the excitation angle. Experiments employing a T1 contrast phantom are performed at 3T. 1D acquisitions without phase encoding permit measurement of the temporal development of Mxy. Image mean signal and standard deviation for reference flip angle trains are compared in 2D measurements. Signal profiles at sharp phantom edges are acquired to assess image blurring related to nonuniform Mxy development. A novel closed-form expression for flip angle variation is found that constitutes the optimal policy to reach maximum total signal. It numerically equals previously published results of other authors when evaluated under their simplifying assumptions. Longitudinal magnetization (Mz) is exhaustively used without causing abrupt changes in the measured MR signal, which is a prerequisite for artifact-free images. Phantom experiments at 3T verify the expected benefit in total accumulated k-space signal when compared with published flip angle series. Describing
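
    For context, when longitudinal relaxation is neglected a classical closed-form series that spends Mz while keeping each excitation's transverse signal constant is theta_n = arctan(1/sqrt(N - n)); the sketch below checks that property numerically. This is the textbook result offered as background, not the paper's own expressions, which additionally account for relaxation.

        import numpy as np

        def constant_signal_angles(N):
            """theta_n = arctan(1/sqrt(N - n)), n = 1..N; the last pulse is 90 deg."""
            n = np.arange(1, N + 1)
            with np.errstate(divide="ignore"):
                return np.arctan(1.0 / np.sqrt(N - n))

        N = 8
        theta = constant_signal_angles(N)
        mz, signals = 1.0, []
        for th in theta:
            signals.append(mz * np.sin(th))  # transverse signal of this excitation
            mz *= np.cos(th)                 # remaining longitudinal magnetization
        print(np.round(signals, 4), round(mz, 6))  # equal signals; Mz fully spent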

  6. Variation in total sugars and reductive sugars in the moss Pleurozium schreberi (hylocomiaceae) under water deficit conditions

    International Nuclear Information System (INIS)

    Montenegro Ruiz, Luis Carlos; Melgarejo Munoz, Luz Marina.

    2012-01-01

    The structural simplicity of the bryophytes exposes them easily to water stress, forcing them to have physiological and biochemical mechanisms that enable them to survive. This study evaluated the variation of total soluble sugars and reducing sugars in relation to relative water content in Pleurozium schreberi when faced with low water content in the Paramo de Chingaza (Colombia) and under simulated conditions of water deficit in the laboratory. We found that total sugars increase when the plant is dehydrated and return to their normal content when the moss is rehydrated; this could be interpreted as a possible mechanism of osmotic adjustment and osmoprotection of the cell content and cellular structure. Reducing sugars showed no significant variation, showing that monosaccharides do not have a protective role during dehydration.

  7. Minimization of power consumption during charging of superconducting accelerating cavities

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharyya, Anirban Krishna, E-mail: anirban.bhattacharyya@physics.uu.se; Ziemann, Volker; Ruber, Roger; Goryashko, Vitaliy

    2015-11-21

    The radio frequency cavities, used to accelerate charged particle beams, need to be charged to their nominal voltage after which the beam can be injected into them. The standard procedure for such cavity filling is to use a step charging profile. However, during initial stages of such a filling process a substantial amount of the total energy is wasted in reflection for superconducting cavities because of their extremely narrow bandwidth. The paper presents a novel strategy to charge cavities, which reduces total energy reflection. We use variational calculus to obtain analytical expression for the optimal charging profile. Energies, reflected and required, and generator peak power are also compared between the charging schemes and practical aspects (saturation, efficiency and gain characteristics) of power sources (tetrodes, IOTs and solid state power amplifiers) are also considered and analysed. The paper presents a methodology to successfully identify the optimal charging scheme for different power sources to minimize total energy requirement.

  8. Minimization of power consumption during charging of superconducting accelerating cavities

    International Nuclear Information System (INIS)

    Bhattacharyya, Anirban Krishna; Ziemann, Volker; Ruber, Roger; Goryashko, Vitaliy

    2015-01-01

    The radio frequency cavities, used to accelerate charged particle beams, need to be charged to their nominal voltage after which the beam can be injected into them. The standard procedure for such cavity filling is to use a step charging profile. However, during initial stages of such a filling process a substantial amount of the total energy is wasted in reflection for superconducting cavities because of their extremely narrow bandwidth. The paper presents a novel strategy to charge cavities, which reduces total energy reflection. We use variational calculus to obtain analytical expression for the optimal charging profile. Energies, reflected and required, and generator peak power are also compared between the charging schemes and practical aspects (saturation, efficiency and gain characteristics) of power sources (tetrodes, IOTs and solid state power amplifiers) are also considered and analysed. The paper presents a methodology to successfully identify the optimal charging scheme for different power sources to minimize total energy requirement.

  9. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of the gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches, and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  10. Constraining the dark side with observations

    International Nuclear Information System (INIS)

    Diez-Tejedor, Alberto

    2007-01-01

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We will work within the Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of a single unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we will use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies we will see that we are restricted to the use of purely kinetic actions, previously studied in cosmology by Scherrer. Finally we arrive at a simple action which fits both cosmological and astrophysical observations.

  11. Constraining the dark side with observations

    Energy Technology Data Exchange (ETDEWEB)

    Diez-Tejedor, Alberto [Dpto. de Fisica Teorica, Universidad del PaIs Vasco, Apdo. 644, 48080, Bilbao (Spain)

    2007-05-15

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We will work within the Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of a single unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we will use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies we will see that we are restricted to the use of purely kinetic actions, previously studied in cosmology by Scherrer. Finally we arrive at a simple action which fits both cosmological and astrophysical observations.

  12. The Effect on Long-Term Survivorship of Surgeon Preference for Posterior-Stabilized or Minimally Stabilized Total Knee Replacement: An Analysis of 63,416 Prostheses from the Australian Orthopaedic Association National Joint Replacement Registry.

    Science.gov (United States)

    Vertullo, Christopher J; Lewis, Peter L; Lorimer, Michelle; Graves, Stephen E

    2017-07-05

    Controversy still exists as to the optimum management of the posterior cruciate ligament (PCL) in total knee arthroplasty. Surgeons can choose to kinematically substitute the PCL with a posterior-stabilized total knee replacement or alternatively to utilize a cruciate-retaining, also known as minimally stabilized, total knee replacement. Proponents of posterior-stabilized total knee replacement propose that the reported lower survivorship in registries when directly compared with minimally stabilized total knee replacement is due to confounders such as selection bias because of the preferential usage of posterior-stabilized total knee replacement in more complex or severe cases. In this study, we aimed to eliminate these possible confounders by performing an instrumental variable analysis based on surgeon preference to choose either posterior-stabilized or minimally stabilized total knee replacement, rather than the actual prosthesis received. Cumulative percent revision, hazard ratio (HR), and revision diagnosis data were obtained from the Australian Orthopaedic Association National Joint Replacement Registry from September 1, 1999, to December 31, 2014, for 2 cohorts of patients: those treated by high-volume surgeons who preferred minimally stabilized replacements and those treated by high-volume surgeons who preferred posterior-stabilized replacements. All patients had a diagnosis of osteoarthritis and underwent fixed-bearing total knee replacement with patellar resurfacing. At 13 years, the cumulative percent revision was 5.0% (95% confidence interval [CI], 4.0% to 6.2%) for the surgeons who preferred the minimally stabilized replacements compared with 6.0% (95% CI, 4.2% to 8.5%) for the surgeons who preferred the posterior-stabilized replacements. The revision risk for the surgeons who preferred posterior-stabilized replacements was significantly higher for all causes (HR = 1.45 [95% CI, 1.30 to 1.63]; p < 0.001) for posterior-stabilized total knee replacement compared with the patients of

  13. Exact and Heuristic Solutions to Minimize Total Waiting Time in the Blood Products Distribution Problem

    Directory of Open Access Journals (Sweden)

    Amir Salehipour

    2012-01-01

    Full Text Available This paper presents a novel application of operations research to support decision making in blood distribution management. The rapid and dynamic increasing demand, criticality of the product, storage, handling, and distribution requirements, and the different geographical locations of hospitals and medical centers have made blood distribution a complex and important problem. In this study, a real blood distribution problem containing 24 hospitals was tackled by the authors, and an exact approach was presented. The objective of the problem is to distribute blood and its products among hospitals and medical centers such that the total waiting time of those requiring the product is minimized. Following the exact solution, a hybrid heuristic algorithm is proposed. Computational experiments showed the optimal solutions could be obtained for medium size instances, while for larger instances the proposed hybrid heuristic is very competitive.

  14. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  15. Constrained core solutions for totally positive games with ordered players

    NARCIS (Netherlands)

    van den Brink, J.R.; van der Laan, G.; Vasil'ev, V.

    2014-01-01

    In many applications of cooperative game theory to economic allocation problems, such as river-, polluted river- and sequencing games, the game is totally positive (i.e., all dividends are nonnegative), and there is some ordering on the set of the players. A totally positive game has a nonempty

  16. Fast Lagrangian relaxation for constrained generation scheduling in a centralized electricity market

    International Nuclear Information System (INIS)

    Ongsakul, Weerakorn; Petcharaks, Nit

    2008-01-01

    This paper proposes a fast Lagrangian relaxation (FLR) for the constrained generation scheduling (CGS) problem in a centralized electricity market. FLR minimizes the consumer payment rather than the total supply cost, subject to the power balance, spinning reserve, transmission line, and generator operating constraints. The FLR algorithm is improved by a new initialization of the Lagrangian multipliers and their adaptive adjustment. The adaptive subgradient method using high-quality initial feasible multipliers requires a much smaller number of iterations to converge, leading to a faster computational time. If congestion exists, an alleviating-congestion index is proposed for congestion management. Finally, unit decommitment is performed to prevent excessive spinning reserve. The FLR for CGS is tested on the 4-unit and the IEEE 24-bus reliability test systems. The proposed uniform electricity price results in a lower consumer payment than the system marginal price based on uniformly fixed cost amortized allocation, non-uniform pricing, and electricity pricing incorporating side payments, leading to a lower electricity price. In addition, observations on the objective functions, a pricing scheme comparison, and an interpretation of the Lagrangian multipliers are provided. (author)
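
    The multiplier update at the heart of any Lagrangian relaxation scheduler is a subgradient step on the dualized constraint. The fragment below shows it for a toy two-unit, single-hour dispatch where the power-balance multiplier (price) is adjusted until the price-taking outputs meet demand; the unit data, initialization and fixed step are simplified stand-ins for the adaptive scheme of the paper.

        import numpy as np

        # Toy units with quadratic cost a*p + 0.5*c*p^2 and output limits.
        units = [dict(a=20.0, c=0.10, pmin=10.0, pmax=100.0),
                 dict(a=25.0, c=0.15, pmin=10.0, pmax=80.0)]
        demand = 150.0
        lam = 30.0                               # power-balance multiplier

        for it in range(500):
            # Price-taking dispatch: each unit minimizes cost(p) - lam*p alone.
            p = [np.clip((lam - u["a"]) / u["c"], u["pmin"], u["pmax"])
                 for u in units]
            g = demand - sum(p)                  # subgradient of the dual function
            lam += 0.01 * g                      # multiplier (price) update

        print([round(float(x), 1) for x in p], round(lam, 2))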

  17. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Full Text Available Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full time and part time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department) optimally. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full time and part time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the appointment scheduling decision to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wages and demand are considered as fuzzy variables to better describe the real-life situation. A chance operator is then used to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single level programming, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  18. Minimal clinically important improvement (MCII) and patient-acceptable symptom state (PASS) in total hip arthroplasty (THA) patients 1 year postoperatively

    DEFF Research Database (Denmark)

    Paulsen, Aksel; Roos, Ewa M.; Pedersen, Alma Becic

    2014-01-01

    Background and purpose - The increased use of patient-reported outcomes (PROs) in orthopedics requires data on estimated minimal clinically important improvements (MCIIs) and patient-acceptable symptom states (PASSs). We wanted to find cut-points corresponding to the minimal clinically important PRO change score and the acceptable postoperative PRO score, by estimating MCII and PASS 1 year after total hip arthroplasty (THA) for the Hip Dysfunction and Osteoarthritis Outcome Score (HOOS) and the EQ-5D. Patients and methods - THA patients from 16 different departments received 2 PROs and additional... MCIIs corresponded to a ...-55% improvement from the mean baseline PRO score, and PASSs corresponded to absolute follow-up scores of 57-91% of the maximum score in THA patients 1 year after surgery. Interpretation - This study improves the interpretability of PRO scores. The different estimation approaches presented may serve as a guide...

  19. Data-constrained reionization and its effects on cosmological parameters

    International Nuclear Information System (INIS)

    Pandolfi, S.; Ferrara, A.; Choudhury, T. Roy; Mitra, S.; Melchiorri, A.

    2011-01-01

    We perform an analysis of the recent WMAP7 data considering physically motivated and viable reionization scenarios, with the aim of assessing their effects on cosmological parameter determinations. The main novelties are: (i) the combination of cosmic microwave background data with astrophysical results from quasar absorption line experiments; (ii) the joint variation of both the cosmological and the astrophysical parameters [the latter governing the evolution of the free electron fraction x_e(z)]. Including a realistic, data-constrained reionization history in the analysis induces appreciable changes in the cosmological parameter values deduced through a standard WMAP7 analysis. Particularly noteworthy are the variations in Ω_b h² = 0.02258 (+0.00057/−0.00056) [WMAP7 (Sudden)] vs. Ω_b h² = 0.02183 ± 0.00054 [WMAP7+ASTRO (CF)], and the new constraints on the scalar spectral index, for which WMAP7+ASTRO (CF) excludes the Harrison-Zel'dovich value n_s = 1 at >3σ. Finally, the electron-scattering optical depth is considerably decreased with respect to the standard WMAP7 value, i.e. τ_e = 0.080 ± 0.012. We conclude that including astrophysical data sets, which allow the reionization history to be robustly constrained, in the extraction procedure of cosmological parameters leads to relatively important differences in the final determination of their values.

  20. One-dimensional Gromov minimal filling problem

    International Nuclear Information System (INIS)

    Ivanov, Alexandr O; Tuzhilin, Alexey A

    2012-01-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  1. Constrained reaction volume approach for studying chemical kinetics behind reflected shock waves

    KAUST Repository

    Hanson, Ronald K.

    2013-09-01

    We report a constrained-reaction-volume strategy for conducting kinetics experiments behind reflected shock waves, achieved in the present work by staged filling in a shock tube. Using hydrogen-oxygen ignition experiments as an example, we demonstrate that this strategy eliminates the possibility of non-localized (remote) ignition in shock tubes. Furthermore, we show that this same strategy can also effectively eliminate or minimize pressure changes due to combustion heat release, thereby enabling quantitative modeling of the kinetics throughout the combustion event using a simple assumption of specified pressure and enthalpy. We measure temperature and OH radical time-histories during ethylene-oxygen combustion behind reflected shock waves in a constrained reaction volume and verify that the results can be accurately modeled using a detailed mechanism and a specified pressure and enthalpy constraint. © 2013 The Combustion Institute.

  2. Are undesirable contact kinematics minimized after kinematically aligned total knee arthroplasty? An intersurgeon analysis of consecutive patients.

    Science.gov (United States)

    Howell, Stephen M; Hodapp, Esther E; Vernace, Joseph V; Hull, Maury L; Meade, Thomas D

    2013-10-01

    Tibiofemoral contact kinematics, or knee implant motions, have a direct influence on patient function and implant longevity and should be evaluated for any new alignment technique such as kinematically aligned total knee arthroplasty (TKA). Edge loading of the tibial liner, and external rotation (the reverse of normal) and adduction of the tibial component on the femoral component, are undesirable contact kinematics that should be minimized. Accordingly, this study determined whether the overall prevalence of undesirable contact kinematics during standing, mid-kneeling near 90° of flexion, and full kneeling with kinematically aligned TKA is minimal and does not differ between groups of consecutive patients treated by different surgeons. Three surgeons were asked to perform cemented, kinematically aligned TKA with patient-specific guides in a consecutive series of patients with their preferred cruciate-retaining (CR) implant. In vivo tibiofemoral contact positions were obtained using a 3D-to-2D image registration technique in 69 subjects (Vanguard CR-TKA, N = 22; Triathlon CR-TKA, N = 47). Anterior or posterior edge loading of the tibial liner was not observed. The overall prevalence of external rotation of the tibial component on the femoral component, 6%, was low and did not differ between surgeons (n.s.); the overall prevalence of adduction of the tibial component on the femoral component, 4%, was likewise low and did not differ between surgeons (n.s.). Kinematically aligned TKA minimized the undesirable contact kinematics of edge loading of the tibial liner and of external rotation and adduction of the tibial component on the femoral component during standing and kneeling, which suggests an optimistic prognosis for durable long-term function. Level of evidence: III.

  3. On the isoperimetric rigidity of extrinsic minimal balls

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2003-01-01

    We consider an m-dimensional minimal submanifold P and a metric R-sphere in the Euclidean space R^n. If the sphere has its center p on P, then it will cut out a well-defined connected component of P which contains this center point. We call this connected component an extrinsic minimal R-ball of P. The quotient of the volume of the extrinsic ball and the volume of its boundary is not larger than the corresponding quotient obtained in the space form standard situation, where the minimal submanifold is the totally geodesic linear subspace R^m. Here we show that if the minimal submanifold has dimension larger than 3, if P is not too curved along the boundary of an extrinsic minimal R-ball, and if the inequality alluded to above is an equality for the extrinsic minimal ball, then the minimal submanifold is totally geodesic.

  4. Existence of evolutionary variational solutions via the calculus of variations

    Science.gov (United States)

    Bögelein, Verena; Duzaar, Frank; Marcellini, Paolo

    In this paper we introduce a purely variational approach to time-dependent problems, yielding the existence of global parabolic minimizers, that is, ∫_0^T ∫_Ω [u·∂_t φ + f(x,Du)] dx dt ≤ ∫_0^T ∫_Ω f(x, Du+Dφ) dx dt, whenever T > 0 and φ ∈ C_0^∞(Ω×(0,T), R^N). For the integrand f : Ω×R^{Nn} → [0,∞] we merely assume convexity with respect to the gradient variable and coercivity. These evolutionary variational solutions are obtained as limits of maps depending on space and time that minimize certain convex variational functionals. In the simplest situation, with some growth conditions on f, the method provides the existence of global weak solutions to Cauchy-Dirichlet problems for parabolic systems of the type ∂_t u − div D_ξ f(x,Du) = 0 in Ω×(0,∞).

  5. Total Knee Replacement in A Resource Constrained Environment: A ...

    African Journals Online (AJOL)

    2017-03-06


  6. 4D segmentation of brain MR images with constrained cortical thickness variation.

    Directory of Open Access Journals (Sweden)

    Li Wang

    Full Text Available Segmentation of brain MR images plays an important role in the longitudinal investigation of developmental, aging, and disease-progression changes in the cerebral cortex. However, most existing brain segmentation methods consider multiple time-point images individually and thus cannot achieve longitudinal consistency. For example, cortical thickness measured from the segmented image will contain unnecessary temporal variations, which will affect the time-related change pattern and eventually reduce the statistical power of the analysis. In this paper, we propose a 4D segmentation framework for adult brain MR images with the constraint of cortical thickness variations. Specifically, we utilize local intensity information to address intensity inhomogeneity, a spatial cortical thickness constraint to keep the cortical thickness within a reasonable range, and a temporal cortical thickness variation constraint between neighboring time-points to suppress artificial variations. The proposed method has been tested on the BLSA and ADNI datasets with promising results. Both qualitative and quantitative experimental results demonstrate the advantage of the proposed method in comparison to other state-of-the-art 4D segmentation methods.

  7. Interspecific variation of total seed protein in wild rice germplasm using SDS-Page

    International Nuclear Information System (INIS)

    Shah, S.M.A.; Hidayat-ur-Rahman; Abbasi, F.M.; Ashiq, M.; Rabbani, A.M.; Khan, I.A.; Shinwari, Z.K.; Shah, Z.

    2011-01-01

    Variation in the seed protein of 14 wild rice species (Oryza spp.) and the cultivated rice species (O. sativa) was studied using sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) to assess genetic diversity in the rice germplasm. SDS bands were scored as present (1) or absent (0) for the protein sample of each genotype. On the basis of cluster analysis, four clusters were identified at a similarity level of 0.85. O. nivara, O. rufipogon and O. sativa, with AA genomes, constituted the first cluster. The second cluster comprised O. punctata of the BB genome and wild rice species of the CC genome, i.e., O. rhizomatis and O. officinalis; however, it also contained O. barthii and O. glumaepatula of the AA genome. O. australiensis with the EE genome, and O. latifolia, O. alta and O. grandiglumis with CCDD genomes, comprised the third cluster. The fourth cluster consisted of the wild rice species O. brachyantha along with two other wild rice species, O. longistaminata and O. meridionalis, of the AA genome. Overall, on the basis of total seed protein, the grouping pattern of the rice genotypes was mostly compatible with their genome status. The results of the present work revealed considerable interspecific genetic variation in the investigated germplasm for total seed protein, and further suggest that analysis of seed protein can provide a better understanding of the genetic affinity of the germplasm. (author)

  8. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    Science.gov (United States)

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
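
    When point probabilities of both distributions can be evaluated but the state space is too large to enumerate, the identity TV(P,Q) = (1/2) Σ_x |p(x) − q(x)| = E_{x~P}[max(0, 1 − q(x)/p(x))] suggests a simple sampling estimator. The sketch below illustrates this on a toy product space of independent bits; it is a schematic of the general Monte Carlo idea with invented distributions, not the authors' GPU implementation for phylogenetic trees.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 30  # 2**30 states: far too many to enumerate directly

      # Two product distributions over {0,1}^n, standing in for a chain's
      # time-t law P and its stationary law Q (both invented).
      p_bit = np.full(n, 0.55)   # P(x_i = 1)
      q_bit = np.full(n, 0.50)   # Q(x_i = 1)

      def log_prob(x, bit_probs):
          return np.sum(np.where(x == 1, np.log(bit_probs), np.log1p(-bit_probs)))

      # Monte Carlo estimate of TV(P,Q) = E_{x~P}[ max(0, 1 - q(x)/p(x)) ].
      samples, acc = 20000, 0.0
      for _ in range(samples):
          x = (rng.random(n) < p_bit).astype(int)          # draw x ~ P
          ratio = np.exp(log_prob(x, q_bit) - log_prob(x, p_bit))
          acc += max(0.0, 1.0 - ratio)
      print("estimated TV distance:", acc / samples)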

  9. The effect of agency budgets on minimizing greenhouse gas emissions from road rehabilitation policies

    International Nuclear Information System (INIS)

    Reger, Darren; Madanat, Samer; Horvath, Arpad

    2015-01-01

    Transportation agencies are being urged to reduce their greenhouse gas (GHG) emissions. One possible solution within their scope is to alter their pavement management system to include environmental impacts. Managing pavement assets is important because poor road conditions lead to increased fuel consumption of vehicles. Rehabilitation activities improve pavement condition, but require materials and construction equipment, which produce GHG emissions as well. The agency’s role is to decide when to rehabilitate the road segments in the network. In previous work, we sought to minimize total societal costs (user and agency costs combined) subject to an emissions constraint for a road network, and demonstrated that there exists a range of potentially optimal solutions (a Pareto frontier) with tradeoffs between costs and GHG emissions. However, we did not account for the case where the available financial budget to the agency is binding. This letter considers an agency whose main goal is to reduce its carbon footprint while operating under a constrained financial budget. A Lagrangian dual solution methodology is applied, which selects the optimal timing and optimal action from a set of alternatives for each segment. This formulation quantifies GHG emission savings per additional dollar of agency budget spent, which can be used in a cap-and-trade system or to make budget decisions. We discuss the importance of communication between agencies and their legislature that sets the financial budgets to implement sustainable policies. We show that for a case study of Californian roads, it is optimal to apply frequent, thin overlays as opposed to the less frequent, thick overlays recommended in the literature if the objective is to minimize GHG emissions. A promising new technology, warm-mix asphalt, will have a negligible effect on reducing GHG emissions for road resurfacing under constrained budgets. (letter)
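
    The letter's Lagrangian dual approach can be sketched as follows: for a trial multiplier mu, each segment independently picks the action minimizing emissions plus mu times cost, and mu is then tuned until total spending meets the budget. The segment data, action set, and bisection rule below are invented for illustration; the converged mu plays the role of the "GHG emission savings per additional dollar of agency budget" discussed in the record.

      import numpy as np

      # Hypothetical road segments: columns are actions (do nothing,
      # thin overlay, thick overlay) with per-segment cost and GHG.
      cost = np.array([[0.0, 1.0, 2.5],
                       [0.0, 1.2, 2.8],
                       [0.0, 0.9, 2.2]])
      ghg = np.array([[10.0, 6.0, 5.0],
                      [12.0, 7.5, 6.5],
                      [ 9.0, 5.5, 4.8]])
      budget = 3.0

      def best_actions(mu):
          # Each segment independently minimizes GHG + mu * cost.
          return np.argmin(ghg + mu * cost, axis=1)

      # Bisection on the multiplier: larger mu prices cost more heavily,
      # pushing the plan toward cheaper actions.
      lo, hi = 0.0, 100.0
      for _ in range(60):
          mu = 0.5 * (lo + hi)
          spend = cost[np.arange(len(cost)), best_actions(mu)].sum()
          if spend > budget:
              lo = mu      # over budget: raise the price on spending
          else:
              hi = mu
      acts = best_actions(mu)
      print("mu (GHG saved per extra dollar):", round(mu, 3))
      print("actions:", acts, "spend:", cost[np.arange(3), acts].sum())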

  10. Low-Voltage Consumption Coordination for Loss Minimization and Voltage Control

    DEFF Research Database (Denmark)

    Juelsgaard, Morten; Sloth, Christoffer; Wisniewski, Rafal

    2014-01-01

    This work presents a strategy for minimizing active power losses in low-voltage grids by coordinating the consumption of electric vehicles and power generation from solar panels. We show that minimizing losses also reduces voltage variations, and illustrate how this may be employed for increasing…

  11. Mantle viscosity structure constrained by joint inversions of seismic velocities and density

    Science.gov (United States)

    Rudolph, M. L.; Moulik, P.; Lekic, V.

    2017-12-01

    The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.

  12. Lidar Penetration Depth Observations for Constraining Cloud Longwave Feedbacks

    Science.gov (United States)

    Vaillant de Guelis, T.; Chepfer, H.; Noel, V.; Guzman, R.; Winker, D. M.; Kay, J. E.; Bonazzola, M.

    2017-12-01

    Satellite-borne active remote sensing by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations [CALIPSO; Winker et al., 2010] and CloudSat [Stephens et al., 2002] missions provides direct measurements of the cloud vertical distribution with very high vertical resolution. The penetration depth of the lidar laser, Z_Opaque, is directly linked to the LongWave (LW) Cloud Radiative Effect (CRE) at the Top Of the Atmosphere (TOA) [Vaillant de Guélis et al., in review]. In addition, this measurement is extremely stable in time, making it an excellent observational candidate to verify and constrain the cloud LW feedback mechanism [Chepfer et al., 2014]. In this work, we present a method to decompose the variations of the LW CRE at TOA into contributions from changes in five cloud properties observed by lidar [GOCCP v3.0; Guzman et al., 2017]: opaque cloud cover, opaque cloud altitude, thin cloud cover, thin cloud altitude, and thin cloud emissivity [Vaillant de Guélis et al., in review]. We apply this method, in the real world, to the CRE variations of the CALIPSO 2008-2015 record and, in climate models, to LMDZ6 and CESM simulations of the CRE variations over the 2008-2015 period and of the CRE difference between a warm climate and the current climate. In the climate model simulations, the same cloud properties as those observed by CALIOP are extracted from the lidar simulator [Chepfer et al., 2008] of the CFMIP Observation Simulator Package (COSP) [Bodas-Salcedo et al., 2011], which mimics the observations that would be performed by the lidar on board the CALIPSO satellite. This method, when applied to multi-model simulations of the current and future climate, could reveal the altitude of the cloud opacity level observed by lidar as a strong constraint on the cloud LW feedback, since the altitude feedback mechanism is physically explainable and the altitude of cloud opacity is accurately observed by lidar.

  13. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    Science.gov (United States)

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
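
    The variable splitting and alternating direction method of multipliers (ADMM) machinery used by this model can be seen in miniature on 1D total-variation denoising, min_x (1/2)||x − y||² + λ||Dx||_1 with the splitting Dx = z. The sketch below is a generic ADMM illustration on invented data, not the paper's TGV-plus-dictionary solver.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      truth = np.repeat([0.0, 1.0, 0.3, 0.8], n // 4)   # piecewise-constant signal
      y = truth + 0.1 * rng.standard_normal(n)

      D = np.diff(np.eye(n), axis=0)      # finite-difference operator, (n-1) x n
      lam, rho = 0.5, 1.0
      x, z, u = y.copy(), D @ y, np.zeros(n - 1)
      A = np.eye(n) + rho * D.T @ D       # system matrix for every x-update

      def soft(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      for _ in range(200):
          x = np.linalg.solve(A, y + rho * D.T @ (z - u))   # quadratic subproblem
          z = soft(D @ x + u, lam / rho)                    # shrinkage subproblem
          u += D @ x - z                                    # dual ascent
      print("relative error vs truth:",
            np.linalg.norm(x - truth) / np.linalg.norm(truth))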

  14. Unifying principles of irreversibility minimization for efficiency maximization in steady-flow chemically-reactive engines

    International Nuclear Information System (INIS)

    Ramakrishnan, Sankaran; Edwards, Christopher F.

    2014-01-01

    Systems research has led to the conception and development of various steady-flow, chemically-reactive engine cycles for stationary power generation and propulsion. However, the question that remains unanswered is: what is the maximum-efficiency steady-flow chemically-reactive engine architecture permitted by physics? On the one hand, the search for higher-efficiency cycles continues, often involving newer processes and devices (fuel cells, carbon separation, etc.); on the other hand, the design parameters of existing cycles are continually optimized in response to improvements in device engineering. In this paper we establish that any variation in engine architecture, whether a parametric change or a process-sequence change, contributes to an efficiency increase via one of only two possible ways to minimize total irreversibility. These two principles help us unify our understanding from a large number of parametric analyses and cycle-optimization studies for any steady-flow chemically-reactive engine, and set a framework to systematically identify maximum-efficiency engine architectures. - Highlights: • A unified thermodynamic model to study chemically-reactive engine architectures is developed. • All parametric analyses of efficiency are unified by two irreversibility-minimization principles. • Variations in internal energy transfers yield a net work increase that is greater than the engine irreversibility reduced. • Variations in external energy transfers yield a net work increase that is less than the engine irreversibility reduced.

  15. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization.

    Science.gov (United States)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-09-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Biases in the lower cerebellum and pons of HRRT brain images have been reported. The two main sources of the problem with MAP-TR are poor bone/soft tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method, which introduces scatter correction in the μ-map reconstruction and total variation filtering in the transmission processing. Comparing MAP-TR and the new TXTV with gold-standard CT-based attenuation correction, we found that TXTV has less bias than MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of biases. TXTV-based reconstruction is recommended for human brain scans on the HRRT.

  16. Variational principle for nonlinear gyrokinetic Vlasov--Maxwell equations

    International Nuclear Information System (INIS)

    Brizard, Alain J.

    2000-01-01

    A new variational principle for the nonlinear gyrokinetic Vlasov-Maxwell equations is presented. This Eulerian variational principle uses constrained variations for the gyrocenter Vlasov distribution in eight-dimensional extended phase space and turns out to be simpler than the Lagrangian variational principle recently presented by H. Sugama [Phys. Plasmas 7, 466 (2000)]. A local energy conservation law is then derived explicitly by the Noether method. In future work, this new variational principle will be used to derive self-consistent, nonlinear, low-frequency Vlasov-Maxwell bounce-gyrokinetic equations, in which the fast gyromotion and bounce-motion time scales have been eliminated.

  17. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
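
    The speed-up comes from reorganizing calculations shared across observation vectors, for example factorizing the design matrix once and reusing it for every right-hand side (the published algorithm additionally groups columns by their active constraint sets). The fragment below only illustrates the reuse-of-factorization idea on unconstrained least squares with many observation vectors; all matrices are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.standard_normal((500, 20))      # one design matrix ...
      B = rng.standard_normal((500, 10000))   # ... many observation vectors

      # Naive approach: one independent solve per column of B.
      # X = np.column_stack([np.linalg.lstsq(A, b, rcond=None)[0] for b in B.T])

      # Reorganized: factor A a single time, then apply it to all columns.
      Q, R = np.linalg.qr(A)                  # O(m n^2), done once
      X = np.linalg.solve(R, Q.T @ B)         # cheap solves for every RHS
      print(X.shape)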

  18. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
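
    One way to picture the approach: constrain the Dirichlet mean to specified alpha-factor point estimates (alpha = c·m), so that only the precision c remains free, and choose c by least squares against a diffuseness target on a marginal, which for a Dirichlet is a Beta(alpha_i, c − alpha_i) distribution. The mean vector, the quantile target, and the single-marginal objective below are illustrative assumptions, not the paper's exact objective function.

      import numpy as np
      from scipy import stats, optimize

      m = np.array([0.90, 0.07, 0.03])   # assumed alpha-factor mean estimates
      target_q95 = 0.12                  # assumed 95th-percentile target for
                                         # the second component's marginal

      def objective(c):
          alpha = c * m                  # mean-constrained Dirichlet parameters
          # Marginal of component i is Beta(alpha_i, c - alpha_i).
          q95 = stats.beta.ppf(0.95, alpha[1], c - alpha[1])
          return (q95 - target_q95) ** 2 # least-squares match to the target

      res = optimize.minimize_scalar(objective, bounds=(0.1, 100.0),
                                     method="bounded")
      print("precision c:", res.x, "-> Dirichlet parameters:", res.x * m)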

  19. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models, including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy, the dispersive long wave (DLW) hierarchy, and the TB hierarchy, are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. In particular, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and t_n-constrained flows, whose adjoint representations and Lax pairs are given. (general)

  20. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-C4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with the C1-C2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358-2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-C4 hydrocarbons remains significant without fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938
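
    In spirit, MUM-PCE adjusts normalized rate-parameter factors x, bounded by their prior uncertainty, so as to reproduce measured targets while staying close to the nominal model, e.g. minimizing Σ_r ((S_r(x) − S_r^exp)/σ_r)² + Σ_i x_i². The toy surrogate below uses a linear response model with invented sensitivities and targets; it shows only the shape of this constrained minimization, not the actual chemistry or the published code.

      import numpy as np
      from scipy.optimize import least_squares

      # Invented linear response surrogate: flame speed targets respond to
      # normalized rate factors x (x = 0 is the nominal model, |x| <= 1 is
      # the prior uncertainty band).
      S_nominal = np.array([38.0, 41.5, 36.2])     # cm/s, made up
      sens = np.array([[2.0, 0.5, 0.1],
                       [1.8, 0.7, 0.2],
                       [1.5, 0.3, 0.9]])           # made-up sensitivities
      S_exp = np.array([39.0, 42.8, 35.5])         # made-up measurements
      sigma = np.array([0.8, 0.8, 0.8])

      def residuals(x):
          S_model = S_nominal + sens @ x
          # Data misfit plus a penalty keeping x near the nominal model.
          return np.concatenate([(S_model - S_exp) / sigma, x])

      sol = least_squares(residuals, x0=np.zeros(3), bounds=(-1.0, 1.0))
      print("constrained rate factors:", sol.x)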

  1. A Hybrid Genetic Algorithm to Minimize Total Tardiness for Unrelated Parallel Machine Scheduling with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Chunfeng Liu

    2013-01-01

    Full Text Available The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem in which multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to punishment costs or cancellation of orders by the clients in many situations. A priority rule-based heuristic algorithm, which schedules a prior job on a prior machine according to the priority rule at each iteration, is suggested and embedded in the HGA to produce initial feasible schedules that can be improved in further stages. Computational experiments are conducted to show that the proposed HGA performs well with respect to accuracy and efficiency of solutions for small-sized problems and gets better results than the conventional genetic algorithm within the same runtime for large-sized problems.
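
    A priority rule-based list scheduler of the kind embedded in the HGA can be sketched as: repeatedly pick a precedence-ready job by a priority rule (earliest due date is assumed here) and place it on the machine that completes it soonest. The job data, machine count, and rule below are invented for illustration.

      # Unrelated machines: p[j][m] = processing time of job j on machine m.
      p = {0: [4, 6], 1: [3, 2], 2: [5, 4], 3: [2, 5]}
      due = {0: 6, 1: 5, 2: 14, 3: 9}
      preds = {0: [], 1: [], 2: [0, 1], 3: [1]}   # precedence constraints

      machine_free = [0, 0]       # next free time of each machine
      finish = {}                 # completion times of scheduled jobs
      unscheduled = set(p)

      while unscheduled:
          ready = [j for j in unscheduled if all(q in finish for q in preds[j])]
          j = min(ready, key=lambda j: due[j])      # earliest-due-date rule
          # A job may start only after its predecessors have finished.
          est = max([0] + [finish[q] for q in preds[j]])
          m = min(range(2), key=lambda m: max(machine_free[m], est) + p[j][m])
          start = max(machine_free[m], est)
          finish[j] = start + p[j][m]
          machine_free[m] = finish[j]
          unscheduled.remove(j)

      tardiness = sum(max(0, finish[j] - due[j]) for j in p)
      print(finish, "total tardiness:", tardiness)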

  2. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

    The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model, denoted ShiftCP, has proven useful for the modeling of latency changes in trial-based neuroimaging data [17]. In order to facilitate component interpretation, we presently extend the ShiftCP model such that the extracted components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the ShiftCP model…

  3. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-01-01

    Full Text Available Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  4. Total knee replacement in a resource constrained environment: A ...

    African Journals Online (AJOL)

    Introduction: Total knee replacement surgery is relatively new in Nigeria and available in only a few centres. It has been evolving at a slow pace because of the lack of facilities, structures and adequate surgical expertise, alongside patient ignorance and poverty. Objective: The aim of this article is to review the cases done in a ...

  5. Inversion of Love wave phase velocity using smoothness-constrained least-squares technique; Heikatsuka seiyakutsuki saisho jijoho ni yoru love ha iso sokudo no inversion

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, S. [Nippon Geophysical Prospecting Co. Ltd., Tokyo (Japan)]

    1996-10-01

    The smoothness-constrained least-squares technique with ABIC minimization was applied to the inversion of surface-wave phase velocities in geophysical exploration, to confirm its usefulness. Since this study aimed mainly at the applicability of the technique, the Love wave, which is easier to treat theoretically than the Rayleigh wave, was used. Stable successive approximation solutions could be obtained by repeated improvement of the S-wave velocity model, and an objective model with high reliability could be determined. In contrast, for inversion by simple minimization of the residual sum of squares, stable solutions could likewise be obtained by repeated improvement, but judging convergence was very difficult, and the obtained model might be over-fitted. In this study, the Love wave was used to examine the applicability of the smoothness-constrained least-squares technique with ABIC minimization; its applicability to the Rayleigh wave will be investigated in future work. 8 refs.
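
    Smoothness-constrained least squares of this kind typically minimizes ||Gm − d||² + λ²||Lm||², where L is a roughening (second-difference) operator and λ trades data fit against model smoothness; in the paper λ is selected by ABIC minimization, whereas the sketch below simply fixes it by hand and uses an invented kernel and data.

      import numpy as np

      rng = np.random.default_rng(3)
      n_data, n_model = 40, 60

      G = rng.standard_normal((n_data, n_model))   # invented forward kernel
      m_true = np.sin(np.linspace(0, 3 * np.pi, n_model))
      d = G @ m_true + 0.05 * rng.standard_normal(n_data)

      # Second-difference roughening operator: penalizes model curvature.
      L = np.diff(np.eye(n_model), n=2, axis=0)
      lam = 1.0                                    # smoothness weight (fixed here)

      # Solve the stacked least-squares system [G; lam*L] m ~ [d; 0].
      A = np.vstack([G, lam * L])
      rhs = np.concatenate([d, np.zeros(L.shape[0])])
      m_est = np.linalg.lstsq(A, rhs, rcond=None)[0]
      print("relative model misfit:",
            np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))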

  6. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    International Nuclear Information System (INIS)

    Qi Pei-Han; Zheng Shi-Lian; Yang Xiao-Niu; Zhao Zhi-Jin

    2016-01-01

    Adaptation is one of the key capabilities of cognitive radio; it concerns how to adjust the radio parameters to optimize system performance based on knowledge of the radio environment and of the radio's capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and biogeography-based optimization (BBO) is introduced to solve it. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of service (QoS) constraints are taken into account. The results show that under different QoS requirement settings, corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which makes BBO a better choice for real-time applications. (paper)
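
    BBO evolves a population of candidate parameter settings by migration: habitats with high habitat suitability index (HSI) tend to emigrate features, habitats with low HSI tend to immigrate them, and occasional mutation maintains diversity. The sketch below is a bare-bones BBO on a stand-in objective; the migration rates, mutation probability, and fitness function are illustrative assumptions rather than the paper's constrained QoS formulation.

      import numpy as np

      rng = np.random.default_rng(4)
      pop_size, dim, iters = 20, 5, 200

      def hsi(x):
          return -np.sum(x ** 2)        # stand-in objective (maximize HSI)

      pop = rng.uniform(-5, 5, (pop_size, dim))
      for _ in range(iters):
          order = np.argsort([hsi(x) for x in pop])[::-1]   # best first
          pop = pop[order]
          ranks = np.arange(pop_size)
          mu = 1 - ranks / (pop_size - 1)   # emigration: high for good habitats
          lam = 1 - mu                      # immigration: high for poor habitats
          new_pop = pop.copy()
          for i in range(pop_size):
              for d in range(dim):
                  if rng.random() < lam[i]:
                      # Roulette-wheel choice of an emigrating donor habitat.
                      donor = rng.choice(pop_size, p=mu / mu.sum())
                      new_pop[i, d] = pop[donor, d]
                  if rng.random() < 0.02:   # mutation
                      new_pop[i, d] = rng.uniform(-5, 5)
          new_pop[0] = pop[0]               # elitism: keep the current best
          pop = new_pop
      best = max(pop, key=hsi)
      print("best HSI:", hsi(best), "best solution:", best)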

  7. Cement-in-cement acetabular revision with a constrained tripolar component.

    Science.gov (United States)

    Leonidou, Andreas; Pagkalos, Joseph; Luscombe, Jonathan

    2012-02-17

    Dislocation of a total hip replacement (THR) is a common complication following total hip arthroplasty (THA). When nonoperative management fails to maintain reduction, revision surgery is considered, and the use of constrained acetabular liners has been extensively described. Complete removal of the old cement mantle during revision THA can be challenging and is associated with significant complications. Cement-in-cement revision is an established technique; however, the available clinical and experimental studies focus on femoral stem revision. The purpose of this study was to present a case of cement-in-cement acetabular revision with a constrained component for recurrent dislocations and to investigate the current best evidence for this technique. This article describes the case of a 74-year-old woman who underwent revision of a Charnley THR for recurrent low-energy dislocations. A tripolar constrained acetabular component was cemented over the primary cement mantle after removal of the original liner by reaming, roughening the surface, and thoroughly irrigating and drying the primary cement. Clinical and radiological results were good, with the Oxford Hip Score improving from 11 preoperatively to 24 at 6 months postoperatively. The good short-term results of this case and the current clinical and biomechanical data encourage the use of the cement-in-cement technique for acetabular revision. Careful irrigation, drying, and roughening of the primary surface are necessary. Copyright 2012, SLACK Incorporated.

  8. Generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators with applications in non-compact settings and minimization problems

    Directory of Open Access Journals (Sweden)

    Chowdhury Mohammad SR

    2000-01-01

    Full Text Available Existence theorems are obtained for generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We use the concept of escaping sequences introduced by Border (Fixed Point Theorem with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems for non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we obtain the existence of solutions for some kinds of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.

  9. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John (CERN; King's Coll. London); Mustafayev, Azar (Minnesota U., Theor. Phys. Inst.); Olive, Keith A. (Minnesota U., Theor. Phys. Inst.; Stanford U., Phys. Dept.; SLAC)

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ̃_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A, H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  10. Constrained supersymmetric flipped SU(5) GUT phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [CERN, TH Division, PH Department, Geneva 23 (Switzerland); King's College London, Theoretical Physics and Cosmology Group, Department of Physics, London (United Kingdom); Mustafayev, Azar [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Olive, Keith A. [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Stanford University, Department of Physics and SLAC, Palo Alto, CA (United States)

    2011-07-15

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A, H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  11. Constrained supersymmetric flipped SU(5) GUT phenomenology

    International Nuclear Information System (INIS)

    Ellis, John; Mustafayev, Azar; Olive, Keith A.

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A, H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  12. A projective constrained variational principle for a classical particle with spin

    International Nuclear Information System (INIS)

    Amorim, R.

    1983-01-01

    A geometric approach for variational principles with constraints is applied to obtain the equations of motion of a classical charged point particle with magnetic moment interacting with an external electromagnetic field. (Author) [pt]

  13. Elemental Spatiotemporal Variations of Total Suspended Particles in Jeddah City

    Directory of Open Access Journals (Sweden)

    Mohammad W. Kadi

    2014-01-01

    Full Text Available Elements associated with total suspended particulate matter (TSP) in Jeddah city were determined. Using high-volume samplers, TSP samples were simultaneously collected over a one-year period from seven sampling sites. Samples were analyzed for Al, Ba, Ca, Cu, Mg, Fe, Mn, Zn, Ti, V, Cr, Co, Ni, As, and Sr. Results revealed a strong dependence of element contents on spatial and temporal variations. Two sites characterized by busy roads, workshops, dense population, and heavy trucking have high levels of all measured elements. Concentrations of most elements at these two sites exhibit strong spatial gradients and are higher than at the other locations. The highest concentrations of elements were observed during June-August because of dust storms, a significant increase in energy consumption, and active surface winds. Enrichment factors of elements at the high-level sites have values in the range >10~60, while for Cu and Zn the enrichment factors are much higher (~0->700), indicating that a greater percentage of the TSP composition of these elements in air comes from anthropogenic activities.

  14. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem, since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of non-dense stellar clusters.
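
    For Poisson data, minimizing the Kullback-Leibler divergence alternately in the object and in the point-spread function leads, in the unconstrained case, to Richardson-Lucy-type multiplicative updates. The 1D sketch below illustrates this alternating scheme on synthetic data; it omits the paper's feasible-set constraints and inexact inner solves, all signals are invented, and (as in any blind deconvolution) the object is recovered only up to a shift.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 64
      x_true = np.zeros(n); x_true[[20, 35, 40]] = [50.0, 80.0, 30.0]  # "stars"
      h_true = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
      h_true /= h_true.sum()

      fft, ifft = np.fft.fft, np.fft.ifft
      conv = lambda a, b: np.real(ifft(fft(a) * fft(b)))          # circular H x
      adj = lambda v, k: np.real(ifft(fft(v) * np.conj(fft(k))))  # circular H^T v

      y = rng.poisson(np.maximum(conv(x_true, h_true), 0.0)).astype(float)

      x = np.full(n, y.sum() / n)   # flat initial object
      h = np.full(n, 1.0 / n)       # flat initial PSF

      for _ in range(200):
          # Richardson-Lucy multiplicative step in the object (PSF fixed).
          x *= adj(y / (conv(x, h) + 1e-12), h) / h.sum()
          # Richardson-Lucy multiplicative step in the PSF (object fixed).
          h *= adj(y / (conv(x, h) + 1e-12), x) / x.sum()
          h /= h.sum()              # keep the PSF normalized to unit flux
      print("brightest recovered pixels:", sorted(np.argsort(x)[-3:]))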

  15. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Full Text Available Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system becomes challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. The deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. Results obtained from the SOS algorithm offer better solutions than the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and the Firefly Algorithm (FFA). An economic analysis is carried out to quantify the economic benefits of energy loss minimization over the life span of RDGs.

  16. [Abdominothoracic esophageal resection according to Ivor Lewis with intrathoracic anastomosis: standardized totally minimally invasive technique].

    Science.gov (United States)

    Runkel, N; Walz, M; Ketelhut, M

    2015-05-01

    The clinical and scientific interest in minimally invasive techniques for esophagectomy (MIE) is increasing; however, the intrathoracic esophagogastric anastomosis remains a surgical challenge and lacks standardization. Surgeons either transpose the anastomosis to the cervical region or perform a hybrid thoracotomy for stapler access. This article reports technical details and early experiences with a completely laparoscopic-thoracoscopic approach to Ivor Lewis esophagectomy without additional thoracotomy. The extent of radical dissection follows clinical guidelines. Laparoscopy is performed with the patient in a beach-chair position and thoracoscopy in a left lateral decubitus position using single-lung ventilation. The anvil of the circular stapler is placed transorally into the esophageal stump. The specimen and gastric conduit are exteriorized through a subcostal rectus muscle split incision. The stapler body is placed into the gastric conduit, and both are advanced through the abdominal mini-incision transhiatally into the right thoracic cavity, where the anastomosis is constructed. Data were collected prospectively and analyzed retrospectively. A total of 23 non-selected consecutive patients (mean age 69 years, range 46-80 years) with adenocarcinoma (n = 19) or squamous cell carcinoma (n = 4) were treated surgically between June 2010 and July 2013. Neoadjuvant therapy was performed in 15 patients, resulting in 10 partial and 4 complete remissions. There were no technical complications and no conversions. Mean operative time was 305 min (range 220-441 min). The median lymph node count was 16 (range 4-42). An R0 resection was achieved in 91% of patients, and 3 anastomotic leaks occurred, which were successfully managed endoscopically. There were no postoperative deaths. The intrathoracic esophagogastric anastomosis during minimally invasive Ivor Lewis esophagectomy can be constructed in a standardized fashion without an additional thoracotomy.

  17. Modes of failure of Osteonics constrained tripolar implants: a retrospective analysis of forty-three failed implants.

    Science.gov (United States)

    Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E

    2008-07-01

    The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. Only revisions related to the constrained acetabular component itself were considered failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure of the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It therefore appears logical to limit its application to salvage situations.

  18. Mixed Higher Order Variational Model for Image Recovery

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2014-01-01

    Full Text Available A novel mixed higher-order regularizer involving the first- and second-degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Owing to this equivalent formulation, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under a majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme by experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second-degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
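
    The fast iterative shrinkage-thresholding machinery mentioned here solves problems of the form min_x (1/2)||Ax − y||² + λ||x||_1 by alternating a gradient step, a soft-thresholding step, and a momentum extrapolation. The sketch below is plain FISTA on an invented sparse recovery problem, not the paper's FPG-MFISTA with its monotone safeguard; the step size is set from the Lipschitz constant of the data term.

      import numpy as np

      rng = np.random.default_rng(6)
      m, n = 80, 200
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      lam = 0.02
      L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient

      soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      x = np.zeros(n); z = x.copy(); t = 1.0
      for _ in range(300):
          x_new = soft(z - (A.T @ (A @ z - y)) / L, lam / L)  # proximal step
          t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
          z = x_new + ((t - 1) / t_new) * (x_new - x)         # extrapolation
          x, t = x_new, t_new
      print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])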

  19. Variational and quasi-variational inequalities in mechanics

    CERN Document Server

    Kravchuk, Alexander S

    2007-01-01

    The essential aim of the present book is to consider a wide set of problems arising in the mathematical modelling of mechanical systems under unilateral constraints. In these investigations elastic and non-elastic deformations, friction and adhesion phenomena are taken into account. All the necessary mathematical tools are given: local boundary value problem formulations, construction of variational equations and inequalities, and the transition to minimization problems, existence and uniqueness theorems, and variational transformations (Friedrichs and Young-Fenchel-Moreau) to dual and saddle-point search problems. Important new results concern contact problems with friction. The Coulomb friction law and some others are considered, in which relative sliding velocities appear. The corresponding quasi-variational inequality is constructed, as well as the appropriate iterative method for its solution. Outlines of the variational approach to non-stationary and dissipative systems and to the construction of the go...

  20. An unusual mode of failure of a tripolar constrained acetabular liner: a case report.

    LENUS (Irish Health Repository)

    Banks, Louisa N

    2012-02-01

    Dislocation after primary total hip arthroplasty (THA) is the most commonly encountered complication and is unpleasant for both the patient and the surgeon. Constrained acetabular components can be used to treat or prevent instability after primary THA. We present the case of a 42-year-old female with a BMI of 41. At 18 months post-primary THA, the patient underwent further revision hip surgery after numerous (more than 20) dislocations. She had a tripolar Trident acetabular cup (Stryker-Howmedica-Osteonics, Rutherford, New Jersey) inserted. Shortly afterwards, an unusual mode of failure of the constrained acetabular liner was noted on radiographs: the inner liner had dissociated from the outer, while the reinforcing ring remained intact and in place. We believe that the patient's weight, combined with poor abductor musculature, placed excessive demand on the device, leading to failure at this interface when the patient flexed forward. Constrained acetabular components are useful implants to treat instability but have been shown to have up to 42% long-term failure rates, with problems such as dissociated inserts, dissociated constraining rings and dissociated femoral rings being cited. Sometimes they may be the only option left in difficult cases such as the one illustrated here, but they unfortunately still have the capacity to fail in unusual ways.

  1. An unusual mode of failure of a tripolar constrained acetabular liner: a case report.

    Science.gov (United States)

    Banks, Louisa N; McElwain, John P

    2010-04-01

    Dislocation after primary total hip arthroplasty (THA) is the most commonly encountered complication and is unpleasant for both the patient and the surgeon. Constrained acetabular components can be used to treat or prevent instability after primary THA. We present the case of a 42-year-old female with a BMI of 41. At 18 months post-primary THA, the patient underwent further revision hip surgery after numerous (more than 20) dislocations. She had a tripolar Trident acetabular cup (Stryker-Howmedica-Osteonics, Rutherford, New Jersey) inserted. Shortly afterwards, an unusual mode of failure of the constrained acetabular liner was noted on radiographs: the inner liner had dissociated from the outer, while the reinforcing ring remained intact and in place. We believe that the patient's weight, combined with poor abductor musculature, placed excessive demand on the device, leading to failure at this interface when the patient flexed forward. Constrained acetabular components are useful implants to treat instability but have been shown to have up to 42% long-term failure rates, with problems such as dissociated inserts, dissociated constraining rings and dissociated femoral rings being cited. Sometimes they may be the only option left in difficult cases such as the one illustrated here, but they unfortunately still have the capacity to fail in unusual ways.

  2. Applications of a constrained mechanics methodology in economics

    International Nuclear Information System (INIS)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of interpreting the solutions in economics as compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.
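
    To make the first example concrete, here is a minimal Python sketch of a business cycle viewed as a damped, forced oscillation, in the spirit of the Phillips model discussed above; the coefficients and forcing term are illustrative placeholders, not values from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        w0, zeta = 1.0, 0.15                    # natural frequency and damping (illustrative)
        f = lambda t: 0.3 * np.cos(0.8 * t)     # exogenous demand shock (forcing term)

        def rhs(t, y):
            # y[0]: output deviation from equilibrium, y[1]: its rate of change
            return [y[1], f(t) - 2.0 * zeta * w0 * y[1] - w0**2 * y[0]]

        sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0])
        print(sol.y[0, -1])                     # damped cycles about equilibrium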

  3. Applications of a constrained mechanics methodology in economics

    Science.gov (United States)

    Janová, Jitka

    2011-11-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of interpreting the solutions in economics as compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  4. Applications of a constrained mechanics methodology in economics

    Energy Technology Data Exchange (ETDEWEB)

    Janova, Jitka, E-mail: janova@mendelu.cz [Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemedelska 1, 613 00 Brno (Czech Republic)

    2011-11-15

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of interpreting the solutions in economics as compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  5. Accelerating cross-validation with total variation and its application to super-resolution imaging.

    Directory of Open Access Journals (Sweden)

    Tomoyuki Obuchi

    We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
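
    For contrast with the paper's approximation, the following sketch performs the literally conducted cross-validation that the formula is designed to avoid, on a synthetic ℓ1-penalized regression (the total variation term is omitted here for brevity; the data, penalty weight, and fold count are illustrative).

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(0)
        A = rng.normal(size=(200, 500))              # n << p regression design
        x_true = np.zeros(500)
        x_true[:10] = 1.0                            # sparse ground truth
        y = A @ x_true + 0.1 * rng.normal(size=200)

        cve = 0.0
        for train, test in KFold(n_splits=10).split(A):
            model = Lasso(alpha=0.05).fit(A[train], y[train])
            cve += np.mean((y[test] - model.predict(A[test])) ** 2)
        print("10-fold CVE:", cve / 10)              # the cost the formula avoids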

  6. Finding a minimally informative Dirichlet prior distribution using least squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
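
    A minimal sketch of the general idea, not the authors' exact objective: with the Dirichlet marginal means fixed at target alpha-factor values, the concentration parameter is chosen by least squares so that the marginal Beta percentiles match deliberately diffuse targets. The means and percentile targets below are hypothetical.

        import numpy as np
        from scipy.stats import beta
        from scipy.optimize import minimize_scalar

        m = np.array([0.95, 0.04, 0.01])         # target alpha-factor means (hypothetical)
        q95_target = np.minimum(5.0 * m, 0.999)  # diffuse 95th-percentile targets (hypothetical)

        def loss(theta):
            # marginal of Dirichlet(theta*m) for component i is Beta(theta*m_i, theta*(1-m_i))
            q95 = beta.ppf(0.95, theta * m, theta * (1.0 - m))
            return np.sum((q95 - q95_target) ** 2)

        res = minimize_scalar(loss, bounds=(0.5, 100.0), method="bounded")
        print("Dirichlet parameters:", res.x * m)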

  7. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  8. Biological variation, reference change value (RCV) and minimal important difference (MID) of inspiratory muscle strength (PImax) in patients with stable chronic heart failure.

    Science.gov (United States)

    Täger, Tobias; Schell, Miriam; Cebola, Rita; Fröhlich, Hanna; Dösch, Andreas; Franke, Jennifer; Katus, Hugo A; Wians, Frank H; Frankenstein, Lutz

    2015-10-01

    Despite the widespread application of measurements of respiratory muscle force (PImax) in clinical trials, there are no data on biological variation, reference change value (RCV), or the minimal important difference (MID) for PImax, irrespective of the target cohort. We addressed this issue for patients with chronic stable heart failure. From the outpatients' clinic of the University of Heidelberg we retrospectively selected three groups of patients with stable systolic chronic heart failure (CHF). Each group had two measurements of PImax: 90 days apart in Group A (n = 25), 180 days apart in Group B (n = 93), and 365 days apart in Group C (n = 184). Stability was defined as (a) no change in NYHA class between visits and (b) absence of cardiac decompensation 3 months prior to, during, and 3 months after the measurements. For each group, we determined the within-subject (CVI), between-subject (CVG), and total (CVT) coefficients of variation (CV), the index of individuality (II), RCV, reliability coefficient, and MID of PImax. CVT was 8.7%, 7.5%, and 6.9% for groups A, B, and C, respectively. The II and RCV were 0.21, 0.20, 0.16 and 13.6%, 11.6%, 10.8%, respectively. The reliability coefficient and MID were 0.83, 0.87, 0.88 and 1.44, 1.06, 1.12 kPa, respectively. Results were similar across age, gender, and aetiology subgroups. In patients with stable CHF, measurements of PImax are highly stable for intervals up to 1 year. The low values for II suggest that evaluation of change in PImax should be performed on an individual (per-patient) basis. An individually significant change can be assumed beyond 14% (RCV) or 1.12 kPa (MID).
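
    The quantities named in this record follow standard biological-variation definitions, sketched below for reference; the paper's exact estimators may differ, and the example inputs are illustrative.

        import math

        def rcv(cv_analytical, cv_within, z=1.96):
            # reference change value: smallest individually significant % change
            return math.sqrt(2.0) * z * math.sqrt(cv_analytical**2 + cv_within**2)

        def index_of_individuality(cv_within, cv_between):
            # II << 1 favours per-patient deltas over population reference ranges
            return cv_within / cv_between

        print(rcv(2.0, 4.0))                      # ~12.4% for CVA = 2%, CVI = 4%
        print(index_of_individuality(4.0, 20.0))  # II = 0.2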

  9. Variational methods and effective actions in string models

    International Nuclear Information System (INIS)

    Dereli, T.; Tucker, R.W.

    1987-01-01

    Effective actions motivated by zero-order and first-order actions are examined. Particular attention is devoted to a variational procedure that is consistent with the structure equations involving the Lorentz connection. Attention is drawn to subtleties that can arise in varying higher-order actions, and an efficient procedure is developed to handle these cases using the calculus of forms. The effect of constrained variations on the field equations is discussed. (author)

  10. Analysis of Power Network for Line Reactance Variation to Improve Total Transmission Capacity

    Directory of Open Access Journals (Sweden)

    Ikram Ullah

    2016-11-01

    The increasing growth in power demand and the penetration of renewable distributed generation in a competitive electricity market demand large and flexible capacity from the transmission grid to reduce transmission bottlenecks. The bottlenecks cause transmission congestion and reliability problems, restrict competition, and limit the maximum dispatch of low-cost generation in the network. The electricity system requires efficient utilization of the current transmission capability to improve the Available Transfer Capability (ATC). To improve the ATC, power flow among the lines can be managed by using Flexible AC Transmission System (FACTS) devices as power flow controllers, which alter the parameters of power lines. It is important to place FACTS devices on suitable lines to vary the reactance for improving the Total Transmission Capacity (TTC) of the network and provide flexibility in the power flow. In this paper a transmission network is analyzed based on line parameter variation to improve the TTC of the interconnected system. Lines are selected for placing FACTS devices based on real power flow Performance Index (PI) sensitivity factors. TTC is computed using the Repeated Power Flow (RPF) method under the constraints of line thermal limits, bus voltage limits and generator limits. The reactance of suitable lines, selected on the basis of PI sensitivity factors, is changed to divert the power flow to other lines with enough transfer capacity available. The improvement of TTC using line reactance variation is demonstrated with three IEEE test systems with multi-area networks. The results show that varying the selected lines' reactance improves TTC for all the test networks under the defined contingency cases.

  11. Ontogenetic Variation of Individual and Total Capsaicinoids in Malagueta Peppers (Capsicum frutescens) during Fruit Maturation.

    Science.gov (United States)

    Fayos, Oreto; de Aguiar, Ana Carolina; Jiménez-Cantizano, Ana; Ferreiro-González, Marta; Garcés-Claver, Ana; Martínez, Julián; Mallor, Cristina; Ruiz-Rodríguez, Ana; Palma, Miguel; Barroso, Carmelo G; Barbero, Gerardo F

    2017-05-03

    The ontogenetic variation of total and individual capsaicinoids (nordihydrocapsaicin (n-DHC), capsaicin (C), dihydrocapsaicin (DHC), homocapsaicin (h-C) and homodihydrocapsaicin (h-DHC)) present in Malagueta pepper (Capsicum frutescens) during fruit ripening has been studied. Malagueta peppers were grown in a greenhouse under controlled temperature and humidity conditions. Capsaicinoids were extracted using ultrasound-assisted extraction (UAE) and the extracts were analyzed by ultra-performance liquid chromatography (UHPLC) with fluorescence detection. A significant increase in the total content of capsaicinoids was observed in the early days (between days 12 and 33). Between days 33 and 40 there was a slight reduction in the total capsaicinoid content (3.3% decrease). C was the major capsaicinoid, followed by DHC, n-DHC, h-C and h-DHC. By considering the evolution of the standardized values of the capsaicinoids, it was verified that n-DHC, DHC and h-DHC (dihydrocapsaicin-like capsaicinoids) present a similar behavior pattern, while h-C and C (capsaicin-like capsaicinoids) show different evolution patterns.

  12. Total pancreatectomy with islet cell autotransplantation as the initial treatment for minimal-change chronic pancreatitis.

    Science.gov (United States)

    Wilson, Gregory C; Sutton, Jeffrey M; Smith, Milton T; Schmulewitz, Nathan; Salehi, Marzieh; Choe, Kyuran A; Brunner, John E; Abbott, Daniel E; Sussman, Jeffrey J; Ahmad, Syed A

    2015-03-01

    Patients with minimal-change chronic pancreatitis (MCCP) are traditionally managed medically with poor results. This study was conducted to review outcomes following total pancreatectomy with islet cell autotransplantation (TP/IAT) as the initial surgical procedure in the treatment of MCCP. All patients submitted to TP/IAT for MCCP were identified for inclusion in a single-centre observational study. A retrospective chart review was performed to identify pertinent preoperative, perioperative and postoperative data. A total of 84 patients with a mean age of 36.5 years (range: 15-60 years) underwent TP/IAT as the initial treatment for MCCP. The most common aetiology of chronic pancreatitis in this cohort was idiopathic (69.0%, n = 58), followed by aetiologies associated with genetic mutations (16.7%, n = 14), pancreatic divisum (9.5%, n = 8), and alcohol (4.8%, n = 4). The most common genetic mutations pertained to CFTR (n = 9), SPINK1 (n = 3) and PRSS1 (n = 2). Mean ± standard error of the mean preoperative narcotic requirements were 129.3 ± 18.7 morphine-equivalent milligrams (MEQ)/day. Overall, 58.3% (n = 49) of patients achieved narcotic independence and the remaining patients required 59.4 ± 10.6 MEQ/day (P < 0.05). Postoperative insulin independence was achieved by 36.9% (n = 31) of patients. The Short-Form 36-Item Health Survey administered postoperatively demonstrated improvement in all tested quality of life subscales. The present report represents one of the largest series demonstrating the benefits of TP/IAT in the subset of patients with MCCP. © 2014 International Hepato-Pancreato-Biliary Association.

  13. Technology applications for radioactive waste minimization

    International Nuclear Information System (INIS)

    Devgun, J.S.

    1994-01-01

    The nuclear power industry has achieved one of the most successful examples of waste minimization. The annual volume of low-level radioactive waste shipped for disposal per reactor has decreased to approximately one-fifth the volume about a decade ago. In addition, the curie content of the total waste shipped for disposal has decreased. This paper will discuss the regulatory drivers and economic factors for waste minimization and describe the application of technologies for achieving waste minimization for low-level radioactive waste with examples from the nuclear power industry

  14. Conjugated Polymers Via Direct Arylation Polymerization in Continuous Flow: Minimizing the Cost and Batch-to-Batch Variations for High-Throughput Energy Conversion

    DEFF Research Database (Denmark)

    Gobalasingham, Nemal S.; Carlé, Jon Eggert; Krebs, Frederik C

    2017-01-01

    … of high-performance materials. To demonstrate the usefulness of the method, DArP-prepared PPDTBT via continuous flow synthesis is employed for the preparation of indium tin oxide (ITO)-free and flexible roll-coated solar cells to achieve a power conversion efficiency of 3.5% for 1 cm² devices, which … is comparable to the performance of PPDTBT polymerized through Stille cross coupling. These efforts demonstrate the distinct advantages of the continuous flow protocol with DArP, avoiding the use of toxic tin chemicals, reducing the associated costs of polymer upscaling, and minimizing batch-to-batch variations …

  15. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown the potential for better results, such as a prior image constrained compressed sensing algorithm. While a pre-full-scan of the same patient is not always available, massive well-reconstructed images of different patients can be easily obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve the image quality by using the prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and total variation constraint is proposed for the reconstruction procedure and a flexible method is adopted for a good solution. Numerical simulations on both the phantom and real clinical patient images were taken to validate our algorithm. Promising results are shown for limited angle problems. (paper)

  16. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals, and it can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
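
    The core formulation described above, a linear timing model with a TV constraint, can be sketched as follows; this toy solver uses smoothed-TV gradient descent on synthetic pairwise delays and is not the authors' merge-based implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50                                        # number of crystals (toy size)
        t_true = np.cumsum(rng.normal(0.0, 0.05, n))  # smooth true timing offsets
        pairs = rng.integers(0, n, size=(400, 2))
        A = np.zeros((400, n))
        A[np.arange(400), pairs[:, 0]] += 1.0
        A[np.arange(400), pairs[:, 1]] -= 1.0         # each delay measures t_i - t_j
        b = A @ t_true + 0.02 * rng.normal(size=400)

        lam, eps, t = 0.1, 1e-3, np.zeros(n)
        for _ in range(2000):
            d = np.diff(t)
            w = d / np.sqrt(d**2 + eps)               # gradient of smoothed TV term
            g = np.zeros(n); g[:-1] -= w; g[1:] += w
            t -= 1e-3 * (A.T @ (A @ t - b) + lam * g)
        # a global offset is unobservable, so compare mean-centred offsets
        print(np.abs((t - t.mean()) - (t_true - t_true.mean())).max())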

  17. ESTIMATING THE DEEP SOLAR MERIDIONAL CIRCULATION USING MAGNETIC OBSERVATIONS AND A DYNAMO MODEL: A VARIATIONAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Hung, Ching Pui; Jouve, Laurène; Brun, Allan Sacha [Laboratoire AIM Paris-Saclay, CEA/IRFU Université Paris-Diderot CNRS/INSU, F-91191 Gif-Sur-Yvette (France); Fournier, Alexandre [Institut de Physique du Globe de Paris, Sorbonne Paris Cité, Université Paris Diderot UMR 7154 CNRS, F-75005 Paris (France); Talagrand, Olivier [Laboratoire de météorologie dynamique, UMR 8539, Ecole Normale Supérieure, Paris Cedex 05 (France)

    2015-12-01

    We show how magnetic observations of the Sun can be used in conjunction with an axisymmetric flux-transport solar dynamo model in order to estimate the large-scale meridional circulation throughout the convection zone. Our innovative approach rests on variational data assimilation, whereby the distance between predictions and observations (measured by an objective function) is iteratively minimized by means of an optimization algorithm seeking the meridional flow that best accounts for the data. The minimization is performed using a quasi-Newton technique, which requires knowledge of the sensitivity of the objective function to the meridional flow. That sensitivity is efficiently computed via the integration of the adjoint flux-transport dynamo model. Closed-loop (also known as twin) experiments using synthetic data demonstrate the validity and accuracy of this technique for a variety of meridional flow configurations, ranging from unicellular and equatorially symmetric to multicellular and equatorially asymmetric. In this well-controlled synthetic context, we perform a systematic study of the behavior of our variational approach under different observational configurations by varying their spatial density, temporal density, and noise level, as well as the width of the assimilation window. We find that the method is remarkably robust, leading in most cases to a recovery of the true meridional flow to within better than 1%. These encouraging results are a first step toward using this technique to (i) better constrain the physical processes occurring inside the Sun and (ii) better predict solar activity on decadal timescales.
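
    The structure of the variational loop, an objective measuring model-data misfit, a gradient supplied by an adjoint computation, and a quasi-Newton minimizer, can be illustrated on a toy linear model; everything below (the matrix G, sizes, noise) is a stand-in for the flux-transport dynamo model, not the authors' code.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        G = rng.normal(size=(120, 40))     # linearized model: flow -> observations
        u_true = np.sin(np.linspace(0.0, np.pi, 40))
        d = G @ u_true + 0.01 * rng.normal(size=120)

        def J(u):                          # objective: distance to the observations
            return 0.5 * np.sum((G @ u - d) ** 2)

        def gradJ(u):                      # what the adjoint integration supplies
            return G.T @ (G @ u - d)

        res = minimize(J, np.zeros(40), jac=gradJ, method="L-BFGS-B")
        print("max recovery error:", np.abs(res.x - u_true).max())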

  18. ESTIMATING THE DEEP SOLAR MERIDIONAL CIRCULATION USING MAGNETIC OBSERVATIONS AND A DYNAMO MODEL: A VARIATIONAL APPROACH

    International Nuclear Information System (INIS)

    Hung, Ching Pui; Jouve, Laurène; Brun, Allan Sacha; Fournier, Alexandre; Talagrand, Olivier

    2015-01-01

    We show how magnetic observations of the Sun can be used in conjunction with an axisymmetric flux-transport solar dynamo model in order to estimate the large-scale meridional circulation throughout the convection zone. Our innovative approach rests on variational data assimilation, whereby the distance between predictions and observations (measured by an objective function) is iteratively minimized by means of an optimization algorithm seeking the meridional flow that best accounts for the data. The minimization is performed using a quasi-Newton technique, which requires knowledge of the sensitivity of the objective function to the meridional flow. That sensitivity is efficiently computed via the integration of the adjoint flux-transport dynamo model. Closed-loop (also known as twin) experiments using synthetic data demonstrate the validity and accuracy of this technique for a variety of meridional flow configurations, ranging from unicellular and equatorially symmetric to multicellular and equatorially asymmetric. In this well-controlled synthetic context, we perform a systematic study of the behavior of our variational approach under different observational configurations by varying their spatial density, temporal density, and noise level, as well as the width of the assimilation window. We find that the method is remarkably robust, leading in most cases to a recovery of the true meridional flow to within better than 1%. These encouraging results are a first step toward using this technique to (i) better constrain the physical processes occurring inside the Sun and (ii) better predict solar activity on decadal timescales

  19. Restoration ecology: two-sex dynamics and cost minimization.

    Directory of Open Access Journals (Sweden)

    Ferenc Molnár

    We model a spatially detailed, two-sex population dynamics, to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.

  20. Restoration ecology: two-sex dynamics and cost minimization.

    Science.gov (United States)

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model a spatially detailed, two-sex population dynamics, to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.

  1. Three-dimensional total variation norm for SPECT reconstruction

    International Nuclear Information System (INIS)

    Persson, Mikael; Bone, Dianna; Elmqvist, H.

    2001-01-01

    The total variation (TV) norm has been described in the literature as a method for reducing noise in two-dimensional (2D) images. At the same time, the TV-norm is very good at recovering edges in images, without introducing ringing or edge artefacts. It has also been proposed as a 2D regularisation function in Bayesian reconstruction, implemented in an expectation maximisation (EM) algorithm, and called TV-EM. TV-EM was developed for 2D SPECT imaging, and the algorithm is capable of smoothing noise while maintaining edges without introducing artefacts. We have extended the TV-norm to take into account the third spatial dimension, and developed an iterative EM algorithm based on the three-dimensional (3D) TV-norm, which we call TV3D-EM. This takes into account the correlation between transaxial sections in SPECT, due to system resolution. We have compared the 2D and 3D algorithms using reconstructed images from simulated projection data. Phantoms used were a homogeneous sphere and a 3D head phantom based on the Shepp-Logan phantom. The TV3D-EM algorithm yielded somewhat lower noise levels than TV-EM. The noise in TV3D-EM had similar correlation in transaxial and longitudinal sections, which was not the case for TV-EM or any 2D reconstruction method. In particular, longitudinal sections from TV3D-EM were perceived as less noisy when compared to TV-EM. The use of 3D reconstruction should also be advantageous if compensation for distance-dependent collimator blurring is incorporated in the iterative algorithm.
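
    For reference, the quantity the 3D algorithm penalizes is an isotropic total-variation norm over all three spatial dimensions of the volume; a minimal numpy sketch follows (boundary handling and any weighting in the paper may differ).

        import numpy as np

        def tv3d(vol):
            # isotropic 3D TV: sum of gradient magnitudes on the common interior grid
            dx = np.diff(vol, axis=0)[:, :-1, :-1]
            dy = np.diff(vol, axis=1)[:-1, :, :-1]
            dz = np.diff(vol, axis=2)[:-1, :-1, :]
            return np.sum(np.sqrt(dx**2 + dy**2 + dz**2))

        vol = np.random.rand(32, 32, 32)   # stand-in for a SPECT volume
        print(tv3d(vol))                   # penalty added to the EM objective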

  2. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ∼360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved with our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
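
    The reconstruction loop described above, a gradient step on the data-fidelity term alternated with a TV proximal step (forward-backward splitting), can be sketched in 2D with scikit-image; the Chambolle denoiser stands in for the TV proximal operator, unfiltered backprojection stands in for the adjoint, and all step sizes are illustrative rather than the authors' GPU implementation.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, resize
        from skimage.restoration import denoise_tv_chambolle

        img = resize(shepp_logan_phantom(), (128, 128))       # ground-truth phantom
        angles = np.linspace(0.0, 180.0, 30, endpoint=False)  # undersampled: 30 views
        sino = radon(img, theta=angles)                       # projection data

        x = np.zeros_like(img)
        step, lam = 0.001, 10.0
        for _ in range(20):
            residual = radon(x, theta=angles) - sino
            # unfiltered backprojection acts as a stand-in for the adjoint A^T
            x = x - step * iradon(residual, theta=angles, filter_name=None)
            x = denoise_tv_chambolle(x, weight=step * lam)    # approximate TV prox
        print(np.linalg.norm(x - img) / np.linalg.norm(img))  # relative error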

  3. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved with our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  4. Unambiguous results from variational matrix Pade approximants

    International Nuclear Information System (INIS)

    Pindor, Maciej.

    1979-10-01

    Variational Matrix Pade Approximants are studied as a nonlinear variational problem. It is shown that although a stationary value of the Schwinger functional is a stationary value of VMPA, the latter also has another stationary value. It is therefore proposed that instead of looking for a stationary point of VMPA, one minimizes some non-negative functional and then calculates VMPA at the point where the former has its absolute minimum. This approach, which we call the Method of the Variational Gradient (MVG), gives unambiguous results and is also shown to minimize a distance between the approximate and the exact stationary values of the Schwinger functional.

  5. Resource Constrained Planning of Multiple Projects with Separable Activities

    Science.gov (United States)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource-constrained planning problem for multiple projects with separable activities. This problem provides a plan to process the activities considering resource availability within a time window. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.
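
    A toy illustration of the ingredients named above (an incumbent solution, a lower bound, and a bounding operation), not the authors' algorithm: depth-first branch and bound sequencing activities on one shared resource to minimize total completion time, with hypothetical durations.

        from math import inf

        durations = [3, 1, 4, 2]                  # hypothetical activity durations
        best = [inf]                              # incumbent objective value

        def lower_bound(t, remaining):
            # optimistic completion times: schedule remaining activities shortest-first
            lb, tt = 0, t
            for d in sorted(durations[j] for j in remaining):
                tt += d
                lb += tt
            return lb

        def bnb(t, cost, remaining):
            if not remaining:
                best[0] = min(best[0], cost)      # all activities scheduled
                return
            if cost + lower_bound(t, remaining) >= best[0]:
                return                            # bounding operation: prune this node
            for j in remaining:
                bnb(t + durations[j], cost + t + durations[j], remaining - {j})

        bnb(0, 0, frozenset(range(len(durations))))
        print("minimal total completion time:", best[0])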

  6. Planck intermediate results XXIV. Constraints on variations in fundamental constants

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.

    2015-01-01

    … cosmological probes. We conclude that independent time variations of the fine structure constant and of the mass of the electron are constrained by Planck to Δα/α = (3.6 ± 3.7) × 10⁻³ and Δm_e/m_e = (4 ± 11) × 10⁻³ at the 68% confidence level. We also investigate the possibility of a spatial variation of the fine …

  7. Minimizing dose variation from the interplay effect in stereotactic radiation therapy using volumetric modulated arc therapy for lung cancer.

    Science.gov (United States)

    Kubo, Kazuki; Monzen, Hajime; Tamura, Mikoto; Hirata, Makoto; Ishii, Kentaro; Okada, Wataru; Nakahara, Ryuta; Kishimoto, Shun; Kawamorita, Ryu; Nishimura, Yasumasa

    2018-03-01

    It is important to reduce the magnitude of the dose variation caused by the interplay effect. The aim of this study was to investigate the impact of the number of breaths (NBs) on the dose variation in VMAT-SBRT for lung cancer. Data on respiratory motion and multileaf collimator (MLC) sequences were collected from 30 patients who underwent radiotherapy with VMAT-SBRT for lung cancer. The NBs in the total irradiation time with VMAT and the maximum craniocaudal amplitude of the target were calculated. The MLC sequence complexity was evaluated using the modulation complexity score for VMAT (MCSv). Static and dynamic measurements were performed using a cylindrical respiratory motion phantom and a micro ionization chamber. The 1 standard deviation obtained from 10 dynamic measurements for each patient was defined as the dose variation caused by the interplay effect. The dose distributions were also verified with radiochromic film to detect undesired hot and cold dose spots. Dose measurements were also performed with different NBs in the same plan for 16 of the 30 patients. The correlations between dose variations and parameters assessed for each treatment plan, including NBs, MCSv, the MCSv/amplitude quotient (TMMCSv), and the MCSv/amplitude quotient × NBs product (IVS), were evaluated. Dose variation decreased with increasing NBs, and NBs of >40 maintained the dose variation within 3% in 15 cases. The correlation between dose variation and IVS, which takes NBs into account, was the strongest (R² = 0.43). Breathing >40 times during irradiation of two partial arcs VMAT (i.e., 16 breaths per minute) may be suitable for VMAT-SBRT for lung cancer. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  8. Intraspecies variation in BMR does not affect estimates of early hominin total daily energy expenditure.

    Science.gov (United States)

    Froehle, Andrew W; Schoeninger, Margaret J

    2006-12-01

    We conducted a meta-analysis of 45 studies reporting basal metabolic rate (BMR) data for Homo sapiens and Pan troglodytes to determine the effects of sex, age, and latitude (a proxy for climate, in humans only). BMR was normalized for body size using fat-free mass in humans and body mass in chimpanzees. We found no effect of sex in either species and no age effect in chimpanzees. In humans, juveniles differed significantly from adults (ANCOVA). We derived equations relating BMR and body size, and used them to predict total daily energy expenditure (TEE) in four early hominin species. Our predictions concur with previous TEE estimates (i.e. Leonard and Robertson: Am J Phys Anthropol 102 (1997) 265-281), and support the conclusion that TEE increased greatly with H. erectus. Our results show that intraspecific variation in BMR does not affect TEE estimates for interspecific comparisons. Comparisons of more closely related groups such as humans and Neandertals, however, may benefit from consideration of this variation. © 2006 Wiley-Liss, Inc.

  9. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Nunez, Dario; Zavala, Jesus; Nellen, Lukas; Sussman, Roberto A [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (ICN-UNAM), AP 70-543, Mexico 04510 DF (Mexico); Cabral-Rosetti, Luis G [Departamento de Posgrado, Centro Interdisciplinario de Investigacion y Docencia en Educacion Tecnica (CIIDET), Avenida Universidad 282 Pte., Col. Centro, Apartado Postal 752, C. P. 76000, Santiago de Queretaro, Qro. (Mexico); Mondragon, Myriam, E-mail: nunez@nucleares.unam.mx, E-mail: jzavala@nucleares.unam.mx, E-mail: jzavala@shao.ac.cn, E-mail: lukas@nucleares.unam.mx, E-mail: sussman@nucleares.unam.mx, E-mail: lgcabral@ciidet.edu.mx, E-mail: myriam@fisica.unam.mx [Instituto de Fisica, Universidad Nacional Autonoma de Mexico (IF-UNAM), Apartado Postal 20-364, 01000 Mexico DF (Mexico); Collaboration: For the Instituto Avanzado de Cosmologia, IAC

    2008-05-15

    We derive an expression for the entropy of a dark matter halo described using a Navarro-Frenk-White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter: 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tanβ, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tanβ are not favored; only for tanβ ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.

  10. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    International Nuclear Information System (INIS)

    Núñez, Darío; Zavala, Jesús; Nellen, Lukas; Sussman, Roberto A; Cabral-Rosetti, Luis G; Mondragón, Myriam

    2008-01-01

    We derive an expression for the entropy of a dark matter halo described using a Navarro–Frenk–White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter: 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tanβ, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tanβ are not favored; only for tanβ ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.

  11. Convergence Theorem for Finite Family of Total Asymptotically Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    E.U. Ofoedu

    2015-11-01

    In this paper we introduce an explicit iteration process and prove strong convergence of the scheme in a real Hilbert space $H$ to the common fixed point of a finite family of total asymptotically nonexpansive mappings that is nearest to the point $u \in H$. Our results improve previously known ones obtained for the class of asymptotically nonexpansive mappings. As applications, iterative methods for approximating solutions of variational inequality problems, common fixed points of a finite family of continuous pseudocontractive mappings, solutions of classical equilibrium problems, and solutions of convex minimization problems are proposed. Our theorems unify and complement many recently announced results.
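
    The flavor of such an explicit scheme can be seen in a Halpern-type iteration, x_{n+1} = a_n u + (1 − a_n) T x_n, which converges to the fixed point of a nonexpansive map T nearest the anchor u; the sketch below uses a single projection map in R², a simplification of the paper's finite family of total asymptotically nonexpansive mappings.

        import numpy as np

        def T(x):
            # nonexpansive map: metric projection onto the closed unit ball
            r = np.linalg.norm(x)
            return x if r <= 1.0 else x / r

        u = np.array([3.0, 4.0])           # anchor point
        x = np.array([10.0, -2.0])         # arbitrary starting point
        for n in range(1, 20000):
            a = 1.0 / (n + 1)              # a_n -> 0 with divergent sum (Halpern conditions)
            x = a * u + (1.0 - a) * T(x)
        print(x)                           # -> u/|u| = (0.6, 0.8), the fixed point nearest u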

  12. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization

    DEFF Research Database (Denmark)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-01-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum … scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Results: Comparing MAP-TR and the new TXTV with gold-standard CT-based attenuation correction, we found that TXTV has less bias as compared to MAP-TR. We also compared images acquired at the HRRT …

  13. A Weighted Difference of Anisotropic and Isotropic Total Variation for Relaxed Mumford-Shah Image Segmentation

    Science.gov (United States)

    2016-05-01

    We incorporate a weighted difference of anisotropic and isotropic total variation (TV) norms into a relaxed formulation of the two-phase Mumford-Shah (MS) model for image segmentation. We show … results exceeding those obtained by the MS model when using the standard TV norm to regularize partition boundaries. In particular, examples illustrating … [From a figure caption: the TV norm does not capture the geometry completely; the L1−L2 in (c) does a better job than TV, while L1 in (b) and L1−0.5L2 in (d) capture the squares most …]

  14. A min-max variational principle

    International Nuclear Information System (INIS)

    Georgiev, P.G.

    1995-11-01

    In this paper a variational principle for min-max problems is proved that is in the same spirit as the Deville-Godefroy-Zizler variational principle for minimization problems. A localization theorem is presented, in which the min-max points of the perturbed function are localized with respect to a given ε-min-max point. 3 refs.

  15. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation

    Directory of Open Access Journals (Sweden)

    Gang Wang

    2018-05-01

    With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps produce ever-increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. By taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.

  16. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation.

    Science.gov (United States)

    Wang, Gang; Zhao, Zhikai; Ning, Yongjie

    2018-05-28

    With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps produce ever-increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. By taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
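
    A sketch of the sensing-and-recovery idea (not the TVS-MH protocol itself): gas readings are piecewise-smooth, so their gradient is sparse; nodes forward a few random projections y = Φx, and the sink recovers x by TV-regularized least squares. All sizes and signals below are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        n, m = 200, 60                               # signal length, measurement count
        x_true = np.concatenate([np.full(70, 0.4), np.full(80, 1.2), np.full(50, 0.7)])
        Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # compressive measurement matrix
        y = Phi @ x_true + 0.01 * rng.normal(size=m)

        lam, eps, x = 0.05, 1e-4, np.zeros(n)
        for _ in range(5000):
            d = np.diff(x)
            w = d / np.sqrt(d**2 + eps)              # gradient of smoothed TV
            g = np.zeros(n); g[:-1] -= w; g[1:] += w
            x -= 0.1 * (Phi.T @ (Phi @ x - y) + lam * g)
        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error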

  17. Rank restriction for the variational calculation of two-electron reduced density matrices of many-electron atoms and molecules

    International Nuclear Information System (INIS)

    Naftchi-Ardebili, Kasra; Hau, Nathania W.; Mazziotti, David A.

    2011-01-01

    Variational minimization of the ground-state energy as a function of the two-electron reduced density matrix (2-RDM), constrained by necessary N-representability conditions, provides a polynomial-scaling approach to studying strongly correlated molecules without computing the many-electron wave function. Here we introduce a route to enhancing necessary conditions for N-representability through rank restriction of the 2-RDM. Rather than adding computationally more expensive N-representability conditions, we directly enhance the accuracy of two-particle (2-positivity) conditions through rank restriction, which removes degrees of freedom in the 2-RDM that are not sufficiently constrained. We select the rank of the particle-hole 2-RDM by deriving the ranks associated with model wave functions, including both mean-field and antisymmetrized geminal power (AGP) wave functions. Because the 2-positivity conditions are exact for quantum systems with AGP ground states, the rank of the particle-hole 2-RDM from the AGP ansatz provides a minimum for its value in variational 2-RDM calculations of general quantum systems. To implement the rank-restricted conditions, we extend a first-order algorithm for large-scale semidefinite programming. The rank-restricted conditions significantly improve the accuracy of the energies; for example, the percentages of correlation energies recovered for HF, CO, and N₂ improve from 115.2%, 121.7%, and 121.5% without rank restriction to 97.8%, 101.1%, and 100.0% with rank restriction. Similar results are found at both equilibrium and nonequilibrium geometries. While more accurate, the rank-restricted N-representability conditions are less expensive computationally than the full-rank conditions.

  18. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080
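
    The regularizer in question is the ℓ1 norm of the spatial gradient of the voxel weight map. A minimal sketch of the penalized decoding objective follows; the shapes and penalty weight are illustrative, and a practical solver would use proximal methods rather than evaluating the objective directly.

        import numpy as np

        def tv_l1(w3d):
            # l1 norm of the spatial gradient (anisotropic total variation)
            return sum(np.abs(np.diff(w3d, axis=a)).sum() for a in range(3))

        def objective(w, X, y, shape, lam=1.0):
            # X: (n_scans, n_voxels) fMRI data; y: behavioural target; w: voxel weights
            return 0.5 * np.sum((X @ w - y) ** 2) + lam * tv_l1(w.reshape(shape))

        rng = np.random.default_rng(4)
        shape, X, y = (8, 8, 8), rng.normal(size=(40, 512)), rng.normal(size=40)
        print(objective(rng.normal(size=512), X, y, shape))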

  19. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants' freedom to communicate is restricted in the means of communication, and rectified in terms of the possibilities offered in the interface. How do these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problems, we examine, via a design game, the creative communication on an open-ended task in a highly constrained setting. Via an experiment, the relation between communicative constraints and participants' perception of dialogue and creativity is examined. Four batches of the game with students preparing to form semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except …

  20. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    CERN Document Server

    Ellis, John; Olive, Keith A

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, $M_{in}$, above the GUT scale, $M_{GUT}$. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino and the lighter stau is sensitive to $M_{in}$, as is the relationship between the neutralino mass and the masses of the heavier Higgs bosons. For these reasons, prominent features in generic $(m_{1/2}, m_0)$ planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to $M_{in}$, as we illustrate for several cases with tan(beta)...

  1. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    This paper presents an extension of the constrained-path quantum Monte Carlo approach that allows the reconstruction of non-yrast states, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They demonstrate the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei, irrespective of the considered interaction. (orig.)

  2. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells.

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-08-09

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs.

  3. Numerical solution of large nonlinear boundary value problems by quadratic minimization techniques

    International Nuclear Information System (INIS)

    Glowinski, R.; Le Tallec, P.

    1984-01-01

    The objective of this paper is to describe the numerical treatment of large, highly nonlinear two- or three-dimensional boundary value problems by quadratic minimization techniques. In all the different situations where these techniques were applied, the methodology remains the same and is organized as follows: 1) derive a variational formulation of the original boundary value problem, and approximate it by Galerkin methods; 2) transform this variational formulation into a quadratic minimization problem (least squares methods) or into a sequence of quadratic minimization problems (augmented Lagrangian decomposition); 3) solve each quadratic minimization problem by a conjugate gradient method with preconditioning, the preconditioning matrix being sparse, positive definite, and fixed once and for all in the iterative process. This paper illustrates the above methodology on two different examples: the description of least squares solution methods and their application to the solution of the unsteady Navier-Stokes equations for incompressible viscous fluids; and the description of augmented Lagrangian decomposition techniques and their application to the solution of equilibrium problems in finite elasticity.
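
    Step 3 of this methodology is a preconditioned conjugate gradient iteration whose preconditioner is built once and reused. The sketch below is a generic implementation of that idea; the Jacobi (diagonal) preconditioner and the toy system are our illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def preconditioned_cg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
        """Conjugate gradient for A x = b (A symmetric positive definite),
        preconditioned by a fixed M_inv ~ A^{-1} built once in advance."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        z = M_inv @ r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv @ r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # toy usage with a diagonal (Jacobi) preconditioner
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    M_inv = np.diag(1.0 / np.diag(A))
    print(preconditioned_cg(A, b, M_inv))
    ```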

  4. Integer batch scheduling problems for a single-machine with simultaneous effect of learning and forgetting to minimize total actual flow time

    Directory of Open Access Journals (Sweden)

    Rinto Yusriski

    2015-09-01

    Full Text Available This research discusses an integer batch scheduling problem for a single machine with position-dependent batch processing times due to the simultaneous effect of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize total actual flow time, defined as the total interval time between the arrival times of parts in all respective batches and their common due date. Two algorithms are proposed to solve the problem. The first is developed using the Integer Composition method, and it produces an optimal solution. Since this algorithm solves the problem with a worst-case time complexity of O(n·2^(n-1)), this research proposes a second algorithm, a heuristic based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.

  5. Spatial optimization of cropping pattern for sustainable food and biofuel production with minimal downstream pollution.

    Science.gov (United States)

    Femeena, P V; Sudheer, K P; Cibin, R; Chaubey, I

    2018-04-15

    Biofuel has emerged as a substantial source of energy in many countries. In order to avoid the 'food versus fuel' competition arising from grain-based ethanol production, the United States has passed regulations that require second-generation or cellulosic biofeedstocks to be used for the majority of biofuel production by 2022. Agricultural residue, such as corn stover, is currently the largest source of cellulosic feedstock. However, increased harvesting of crop residue may lead to increased application of fertilizers in order to recover the soil nutrients lost through residue removal. Alternatively, the introduction of less fertilizer-intensive perennial grasses such as switchgrass (Panicum virgatum L.) and Miscanthus (Miscanthus x giganteus Greef et Deu.) can be a viable source for biofuel production. Even though these grasses are shown to reduce nutrient loads to a great extent, high production costs have constrained their wide adoption as a viable feedstock. Nonetheless, there is an opportunity to optimize feedstock production to meet bioenergy demand while improving water quality. This study presents a multi-objective simulation optimization framework using the Soil and Water Assessment Tool (SWAT) and the Multi-Algorithm Genetically Adaptive Method (AMALGAM) to develop optimal cropping patterns with minimum nutrient delivery and minimum biomass production cost. The computational time required for optimization was significantly reduced by loosely coupling SWAT with an external in-stream solute transport model. Optimization was constrained by food security and biofuel production targets that ensured no more than a 10% reduction in grain yield and at least 100 million gallons of ethanol production. A case study was carried out in the St. Joseph River Watershed, which covers a 280,000 ha area in the Midwest U.S. Results of the study indicated that introduction of corn stover removal and perennial grass production reduce nitrate and total phosphorus loads without

  6. Minimal mirror twin Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Barbieri, Riccardo [Institute of Theoretical Studies, ETH Zurich,CH-8092 Zurich (Switzerland); Scuola Normale Superiore,Piazza dei Cavalieri 7, 56126 Pisa (Italy); Hall, Lawrence J.; Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States)

    2016-11-29

    In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z{sub 2} parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z{sub 2} breaking, can generate the Z{sub 2} breaking in the Higgs sector necessary for the Twin Higgs mechanism. The theory has constrained and correlated signals in Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments, over a region of parameter space where the fine-tuning for the electroweak scale is 10-50%. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z{sub 2} breaking from the vacuum expectation values of B−L breaking fields are also discussed.

  7. Frictional granular mechanics: A variational approach

    Energy Technology Data Exchange (ETDEWEB)

    Holtzman, R.; Silin, D.B.; Patzek, T.W.

    2009-10-16

    The mechanical properties of a cohesionless granular material are evaluated from grain-scale simulations. Intergranular interactions, including friction and sliding, are modeled by a set of contact rules based on the theories of Hertz, Mindlin, and Deresiewicz. A computer generated, three-dimensional, irregular pack of spherical grains is loaded by incremental displacement of its boundaries. Deformation is described by a sequence of static equilibrium configurations of the pack. A variational approach is employed to find the equilibrium configurations by minimizing the total work against the intergranular loads. Effective elastic moduli are evaluated from the intergranular forces and the deformation of the pack. Good agreement between the computed and measured moduli, achieved with no adjustment of material parameters, establishes the physical soundness of the proposed model.
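
    To make the variational idea concrete, the sketch below finds the static equilibrium of a toy pack by direct minimization of total elastic energy, using a Hertzian normal-contact energy (proportional to overlap^{5/2}). It is a minimal sketch under strong simplifications we introduce ourselves: a 1-D chain of equal grains, fixed boundary grains displaced inward, no friction or sliding, and arbitrary units.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Static equilibrium of a compressed 1-D chain of four grains (radius 1):
    # the outer grains are fixed boundaries moved inward, so every contact
    # carries some overlap at the energy minimum.
    R, K = 1.0, 1.0
    left, right = 0.0, 5.4            # boundary grains, displaced inward

    def total_energy(x):
        pos = np.r_[left, x, right]   # x holds the two free grain centers
        E = 0.0
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                overlap = 2 * R - abs(pos[i] - pos[j])
                if overlap > 0:
                    E += K * overlap ** 2.5   # Hertz normal-contact energy
        return E

    res = minimize(total_energy, x0=np.array([1.6, 3.4]), method='Nelder-Mead')
    print(res.x)   # ~[1.8, 3.6]: equal overlaps on all three contacts
    ```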

  8. Variational integrators for electric circuits

    International Nuclear Information System (INIS)

    Ober-Blöbaum, Sina; Tao, Molei; Cheng, Mulin; Owhadi, Houman; Marsden, Jerrold E.

    2013-01-01

    In this contribution, we develop a variational integrator for the simulation of (stochastic and multiscale) electric circuits. When considering the dynamics of an electric circuit, one is faced with three special situations: 1. The system involves external (control) forcing through external (controlled) voltage sources and resistors. 2. The system is constrained via the Kirchhoff current (KCL) and voltage laws (KVL). 3. The Lagrangian is degenerate. Based on a geometric setting, an appropriate variational formulation is presented to model the circuit from which the equations of motion are derived. A time-discrete variational formulation provides an iteration scheme for the simulation of the electric circuit. Dependent on the discretization, the intrinsic degeneracy of the system can be canceled for the discrete variational scheme. In this way, a variational integrator is constructed that gains several advantages compared to standard integration tools for circuits; in particular, a comparison to BDF methods (which are usually the method of choice for the simulation of electric circuits) shows that even for simple LCR circuits, a better energy behavior and frequency spectrum preservation can be observed using the developed variational integrator
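
    For intuition, the sketch below applies a discrete variational (Stormer-Verlet) integrator to the simplest non-degenerate case, a plain LC circuit, and checks that the discrete energy oscillates without drifting. This toy omits the paper's harder ingredients (external forcing, KCL/KVL constraints, degenerate Lagrangians); the component values and step size are arbitrary illustrative choices.

    ```python
    import numpy as np

    # Discrete variational (leapfrog) integrator for an LC circuit with
    # Lagrangian  L(q, qdot) = 0.5*L_ind*qdot^2 - q^2/(2*C),  where q is
    # the capacitor charge.  The discrete Euler-Lagrange equations give a
    # Stormer-Verlet update with good long-time energy behavior.
    L_ind, C, h, steps = 1.0, 1.0, 0.01, 5000
    q, q_prev = 1.0, 1.0              # start at rest: q_{-1} = q_0
    energy = []
    for _ in range(steps):
        q_next = 2 * q - q_prev - (h ** 2 / (L_ind * C)) * q
        q_prev, q = q, q_next
        i = (q - q_prev) / h          # current = dq/dt
        energy.append(0.5 * L_ind * i ** 2 + q ** 2 / (2 * C))
    print(min(energy), max(energy))   # energy oscillates but does not drift
    ```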

  9. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, $\chi^0_1$, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces $m_{\chi^0_1}$ [...] but the scalar mass $m_0$ is poorly constrained. In the wino-LSP case, $m_{3/2}$ is constrained to about 900 TeV and $m_{\chi^0_1}$ to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case $m_{3/2}$ has just a lower limit $\gtrsim 650$ TeV ($\gtrsim 480$ TeV) and $m_{\chi^0_1}$ is constrained to 1.12 (1.13) ± 0.02 TeV in the $\mu > 0$ ($\mu < 0$) scenario. In neither case can the anomalous magnetic moment of the muon, $(g-2)_\mu$, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the $\chi^0_1$ contributes only a fraction of the cold DM density, future LHC $E_T$-based searches for gluinos, squarks and heavier chargino and neutralino states, as well as disappearing-track searches in the wino-like LSP region, will be relevant, and interference effects enable BR($B_{s,d} \to \mu^+ \mu^-$) to agree with the data better than in the SM in the case of wino-like DM with $\mu > 0$. (orig.)

  10. Precision measurements, dark matter direct detection and LHC Higgs searches in a constrained NMSSM

    International Nuclear Information System (INIS)

    Bélanger, G.; Hugonie, C.; Pukhov, A.

    2009-01-01

    We reexamine the constrained version of the Next-to-Minimal Supersymmetric Standard Model with semi-universal parameters at the GUT scale (CNMSSM). We include constraints from collider searches for Higgs and SUSY particles, the upper bound on the relic density of dark matter, measurements of the muon anomalous magnetic moment and of B-physics observables, as well as direct searches for dark matter. We then study the prospects for direct detection of dark matter in large-scale detectors and comment on the prospects for discovery of heavy Higgs states at the LHC

  11. Constrained generalized mechanics. The second-order case

    International Nuclear Information System (INIS)

    Tapia, V.

    1985-01-01

    The Dirac formalism for constrained systems is developed for systems described by a Lagrangian depending on time derivatives of the generalized co-ordinates up to second order (accelerations). It turns out that for a Lagrangian of this kind, differing by a total time derivative from a Lagrangian depending only on first-order time derivatives of the generalized co-ordinates (velocities), the two classical mechanics are the same at the Lagrangian level; at the Hamiltonian level the two classical mechanics differ conceptually, even when the solutions to both sets of Hamiltonian equations of motion are the same

  12. Loss Minimization and Voltage Control in Smart Distribution Grid

    DEFF Research Database (Denmark)

    Juelsgaard, Morten; Sloth, Christoffer; Wisniewski, Rafal

    2014-01-01

    This work presents a strategy for increasing the installation of electric vehicles and solar panels in low-voltage grids, while obeying voltage variation constraints. Our approach employs minimization of active power losses for coordinating consumption and generation of power, as well as reactive...

  13. Effective theory of flavor for Minimal Mirror Twin Higgs

    Science.gov (United States)

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-01

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors $\epsilon^{n_i}$ for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are $\epsilon'^{n_i}$, so that spontaneous breaking of the parity P arises from a single parameter $\epsilon'/\epsilon$, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of $n_i$, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of $\epsilon'/\epsilon$, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  14. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low-performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  15. A survey on classical minimal surface theory

    CERN Document Server

    Meeks, William H

    2012-01-01

    Meeks and Pérez present a survey of recent spectacular successes in classical minimal surface theory. The classification of minimal planar domains in three-dimensional Euclidean space provides the focus of the account. The proof of the classification depends on the work of many currently active leading mathematicians, thus making contact with many of the most important results in the field. Through the telling of the story of the classification of minimal planar domains, the general mathematician may catch a glimpse of the intrinsic beauty of this theory and the authors' perspective of what is happening at this historical moment in a very classical subject. This book includes an updated tour through some of the recent advances in the theory, such as Colding-Minicozzi theory, minimal laminations, the ordering theorem for the space of ends, conformal structure of minimal surfaces, minimal annular ends with infinite total curvature, the embedded Calabi-Yau problem, local pictures on the scale of curvature and t...

  16. Recent Changes in Global Photosynthesis and Terrestrial Ecosystem Respiration Constrained From Multiple Observations

    Science.gov (United States)

    Li, Wei; Ciais, Philippe; Wang, Yilong; Yin, Yi; Peng, Shushi; Zhu, Zaichun; Bastos, Ana; Yue, Chao; Ballantyne, Ashley P.; Broquet, Grégoire; Canadell, Josep G.; Cescatti, Alessandro; Chen, Chi; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Piao, Shilong

    2018-01-01

    To assess global carbon cycle variability, we decompose the net land carbon sink into the sum of gross primary productivity (GPP), terrestrial ecosystem respiration (TER), and fire emissions and apply a Bayesian framework to constrain these fluxes between 1980 and 2014. The constrained GPP and TER fluxes show an increasing trend of only half of the prior trend simulated by models. From the optimization, we infer that TER increased in parallel with GPP from 1980 to 1990, but then stalled during the cooler periods, in 1990-1994 coincident with the Pinatubo eruption, and during the recent warming hiatus period. After each of these TER stalling periods, TER is found to increase faster than GPP, explaining a relative reduction of the net land sink. These results shed light on decadal variations of GPP and TER and suggest that they exhibit different responses to temperature anomalies over the last 35 years.

  17. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  18. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Full Text Available Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method utilizing Second-Order Cone Programming (SOCP), with the maximum correction weight constraint, the maximum residual response constraint, as well as the weight splitting constraint, has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.
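
    The two formulations can be contrasted on a toy influence-coefficient model. The sketch below uses real-valued data so the min-max problem reduces to a linear program (for the complex influence coefficients of a real machine it becomes the SOCP mentioned above); the matrix sizes and random data are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # r0: measured vibrations; A: influence matrix; w: correction weights;
    # residual after balancing: r = r0 + A @ w.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(6, 2))       # 6 sensors, 2 balancing planes
    r0 = rng.normal(size=6)

    # (1) least squares: minimize ||r0 + A w||_2
    w_ls, *_ = np.linalg.lstsq(A, -r0, rcond=None)

    # (2) min-max: minimize t subject to |r0 + A w| <= t (a linear program)
    m = A.shape[1]
    c = np.r_[np.zeros(m), 1.0]                      # objective: t
    A_ub = np.block([[A, -np.ones((6, 1))], [-A, -np.ones((6, 1))]])
    b_ub = np.r_[-r0, r0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(0, None)])
    w_mm = res.x[:m]

    # min-max gives the smaller worst-case residual, as the abstract argues
    print(np.abs(r0 + A @ w_ls).max(), np.abs(r0 + A @ w_mm).max())
    ```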

  19. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  20. Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning

    International Nuclear Information System (INIS)

    Kim, Hojin; Becker, Stephen; Lee, Rena; Lee, Soonhyouk; Shin, Sukyoung; Candès, Emmanuel; Xing Lei; Li Ruijiang

    2013-01-01

    Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution for the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of its objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and the proposed reweighted TV min. techniques, implemented by a large-scale L1-solver (template for first-order conic solver), for five patients' clinical data. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for the 5 clinical cases, the proposed method reduces the number of segments
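
    The reweighting idea is easy to demonstrate in one dimension. The sketch below applies iteratively reweighted l1 shrinkage to the finite differences of a noisy 1-D fluence profile: weights inversely proportional to the current magnitudes mimic the L0 penalty, preserving large jumps while flattening small oscillations. This is our own simplified illustration, not the paper's solver (it shrinks the differences directly rather than solving the exact TV proximal problem, and all parameter values are arbitrary).

    ```python
    import numpy as np

    def soft(v, t):
        """Elementwise soft-thresholding, the proximal operator of t*|.|_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def reweighted_tv_1d(y, lam=0.1, eps=1e-2, n_iter=5):
        """Iteratively reweighted l1 shrinkage on the differences of y."""
        d = np.diff(y)
        w = np.ones_like(d)
        for _ in range(n_iter):
            d_hat = soft(d, lam * w)
            w = 1.0 / (np.abs(d_hat) + eps)   # reweighting step
        x = np.concatenate([[0.0], np.cumsum(d_hat)])
        return x + (y.mean() - x.mean())      # restore the mean level

    # toy usage: a noisy two-level fluence profile keeps its single jump
    rng = np.random.default_rng(2)
    y = np.r_[np.zeros(25), np.ones(25)] + 0.05 * rng.normal(size=50)
    x = reweighted_tv_1d(y)
    print(np.count_nonzero(np.abs(np.diff(x)) > 1e-6))  # few remaining jumps
    ```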

  2. Improving the performance of minimizers and winnowing schemes.

    Science.gov (United States)

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu.
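
    The scheme itself fits in a few lines. The sketch below selects, from every window of w consecutive k-mers, the smallest one under a pluggable ordering, and compares the default lexicographic order with a randomized (hash-based) order of the kind the authors recommend as a fallback; the sequence, parameter values, and hash choice are illustrative assumptions.

    ```python
    import hashlib, random

    def minimizers(seq, k=5, w=10, order=None):
        """Positions of window minimizers: for every window of w consecutive
        k-mers, keep the position of the smallest k-mer under `order`
        (leftmost on ties)."""
        order = order or (lambda kmer: kmer)          # default: lexicographic
        kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
        picks = set()
        for i in range(len(kmers) - w + 1):
            j = min(range(w), key=lambda t: order(kmers[i + t]))
            picks.add(i + j)
        return picks

    def hash_order(kmer):
        """A randomized ordering via hashing, one simple alternative to the
        problematic lexicographic order."""
        return hashlib.md5(kmer.encode()).hexdigest()

    random.seed(0)
    seq = "".join(random.choice("ACGT") for _ in range(2000))
    for name, order in [("lexicographic", None), ("randomized", hash_order)]:
        density = len(minimizers(seq, order=order)) / (len(seq) - 4)
        print(name, round(density, 3))  # fraction of k-mer positions selected
    ```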

  3. Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise in 3D Reconstruction from Optical Projection Tomography

    Czech Academy of Sciences Publication Activity Database

    Michálek, Jan

    2015-01-01

    Roč. 21, č. 6 (2015), s. 1602-1615 ISSN 1431-9276 R&D Projects: GA MŠk(CZ) LH13028; GA ČR(CZ) GA13-12412S Institutional support: RVO:67985823 Keywords : optical projection tomography * microscopy * artifacts * total variation * data mismatch Subject RIV: EA - Cell Biology Impact factor: 1.730, year: 2015

  4. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Full Text Available Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the Total Variation algorithm has attracted abundant attention due to its ability to handle large, piecewise and discontinuous conductivity distributions. In industrial process tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with less than 1% error, and a 4D image for 3D velocity profiling shows an error of 4%.
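
    The voxel-voxel cross-correlation step reduces to estimating a time lag between two reconstructed time series and dividing the known axial spacing by that lag. The sketch below is a generic illustration of that calculation; the frame rate, spacing, and synthetic Gaussian disturbance are made-up numbers, not data from the paper.

    ```python
    import numpy as np

    fs = 100.0                    # frames per second
    plane_gap = 0.05              # axial distance between the two voxels (m)
    t = np.arange(0, 4, 1 / fs)
    true_delay = 0.25             # seconds for the disturbance to travel
    sig_a = np.exp(-((t - 1.0) ** 2) / 0.01)               # voxel A
    sig_b = np.exp(-((t - 1.0 - true_delay) ** 2) / 0.01)  # voxel B, delayed

    # cross-correlate mean-removed signals; peak location gives the lag
    xc = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode='full')
    lag = (np.argmax(xc) - (len(t) - 1)) / fs   # delay in seconds
    print(plane_gap / lag)                      # estimated velocity (m/s): 0.2
    ```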

  5. Higher Integrability for Minimizers of the Mumford-Shah Functional

    Science.gov (United States)

    De Philippis, Guido; Figalli, Alessio

    2014-08-01

    We prove higher integrability for the gradient of local minimizers of the Mumford-Shah energy functional, providing a positive answer to a conjecture of De Giorgi (Free discontinuity problems in calculus of variations. Frontiers in pure and applied mathematics, North-Holland, Amsterdam, pp 55-62, 1991).

  6. Virtual Routing Function Allocation Method for Minimizing Total Network Power Consumption

    OpenAIRE

    Kenichiro Hida; Shin-Ichi Kuribayashi

    2016-01-01

    In a conventional network, most network devices, such as routers, are dedicated devices that do not have much variation in capacity. In recent years, a new concept of network functions virtualisation (NFV) has come into use. The intention is to implement a variety of network functions with software on general-purpose servers and this allows the network operator to select their capacities and locations without any constraints. This paper focuses on the allocation of NFV-based routing functions...

  7. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    Science.gov (United States)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

    A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements with only one measurement technique or phase type. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely

  8. Constraining calcium isotope fractionation (δ44/40Ca) in modern and fossil scleractinian coral skeleton

    OpenAIRE

    Pretet, Chloé; Samankassou, Elias; Felis, Thomas; Reynaud, Stéphanie; Böhm, Florian; Eisenhauer, Anton; Ferrier-Pagès, Christine; Gattuso, Jean-Pierre; Camoin, Gilbert

    2013-01-01

    The present study investigates the influence of environmental (temperature, salinity) and biological (growth rate, inter-generic variations) parameters on calcium isotope fractionation (δ44/40Ca) in scleractinian coral skeleton to better constrain this record. Previous studies focused on the δ44/40Ca record in different marine organisms to reconstruct seawater composition or temperature, but only few studies investigated corals. This study presents measurements performed on modern corals f...

  9. Higgs decays to dark matter: Beyond the minimal model

    International Nuclear Information System (INIS)

    Pospelov, Maxim; Ritz, Adam

    2011-01-01

    We examine the interplay between Higgs mediation of dark-matter annihilation and scattering on one hand and the invisible Higgs decay width on the other, in a generic class of models utilizing the Higgs portal. We find that, while the invisible width of the Higgs to dark matter is now constrained for a minimal singlet scalar dark matter particle by experiments such as XENON100, this conclusion is not robust within more generic examples of Higgs mediation. We present a survey of simple dark matter scenarios with $m_{DM} < m_h/2$ and Higgs portal mediation, where direct-detection signatures are suppressed, while the Higgs width is still dominated by decays to dark matter.

  11. Conjugated Polymers Via Direct Arylation Polymerization in Continuous Flow: Minimizing the Cost and Batch-to-Batch Variations for High-Throughput Energy Conversion.

    Science.gov (United States)

    Gobalasingham, Nemal S; Carlé, Jon E; Krebs, Frederik C; Thompson, Barry C; Bundgaard, Eva; Helgesen, Martin

    2017-11-01

    Continuous flow methods are utilized in conjunction with direct arylation polymerization (DArP) for the scaled synthesis of the roll-to-roll compatible polymer, poly[(2,5-bis(2-hexyldecyloxy)phenylene)-alt-(4,7-di(thiophen-2-yl)-benzo[c][1,2,5]thiadiazole)] (PPDTBT). PPDTBT is based on simple, inexpensive, and scalable monomers using thienyl-flanked benzothiadiazole as the acceptor, which is the first β-unprotected substrate to be used in continuous flow via DArP, enabling critical evaluation of the suitability of this emerging synthetic method for minimizing defects and for the scaled synthesis of high-performance materials. To demonstrate the usefulness of the method, DArP-prepared PPDTBT via continuous flow synthesis is employed for the preparation of indium tin oxide (ITO)-free and flexible roll-coated solar cells to achieve a power conversion efficiency of 3.5% for 1 cm² devices, which is comparable to the performance of PPDTBT polymerized through Stille cross coupling. These efforts demonstrate the distinct advantages of the continuous flow protocol with DArP, avoiding the use of toxic tin chemicals, reducing the associated costs of polymer upscaling, and minimizing batch-to-batch variations for high-quality material.

  12. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    Science.gov (United States)

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and
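
    The constraint-handling strategy (hard constraints via Lagrange multipliers, soft restraints in the energy) can be illustrated on a two-atom toy problem. The sketch below is a generic augmented-Lagrangian loop, not Sculpt's actual solver: the gradient-descent inner minimizer, the step sizes, and the single bond-length constraint are our own simplifying assumptions.

    ```python
    import numpy as np

    def augmented_lagrangian_min(f_grad, c, c_jac, x0, n_outer=20, rho=10.0):
        """Minimize energy f subject to hard constraints c(x) = 0 by
        repeatedly minimizing  f + mu.c + (rho/2)|c|^2  and updating the
        multipliers mu (augmented Lagrange-multiplier method)."""
        x = x0.copy()
        mu = np.zeros(len(c(x0)))
        for _ in range(n_outer):
            for _ in range(100):                  # inner gradient descent
                g = f_grad(x) + c_jac(x).T @ (mu + rho * c(x))
                x -= 0.01 * g
            mu += rho * c(x)                      # multiplier update
        return x

    # toy problem: two 2-D "atoms" pulled toward targets, bond length fixed
    target = np.array([0.0, 0.0, 2.0, 0.0])       # pulls the bond apart
    f_grad = lambda x: x - target                 # grad of 0.5*|x - target|^2
    c = lambda x: np.array([np.linalg.norm(x[:2] - x[2:]) - 1.0])
    def c_jac(x):
        d = x[:2] - x[2:]
        u = d / np.linalg.norm(d)
        return np.array([np.r_[u, -u]])

    x = augmented_lagrangian_min(f_grad, c, c_jac, np.array([0.0, 0.0, 1.0, 0.0]))
    print(np.linalg.norm(x[:2] - x[2:]))          # ~1.0: the constraint holds
    ```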

  13. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  14. Minimally invasive computer-navigated total hip arthroplasty, following the concept of femur first and combined anteversion: design of a blinded randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Woerner Michael

    2011-08-01

    Full Text Available Abstract Background Impingement can be a serious complication after total hip arthroplasty (THA), and is one of the major causes of postoperative pain, dislocation, aseptic loosening, and implant breakage. Minimally invasive THA and computer-navigated surgery were introduced several years ago. We have developed a novel, computer-assisted operation method for THA following the concept of "femur first"/"combined anteversion", which incorporates various aspects of performing a functional optimization of the cup position, and comprehensively addresses range of motion (ROM) as well as cup containment and alignment parameters. Hence, the purpose of this study is to assess whether the artificial joint's ROM can be improved by this computer-assisted operation method. Second, the clinical and radiological outcome will be evaluated. Methods/Design A registered patient- and observer-blinded randomized controlled trial will be conducted. Patients between the ages of 50 and 75 admitted for primary unilateral THA will be included. Patients will be randomly allocated to either receive minimally invasive computer-navigated "femur first" THA or the conventional minimally invasive THA procedure. Self-reported functional status and health-related quality of life (questionnaires) will be assessed both preoperatively and postoperatively. Perioperative complications will be registered. Radiographic evaluation will take place up to 6 weeks postoperatively with a computed tomography (CT) scan. Component position will be evaluated by an independent external institute on a 3D reconstruction of the femur/pelvis using image-processing software. Postoperative ROM will be calculated by an algorithm which automatically determines bony and prosthetic impingements. Discussion In the past, computer navigation has improved the accuracy of component positioning. So far, there are only few objective data quantifying the risks and benefits of computer-navigated THA. Therefore, this

  15. The cost-constrained traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
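
    The selection-plus-sequencing structure is easy to see on a tiny instance. The sketch below is our own exhaustive solver for intuition only (the report's actual algorithms are knapsack-based bounds, branch-and-bound, and heuristics); the values, distances, and budget are made up.

    ```python
    from itertools import combinations, permutations

    def cctsp_brute(values, dist, budget, start=0):
        """Exhaustive CCTSP solver for tiny instances: choose a subset of
        cities (selection) and an order (sequencing) maximizing value,
        with the closed tour's cost within the budget.  Exponential."""
        n = len(values)
        best = (0, ())
        others = [i for i in range(n) if i != start]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                for perm in permutations(subset):
                    tour = (start,) + perm + (start,)
                    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
                    value = values[start] + sum(values[i] for i in perm)
                    if cost <= budget and value > best[0]:
                        best = (value, tour)
        return best

    values = [0, 4, 5, 3]
    dist = [[0, 2, 4, 3], [2, 0, 3, 5], [4, 3, 0, 2], [3, 5, 2, 0]]
    print(cctsp_brute(values, dist, budget=10))   # (12, (0, 1, 2, 3, 0))
    ```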

  16. Minimizing total costs of forest roads with computer-aided design ...

    Indian Academy of Sciences (India)

    ...minimum total road costs, while conforming to design specifications, environmental ... quality, and enhancing fish and wildlife habitat, an appropriate design ... Soil, Water and Timber Management: Forest Engineering Solutions in Response to...

  17. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with performance competitive with, and computation time lower than, state-of-the-art methods; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering using constraints as background knowledge, although easy to implement and quick, has insufficient performance compared with metric learning-based methods. Since it simply adds a function into the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive to those of state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. Experimental evaluation demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner of boosting.
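
    The base learner described here, k-means with a constraint-violation check in the assignment step, can be sketched compactly. The following is a generic constrained k-means (COP-k-means-style) sketch, not the authors' boosted algorithm: the constraint encoding (directed pairs), initialization, and toy data are our own assumptions.

    ```python
    import numpy as np

    def cop_kmeans(X, k, must_link, cannot_link, n_iter=50, seed=0):
        """Constrained k-means sketch: standard k-means, except each point
        is assigned to the nearest centroid that violates none of its
        constraints.  Constraint pairs (i, j) are directed in this sketch:
        they are checked when assigning point i.  Returns None if some
        point cannot be placed."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        labels = np.full(len(X), -1)
        for _ in range(n_iter):
            for i, x in enumerate(X):
                for c in np.argsort(np.linalg.norm(centers - x, axis=1)):
                    ok = all(labels[j] in (-1, c) for a, j in must_link if a == i) \
                        and all(labels[j] != c for a, j in cannot_link if a == i)
                    if ok:
                        labels[i] = c
                        break
                else:
                    return None               # no feasible assignment
            for c in range(k):                # centroid update step
                if np.any(labels == c):
                    centers[c] = X[labels == c].mean(axis=0)
        return labels

    X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
    print(cop_kmeans(X, 2, must_link=[(1, 0)], cannot_link=[(2, 0)]))
    ```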

  18. Homogenization of variational inequalities for obstacle problems

    International Nuclear Information System (INIS)

    Sandrakov, G V

    2005-01-01

    Results on the convergence of solutions of variational inequalities for obstacle problems are proved. The variational inequalities are defined by a non-linear monotone operator of the second order with periodic rapidly oscillating coefficients and a sequence of functions characterizing the obstacles. Two-scale and macroscale (homogenized) limiting variational inequalities are obtained. Derivation methods for such inequalities are presented. Connections between the limiting variational inequalities and two-scale and macroscale minimization problems are established in the case of potential operators.

  19. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method

    Science.gov (United States)

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods-support vector regression (SVR) and Kriging (KRG)-were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.
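
    The chance-constrained programming step can be made concrete with a Gaussian surrogate-error model. The sketch below shows the standard conversion of a probabilistic constraint into a deterministic one via the normal quantile; the numbers, and the assumption that surrogate errors are Gaussian with known standard deviation, are ours for illustration. It reproduces the qualitative finding that a higher confidence level tightens the constraint.

    ```python
    from scipy.stats import norm

    # If the surrogate predicts g_hat(x) with Gaussian error of standard
    # deviation sigma(x), the chance constraint
    #     P( g_true(x) <= g_max ) >= alpha
    # becomes the deterministic constraint
    #     g_hat(x) + z_alpha * sigma(x) <= g_max,  z_alpha = Phi^{-1}(alpha).
    def deterministic_bound(g_hat, sigma, alpha):
        return g_hat + norm.ppf(alpha) * sigma

    g_hat, sigma, g_max = 0.8, 0.1, 1.0
    for alpha in (0.5, 0.9, 0.99):
        bound = deterministic_bound(g_hat, sigma, alpha)
        print(alpha, round(bound, 3), bound <= g_max)  # 0.99 is infeasible
    ```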

  20. Loss Minimization Sliding Mode Control of IPM Synchronous Motor Drives

    Directory of Open Access Journals (Sweden)

    Mehran Zamanifar

    2010-01-01

    In this paper, a nonlinear loss-minimization control strategy for an interior permanent magnet synchronous motor (IPMSM), based on a newly developed sliding mode approach, is presented. The control method enforces speed control of the IPMSM drive while simultaneously minimizing losses, despite uncertainties in the system such as parameter variations, which otherwise degrade controller performance except near nominal conditions. Simulation results are presented to show the effectiveness of the proposed controller.

  1. Twentieth-Century Hydrometeorological Reconstructions to Study the Multidecadal Variations of the Water Cycle Over France

    Science.gov (United States)

    Bonnet, R.; Boé, J.; Dayon, G.; Martin, E.

    2017-10-01

    Characterizing and understanding the multidecadal variations of the continental hydrological cycle is a challenging issue given the limitations of observed data sets. In this paper, a new approach to derive twentieth-century hydrological reconstructions over France with a hydrological model is presented. The method combines the results of long-term atmospheric reanalyses, downscaled with a stochastic statistical method, and homogenized station observations to derive the meteorological forcing needed for hydrological modeling. Different methodological choices are tested and evaluated. We show that using homogenized observations to constrain the results of statistical downscaling helps to improve the reproduction of precipitation, temperature, and river flow variability. In particular, it corrects some unrealistic long-term trends associated with the atmospheric reanalyses. Observationally constrained reconstructions therefore constitute a valuable data set for studying the multidecadal hydrological variations over France. Thanks to these reconstructions, we confirm that the multidecadal variations previously noted in French river flows have a mainly climatic origin. Moreover, we show that multidecadal variations exist in other hydrological variables (evapotranspiration, snow cover, and soil moisture). Depending on the region, the persistence from spring to summer of soil moisture or snow anomalies generated during spring by temperature and precipitation variations may explain river flow variations in summer, when no concomitant climate variations exist.

  2. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combining models with data, has recently been developed to probe initial conditions, parameters, data content, response functions, and model uncertainties. Quantifying how much information is contained in different data streams is essential to predicting future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC), and soil respiration], or both NEE and biometric data to constrain model parameters through a probabilistic inversion. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost ineffective for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was evident in constraining most of the parameters. The poor constraint from NEE or biometric data alone was probably attributable either to the lack of long-term C dynamics data or to measurement errors. Overall, our results suggest that flux- and biometric-based data, reflecting different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and to C transfer coefficients, respectively.

  3. Color seamlessness in multi-projector displays using constrained gamut morphing.

    Science.gov (United States)

    Sajadi, Behzad; Lazarov, Maxim; Majumder, Aditi; Gopi, M

    2009-01-01

    Multi-projector displays show significant spatial variation in 3D color gamut due to variation in the chromaticity gamuts across the projectors, the vignetting effect of each projector, and overlap across adjacent projectors. In this paper we present a new constrained gamut morphing algorithm that removes all these variations and results in true color seamlessness across tiled multi-projector displays. Our color morphing algorithm adjusts the intensities of light from each pixel of each projector precisely to achieve a smooth morphing from one projector's gamut to another's through the overlap region. This morphing is achieved by imposing precise constraints on the perceptual difference between the gamuts of two adjacent pixels. In addition, our gamut morphing assures C1 continuity, yielding a visually pleasing appearance across the entire display. We demonstrate our method successfully on a planar and a curved display using both low- and high-end projectors. Our approach is completely scalable, efficient, and automatic. We also demonstrate the real-time performance of our image correction algorithm on GPUs for interactive applications. To the best of our knowledge, this is the first work that presents a scalable method with a strong foundation in perception and realizes, for the first time, a truly seamless display where the number of projectors cannot be deciphered.

  4. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting the restoration process into an optimization problem involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model fully exploits the remote sensing data and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive image-data computations are performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis of the restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter can restore varying-background images reasonably and that its performance meets the requirements of real-time image processing.
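
    To illustrate the vectorized, embarrassingly parallel structure such filters exploit, here is a toy smoothed TV-L1 gradient-descent sketch in Python/NumPy (an assumption-laden stand-in, not the paper's algorithm or threading scheme):

    ```python
    import numpy as np

    def tv_l1_denoise(f, lam=1.0, tau=0.1, n_iter=200, eps=1e-6):
        """Gradient descent on E(u) = sum|grad u|_eps + lam * sum|u - f|_eps.

        The absolute values are smoothed with eps so E is differentiable;
        production codes use duality-based schemes instead. Every update is
        an elementwise array operation, which maps directly onto multicore
        parallelism (e.g., splitting the image into row blocks per thread).
        """
        u = f.astype(float).copy()
        for _ in range(n_iter):
            ux = np.roll(u, -1, axis=1) - u                 # forward differences
            uy = np.roll(u, -1, axis=0) - u
            mag = np.sqrt(ux**2 + uy**2 + eps)
            px, py = ux / mag, uy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            fid = (u - f) / np.sqrt((u - f)**2 + eps)       # smoothed sign(u - f)
            u -= tau * (lam * fid - div)                    # descend the energy
        return u
    ```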

  5. Wannier-function-based constrained DFT with nonorthogonality-correcting Pulay forces in application to the reorganization effects in graphene-adsorbed pentacene

    Science.gov (United States)

    Roychoudhury, Subhayan; O'Regan, David D.; Sanvito, Stefano

    2018-05-01

    Pulay terms arise in the Hellmann-Feynman forces in electronic-structure calculations when one employs a basis set made of localized orbitals that move with their host atoms. If the total energy of the system depends on a subspace population defined in terms of the localized orbitals across multiple atoms, then unconventional Pulay terms will emerge due to the variation of the orbital nonorthogonality with ionic translation. Here, we derive the required exact expressions for such terms, which cannot be eliminated by orbital orthonormalization. We have implemented these corrected ionic forces within the linear-scaling density functional theory (DFT) package onetep, and we have used constrained DFT to calculate the reorganization energy of a pentacene molecule adsorbed on a graphene flake. The calculations are performed by including ensemble DFT, corrections for periodic boundary conditions, and empirical Van der Waals interactions. For this system we find that tensorially invariant population analysis yields an adsorbate subspace population that is very close to integer-valued when based upon nonorthogonal Wannier functions, and also but less precisely so when using pseudoatomic functions. Thus, orbitals can provide a very effective population analysis for constrained DFT. Our calculations show that the reorganization energy of the adsorbed pentacene is typically lower than that of pentacene in the gas phase. We attribute this effect to steric hindrance.

  6. Optimal experiment design for quantum state tomography: Fair, precise, and minimal tomography

    International Nuclear Information System (INIS)

    Nunn, J.; Smith, B. J.; Puentes, G.; Walmsley, I. A.; Lundeen, J. S.

    2010-01-01

    Given an experimental setup and a fixed number of measurements, how should one take data to optimally reconstruct the state of a quantum system? The problem of optimal experiment design (OED) for quantum state tomography was first broached by Kosut et al.[R. Kosut, I. Walmsley, and H. Rabitz, e-print arXiv:quant-ph/0411093 (2004)]. Here we provide efficient numerical algorithms for finding the optimal design, and analytic results for the case of 'minimal tomography'. We also introduce the average OED, which is independent of the state to be reconstructed, and the optimal design for tomography (ODT), which minimizes tomographic bias. Monte Carlo simulations confirm the utility of our results for qubits. Finally, we adapt our approach to deal with constrained techniques such as maximum-likelihood estimation. We find that these are less amenable to optimization than cruder reconstruction methods, such as linear inversion.

  7. Development of NIR calibration models to assess year-to-year variation in total non-structural carbohydrates in grasses using PLSR

    DEFF Research Database (Denmark)

    Shetty, Nisha; Gislum, René; Jensen, Anne Mette Dahl

    2012-01-01

    Near-infrared (NIR) spectroscopy was used in combination with chemometrics to quantify total nonstructural carbohydrates (TNC) in grass samples in order to overcome year-to-year variation. A total of 1103 above-ground plant and root samples were collected from different field and pot experiments with various experimental designs in the period from 2001 to 2005. A calibration model was developed using partial least squares regression (PLSR). The calibration model on a large data set spanning five years demonstrated that quantification of TNC using NIR spectroscopy was possible with an acceptably low...
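
    A minimal sketch of the modelling step in Python with scikit-learn, using placeholder arrays since the original spectra are not available:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Placeholder data: rows are samples pooled across years, columns are NIR
    # wavelengths; y stands in for reference TNC values from wet chemistry.
    rng = np.random.default_rng(0)
    X = rng.random((200, 700))
    y = 30 * rng.random(200)

    pls = PLSRegression(n_components=10)            # choose components by CV
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
    print(f"RMSECV: {rmsecv:.2f}")  # pooling years probes year-to-year robustness
    ```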

  8. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    Science.gov (United States)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE, such as X-ray computed tomography (CT), ultrasonic and eddy-current flaw characterization, and imaging. In many applications, it is common to bias the solution toward a minimum squared-L2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the squared-L2 norm yields an image that agrees equally well with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized-support inversion without error due to the linear (Born) approximation. After successful inversions using the full nonlinear forward model, a linearized acoustic inversion was developed to increase the speed and efficiency of the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks that might otherwise be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data from compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is its computational cost. In order to make this type of constrained inversion available for common use, work ...
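
    The abstract does not spell out the functional, but a commonly used smoothed minimum-support stabilizer takes the form:

    ```latex
    \Phi_{\beta}(m) \;=\; \int_{\Omega} \frac{m(\mathbf{r})^{2}}{m(\mathbf{r})^{2} + \beta^{2}}\, d\mathbf{r},
    ```

    which tends to the volume of the support of m as β → 0, so minimizing it subject to fitting the scattered-field data drives the reconstruction toward compact objects such as voids and cracks.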

  9. Solar system tests for realistic f(T) models with non-minimal torsion-matter coupling

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Rui-Hui; Zhai, Xiang-Hua; Li, Xin-Zhou [Shanghai Normal University, Shanghai United Center for Astrophysics (SUCA), Shanghai (China)

    2017-08-15

    In a previous paper, we constructed two f(T) models with a non-minimal torsion-matter coupling extension, which successfully describe the evolution history of the Universe, including the radiation-dominated era, the matter-dominated era, and the present accelerating expansion. A significant advantage of these models is that they avoid the cosmological constant problem of ΛCDM. However, the non-minimal coupling between matter and torsion affects Solar system tests. In this paper, we study the Solar system effects in these models, including gravitational redshift, the geodetic effect, and perihelion precession. We find that Model I passes all three Solar system tests. For Model II, the parameter is constrained by the uncertainties of the planets' estimated perihelion precessions. (orig.)

  10. Minimally processed vegetable salads: microbial quality evaluation.

    Science.gov (United States)

    Fröder, Hans; Martins, Cecília Geraldes; De Souza, Katia Leani Oliveira; Landgraf, Mariza; Franco, Bernadette D G M; Destro, Maria Teresa

    2007-05-01

    The increasing demand for fresh fruits and vegetables and for convenience foods is causing an expansion of the market share for minimally processed vegetables. Among the more common pathogenic microorganisms that can be transmitted to humans by these products are Listeria monocytogenes, Escherichia coli O157:H7, and Salmonella. The aim of this study was to evaluate the microbial quality of a selection of minimally processed vegetables. A total of 181 samples of minimally processed leafy salads were collected from retailers in the city of São Paulo, Brazil. Counts of total coliforms, fecal coliforms, Enterobacteriaceae, psychrotrophic microorganisms, and Salmonella were conducted for 133 samples. L. monocytogenes was assessed in 181 samples using the BAX System and by plating the enrichment broth onto Palcam and Oxford agars. Suspected Listeria colonies were submitted to classical biochemical tests. Populations of psychrotrophic microorganisms >10⁶ CFU/g were found in 51% of the 133 samples, and Enterobacteriaceae populations between 10⁵ and 10⁶ CFU/g were found in 42% of the samples. Fecal coliform concentrations higher than 10² CFU/g (the Brazilian standard) were found in 97 (73%) of the samples, and Salmonella was detected in 4 (3%) of the samples. Two of the Salmonella-positive samples had fecal coliform concentrations below the Brazilian standard. Overall, the minimally processed vegetables had poor microbiological quality, and these products could be a vehicle for pathogens such as Salmonella and L. monocytogenes.

  11. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation-theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.

  12. Uncertainties in constraining low-energy constants from ³H β decay

    Energy Technology Data Exchange (ETDEWEB)

    Klos, P.; Carbone, A.; Hebeler, K. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, ExtreMe Matter Institute EMMI, Darmstadt (Germany); Menendez, J. [University of Tokyo, Department of Physics, Tokyo (Japan); Schwenk, A. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, ExtreMe Matter Institute EMMI, Darmstadt (Germany); Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2017-08-15

    We discuss the uncertainties in constraining low-energy constants of chiral effective field theory from ³H β decay. The half-life is very precisely known, so the Gamow-Teller matrix element has been used to fit the coupling c_D of the axial-vector current to a short-range two-nucleon pair. Because the same coupling also describes the leading one-pion-exchange three-nucleon force, this in principle provides a very constraining fit, uncorrelated with the ³H binding-energy fit used to constrain another low-energy coupling in three-nucleon forces. However, so far such ³H half-life fits have only been performed at a fixed cutoff value. We show that the cutoff dependence due to the regulators in the axial-vector two-body current can significantly affect the Gamow-Teller matrix elements and consequently the extracted values of the c_D coupling constant. The degree of cutoff dependence is correlated with the softness of the employed NN interaction. As a result, present three-nucleon forces based on a fit to ³H β decay underestimate the uncertainty in c_D. We explore a range of c_D values that is compatible, within cutoff variation, with the experimental ³H half-life and estimate the resulting uncertainties for many-body systems by performing calculations of symmetric nuclear matter. (orig.)

  13. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  14. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the demand for security in resource-constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit solution complexity, with the consequence that traditional cryptography solutions become too costly to implement. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.

  15. Total knee replacement with retention of both cruciate ligaments: a 22-year follow-up study.

    Science.gov (United States)

    Sabouret, P; Lavoie, F; Cloutier, J-M

    2013-07-01

    We report on the long-term results of 163 bicruciate-retaining Hermes 2C total knee replacements in 130 patients at a mean follow-up of 22.4 years (20.3 to 23.5). Even when the anterior cruciate ligament had a partially degenerative appearance it was preserved as long as the knee had a normal anterior drawer and Lachman's test pre-operatively. The description and surgical technique of this minimally constrained prosthesis were published in 1983 and the ten-year clinical results in 1999. A total of 12% of the knees (20 of 163) in this study were revised because of wear of the polyethylene tibial insert. Excellent stability was achieved and the incidence of aseptic component loosening was 4.3% (seven of 163). The survival rate using revision for any reason as the endpoint was 82% (95% confidence interval 76.2 to 88.0). Although this series included a relatively small number of replacements, it demonstrated that the anterior cruciate ligament, even when partially degenerated at the time of TKR, remained functional and provided adequate stability at a long-term follow-up.

  16. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

    In this paper, a sensitive constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain state estimation. In this approach, sensitive buses, along with zero-injection buses (ZIBs), are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are identified from the mean of bus voltages as load is increased consistently by up to 50%, and they are ranked in order to place PMUs. Sensitive constrained optimal PMU allocation under single-line and no-line contingencies is considered in the observability analysis to ensure protection and control of the power system against abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMUs allocated. This paper presents optimal allocation of PMUs at sensitive buses with zero-injection modeling, considering cost criteria and redundancy, to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on the IEEE 14-, 30-, and 57-bus systems, and the results obtained are compared with traditional and other state estimation methods available in the literature to demonstrate the effectiveness of the proposed method.
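
    The observability core of such formulations is compact enough to sketch. Below is a toy version in Python with SciPy's MILP solver; the sensitivity ranking, ZIB modeling, and contingency constraints of the paper are omitted, and the 7-bus topology is invented:

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy 7-bus system: a PMU at bus i observes bus i and its neighbours.
    n = 7
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6)]
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1

    # Classic observability ILP: minimize sum(x) subject to A x >= 1, x binary.
    res = milp(c=np.ones(n),
               constraints=LinearConstraint(A, lb=np.ones(n), ub=np.inf),
               integrality=np.ones(n),
               bounds=Bounds(0, 1))
    print("PMU buses:", np.flatnonzero(res.x > 0.5))
    ```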

  17. Cost-effectiveness of minimally invasive sacroiliac joint fusion.

    Science.gov (United States)

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée Jg; Polly, David W

    2016-01-01

    Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain, and minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. The objective of this study was to determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter clinical trials were used to inform a Markov-process cost-utility model evaluating cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants versus non-surgical treatment. The analysis was performed from a third-party perspective, and the model specifically incorporated the variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses, all scenarios resulted in an incremental cost-effectiveness ratio (ICER) below commonly accepted willingness-to-pay thresholds, supporting SIJ fusion for SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.
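
    For readers unfamiliar with the metric: the ICER is the incremental cost divided by the incremental QALYs, so the quoted figures imply the incremental cost computed below (an inference from the abstract's numbers, not a value reported by the study).

    ```python
    delta_qaly = 0.74      # QALYs gained versus non-surgical management
    icer = 13_313          # US$ per QALY gained, as quoted
    delta_cost = icer * delta_qaly
    print(f"Implied 5-year incremental cost: ${delta_cost:,.0f}")  # about $9,852
    ```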

  18. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss and the voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation of the optimum distributed generator (DG) placement problem that considers a hybrid combination of technical factors, such as minimization of line loss and reduction of the voltage sag problem, and economic factors, such as the installation and maintenance cost of the DGs. The new formulation is inspired by the idea that optimum placement of DGs can help reduce and mitigate voltage dips in low-voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem in which the optimal number of DGs, along with their sizes and bus locations, are obtained simultaneously. The problem is solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. Several benchmark systems, radial and networked (a 34-bus radial distribution system, a 30-bus loop distribution system, and the IEEE 14-bus system), are considered as case studies, where the effectiveness of the proposed algorithm is aptly demonstrated.

  19. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  20. Optimal Control of Evolution Mixed Variational Inclusions

    Energy Technology Data Exchange (ETDEWEB)

    Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx [Universidad Nacional Autónoma de México, Departamento de Recursos Naturales, Instituto de Geofísica (Mexico)

    2013-12-15

    Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.

  1. Optimal Control of Evolution Mixed Variational Inclusions

    International Nuclear Information System (INIS)

    Alduncin, Gonzalo

    2013-01-01

    Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory

  2. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor and minimal use of economic resources. The complexity of the problem resides in the random nature of drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed for implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
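
    A single-constraint caricature in Python shows how a chance constraint turns a service-level target into a deterministic safety-stock rule (the paper's MPC optimizes orders over a horizon subject to many such constraints; the i.i.d. normal demand model here is an assumption):

    ```python
    from scipy.stats import norm

    def order_up_to_level(mu_d, sigma_d, lead_time, alpha=0.95):
        """Smallest S with P(demand over the lead time <= S) >= alpha,
        assuming i.i.d. normally distributed daily demand."""
        mu = mu_d * lead_time
        sigma = sigma_d * lead_time ** 0.5
        return mu + norm.ppf(alpha) * sigma

    print(order_up_to_level(mu_d=40, sigma_d=12, lead_time=3))  # ~154 units
    ```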

  3. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
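
    Schematically, forcing a norm-bounded perturbation into the model matrix leads to a Tikhonov-type solution (a paraphrase of the construction, not the paper's exact expressions):

    ```latex
    \hat{\mathbf{x}} \;=\; \big(\mathbf{A}^{\mathsf{H}}\mathbf{A} + \gamma\,\mathbf{I}\big)^{-1}\mathbf{A}^{\mathsf{H}}\mathbf{y},
    ```

    where the regularizer γ is tied to the bound on the artificial perturbation and is chosen, via the random-matrix-theory deterministic equivalents, to minimize the estimator's mean-squared error.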

  4. Branch xylem density variations across the Amazon Basin

    Science.gov (United States)

    Patiño, S.; Lloyd, J.; Paiva, R.; Baker, T. R.; Quesada, C. A.; Mercado, L. M.; Schmerler, J.; Schwarz, M.; Santos, A. J. B.; Aguilar, A.; Czimczik, C. I.; Gallo, J.; Horna, V.; Hoyos, E. J.; Jimenez, E. M.; Palomino, W.; Peacock, J.; Peña-Cruz, A.; Sarmiento, C.; Sota, A.; Turriago, J. D.; Villanueva, B.; Vitzthum, P.; Alvarez, E.; Arroyo, L.; Baraloto, C.; Bonal, D.; Chave, J.; Costa, A. C. L.; Herrera, R.; Higuchi, N.; Killeen, T.; Leal, E.; Luizão, F.; Meir, P.; Monteagudo, A.; Neil, D.; Núñez-Vargas, P.; Peñuela, M. C.; Pitman, N.; Priante Filho, N.; Prieto, A.; Panfil, S. N.; Rudas, A.; Salomão, R.; Silva, N.; Silveira, M.; Soares Dealmeida, S.; Torres-Lezama, A.; Vásquez-Martínez, R.; Vieira, I.; Malhi, Y.; Phillips, O. L.

    2009-04-01

    Xylem density is a physical property of wood that varies between individuals, species, and environments. It reflects the physiological strategies of trees that lead to growth, survival, and reproduction. Measurements of branch xylem density, ρx, were made for 1653 trees representing 598 species, sampled from 87 sites across the Amazon basin. Measured values ranged from 218 kg m⁻³ for a Cordia sagotii (Boraginaceae) from Montagne de Tortue, French Guiana, to 1130 kg m⁻³ for an Aiouea sp. (Lauraceae) from Caxiuana, Central Pará, Brazil. Analysis of variance showed significant differences in average ρx across regions and sampled plots, as well as significant differences between families, genera, and species. A partitioning of the total variance in the dataset showed that species identity (family, genus, and species) accounted for 33%, with environment (geographic location and plot) accounting for an additional 26%; the remaining "residual" variance accounted for 41% of the total. Variations in plot means were, however, not attributable solely to differences in species composition, because the xylem density of the most widely distributed species in our dataset varied systematically from plot to plot. Thus, as well as having a genetic component, branch xylem density is a plastic trait that, for any given species, varies in a predictable manner according to where the tree is growing. Within the analysed taxa, exceptions to this general rule seem to be pioneer species, belonging for example to the Urticaceae, whose branch xylem density is more constrained than that of most species sampled in this study. These patterns of variation of branch xylem density across Amazonia suggest a large functional diversity amongst Amazonian trees that is not well understood.

  5. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    Science.gov (United States)

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.
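
    The interpolation used in the argument can be written explicitly:

    ```latex
    H(\lambda) \;=\; H_{\mathrm{Fock}} \;+\; \lambda\,\big(H - H_{\mathrm{Fock}}\big), \qquad \lambda \in [0,1],
    ```

    so perturbation theory is reliable near λ = 0, while λ = 1 recovers the fully correlated problem; the correction rests on the assumption that the 2-RDM method captures a roughly λ-independent fraction of the correlation energy.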

  6. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  7. Open Maximal Mucosa-Sparing Functional Total Laryngectomy

    Directory of Open Access Journals (Sweden)

    Pavel Dulguerov

    2017-10-01

    Background: Total laryngectomy after (chemo)radiotherapy is associated with a high incidence of fistula, and therefore flaps are advocated. The description of a transoral robotic total laryngectomy prompted us to develop similar minimally invasive open approaches for functional total laryngectomy. Methods: A retrospective study of consecutive, unselected patients with a dysfunctional larynx after (chemo)radiation who underwent open maximal mucosa-sparing functional total laryngectomy (MMSTL) between 2014 and 2016 is presented. The surgical technique is described, and the complications and functional outcomes are reviewed. Results: The cohort included 10 patients who underwent open MMSTL. No pedicled flap was used. Only one postoperative fistula was noted (10%). All patients resumed an oral diet and achieved a functional tracheo-esophageal voice. Conclusion: MMSTL can be used to perform functional total laryngectomy without a robot and with a minimal incidence of complications.

  8. Nerve Cells Decide to Orient inside an Injectable Hydrogel with Minimal Structural Guidance.

    Science.gov (United States)

    Rose, Jonas C; Cámara-Torres, María; Rahimi, Khosrow; Köhler, Jens; Möller, Martin; De Laporte, Laura

    2017-06-14

    Injectable biomaterials provide the advantage of a minimally invasive application but mostly lack the structural complexity required to regenerate aligned tissues. Here, we report a new class of tissue-regenerative materials that can be injected and form an anisotropic matrix with controlled dimensions using rod-shaped, magnetoceptive microgel objects. The microgels are doped with small quantities of superparamagnetic iron oxide nanoparticles (0.0046 vol %), allowing alignment by external magnetic fields on the order of millitesla. The microgels are dispersed in a biocompatible gel precursor and, after injection and orientation, are fixed inside the matrix hydrogel. Despite the low volume concentration of microgels (below 3%), at which the geometrical constraint on orientation is still minimal, the generated macroscopic unidirectional orientation is strongly sensed by the cells, resulting in parallel nerve extension. This finding opens a new, minimally invasive route for therapy after spinal cord injury.

  9. Minimally allowed neutrinoless double beta decay rates within an anarchical framework

    International Nuclear Information System (INIS)

    Jenkins, James

    2009-01-01

    Neutrinoless double beta decay (ββ0ν) is the only realistic probe of the Majorana nature of the neutrino. In the standard picture, its rate is proportional to m_ee, the e-e element of the Majorana neutrino mass matrix in the flavor basis. I explore minimally allowed m_ee values within the framework of mass matrix anarchy, where neutrino parameters are defined statistically at low energies. Distributions of mixing angles are well defined by the Haar integration measure, but masses are dependent on arbitrary weighting functions and boundary conditions. I survey the integration measure parameter space and find that, for sufficiently convergent weightings, m_ee is constrained between (0.01-0.4) eV at 90% confidence. Constraints from neutrino mixing data lower these bounds. Singular integration measures allow for arbitrarily small m_ee values with the remaining elements ill-defined, but this condition constrains the flavor structure of the model's ultraviolet completion. ββ0ν bounds below m_ee ∼ 5×10⁻³ eV should indicate symmetry in the lepton sector, new light degrees of freedom, or the Dirac nature of the neutrino.

  10. Variational Integrals of a Class of Nonhomogeneous A-Harmonic Equations

    Directory of Open Access Journals (Sweden)

    Guanfeng Li

    2014-01-01

    We introduce a class of variational integrals whose Euler equations are nonhomogeneous A-harmonic equations. We investigate the relationship between the minimization problem and the Euler equation and give a simple proof of the existence of solutions of some nonhomogeneous A-harmonic equations by applying direct methods of the calculus of variations. In addition, we establish some interesting results on variational integrals.

  11. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  12. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    Science.gov (United States)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, low-resolution (LR) image is treated as the compressed version of high-resolution (HR) image. Dictionary Training and Sparse Recovery are the two phases of the method. K-Singular Value Decomposition (K-SVD) method is used for dictionary training and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training purpose and thereby the structural self similarity inherent in the LR image is exploited. In the sparse recovery phase the sparse representation coefficients with respect to the trained dictionary for LR image patches are derived using Improved TV Minimization method. HR image can be reconstructed by the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.
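
    In CS terms the setup can be summarized as follows (the notation is illustrative: D a downsampling operator, H a blur, Ψ the K-SVD dictionary):

    ```latex
    \mathbf{y} = \mathbf{D}\mathbf{H}\mathbf{x}, \qquad
    \hat{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\alpha}} \, \mathrm{TV}\big(\boldsymbol{\Psi}\boldsymbol{\alpha}\big)
    \ \ \text{s.t.} \ \ \big\|\mathbf{y} - \mathbf{D}\mathbf{H}\boldsymbol{\Psi}\boldsymbol{\alpha}\big\|_2 \le \varepsilon,
    ```

    with the HR patch recovered as x̂ = Ψα̂; the improved TV minimization of the title is the solver applied to this sparse-recovery step.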

  13. Energy density of marine pelagic fish eggs

    DEFF Research Database (Denmark)

    Riis-Vestergaard, J.

    2002-01-01

    Analysis of the literature on pelagic fish eggs enabled generalizations to be made of their energy densities, because the property of being buoyant in sea water appears to constrain the proximate composition of the eggs and thus to minimize interspecific variation. An energy density of 1.34 J µl⁻¹ of total egg volume is derived for most species spawning eggs without visible oil globules. The energy density of eggs with oil globules is predicted by σ̂ = 1.34 + 40.61x (J µl⁻¹), where x is the fractional volume of the oil globule. (C) 2002 The Fisheries Society of the British Isles
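
    The quoted regression is straightforward to apply; a small helper (illustrative only):

    ```python
    def egg_energy_density(oil_globule_fraction):
        """Energy density (J per microlitre of total egg volume) from the
        abstract's regression: sigma-hat = 1.34 + 40.61 * x, where x is the
        fractional volume of the oil globule (x = 0 gives 1.34 J/ul)."""
        return 1.34 + 40.61 * oil_globule_fraction

    print(egg_energy_density(0.0))   # 1.34, no visible oil globule
    print(egg_energy_density(0.02))  # ~2.15 for a 2% oil globule
    ```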

  14. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-07-01

    An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a hermitian symmetric space and a constrained KP Lax formulation, and we show that these approaches are equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy, and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  15. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
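
    A toy two-predictor illustration of the two-stage idea in Python: maximize the ordinal fit (Kendall's τ) first, then break ties by least squares. A coarse grid over weight directions stands in for the estimator's actual optimization.

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 2))
    y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=60)
    y[:3] += 8  # a few extreme scores, the regime where OCLO is claimed to help

    angles = np.linspace(0.0, np.pi, 181)
    W = np.column_stack([np.cos(angles), np.sin(angles)])  # candidate directions
    taus = np.array([kendalltau(y, X @ w)[0] for w in W])
    candidates = W[np.isclose(taus, taus.max())]           # maximal ordinal fit

    best, best_sse = None, np.inf
    for w in candidates:                                   # LS tie-breaking
        z = X @ w
        beta = (z @ y) / (z @ z)
        sse = float(np.sum((y - beta * z) ** 2))
        if sse < best_sse:
            best, best_sse = beta * w, sse
    print("OCLO-style weights:", best.round(3))
    ```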

  16. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    International Nuclear Information System (INIS)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry

    2015-01-01

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments

  17. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    Energy Technology Data Exchange (ETDEWEB)

    Ratcliff, Laura E., E-mail: lratcliff@anl.gov [Argonne Leadership Computing Facility, Argonne National Laboratory, Lemont, Illinois 60439 (United States); Université de Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry [Université de Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France)

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.

  18. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To propose a new variational method that couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually need prior calibrations to compensate for PVE, and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrates total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm is used on edges to protect the edge information, and the L2 norm is used to avoid the staircase effect in no-edge areas. The blur kernel is constrained to a Gaussian model parameterized by its variance, and we assume that the variances in the X-Y and Z directions are different. The energy functional is iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin's lymphoma and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L2 norm alone. The proposed method was clearly superior to the other tested methods, with an average DSI and CE of 0.80 and 0.41, while the FCM method, the second best, had an average DSI and CE of only 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm.

  19. Scheduling Aircraft Landings under Constrained Position Shifting

    Science.gov (United States)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.

  20. Should we still believe in constrained supersymmetry?

    International Nuclear Information System (INIS)

    Balazs, Csaba; Buckley, Andy; Carter, Daniel; Farmer, Benjamin; White, Martin

    2013-01-01

    We calculate partial Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the partial Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with M₀ and M₁/₂ below 2 TeV), reducing its posterior odds by approximately two orders of magnitude. This reduction is largely due to substantial Occam factors induced by the LEP and LHC Higgs searches. (orig.)

  1. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    Science.gov (United States)

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost-constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
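
    The design criterion can be written compactly (schematic notation, not the paper's):

    ```latex
    t^{*} \;=\; \arg\min_{t_1,\dots,t_K}\; \max_{j}\; \Big[\, \bar{F}(t_1,\dots,t_K)^{-1} \,\Big]_{jj},
    ```

    where F̄ is the Fisher information matrix averaged by Monte Carlo over the population prior and nuisance parameters, the diagonal of its inverse lower-bounds the per-parameter variances (Cramér-Rao), and the genetic algorithm searches over the K sampling times.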

  2. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method.

    Science.gov (United States)

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was applied to the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. The study considered two objectives: minimization of the total remediation cost and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed with a multi-gene genetic programming (MGGP) technique. Two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed for comparison with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II; (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which enables decision makers to make a suitable choice by weighing the given budget, remediation time, and reliability. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Solution of problems in calculus of variations via He's variational iteration method

    International Nuclear Information System (INIS)

    Tatari, Mehdi; Dehghan, Mehdi

    2007-01-01

    The modeling of a large class of problems in science and engineering leads to the minimization of a functional. Solving such problems requires solving the corresponding Euler-Lagrange ordinary differential equations, which are generally nonlinear. In recent years, He's variational iteration method has attracted considerable attention from researchers as a tool for solving nonlinear problems. The method finds the solution of the problem without any discretization of the equation. Since it gives a closed-form solution and avoids round-off errors, it can be considered an efficient method for solving various kinds of problems. In this work, He's variational iteration method is employed to solve some problems in the calculus of variations. Examples are presented to show the efficiency of the proposed technique
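
    The method is easiest to see on a textbook example rather than a full calculus-of-variations problem. For u' + u = 0 with u(0) = 1, the correction functional with Lagrange multiplier λ(s) = −1 reproduces the Taylor series of exp(−t); a sympy sketch:

    ```python
    import sympy as sp

    t, s = sp.symbols("t s")
    u = sp.Integer(1)                         # initial guess u_0 = u(0) = 1
    for _ in range(6):
        # correction functional: u_{n+1}(t) = u_n(t) - int_0^t (u_n' + u_n) ds
        integrand = (sp.diff(u, t) + u).subs(t, s)
        u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))
    print(u)                                  # partial sum of the exp(-t) series
    print(sp.series(sp.exp(-t), t, 0, 7))     # agrees term by term
    ```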

  4. Seasonal variations of total 234Th and dissolved 238U concentration activities in surface water of Bransfield Strait, Antarctica, from March to October 2011

    International Nuclear Information System (INIS)

    Lapa, Flavia V.; Oliveira, Joselene de; Costa, Alice M.R.; Braga, Elisabete S.

    2013-01-01

    In this study the naturally occurring radionuclides 234Th and 238U were used to investigate the magnitude of upper-ocean particulate organic carbon export in Bransfield Strait, Southern Ocean. This region is the largest oceanic high-nitrate, low-chlorophyll (HNLC) area in the world and is known to help regulate atmospheric CO2 via the biological pump. Because the two radionuclides behave differently in seawater, the resulting U/Th disequilibrium can be used to constrain the transport rates of particles and the exchange processes between the dissolved and particulate phases. Sampling was carried out in March and October 2011. Total 234Th activities in surface seawater samples ranged from 1.3 to 3.7 dpm L−1 (station EB 011) during the March 2011 campaign, while in October 2011 total 234Th activity concentrations varied from 1.4 to 2.9 dpm L−1. The highest total 234Th activities were found late in the austral summer season. Activity concentrations of dissolved 238U in surface seawater varied from 2.1 to 2.4 dpm L−1. Taking into account all sampling stations established in March and October 2011, the relative variability of the total 234Th distribution was 22%. (author)
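
    For readers unfamiliar with the technique, the simplest (steady-state, one-box) export estimate derived from such data is F = λ234 (A_U − A_Th) integrated over the surface layer; the sketch below uses illustrative values in the reported range, not the study's actual station data:

    ```python
    import numpy as np

    LAMBDA_234 = np.log(2) / 24.1      # 234Th decay constant, 1/day (24.1 d half-life)

    a_u, a_th = 2.3, 1.5               # dpm/L: dissolved 238U and total 234Th
    depth_m = 100.0                    # integration depth of the surface layer

    # deficit (dpm/L -> dpm/m^3, so x1000) times decay constant times layer depth
    flux = LAMBDA_234 * (a_u - a_th) * 1000.0 * depth_m
    print(f"234Th export: {flux:.0f} dpm m^-2 day^-1")
    ```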

  5. Varying couplings in the early universe: Correlated variations of α and G

    International Nuclear Information System (INIS)

    Martins, C. J. A. P.; Menegoni, Eloisa; Galli, Silvia; Mangano, Gianpiero; Melchiorri, Alessandro

    2010-01-01

    The cosmic microwave background anisotropies provide a unique opportunity to constrain simultaneous variations of the fine-structure constant α and Newton's gravitational constant G. Such correlated variations are possible in a wide class of theoretical models. In this brief paper we show that the current data, assuming that particle masses are constant, give no clear indication of such variations, but already prefer that any relative variation in α should be of the same sign as that of G for variations of ∼1%. We also show that a cosmic complementarity exists with big bang nucleosynthesis and that a combination of current CMB and big bang nucleosynthesis data strongly constrains simultaneous variations in α and G. We finally discuss the future bounds achievable by the Planck satellite mission.

  6. On a Volume-Constrained Problem for the First Eigenvalue of the p-Laplacian Operator

    International Nuclear Information System (INIS)

    Ly, Idrissa

    2009-10-01

    In this paper, we are interested in a shape optimization problem which consists of minimizing the functional that associates with each open set the first eigenvalue of the p-Laplacian operator with homogeneous Dirichlet boundary conditions. The minimum is taken among all open subsets with prescribed measure of a given bounded domain. We prove an existence result for the associated variational problem. Our technique consists of enlarging the class of admissible functions to the whole space W_0^{1,p}(D), penalizing those functions whose level sets have a measure smaller than the one required. Specifically, we study the minimizers of a family of penalized functionals J_λ, λ > 0, showing that they are Hölder continuous, and we prove that such functions minimize the initial problem provided the penalization parameter λ is large enough. (author)
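
    In standard notation (schematic; the paper's exact penalty may differ), the eigenvalue is the Rayleigh quotient minimum and the penalized functional charges level sets whose measure falls short of the prescribed value c:

    ```latex
    % Rayleigh-quotient definition of the first p-Laplacian eigenvalue, and a
    % schematic penalized functional on the enlarged class W_0^{1,p}(D):
    \[
      \lambda_1(\Omega) \;=\; \min_{u \in W_0^{1,p}(\Omega),\; u \neq 0}
      \frac{\int_\Omega |\nabla u|^p \, dx}{\int_\Omega |u|^p \, dx},
      \qquad
      J_\lambda(u) \;=\; \frac{\int_D |\nabla u|^p \, dx}{\int_D |u|^p \, dx}
      \;+\; \lambda \,\bigl(c - \bigl|\{u \neq 0\}\bigr|\bigr)^{+}.
    \]
    ```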

  7. Assessment of changes in plasma total antioxidant status in gamma irradiated rats treated with eugenol

    International Nuclear Information System (INIS)

    Azab, Kh. SH.

    2002-01-01

    Eugenol, a volatile phenolic phytochemical, is a major constituent of clove oil. The present study was carried out to evaluate the antioxidant effect of eugenol on certain lipid metabolites and on variations in antioxidant status. An in vitro study (oxidative susceptibility of lipoprotein) revealed that eugenol lengthens the lag phase for the induction of conjugated dienes and decreases the rate of lipid peroxidation (production of thiobarbituric acid reactive substances; TBARS) during the propagation phase. An in vivo study in rats revealed a significant increase in plasma total antioxidant status after the eugenol regime. Furthermore, a eugenol water emulsion delivered to rats by gavage at a concentration of 1 g/kg body weight for 15 days before and during exposure to fractionated whole-body gamma radiation (1.5 Gy every other day, up to a total dose of 7.5 Gy) showed that administration of eugenol significantly reduces the concentration of plasma TBARS and minimizes the decrease in plasma antioxidants. Amelioration of the concentration of reduced glutathione (GSH) in blood and liver and of the activities of cytosolic glutathione-S-transferase (GST) in the liver was also observed. Furthermore, the changes in the concentrations of total cholesterol, triglycerides, LDL-cholesterol and HDL-cholesterol were less pronounced. It can be postulated that, by minimizing the decrease in antioxidant status, eugenol prevents the radiation-induced alterations in lipid metabolism

  8. An integer batch scheduling model considering learning, forgetting, and deterioration effects for a single machine to minimize total inventory holding cost

    Science.gov (United States)

    Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.

    2018-03-01

    This research deals with a single-machine batch scheduling model that considers the influence of learning, forgetting, and machine deterioration effects. The objective of the model is to minimize the total inventory holding cost, and the decision variables are the number of batches (N), the batch sizes (Q[i], i = 1, 2, ..., N), and the sequence in which the resulting batches are processed. The parts to be processed are received at the right times and in the right quantities, and all completed parts must be delivered at a common due date. We propose a heuristic procedure based on the Lagrange method to solve the problem. The effectiveness of the procedure is evaluated by comparing the resulting solution to the optimal solution obtained from an enumeration procedure using the integer composition technique; the comparison shows that the average effectiveness is 94%.
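
    A toy version of the enumeration baseline, assuming a deliberately simplified holding-cost model (parts arrive at their batch's start, each batch adds a setup, and everything is held until the common due date) rather than the paper's learning/forgetting/deterioration cost:

    ```python
    from itertools import combinations

    def compositions(total, parts):
        """All ways to write `total` as an ordered sum of `parts` positive integers."""
        for cuts in combinations(range(1, total), parts - 1):
            bounds = (0,) + cuts + (total,)
            yield [bounds[i + 1] - bounds[i] for i in range(parts)]

    def holding_cost(batches, unit=1.0, setup=1.0, rate=0.1):
        """Toy model: holding accrues from a batch's start to the due date."""
        makespan = sum(q * unit for q in batches) + setup * len(batches)
        t, cost = 0.0, 0.0
        for q in batches:
            t += setup                           # batch setup
            cost += rate * q * (makespan - t)    # held until the common due date
            t += q * unit
        return cost

    best = min((b for n in range(1, 6) for b in compositions(10, n)),
               key=holding_cost)
    print(best, round(holding_cost(best), 2))
    ```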

  9. An inexact fuzzy-chance-constrained air quality management model.

    Science.gov (United States)

    Xu, Ye; Huang, Guohe; Qin, Xiaosheng

    2010-07-01

    Regional air pollution is a major concern for almost every country because it not only relates directly to economic development, but also poses significant threats to the environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated by integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and can deal with uncertainties expressed not only as possibilistic distributions but also as discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables can be satisfied at different confidence levels, so that various solutions with different risk and cost considerations can be obtained. The developed model was applied to a hypothetical case of regional air quality management, taking into consideration six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty. The results demonstrated that IFAMM can help decision-makers generate cost-effective air quality management patterns, gain in-depth insights into the effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme can achieve a lower total abatement cost than a nontrading one.

  10. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  11. Parametrization consequences of constraining soil organic matter models by total carbon and radiocarbon using long-term field data

    Science.gov (United States)

    Menichetti, Lorenzo; Kätterer, Thomas; Leifeld, Jens

    2016-05-01

    Soil organic carbon (SOC) dynamics result from different interacting processes and controls on spatial scales from sub-aggregate to pedon to the whole ecosystem. These complex dynamics are translated into models as abundant degrees of freedom. The high number of variables that cannot be measured directly, combined with the very limited data at our disposal, results in equifinality and parameter uncertainty. Carbon radioisotope measurements are a proxy for SOC age both on annual to decadal (bomb-peak based) and on centennial to millennial (radio-decay based) timescales, and thus can be used in addition to total organic C for constraining SOC models. By considering this additional information, uncertainties in model structure and parameters may be reduced. To test this hypothesis we studied SOC dynamics and their defining kinetic parameters in the Zürich Organic Fertilization Experiment (ZOFE), a > 60-year-old controlled cropland experiment in Switzerland, by utilizing SOC and SO14C time series. To represent different processes we applied five model structures, all stemming from a simple mother model (Introductory Carbon Balance Model - ICBM): (I) two decomposing pools; (II) an inert pool added; (III) three decomposing pools; (IV) two decomposing pools with a substrate-control feedback on decomposition; (V) as IV but with an inert pool added. These structures were extended to explicitly represent total SOC and 14C pools. The use of different model structures allowed us to explore model structural uncertainty and the impact of 14C on the kinetic parameters. We accounted for parameter uncertainty by calibrating in a formal Bayesian framework. By varying the relative importance of total SOC and SO14C data in the calibration, we could quantify the effect of the information from these two data streams on the estimated model parameters. The weighting of the two data streams was crucial for determining model outcomes, and we suggest including it in future modeling efforts whenever SO14C
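
    The mother model referred to above has a well-known two-pool structure (young pool Y, old pool O); a minimal Euler-integration sketch with placeholder parameters, not the ZOFE calibration:

    ```python
    import numpy as np

    def icbm_step(y, o, i=0.2, k1=0.8, k2=0.006, h=0.13, r=1.0, dt=1.0):
        """One Euler step of ICBM: dY/dt = i - k1*r*Y, dO/dt = h*k1*r*Y - k2*r*O."""
        dy = i - k1 * r * y
        do = h * k1 * r * y - k2 * r * o
        return y + dt * dy, o + dt * do

    y, o = 0.3, 4.0                       # initial pool sizes, kg C m^-2 (invented)
    trajectory = []
    for year in range(60):
        y, o = icbm_step(y, o)
        trajectory.append(y + o)          # total SOC, the observable time series
    print(round(trajectory[-1], 3))
    ```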

  12. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed; they can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between the approaches. Some authors have tried to arrive at a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have merely been focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
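
    The distribution-based quantities mentioned above follow from two standard formulas, SEM = SD·√(1 − reliability) and MDC95 = 1.96·√2·SEM; a small numerical sketch with hypothetical values:

    ```python
    import math

    sd_baseline = 12.0        # SD of scores at baseline (hypothetical)
    reliability = 0.90        # e.g., test-retest ICC (hypothetical)

    sem = sd_baseline * math.sqrt(1 - reliability)          # ~3.79 points
    mdc95 = 1.96 * math.sqrt(2) * sem                       # ~10.5 points
    half_sd = 0.5 * sd_baseline                             # the "0.5 SD" MIC proxy
    print(round(sem, 2), round(mdc95, 2), half_sd)
    ```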

  13. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid-state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response...... to the stress field as well as the FE calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  14. Vertical Land Movements Constrained by Absolute Gravity Measurements

    Science.gov (United States)

    van Camp, M.; Williams, S. D.; Hinzen, K.; Camelbeeck, T.

    2009-05-01

    Repeated absolute gravity (AG) measurements have been performed across the tectonically active intraplate regions in Northwest Europe: the Ardenne and the Roer Graben. At most of the stations, measurements were undertaken in 2000 and repeated twice a year. Analysis of these measurements, performed in Belgium and Germany, shows that at all stations except Jülich there is no detectable gravity variation larger than 10 nm s−2 at the 95% confidence level, equivalent to vertical movements of 5 mm/yr. Although not yet significant, the observed rates do not contradict the subsidence predicted by glacial isostatic adjustment models and provide an upper limit on the possible uplift of the Ardennes. In Jülich, a gravity rate of change of 36 nm s−2/yr, equivalent to a subsidence of 18 mm/yr, is at least in part due to anthropogenic subsidence. The amplitudes of the seasonal variations range from 18±0.8 nm s−2 to 43±29 nm s−2, depending on the location. These variations should have a negligible effect on the long-term trend, but at the Membach reference station, where a longer time series is available, differences between the rates observed since 1996 and since 1999 indicate that long-term environmental effects may influence the inferred trend. The observed seasonal effects also demonstrate the repeatability of AG measurements. This study indicates that, even in difficult conditions, AG measurements repeated once a year can resolve vertical land movements at the few-mm level after 5 years. It also confirms the need to measure for decades, using accurate and stable geodetic techniques such as AG, in order to constrain slow deformation processes.
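
    The gravity-to-height conversion used above (10 nm s−2 ≈ 5 mm) corresponds to an admittance of about −2 nm s−2 per mm of vertical motion; a one-line check of the quoted rates:

    ```python
    # Converting a gravity trend to a vertical-motion rate with the admittance
    # implied in the text; the -2 nm s^-2 per mm value is an assumption here.
    ADMITTANCE = -2.0                     # nm s^-2 per mm of vertical motion

    def gravity_rate_to_uplift(g_rate_nm_s2_per_yr):
        return g_rate_nm_s2_per_yr / ADMITTANCE

    print(gravity_rate_to_uplift(-10.0))  # 10 nm/s^2 decrease ->  5 mm/yr uplift
    print(gravity_rate_to_uplift(36.0))   # Julich trend -> -18 mm/yr (subsidence)
    ```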

  15. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential large-scale forcing data for driving single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivity to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite-observed clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analysis/reanalysis products, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from the large-scale forcing data and observations specified, we compare SCM results with observations and find that models have large biases in cloud properties which cannot be fully explained by the uncertainty from the large-scale forcing

  16. A multi-perspective view of genetic variation in Cameroon.

    Science.gov (United States)

    Coia, V; Brisighelli, F; Donati, F; Pascali, V; Boschi, I; Luiselli, D; Battaggia, C; Batini, C; Taglioli, L; Cruciani, F; Paoli, G; Capelli, C; Spedini, G; Destro-Bisol, G

    2009-11-01

    In this study, we report the genetic variation of autosomal and Y-chromosomal microsatellites in a large Cameroon population dataset (a total of 11 populations) and jointly analyze novel and previous genetic data (mitochondrial DNA and protein-coding loci), taking geographic and cultural factors into consideration. The complex pattern of genetic variation in Cameroon can in part be described by contrasting two geographic areas (corresponding to the northern and southern parts of the country), which differ substantially in environmental, biological, and cultural aspects. Northern Cameroon populations show greater within- and among-group diversity, a finding that reflects the complex migratory patterns and the linguistic heterogeneity of this area. A striking reduction of Y-chromosomal genetic diversity was observed in some populations of the northern part of the country (Podokwo and Uldeme), a result that seems to be related to their demographic history rather than to sampling issues. By exploring patterns of genetic, geographic, and linguistic variation, we detect a preferential correlation between genetics and geography for mtDNA. This finding could reflect a female matrimonial mobility that is less constrained by linguistic factors than that of males. Finally, we apply the island model to the mitochondrial and Y-chromosomal data and obtain a female-to-male migration ratio (Nν) that was more than double in the northern part of the country. The combined effect of the propensity of females to inter-populational admixture, favored by cultural contacts, and of genetic drift acting on Y-chromosomal diversity could account for the peculiar genetic pattern observed in northern Cameroon.

  17. Free time minimizers for the three-body problem

    Science.gov (United States)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañé in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics, Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 2014), which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points of the McGehee blown-up dynamics. The large open set of mass ratios comprises those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  18. Robust bladder image registration by redefining data-term in total variational approach

    Science.gov (United States)

    Ali, Sharib; Daul, Christian; Galbrun, Ernest; Amouroux, Marine; Guillemin, François; Blondel, Walter

    2015-03-01

    Cystoscopy is the standard procedure for clinical diagnosis of bladder cancer. Bladder carcinomas in situ are often multifocal and spread over large areas. In vivo localization and follow-up of these tumors and their nearby sites is necessary. However, due to the small field of view (FOV) of cystoscopic video images, urologists cannot easily interpret the scene. Bladder mosaicing using image registration facilitates this interpretation through the visualization of entire lesions with respect to anatomical landmarks. The reference white light (WL) modality is affected by strong variability in terms of texture, illumination conditions and motion blur. Moreover, in the complementary fluorescence light (FL) modality, the texture is visually different from that of WL. Existing algorithms were developed for a particular modality and scene conditions. This paper proposes a more general on-the-fly image registration approach for dealing with these variability issues in cystoscopy. To do so, we present a novel, robust and accurate image registration scheme that redefines the data term of the classical total variational (TV) approach. Quantitative results on realistic bladder phantom images are used to verify the accuracy and robustness of the proposed model. The method is also qualitatively assessed through patient data mosaicing for both the WL and FL modalities.
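
    Schematically, energies of this family take the following form, where the data term ρ (the quantity redefined in this work) compares the warped moving image with the reference and the TV term regularizes the displacement field u; this is the generic template, not the authors' exact functional:

    ```latex
    % Generic variational registration energy: displacement u warps the moving
    % image I_m onto the reference I_r; \rho is a (possibly robust) data term.
    \[
      E(u) \;=\; \int_\Omega \rho\bigl( I_m(x + u(x)) - I_r(x) \bigr)\, dx
      \;+\; \lambda \int_\Omega |\nabla u(x)| \, dx .
    \]
    ```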

  19. A Stochastic Multi-Objective Chance-Constrained Programming Model for Water Supply Management in Xiaoqing River Watershed

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2017-05-01

    In this paper, a stochastic multi-objective chance-constrained programming model (SMOCCP) was developed for tackling the water supply management problem. Two objectives were included in the model: minimization of leakage loss amounts and minimization of total system cost. The traditional SCCP model requires the random variables to be expressed as normal distributions, even when their statistical characteristics are better reflected by other forms. The SMOCCP model allows the random variables to be expressed as log-normal distributions rather than the general normal form. Possible solution deviations caused by irrational parameter assumptions are thereby avoided, and the feasibility and accuracy of the generated solutions are ensured. The water supply system in the Xiaoqing River watershed was used as a study case for demonstration. Under various weight combinations and probabilistic levels, many types of solutions are obtained, expressed as a series of transferred amounts from water sources to treatment plants, from treatment plants to reservoirs, and from reservoirs to tributaries. It is concluded that the SMOCCP model can capture the essentials of the studied region and generate desired water supply schemes under complex uncertainties. The successful application of the proposed model is expected to serve as a good example for water resource management in other watersheds.
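
    The key computational step in chance-constrained programming of this kind is converting a probabilistic constraint into a deterministic quantile constraint; a sketch with an invented log-normal demand, showing how the requirement tightens with the confidence level:

    ```python
    import numpy as np
    from scipy.stats import lognorm

    # Pr(capacity >= demand) >= alpha  becomes  capacity >= F^{-1}(alpha),
    # where F is the demand CDF. Parameters below are illustrative only.
    mu, sigma = np.log(80.0), 0.25     # demand ~ LogNormal(mu, sigma)

    for alpha in (0.80, 0.90, 0.95, 0.99):
        required = lognorm.ppf(alpha, s=sigma, scale=np.exp(mu))
        print(f"alpha={alpha:.2f}: capacity >= {required:6.1f}")
    ```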

  20. Architecture of a minimal signaling pathway explains the T-cell response to a 1 million-fold variation in antigen affinity and dose

    Science.gov (United States)

    Lever, Melissa; Lim, Hong-Sheng; Kruger, Philipp; Nguyen, John; Trendel, Nicola; Abu-Shah, Enas; Maini, Philip Kumar; van der Merwe, Philip Anton

    2016-01-01

    T cells must respond differently to antigens of varying affinity presented at different doses. Previous attempts to map peptide MHC (pMHC) affinity onto T-cell responses have produced inconsistent patterns of responses, preventing the formulation of canonical models of T-cell signaling. Here, a systematic analysis of T-cell responses to 1 million-fold variations in both pMHC affinity and dose produced bell-shaped dose-response curves and different optimal pMHC affinities at different pMHC doses. Using sequential model rejection/identification algorithms, we identified a unique, minimal model of cellular signaling incorporating kinetic proofreading with limited signaling coupled to an incoherent feed-forward loop (KPL-IFF) that reproduces these observations. We show that the KPL-IFF model correctly predicts the T-cell response to antigen copresentation. Our work offers a general approach for studying cellular signaling that does not require full details of biochemical pathways. PMID:27702900

  1. Circadian variation in serum free and total insulin-like growth factor (IGF)-I and IGF-II in untreated and treated acromegaly and growth hormone deficiency

    DEFF Research Database (Denmark)

    Skjaerbaek, Christian; Frystyk, Jan; Kaal, Andreas

    2000-01-01

    to the nocturnal increase in IGF binding protein-1. In this study we have investigated the circadian variation in circulating free IGF-I and IGF-II in patients with acromegaly and patients with adult-onset growth hormone deficiency. PATIENTS: Seven acromegalic patients were studied with and without treatment...... no significant circadian variations in free IGF-I or free IGF-II on either of the two occasions. In contrast, there was a significant circadian variation of total IGF-I after adjustment for changes in plasma volume in both treated and untreated acromegaly and GH deficiency, in all cases with a peak between 0300 h...

  2. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    Science.gov (United States)

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence reducing the radiation dose from micro-CT is essential. The proposed research focuses on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving edges or boundaries more accurately. In this work the TV regularization problem is addressed by ADAL, a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
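
    The "linear-time shrinkage operation" mentioned above is the proximal map of the TV term, applied pixel-wise to the gradient field; a generic sketch (not the authors' code):

    ```python
    import numpy as np

    def isotropic_shrink(dx, dy, tau):
        """Per-pixel proximal map of the isotropic TV term on a gradient field."""
        mag = np.sqrt(dx**2 + dy**2)
        scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
        return dx * scale, dy * scale

    # toy gradient field
    dx = np.array([[0.5, -2.0], [0.1, 3.0]])
    dy = np.array([[1.0, 0.0], [-0.2, -1.0]])
    print(isotropic_shrink(dx, dy, tau=0.8))
    # The remaining quadratic subproblem in the image is diagonalized by FFTs,
    # which is where the "two fast Fourier transforms" per iteration come from.
    ```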

  3. On the origin of constrained superfields

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-05-06

    In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.

  4. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce the radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduce the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and develop an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on sheep lung and chest CT images. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yields performance similar to the l0-norm dictionary learning penalty and outperforms the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
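
    The sparse coding subproblem produced by this splitting is an l1-regularized least-squares problem per patch, solvable by iterative soft-thresholding; a self-contained sketch with a random stand-in dictionary (the paper's dictionary is learned from data):

    ```python
    import numpy as np

    # min_a 0.5*||D a - p||^2 + lam*||a||_1 via ISTA with soft-thresholding
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    p = rng.standard_normal(64)               # one image patch (stand-in data)

    lam, n_iter = 0.1, 200
    step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1/L, L = Lipschitz constant of grad
    a = np.zeros(128)
    for _ in range(n_iter):
        grad = D.T @ (D @ a - p)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    print(np.count_nonzero(a), "nonzero coefficients")
    ```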

  5. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path, and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
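
    Numerically, a reflected SDE is often simulated by proposing an unconstrained Euler-Maruyama step and folding it back into the domain; a 1-D sketch on [0, L] with invented parameters (the paper's inference uses latent unconstrained paths rather than this simple scheme, but the constraint it encodes is the same):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    L, dt, n = 10.0, 0.01, 5000
    mu, sigma = 0.5, 1.5                  # drift and diffusion (illustrative)

    def reflect(x, lo=0.0, hi=L):
        """Fold a proposed position back into [lo, hi]."""
        period = 2 * (hi - lo)
        x = (x - lo) % period
        return lo + np.where(x > hi - lo, period - x, x)

    x = np.empty(n); x[0] = 5.0
    for t in range(1, n):
        step = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x[t] = reflect(x[t - 1] + step)
    print(x.min(), x.max())               # stays within the barriers [0, L]
    ```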

  6. How CMB and large-scale structure constrain chameleon interacting dark energy

    International Nuclear Information System (INIS)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-01-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys

  7. How CMB and large-scale structure constrain chameleon interacting dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Boriero, Daniel [Fakultät für Physik, Universität Bielefeld, Universitätstr. 25, Bielefeld (Germany); Das, Subinoy [Indian Institute of Astrophisics, Bangalore, 560034 (India); Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  8. Computed tomography for preoperative planning in minimal-invasive total hip arthroplasty: Radiation exposure and cost analysis

    Energy Technology Data Exchange (ETDEWEB)

    Huppertz, Alexander, E-mail: Alexander.Huppertz@charite.de [Imaging Science Institute Charite Berlin, Robert-Koch-Platz 7, D-10115 Berlin (Germany); Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Radmer, Sebastian, E-mail: s.radmer@immanuel.de [Department of Orthopedic Surgery and Rheumatology, Immanuel-Krankenhaus, Koenigstr. 63, D-14109, Berlin (Germany); Asbach, Patrick, E-mail: Patrick.Asbach@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Juran, Ralf, E-mail: ralf.juran@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Schwenke, Carsten, E-mail: carsten.schwenke@scossis.de [Biostatistician, Scossis Statistical Consulting, Zeltinger Str. 58G, D-13465 Berlin (Germany); Diederichs, Gerd, E-mail: gerd.diederichs@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Hamm, Bernd, E-mail: Bernd.Hamm@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Sparmann, Martin, E-mail: m.sparmann@immanuel.de [Department of Orthopedic Surgery and Rheumatology, Immanuel-Krankenhaus, Koenigstr. 63, D-14109, Berlin (Germany)

    2011-06-15

    Computed tomography (CT) was used for preoperative planning of minimal-invasive total hip arthroplasty (THA). 92 patients (50 males, 42 females, mean age 59.5 years) with a mean body mass index (BMI) of 26.5 kg/m² underwent 64-slice CT to depict the pelvis, the knee and the ankle in three independent acquisitions using combined x-, y-, and z-axis tube current modulation. Arthroplasty planning was performed using 3D-Hip Plan (Symbios, Switzerland) and patient radiation dose exposure was determined. The effects of BMI, gender, and contralateral THA on the effective dose were evaluated by an analysis-of-variance. A process-cost-analysis from the hospital perspective was done. All CT examinations were of sufficient image quality for 3D-THA planning. A mean effective dose of 4.0 mSv (SD 0.9 mSv) modeled by the BMI (p < 0.0001) was calculated. The presence of a contralateral THA (9/92 patients; p = 0.15) and the difference between males and females were not significant (p = 0.08). Personnel involved were the radiologist (4 min), the surgeon (16 min), the radiographer (12 min), and administrative personnel (4 min). A CT operation time of 11 min and direct per-patient costs of 52.80 Euro were recorded. Preoperative CT for THA was associated with a slight and justifiable increase of radiation exposure in comparison to conventional radiographs and low per-patient costs.

  9. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  10. Less favourable climates constrain demographic strategies in plants.

    Science.gov (United States)

    Csergő, Anna M; Salguero-Gómez, Roberto; Broennimann, Olivier; Coutts, Shaun R; Guisan, Antoine; Angert, Amy L; Welk, Erik; Stott, Iain; Enquist, Brian J; McGill, Brian; Svenning, Jens-Christian; Violle, Cyrille; Buckley, Yvonne M

    2017-08-01

    Correlative species distribution models are based on the observed relationship between species' occurrence and macroclimate or other environmental variables. In climates predicted to be less favourable, populations are expected to decline, whereas in favourable climates they are expected to persist. However, little comparative empirical support exists for a relationship between predicted climate suitability and population performance. We found that the performance of 93 populations of 34 plant species worldwide, as measured by in situ population growth rate, its temporal variation and extinction risk, was not correlated with climate suitability. However, correlations of the demographic processes underpinning population performance with climate suitability indicated both resistance and vulnerability pathways of population responses to climate: in less suitable climates, plants experienced greater retrogression (resistance pathway) and greater variability in some demographic rates (vulnerability pathway). While a range of demographic strategies occur within species' climatic niches, demographic strategies are more constrained in climates predicted to be less suitable. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  11. L∞ Variational Problems with Running Costs and Constraints

    International Nuclear Information System (INIS)

    Aronsson, G.; Barron, E. N.

    2012-01-01

    Various approaches are used to derive the Aronsson–Euler equations for L∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson–Euler equation for the basic L∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  12. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well suited for describing weakly constrained double fields, and that any weakly constrained field can be represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transformation and a gauge invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.

  13. Operator approach to solutions of the constrained BKP hierarchy

    International Nuclear Information System (INIS)

    Shen, Hsin-Fu; Lee, Niann-Chern; Tu, Ming-Hsien

    2011-01-01

    The operator formalism for the vector k-constrained BKP hierarchy is presented. We solve the Hirota bilinear equations of the vector k-constrained BKP hierarchy via the method of neutral free fermions. In particular, by choosing suitable group elements of O(∞), we construct rational and soliton solutions of the vector k-constrained BKP hierarchy.

  14. Benefit from the minimally invasive sinus technique.

    Science.gov (United States)

    Salama, N; Oakley, R J; Skilbeck, C J; Choudhury, N; Jacob, A

    2009-02-01

    Sinus drainage is impeded by the transition spaces that the anterior paranasal sinuses drain into, not the ostia themselves. Addressing the transition spaces and leaving the ostia intact, using the minimally invasive sinus technique, should reverse chronic rhinosinusitis. To assess patient benefit following use of the minimally invasive sinus technique for chronic rhinosinusitis. One hundred and forty-three consecutive patients underwent the minimally invasive sinus technique for chronic rhinosinusitis. Symptoms (i.e. blocked nose, poor sense of smell, rhinorrhoea, post-nasal drip, facial pain and sneezing) were recorded using a visual analogue scale, pre-operatively and at six and 12 weeks post-operatively. Patients were also surveyed using the Glasgow benefit inventory, one and three years post-operatively. We found a significant reduction in all nasal symptom scores at six and 12 weeks post-operatively, and increased total quality of life scores at one and three years post-operatively (25.2 and 14.8, respectively). The patient benefits of treatment with the minimally invasive sinus technique compare with the published patient benefits for functional endoscopic sinus surgery.

  15. Branch xylem density variations across the Amazon Basin

    Directory of Open Access Journals (Sweden)

    S. Patiño

    2009-04-01

    Xylem density is a physical property of wood that varies between individuals, species and environments. It reflects the physiological strategies of trees that lead to growth, survival and reproduction. Measurements of branch xylem density, ρx, were made for 1653 trees representing 598 species, sampled from 87 sites across the Amazon basin. Measured values ranged from 218 kg m−3 for a Cordia sagotii (Boraginaceae) from Montagne de Tortue, French Guiana, to 1130 kg m−3 for an Aiouea sp. (Lauraceae) from Caxiuanã, central Pará, Brazil. Analysis of variance showed significant differences in average ρx across regions and sampled plots as well as significant differences between families, genera and species. A partitioning of the total variance in the dataset showed that species identity (family, genus and species) accounted for 33%, with environment (geographic location and plot) accounting for an additional 26%; the remaining "residual" variance accounted for 41% of the total. Variations in plot means were, however, not accountable by differences in species composition alone, because the xylem density of the most widely distributed species in our dataset varied systematically from plot to plot. Thus, as well as having a genetic component, branch xylem density is a plastic trait that, for any given species, varies in a predictable manner according to where the tree is growing. Within the analysed taxa, exceptions to this general rule seem to be pioneer species, belonging for example to the Urticaceae, whose branch xylem density is more constrained than that of most species sampled in this study. These patterns of variation of branch xylem density across Amazonia suggest a large functional diversity amongst Amazonian trees which is not well understood.

  16. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with limited buffer capacity. First, the solution in the algorithm is represented as a discrete job permutation that converts directly to an active schedule. Then, we present a simple and effective scheme called best insertion for the employed and onlooker bees, and introduce a combined local search exploring both the insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances; the computations and comparisons show that the proposed algorithm not only solves the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also performs better than two recently proposed discrete artificial bee colony algorithms.
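
    The best-insertion scheme is easy to state in code: remove a job and re-insert it wherever the total flow time is smallest. The sketch below evaluates a plain permutation flow shop and ignores the limited-buffer blocking that the paper's schedule evaluation handles:

    ```python
    def total_flow_time(perm, proc):
        """Sum of job completion times; proc[job][machine] = processing time."""
        m = len(proc[0])
        comp = [0.0] * m                   # completion times on each machine
        total = 0.0
        for j in perm:
            for k in range(m):
                start = max(comp[k], comp[k - 1] if k else 0.0)
                comp[k] = start + proc[j][k]
            total += comp[-1]
        return total

    def best_insertion(perm, job, proc):
        """Re-insert `job` at the position minimizing total flow time."""
        rest = [j for j in perm if j != job]
        candidates = [rest[:i] + [job] + rest[i:] for i in range(len(rest) + 1)]
        return min(candidates, key=lambda p: total_flow_time(p, proc))

    proc = [[3, 2], [1, 4], [2, 2]]        # 3 jobs, 2 machines (toy data)
    print(best_insertion([0, 1, 2], job=2, proc=proc))
    ```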

  17. A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Antonio Costa

    2014-07-01

    Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure that integrates features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, comprising problems with two, three, and six machines, is employed both to tune the relevant parameters of the developed procedure and to assess its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.

  18. Seasonal Variation of Total Mercury Burden in the American Alligator (Alligator Mississippiensis) at Merritt Island National Wildlife Refuge (MINWR), Florida

    Science.gov (United States)

    Nilsen, Frances M.; Dorsey, Jonathan E.; Long, Stephen E.; Schock, Tracey B.; Bowden, John A.; Lowers, Russell H.; Guillette, Louis J., Jr.

    2016-01-01

    Seasonal variation of mercury (Hg) is not well studied in free-ranging wildlife. Atmospheric deposition patterns of Hg have been studied in detail and have been modeled for both global and specific locations with great accuracy, and they correlate with environmental impact. However, monitoring these trends in wildlife is complicated because local environmental parameters (e.g., rainfall, humidity, pH, bacterial composition) can affect the transformation of atmospheric Hg to its biologically available forms. Here, we utilized an abundant and healthy population of American alligators (Alligator mississippiensis) at Merritt Island National Wildlife Refuge (MINWR), FL, and assessed Hg burden in whole blood samples over a span of 7 years (2007-2014; n = 174) in order to assess seasonal variation of total [Hg]. While the majority of this population is assumed healthy, 18 individuals with low body mass indices (BMI, as defined in this study) were captured throughout the 7-year sampling period. These individual alligators exhibited [Hg] that was not consistent with the observed overall seasonal [Hg] variation and was statistically different from that of the healthy population. The alligators with low BMI had elevated concentrations of Hg compared to their age/sex/season-matched counterparts with normal BMI. Statistically significant differences were found between the winter and spring seasons for animals with normal BMI. The data in this report support the conclusion that organismal total [Hg] fluctuates directly with seasonal deposition rates as well as with other seasonal environmental parameters, such as average rainfall and prevailing wind direction. This study highlights the unique environment of MINWR for permitting annual assessment of apex predators, such as the American alligator, to determine the detailed environmental impact of contaminants of concern.

  19. MINIMIZING GLOVEBOX GLOVE BREACHES, PART IV: CONTROL CHARTS

    International Nuclear Information System (INIS)

    Cournoyer, Michael E.; Lee, Michelle B.; Schreiber, Stephen B.

    2007-01-01

    At the Los Alamos National Laboratory (LANL) Plutonium Facility, plutonium isotopes and other actinides are handled in a glovebox environment. The spread of radiological contamination, and excursions of contaminants into the worker's breathing zone, are minimized and/or prevented through the use of glovebox technology. Within the glovebox configuration, the gloves are the most vulnerable part of this engineering control. Recognizing this vulnerability, the Glovebox Glove Integrity Program (GGIP) was developed to minimize and/or prevent unplanned openings in the glovebox environment, i.e., glove failures and breaches. In addition, LANL implemented the Lean Six Sigma (LSS) program, which incorporates the practices of Lean Manufacturing and Six Sigma technologies and tools to effectively improve administrative and engineering controls and work processes. One tool used in LSS is the control chart, which is an effective way to characterize data collected from unplanned openings in the glovebox environment. The benefit management receives from using this tool is twofold. First, control charts signal the absence or presence of systematic variations that result in process instability, in relation to glovebox glove breaches and failures. Second, these graphical representations of process variation determine whether an improved process is under control. Further, control charts are used to identify statistically significant variations (trends) that can be used in decision making to improve processes. This paper discusses performance indicators assessed by the use of control charts, provides examples of control charts, and shows how managers use the results to make decisions. This effort contributes to the LANL Continuous Improvement Program by improving the efficiency, cost effectiveness, and formality of glovebox operations.
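
    For count data such as monthly numbers of unplanned openings, a natural choice is a c-chart with limits c̄ ± 3√c̄; a sketch with invented counts:

    ```python
    import numpy as np

    # c-chart for counts of unplanned glovebox openings per month; the lower
    # control limit is clipped at zero. The counts below are illustrative only.
    breaches = np.array([4, 2, 5, 3, 6, 2, 4, 3, 11, 2, 4, 3])

    c_bar = breaches.mean()
    ucl = c_bar + 3 * np.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))

    print(f"center={c_bar:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
    for month, c in enumerate(breaches, 1):
        if c > ucl or c < lcl:
            print(f"month {month}: {c} breaches -> out of control, investigate")
    ```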

  20. Episode of Care Payments in Total Joint Arthroplasty and Cost Minimization Strategies.

    Science.gov (United States)

    Nwachukwu, Benedict U; O'Donnell, Evan; McLawhorn, Alexander S; Cross, Michael B

    2016-02-01

    Total joint arthroplasty (TJA) is receiving significant attention in the US health care system for cost containment strategies. Specifically, payer organizations have embraced and are implementing bundled payment schemes in TJA. Consequently, hospitals and providers involved in the TJA care cycle have sought to adapt to the new financial pressures imposed by episode-of-care payment models by analyzing which components of the total "event" of a TJA are most essential to achieving a good outcome. As part of this review, we analyze and discuss a health economic study by Snow et al., in which the authors aimed to understand the association between preoperative physical therapy (PT) and post-acute care resource utilization, and its effect on the total cost of care during total joint arthroplasty. The purpose of the current review is therefore to (1) describe and analyze the findings presented by Snow et al. and (2) provide a framework for analyzing and critiquing economic analyses in orthopedic surgery. The study under review, while having important strengths, has several notable limitations that are important to keep in mind when making policy and coverage decisions. We support cautious interpretation and application of the study results, and we encourage continued attention to economic analysis in orthopedics as well as continued care path redesign to maximize value for patients and health care providers.

  1. Variational multiscale models for charge transport.

    Science.gov (United States)

    Wei, Guo-Wei; Zheng, Qiong; Chen, Zhan; Xia, Kelin

    2012-01-01

    This work presents a few variational multiscale models for charge transport in complex physical, chemical and biological systems and engineering devices, such as fuel cells, solar cells, battery cells, nanofluidics, transistors and ion channels. An essential ingredient of the present models, introduced in an earlier paper (Bulletin of Mathematical Biology, 72, 1562-1622, 2010), is the use of differential geometry theory of surfaces as a natural means to geometrically separate the macroscopic domain from the microscopic domain, meanwhile, dynamically couple discrete and continuum descriptions. Our main strategy is to construct the total energy functional of a charge transport system to encompass the polar and nonpolar free energies of solvation, and chemical potential related energy. By using the Euler-Lagrange variation, coupled Laplace-Beltrami and Poisson-Nernst-Planck (LB-PNP) equations are derived. The solution of the LB-PNP equations leads to the minimization of the total free energy, and explicit profiles of electrostatic potential and densities of charge species. To further reduce the computational complexity, the Boltzmann distribution obtained from the Poisson-Boltzmann (PB) equation is utilized to represent the densities of certain charge species so as to avoid the computationally expensive solution of some Nernst-Planck (NP) equations. Consequently, the coupled Laplace-Beltrami and Poisson-Boltzmann-Nernst-Planck (LB-PBNP) equations are proposed for charge transport in heterogeneous systems. A major emphasis of the present formulation is the consistency between equilibrium LB-PB theory and non-equilibrium LB-PNP theory at equilibrium. Another major emphasis is the capability of the reduced LB-PBNP model to fully recover the prediction of the LB-PNP model at non-equilibrium settings. To account for the fluid impact on the charge transport, we derive coupled Laplace-Beltrami, Poisson-Nernst-Planck and Navier-Stokes equations from the variational principle
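
    For readers unfamiliar with the acronyms, a standard simplified form of the Poisson-Nernst-Planck (PNP) system is sketched below; the paper's LB-PNP model additionally couples these equations to a Laplace-Beltrami equation for the solvent-solute interface and to the solvation free energies, which are omitted here.

```latex
% Simplified PNP system; q_i, c_i, D_i are the charge, concentration
% and diffusivity of species i, and rho_f is the fixed charge density.
\begin{aligned}
  -\nabla\cdot\bigl(\epsilon(\mathbf{r})\,\nabla\Phi\bigr)
      &= \rho_f + \sum_i q_i\,c_i
      &&\text{(Poisson)}\\[2pt]
  \frac{\partial c_i}{\partial t}
      &= \nabla\cdot\Bigl[D_i\Bigl(\nabla c_i
         + \frac{q_i\,c_i}{k_B T}\,\nabla\Phi\Bigr)\Bigr]
      &&\text{(Nernst--Planck)}
\end{aligned}
```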

  3. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), the mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal not only with nonlinearities in the objective function, but also with uncertainties presented as discrete intervals in the objective function, variables, and left-hand side constraints, and with fuzziness in the right-hand side constraints. Moreover, this model improves upon conventional fuzzy chance-constrained programming by introducing a linear combination of the possibility measure and the necessity measure with varying preference parameters. To demonstrate its applicability, the model is applied to a case study in the middle reaches of the Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions over the whole growth period under uncertainty. More flexible solutions can therefore be generated for optimal irrigation water allocation, and the variation of the results can be examined by specifying different confidence levels and preference parameters. The model also reflects the interrelationships among system benefits, preference parameters, confidence levels, and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former can reflect more of the complexities and uncertainties of practical application. These results can provide a more reliable scientific basis for supporting irrigation water management in arid areas.
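
    The key construction is compact enough to state: the mλ-measure interpolates between the possibility measure Pos and the necessity measure Nec, and the fuzzy chance constraints are imposed at a confidence level α. The notation below is a generic illustration based on the abstract, not a transcription of the paper:

```latex
m_\lambda(A) \;=\; \lambda\,\mathrm{Pos}(A) + (1-\lambda)\,\mathrm{Nec}(A),
\qquad \lambda \in [0,1],
\qquad\text{with constraints}\quad
m_\lambda\{\,g(x) \le \tilde{b}\,\} \;\ge\; \alpha .
```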

  4. Advanced Purex process and waste minimization at La Hague

    International Nuclear Information System (INIS)

    Masson, H.; Nouguier, H.; Bernard, C.; Runge, S.

    1993-01-01

    After a brief review of the different aspects of commercial irradiated fuel reprocessing, this paper presents the achievements of the recently commissioned UP3 plant at La Hague. The advanced Purex process, implemented with total waste management, results in important waste volume minimization, so that the total volume of high-level and transuranic waste is lower than it would be in a once-through cycle. Moreover, further minimization is still possible, based on improved waste management. Cogema has launched the necessary program, which will lead to an overall volume of HLW and TRU wastes of less than 1 m³/t by the end of the decade, with the maximum possible activity concentrated in the glass.

  5. Hybrid genetic algorithm for minimizing non productive machining ...

    African Journals Online (AJOL)


    The movement of the tool is synchronized with the help of these CNC codes. Total ... A lot of work has been reported on minimizing the non-productive time by ..... Optimal path for automated drilling operations by a new heuristic approach using particle.
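
    Since the abstract above is only an excerpt, the following is a generic, hypothetical illustration of the idea rather than the paper's algorithm: the hole-visiting order is encoded as a permutation and evolved with order crossover and swap mutation to shorten the rapid (non-productive) tool travel. All coordinates and GA parameters are invented.

```python
# Toy genetic algorithm for ordering drill holes to minimize
# non-productive (rapid-traverse) tool travel; TSP-style encoding.
import math
import random

HOLES = [(0, 0), (4, 1), (1, 5), (6, 4), (2, 2), (5, 6), (7, 1)]

def travel(order):
    """Total rapid-traverse distance when visiting holes in this order."""
    return sum(math.dist(HOLES[a], HOLES[b]) for a, b in zip(order, order[1:]))

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest from p2."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    return [fill.pop(0) if g is None else g for g in child]

def mutate(order, rate=0.2):
    """Swap two holes with probability `rate`."""
    if random.random() < rate:
        a, b = random.sample(range(len(order)), 2)
        order[a], order[b] = order[b], order[a]
    return order

random.seed(1)
pop = [random.sample(range(len(HOLES)), len(HOLES)) for _ in range(40)]
for _ in range(200):
    pop.sort(key=travel)                       # rank tours by travel length
    elite = pop[:10]                           # keep the shortest tours
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
best = min(pop, key=travel)
print(best, round(travel(best), 2))
```

    A hybrid variant, as the title suggests, would refine the GA's best tours with a deterministic local search such as 2-opt.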

  6. Variation in the cost of care for primary total knee arthroplasties

    Directory of Open Access Journals (Sweden)

    Derek A. Haas, MBA

    2017-03-01

    Conclusions: The large variation in costs among sites suggests multiple major opportunities to transfer knowledge about process and productivity improvements that lower costs while maintaining or improving outcomes.

  7. Dynamically constrained ensemble perturbations – application to tides on the West Florida Shelf

    Directory of Open Access Journals (Sweden)

    F. Lenartz

    2009-07-01

    A method is presented to create an ensemble of perturbations that satisfies linear dynamical constraints. A cost function is formulated defining the probability of each perturbation. It is shown that the perturbations created with this approach take the land-sea mask into account in a similar way to variational analysis techniques. The impact of the land-sea mask is illustrated with an idealized configuration of a barrier island. Perturbations with a spatially variable correlation length can also be created by this approach. The method is applied to a realistic configuration of the West Florida Shelf to create perturbations of the M2 tidal parameters for elevation and depth-averaged currents. The perturbations are weakly constrained to satisfy the linear shallow-water equations. Although the constraint is derived from an idealized assumption, it is shown that this approach is applicable to a non-linear and baroclinic model. The amplitude of spurious transient motions created by constrained perturbations of the initial and boundary conditions is significantly lower than when the variables are perturbed independently, or when only the momentum equation is used to compute the velocity perturbations from the elevation.
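
    A strong-constraint version of this idea is easy to sketch: draw correlated Gaussian perturbations and project them onto the null space of a linear constraint operator, so that every ensemble member satisfies the constraint exactly. The paper instead imposes the shallow-water equations weakly through a cost function, so the sketch below (with an arbitrary constraint matrix and a Gaussian correlation model, both invented) only conveys the flavor of the approach.

```python
# Ensemble perturbations satisfying a linear constraint A x = 0 exactly,
# via null-space projection. Dimensions and operators are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 5                          # state size, number of constraints

A = rng.standard_normal((m, n))       # stand-in linear constraint operator

# Gaussian covariance giving the perturbations a correlation length of 5.
x = np.arange(n)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 5.0**2))
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

# Orthonormal basis of the null space of A from the SVD.
_, _, Vt = np.linalg.svd(A)
N = Vt[m:].T                          # columns span {x : A x = 0}

ensemble = [N @ (N.T @ (L @ rng.standard_normal(n))) for _ in range(20)]
print(np.abs(A @ ensemble[0]).max())  # ~1e-13: constraint holds
```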

  8. Cost Minimization Model of Gas Transmission Line for Indonesian SIJ Pipeline Network

    Directory of Open Access Journals (Sweden)

    Septoratno Siregar

    2003-05-01

    Optimization of the Indonesian SIJ gas pipeline network is discussed here. Optimum pipe diameters, together with the corresponding pressure distribution, are obtained by minimizing a total cost function consisting of investment and operating costs, subject to physical constraints (the Panhandle A and Panhandle B equations). An iteration technique based on generalized steepest descent and the fourth-order Runge-Kutta method is used. The resulting diameters from this continuous optimization are then rounded to the closest available discrete sizes. We have also calculated the toll fee along each segment and the safety factor of the network by determining the pipe wall thickness, using the ANSI B31.8 standard. A sensitivity analysis of the toll fee under variations of flow rate is presented. The results give the diameter, compressor size, and compressor location that are feasible for the SIJ pipeline project, and also indicate that the east route is relatively less expensive than the west route.
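
    The trade-off driving the optimization is that investment cost grows with pipe diameter while compression (operating) cost falls with it, since a larger pipe means a smaller pressure drop. A single-segment caricature is sketched below; the constants and the generic D⁻⁵ pressure-drop scaling are invented stand-ins for the Panhandle A/B relations used in the paper.

```python
# Schematic diameter optimization for one pipeline segment:
# total cost = investment (rises with D) + compression (falls with D).
from scipy.optimize import minimize_scalar

L_km = 100.0      # segment length (hypothetical)
a = 2.0e4         # investment cost coefficient, hypothetical
b = 5.0e9         # compression cost coefficient, hypothetical

def total_cost(d):
    capex = a * d * L_km           # pipe cost ~ diameter * length
    opex = b * L_km / d**5         # pumping cost ~ pressure drop ~ D^-5
    return capex + opex

res = minimize_scalar(total_cost, bounds=(4.0, 48.0), method="bounded")
print(f"optimal diameter ~ {res.x:.1f} in, total cost ~ {res.fun:.3e}")
```

    As in the paper, a continuous optimum like this would then be rounded to the nearest available discrete pipe size.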

  9. Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

    Science.gov (United States)

    O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao

    2017-07-01

    Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
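
    The recipe is concrete in the simplest case: constraining only the mean abundance and maximizing entropy yields an exponential (Boltzmann) distribution over abundances, with a Lagrange multiplier fixed by the constraint. The toy sketch below, with invented numbers and a truncated support, solves for that multiplier numerically.

```python
# MaxEnt with a single state-variable constraint (mean abundance):
# p(n) ~ exp(-lambda * n); solve for lambda matching the observed mean.
import numpy as np
from scipy.optimize import brentq

N_max = 10_000                 # truncation of the abundance support
mean_n = 50.0                  # observed mean abundance (hypothetical)
n = np.arange(1, N_max + 1)

def mean_given_lambda(lam):
    w = np.exp(-lam * n)
    return (n * w).sum() / w.sum()

lam = brentq(lambda l: mean_given_lambda(l) - mean_n, 1e-6, 5.0)
p = np.exp(-lam * n)
p /= p.sum()
print(f"lambda = {lam:.5f}, check mean = {(n * p).sum():.2f}")
```

    A mechanistic model would then be judged by what it predicts beyond this constrained-null baseline.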

  10. From Cavendish to PLANCK: Constraining Newton's gravitational constant with CMB temperature and polarization anisotropy

    International Nuclear Information System (INIS)

    Galli, Silvia; Melchiorri, Alessandro; Smoot, George F.; Zahn, Oliver

    2009-01-01

    We present new constraints on cosmic variations of Newton's gravitational constant by making use of the latest CMB data from the WMAP, BOOMERANG, CBI and ACBAR experiments and of independent constraints coming from big bang nucleosynthesis. We find that current CMB data provide constraints at the ∼10% level, which can be improved to ∼3% by including big bang nucleosynthesis data. We show that future data expected from the Planck satellite could constrain G at the ∼1.5% level, while an ultimate, cosmic-variance-limited CMB experiment could reach a precision of about 0.4%, competitive with current laboratory measurements.

  11. Specialized minimal PDFs for optimized LHC calculations

    CERN Document Server

    Carrazza, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-04-15

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically on their correlations, and that they are robust under smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top quark pair, and electroweak gauge boson physics, and determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4 and 11 Hessian eigenvectors respectively are enough to fully describe the corresp...
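
    For context, the Hessian PDF uncertainty that the SM-PDF sets are built to preserve is conventionally computed from eigenvector pairs with the standard symmetric formula (generic notation, not the paper's):

```latex
\Delta F \;=\; \frac{1}{2}\,
\sqrt{\sum_{k=1}^{N_{\mathrm{eig}}}\bigl[F(S_k^{+}) - F(S_k^{-})\bigr]^{2}} ,
```

    so cutting the number of eigenvectors from the prior set's count down to around 4-11 directly cuts the number of evaluations of the observable F.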

  12. Cosmic chronometers: constraining the equation of state of dark energy. I: H(z) measurements

    International Nuclear Information System (INIS)

    Stern, Daniel; Jimenez, Raul; Verde, Licia; Kamionkowski, Marc; Stanford, S. Adam

    2010-01-01

    We present new determinations of the cosmic expansion history from red-envelope galaxies. We have obtained for this purpose high-quality spectra with the Keck-LRIS spectrograph of red-envelope galaxies in 24 galaxy clusters in the redshift range 0.2 < z < 1.0. From these data we obtain determinations of the Hubble parameter at z ≅ 0.5 and H(z) = 90 ± 40 km sec⁻¹ Mpc⁻¹ at z ≅ 0.9. We discuss the uncertainty in the expansion history determination that arises from uncertainties in the synthetic stellar-population models. We then use these new measurements in concert with cosmic-microwave-background (CMB) measurements to constrain cosmological parameters, with a special emphasis on dark-energy parameters and constraints on the curvature. In particular, we demonstrate the usefulness of direct H(z) measurements by constraining the dark-energy equation of state parameterized by w_0 and w_a and allowing for arbitrary curvature. Further, using only CMB and H(z) data, we constrain the number of relativistic degrees of freedom to be 4 ± 0.5 and their total mass to be < 0.2 eV, both at 1σ.
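
    The differential-age ("cosmic chronometer") idea behind these measurements rests on a single standard relation: the age difference dt between passively evolving galaxies separated by a small redshift interval dz gives the expansion rate directly,

```latex
H(z) \;=\; \frac{\dot{a}}{a} \;=\; -\,\frac{1}{1+z}\,\frac{dz}{dt} .
```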

  13. Comparative study of image restoration techniques in forensic image processing

    Science.gov (United States)

    Bijhold, Jurrien; Kuijper, Arjan; Westhuis, Jaap-Harm

    1997-02-01

    In this work we investigated the forensic applicability of some state-of-the-art image restoration techniques for digitized video-images and photographs: classical Wiener filtering, constrained maximum entropy, and some variants of constrained minimum total variation. Basic concepts and experimental results are discussed. Because the methods produced different results, we discuss which method is most suitable depending on the image objects in question, the available prior knowledge, and the type of blur and noise. Constrained minimum total variation methods produced the best results for test images with simulated noise and blur. In cases where images are the most substantial part of the evidence, constrained maximum entropy might be more suitable, because its theoretical basis predicts a restoration result that shows the most likely pixel values, given all the prior knowledge used during restoration.
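
    The flavor of the total-variation restoration compared in this study can be reproduced with off-the-shelf tools. The sketch below uses scikit-image's Chambolle TV denoiser; note that this is the unconstrained, weight-penalized (ROF-type) formulation rather than the constrained minimum-TV variants the paper evaluates, and the noise level is invented.

```python
# TV denoising of a standard test image with synthetic Gaussian noise.
import numpy as np
from skimage import data, util
from skimage.restoration import denoise_tv_chambolle

image = util.img_as_float(data.camera())
noisy = image + 0.1 * np.random.default_rng(0).standard_normal(image.shape)

# Larger weight => stronger smoothing (more TV regularization).
restored = denoise_tv_chambolle(noisy, weight=0.1)
print(f"RMS error before: {np.sqrt(np.mean((noisy - image)**2)):.3f}, "
      f"after: {np.sqrt(np.mean((restored - image)**2)):.3f}")
```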

  14. An in vitro analysis of medial structures and a medial soft tissue reconstruction in a constrained condylar total knee arthroplasty.

    Science.gov (United States)

    Athwal, Kiron K; El Daou, Hadi; Inderhaug, Eivind; Manning, William; Davies, Andrew J; Deehan, David J; Amis, Andrew A

    2017-08-01

    The aim of this study was to quantify the medial soft tissue contributions to stability following constrained condylar (CC) total knee arthroplasty (TKA) and to determine whether a medial reconstruction could restore stability to a soft tissue-deficient, CC-TKA knee. Eight cadaveric knees were mounted in a robotic system and tested at 0°, 30°, 60°, and 90° of flexion with ±50 N anterior-posterior force, ±8 Nm varus-valgus moment, and ±5 Nm internal-external torque. The deep and superficial medial collateral ligaments (dMCL, sMCL) and posteromedial capsule (PMC) were transected and their relative contributions to stabilising the applied loads were quantified. After complete medial soft tissue transection, a reconstruction using a semitendinosus tendon graft was performed, and the effect on kinematic behaviour under equivalent loading conditions was measured. In the CC-TKA knee, the sMCL was the major medial restraint in anterior drawer, internal-external, and valgus rotation. No significant differences were found between the rotational laxities of the reconstructed knee and those of the pre-deficient state for the arc of motion examined. The relative contribution of the reconstruction was higher in valgus rotation at 60° than that of the sMCL; otherwise, the contribution of the reconstruction was similar to that of the sMCL. There is contention over whether a CC-TKA can function with medial deficiency or whether more constraint is required. This work has shown that a CC-TKA may not provide enough stability with an absent sMCL. However, in such cases, combining the CC-TKA with a medial soft tissue reconstruction may be considered as an alternative to a hinged implant.

  15. Consequences of "Minimal" Group Affiliations in Children

    Science.gov (United States)

    Dunham, Yarrow; Baron, Andrew Scott; Carey, Susan

    2011-01-01

    Three experiments (total N = 140) tested the hypothesis that 5-year-old children's membership in randomly assigned "minimal" groups would be sufficient to induce intergroup bias. Children were randomly assigned to groups and engaged in tasks involving judgments of unfamiliar in-group or out-group children. Despite an absence of information…

  16. Variations in the small-scale galactic magnetic field and short time-scale intensity variations of extragalactic radio sources

    International Nuclear Information System (INIS)

    Simonetti, J.H.

    1985-01-01

    Structure functions of the Faraday rotation measures (RMs) of extragalactic radio sources are used to investigate variations in the interstellar magnetic field on length scales of ≈0.01 to 100 pc. Model structure functions, derived assuming a power-law power spectrum of irregularities in n_e B, are compared with those observed. The results indicate an outer angular scale for RM variations of ≲5° and evidence for RM variations on scales as small as 1'. Differences in the variance of n_e B fluctuations for various lines of sight through the Galaxy are found. Comparison of pulsar scintillations in right- and left-circular polarizations yields an upper limit to the variations in n_e on a length scale of ≈10¹¹ cm. RMs were determined through high-velocity molecular flows in galactic star-formation regions, with the goal of constraining magnetic fields in and near the flows. The RMs of 7 extragalactic sources within a ≈20 arcmin wide area seen through Cep A fall into two groups separated by ≈150 rad m⁻², a large separation given our knowledge of RM variations on small angular scales, and possibly a result of the anisotropy of the high-velocity material.
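
    For reference, the structure function used in this kind of analysis is the mean-squared RM difference as a function of angular separation (standard definition):

```latex
D_{\mathrm{RM}}(\delta\theta) \;=\;
\bigl\langle\, \bigl[\mathrm{RM}(\theta) - \mathrm{RM}(\theta + \delta\theta)\bigr]^{2} \,\bigr\rangle ,
```

    whose slope, for a power-law spectrum of n_e B irregularities, is tied to the spectral index of the underlying turbulence.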

  17. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that show great promise in limited-data cone-beam CT reconstruction, with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms, and an appropriate way of selecting the values of each individual parameter is suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm preserves the edges of the reconstructed images better while having fewer sensitive parameters to tune.
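
    The algorithms named here share a two-phase skeleton: a data-consistency update (projection onto convex sets, e.g. an ART sweep plus a positivity clamp) alternating with a few steepest-descent steps on the image's total variation. The sketch below shows only that shared skeleton; the toy system matrix, step sizes, and iteration counts are invented, and a real CBCT code would use a forward/back-projector rather than an explicit matrix.

```python
# Two-phase TV reconstruction skeleton: ART/POCS data step + TV descent.
import numpy as np

rng = np.random.default_rng(0)
side = 8
n = side * side                             # flattened image size
A = rng.standard_normal((40, n))            # toy "projection" matrix
x_true = np.zeros((side, side))
x_true[2:6, 2:6] = 1.0                      # piecewise-constant phantom
b = A @ x_true.ravel()

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic TV of a 2-D image."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
    return -div

x = np.zeros(n)
for _ in range(50):
    for ai, bi in zip(A, b):                # ART (Kaczmarz) sweep
        x += (bi - ai @ x) / (ai @ ai) * ai
    x = np.clip(x, 0, None)                 # positivity projection (POCS)
    img = x.reshape(side, side)
    for _ in range(10):                     # TV steepest-descent steps
        g = tv_gradient(img)
        img -= 0.02 * g / (np.linalg.norm(g) + 1e-12)
    x = img.ravel()

print("data residual:", np.linalg.norm(A @ x - b))
```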

  18. On well-posedness of variational models of charged drops.

    Science.gov (United States)

    Muratov, Cyrill B; Novaga, Matteo

    2016-03-01

    Electrified liquids are well known to be prone to a variety of interfacial instabilities that result in the onset of apparent interfacial singularities and liquid fragmentation. In the case of electrically conducting liquids, one of the basic models describing the equilibrium interfacial configurations and the onset of instability assumes the liquid to be equipotential and interprets those configurations as local minimizers of the energy consisting of the sum of the surface energy and the electrostatic energy. Here we show that, surprisingly, this classical geometric variational model is mathematically ill-posed irrespective of the degree to which the liquid is electrified. Specifically, we demonstrate that an isolated spherical droplet is never a local minimizer, no matter how small is the total charge on the droplet, as the energy can always be lowered by a smooth, arbitrarily small distortion of the droplet's surface. This is in sharp contrast to the experimental observations that a critical amount of charge is needed in order to destabilize a spherical droplet. We discuss several possible regularization mechanisms for the considered free boundary problem and argue that well-posedness can be restored by the inclusion of the entropic effects resulting in finite screening of free charges.
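
    In a common normalization (an illustration of the model class, not necessarily the paper's exact notation), the functional in question is the surface energy plus the electrostatic energy of a conductor carrying total charge Q:

```latex
E(\Omega) \;=\; \sigma\,\mathrm{Per}(\Omega) \;+\; \frac{Q^{2}}{2\,\mathrm{cap}(\Omega)},
\qquad |\Omega| \ \text{fixed},
```

    where Per(Ω) is the surface area and cap(Ω) the electrostatic capacity of the droplet; the ill-posedness result says that the ball fails to be a local minimizer of this energy for every Q > 0.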

  19. A Brief Study of Variation Theory in Quality Management

    Directory of Open Access Journals (Sweden)

    Mostafa Farah Bakhsh

    2016-06-01

    Variation is part of everyday life and exists all the time. Variation is the product of differences: differences in the nature of processes result in different products over time. Proper diagnosis of variation patterns is necessary to minimize loss. Continuous quality improvement can be regarded as the successive reduction of performance variation in order to deliver high-quality products to customers. In Deming's view, quality deviation is classified into two groups: common causes and special causes. Variation is not a new word, but understanding of and concern about it are modern. The first step in managing performance variation is accepting that variation exists. For proper management of variations, appropriate tools should be used to detect and display them. Control charts are useful tools for recognizing, analyzing, and removing variations in process performance.

  20. How will greenhouse gas emissions from motor vehicles be constrained in China around 2030?

    International Nuclear Information System (INIS)

    Zheng, Bo; Zhang, Qiang; Borken-Kleefeld, Jens; Huo, Hong; Guan, Dabo; Klimont, Zbigniew; Peters, Glen P.; He, Kebin

    2015-01-01

    Highlights: • We build a projection model to predict vehicular GHG emissions on a provincial basis. • Fuel efficiency gains cannot constrain vehicle GHGs in major southern provinces. • We propose an integrated policy set through sensitivity analysis of policy options. • The policy set will peak GHG emissions of 90% of provinces and the whole of China by 2030. - Abstract: Increasing emissions from road transportation endanger China's objective to reduce national greenhouse gas (GHG) emissions. The unconstrained growth of vehicle GHG emissions is mainly caused by insufficient improvement in energy efficiency (kilometers traveled per unit energy use) under current policies, which cannot offset the explosive growth of vehicle activity in China, especially in the major southern provinces. More stringent policies are required to reduce GHG emissions in these provinces and thereby help constrain national total emissions. In this work, we make a provincial-level projection of vehicle growth, energy demand, and GHG emissions to evaluate vehicle GHG emission trends under various policy options in China and to determine how to constrain national emissions. Through sensitivity analysis of individual policies, we propose an integrated policy set to ensure that the objective of peaking national vehicle GHG emissions is achieved around 2030. The integrated policy involves decreasing the use of urban light-duty vehicles by 25%, improving fuel economy by 25% by 2035 compared with 2020, and promoting electric vehicles and biofuels. These stringent new policies would allow China to constrain GHG emissions from the road transport sector around 2030. This work provides a perspective for understanding vehicle GHG emission growth patterns in China's provinces and proposes a strong policy combination to constrain national GHG emissions, which can support the achievement of the peak in GHG emissions by 2030 promised by the Chinese government.