WorldWideScience

Sample records for constrained total-variation minimization

  1. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization

    International Nuclear Information System (INIS)

    Sidky, Emil Y.; Pan, Xiaochuan

    2008-01-01

    An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories.
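
    A minimal numpy sketch of the alternation described above (not the authors' released code): a gradient-projection step toward data consistency plus a nonnegativity projection (the POCS part), followed by a few steepest-descent steps on a smoothed TV whose step size adapts to the size of the preceding POCS move. The operator A, all step parameters, and the periodic-boundary TV gradient are illustrative assumptions.

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    # Gradient of a smoothed isotropic TV of a 2D image
    # (periodic borders kept only for brevity of the sketch).
    dx = np.roll(u, -1, axis=1) - u
    dy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

def asd_pocs(A, b, shape, n_outer=100, n_tv=20, alpha=0.2):
    # Alternate data-consistency + nonnegativity projections with
    # TV steepest descent, the TV step tied to the POCS move size.
    u = np.zeros(A.shape[1])
    lip = np.linalg.norm(A, 2) ** 2          # step bound for the data term
    for _ in range(n_outer):
        u_prev = u.copy()
        u = u + A.T @ (b - A @ u) / lip      # move toward A u = b
        u = np.maximum(u, 0.0)               # nonnegativity constraint
        dp = np.linalg.norm(u - u_prev)      # length of the POCS move
        img = u.reshape(shape)
        for _ in range(n_tv):
            g = tv_grad(img)
            img = img - alpha * dp * g / (np.linalg.norm(g) + 1e-12)
        u = img.ravel()
    return u.reshape(shape)
```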

  2. Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization

    DEFF Research Database (Denmark)

    Rose, Sean; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    Purpose: The authors develop and investigate iterative image reconstruction algorithms based on data-discrepancy minimization with a total-variation (TV) constraint. The various algorithms are derived with different data-discrepancy measures reflecting the maximum likelihood (ML) principle. Simulations demonstrate the iterative algorithms and the resulting image statistical properties for low-dose CT data acquired with sparse projection view angle sampling. Of particular interest is to quantify the improvement of image statistical properties by use of the ML data fidelity term. Methods: An incremental algorithm framework is developed for this purpose. The instances of the incremental algorithms are derived for solving optimization problems including a data fidelity objective function combined with a constraint on the image TV. For the data fidelity term the authors compare application …

  3. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane

    2010-01-01

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. …

  4. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  5. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  6. Iterative CT reconstruction via minimizing adaptively reweighted total variation.

    Science.gov (United States)

    Zhu, Lei; Niu, Tianye; Petrongolo, Michael

    2014-01-01

    Iterative reconstruction via total variation (TV) minimization has demonstrated great successes in accurate CT imaging from under-sampled projections. When projections are further reduced, over-smoothing artifacts appear in the current reconstruction, especially around structure boundaries. We propose a practical algorithm to improve TV-minimization based CT reconstruction on very few projection data. Based on the theory of compressed sensing, the L-0 norm approach is more desirable to further reduce the projection views. To overcome the computational difficulty of the non-convex optimization of the L-0 norm, we implement an adaptive weighting scheme to approximate the solution via a series of TV minimizations for practical use in CT reconstruction. The weights on TV are initialized as uniform, and are automatically updated based on the gradient of the reconstructed image from the previous iteration. The iteration stops when a small difference between the weighted TV values is observed on two consecutive reconstructed images. We evaluate the proposed algorithm on both a digital phantom and a physical phantom. Using 20 equiangular projections, our method reduces reconstruction errors in the conventional TV minimization by a factor of more than 5, with improved spatial resolution. By adaptively reweighting TV in iterative CT reconstruction, we successfully further reduce the projection number for the same or better image quality.
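
    A schematic numpy rendering of the reweighting loop just described (a sketch, not the authors' implementation): the full weighted-TV reconstruction is abstracted behind a hypothetical callable solve_weighted_tv, and iteration stops when the weighted TV value changes little between two consecutive reconstructed images.

```python
import numpy as np

def edge_weights(u, eps=1e-3):
    # Weights shrink where the previous iterate has large gradients,
    # approximating an L0-like penalty through iterative reweighting.
    dx = np.roll(u, -1, axis=1) - u
    dy = np.roll(u, -1, axis=0) - u
    return 1.0 / (np.sqrt(dx**2 + dy**2) + eps)

def reweighted_tv(solve_weighted_tv, u0, tol=1e-4, max_rounds=10):
    # solve_weighted_tv(w) stands in for one full weighted-TV
    # minimization pass (e.g. a TV-POCS reconstruction).
    u, w = u0, np.ones_like(u0)              # weights start uniform
    prev = np.inf
    for _ in range(max_rounds):
        u = solve_weighted_tv(w)
        dx = np.roll(u, -1, axis=1) - u
        dy = np.roll(u, -1, axis=0) - u
        wtv = np.sum(w * np.sqrt(dx**2 + dy**2))
        if abs(prev - wtv) < tol * abs(wtv): # small change: stop
            break
        prev = wtv
        w = edge_weights(u)                  # reweight from current image
    return u
```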

  7. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo; Schönlieb, Carola-Bibiane

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. …

  8. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall'Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)]

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  9. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  10. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  11. The numerical solution of total variation minimization problems in image processing

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R.; Oman, M.E. [Montana State Univ., Bozeman, MT (United States)]

    1994-12-31

    Consider the minimization of penalized least squares functionals of the form $f(u) = \frac{1}{2}\|Au - z\|^2 + \alpha \int_{\Omega} |\nabla u|\, dx$. Here $A$ is a bounded linear operator, $z$ represents data, $\|\cdot\|$ is a Hilbert space norm, $\alpha$ is a positive parameter, $\int_{\Omega} |\nabla u|\, dx$ represents the total variation (TV) of a function $u \in BV(\Omega)$, the class of functions of bounded variation on a bounded region $\Omega$, and $|\cdot|$ denotes the Euclidean norm. In image processing, $u$ represents an image which is to be recovered from noisy data $z$. Certain “blurring processes” may be represented by the action of an operator $A$ on the image $u$.
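
    For a discrete 2D image the functional above can be evaluated directly. A small numpy sketch, under the illustrative assumptions that A is an explicit matrix acting on the vectorized image and that TV is discretized by forward differences:

```python
import numpy as np

def penalized_ls(u, A, z, alpha):
    # f(u) = 0.5 * ||A u - z||^2 + alpha * TV(u) for a 2D image u
    dx = np.diff(u, axis=1)[:-1, :]          # horizontal differences
    dy = np.diff(u, axis=0)[:, :-1]          # vertical differences
    tv = np.sum(np.sqrt(dx**2 + dy**2))      # isotropic total variation
    return 0.5 * np.sum((A @ u.ravel() - z) ** 2) + alpha * tv
```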

  12. Fourier-based reconstruction via alternating direction total variation minimization in linear scan CT

    International Nuclear Information System (INIS)

    Cai, Ailong; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2015-01-01

    In this study, we consider a novel form of computed tomography (CT), that is, linear scan CT (LCT), which applies a straight line trajectory. Furthermore, an iterative algorithm is proposed for pseudo-polar Fourier reconstruction through total variation minimization (PPF-TVM). Considering that the sampled Fourier data are distributed in pseudo-polar coordinates, the reconstruction model minimizes the TV of the image subject to the constraint that the estimated 2D Fourier data for the image are consistent with the 1D Fourier transform of the projection data. PPF-TVM employs the alternating direction method (ADM) to develop a robust and efficient iteration scheme, which ensures stable convergence provided that appropriate parameter values are given. In the ADM scheme, PPF-TVM applies the pseudo-polar fast Fourier transform and its adjoint to iterate back and forth between the image and frequency domains. Thus, there is no interpolation in the Fourier domain, which makes the algorithm both fast and accurate. PPF-TVM is particularly useful for limited angle reconstruction in LCT and it appears to be robust against artifacts. The PPF-TVM algorithm was tested with the FORBILD head phantom and real data in comparisons with state-of-the-art algorithms. Simulation studies and real data verification suggest that PPF-TVM can reconstruct higher accuracy images with lower time consumption.
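
    The data-consistency constraint rests on the Fourier slice theorem. A tiny self-contained check at zero angle, where no interpolation is needed (handling general angles without interpolation is exactly what the pseudo-polar FFT provides):

```python
import numpy as np

# The 1D FFT of a parallel projection equals the matching central
# line of the image's 2D FFT (shown here for the zero-angle slice).
img = np.random.rand(64, 64)
proj = img.sum(axis=0)                       # projection along the y-axis
assert np.allclose(np.fft.fft(proj), np.fft.fft2(img)[0, :])
```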

  13. Salt-and-pepper noise removal using modified mean filter and total variation minimization

    Science.gov (United States)

    Aghajarian, Mickael; McInroy, John E.; Wright, Cameron H. G.

    2018-01-01

    The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images that are corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or maximum intensity, so the proposed method restores the image by processing the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (less than or equal to 50% noise density), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixel values are replaced by the weighted average of the MMF and the total variation of corrupted pixels, which is minimized using convex optimization. Two fuzzy systems are used to determine the weights for the weighted average. To evaluate the performance of the algorithm, several test images with different noise levels are restored, and the results are quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining the edges and fine details of the original image.
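
    A schematic of the two-phase decision logic described above, with the modified mean filter and the TV restoration abstracted as hypothetical callables mmf and tv_restore, and a fixed weight standing in for the paper's two fuzzy systems:

```python
import numpy as np

def sp_candidates(img):
    # salt-and-pepper noise occupies only the extreme intensities
    return (img == 0) | (img == 255)

def two_phase_restore(img, mmf, tv_restore):
    # mmf(img) and tv_restore(img, mask) are hypothetical stand-ins
    # for the paper's modified mean filter and TV-based restoration.
    mask = sp_candidates(img)
    out = img.astype(float)                  # copy; clean pixels kept
    if mask.mean() <= 0.5:                   # low noise density: MMF only
        out[mask] = mmf(img)[mask]
    else:                                    # heavy noise: weighted average
        w = 0.5                              # the paper adapts w via fuzzy systems
        out[mask] = (w * mmf(img)[mask]
                     + (1.0 - w) * tv_restore(img, mask)[mask])
    return out
```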

  14. Limited data tomographic image reconstruction via dual formulation of total variation minimization

    Science.gov (United States)

    Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong

    2011-03-01

    X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows the visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on the statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, which results in a fast and robust estimation for low dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our findings.

  15. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    The variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but processing speed remains a bottleneck due to the computational load of recent iterative algorithms. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variational regularization term. This algorithm is based on the variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and the uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm, which are the finite convergence for some variables and the q-linear convergence for the rest. Experiments show that this model has an excellent texture-preserving property in restoring color images. Both the theoretical derivation of the computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison to the widely used fixed point algorithm.

  16. Joint reconstruction of dynamic PET activity and kinetic parametric images using total variation constrained dictionary sparse coding

    Science.gov (United States)

    Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng

    2017-05-01

    Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates dictionary sparse coding (DSC) into a total variation minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.

  17. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    International Nuclear Information System (INIS)

    Jin Zhao; Zhang Han-Ming; Yan Bin; Li Lei; Wang Lin-Yuan; Cai Ai-Long

    2016-01-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work along with advanced total variation (TV) regularization for fan sparse-view CT. The introduction of a selective matrix helps improve reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory usage.

  18. Omnigradient Based Total Variation Minimization for Enhanced Defocus Deblurring of Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Yongle Li

    2014-01-01

    We propose a new method of image restoration for catadioptric defocus blur using omnitotal variation (Omni-TV) minimization based on the omnigradient. Catadioptric omnidirectional imaging systems usually consist of conventional cameras and curved mirrors for capturing a 360° field of view. The problem of catadioptric omnidirectional imaging defocus blur, which is caused by lens aperture and mirror curvature, becomes more severe when high resolution sensors and large apertures are used. In an omnidirectional image, two points near each other may not be close to one another in the 3D scene, so traditional gradient computation cannot be directly applied to omnidirectional image processing. Thus, an omnigradient computing method combined with the characteristics of catadioptric omnidirectional imaging is proposed. Following this, Omni-TV minimization is used as the constraint for deconvolution regularization, leading to the restoration of defocus blur in an omnidirectional image and yielding fully sharp omnidirectional images. The proposed method is important for improving catadioptric omnidirectional imaging quality and promoting applications in related fields like omnidirectional video and image processing.

  19. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data at high and low tube voltages are converted into a set of basis functions which can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From much fewer views than are conventionally used, material-specific images are reconstructed by use of the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by use of contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can thus be realized via the proposed method by greatly reducing the number of projections.

  20. NUFFT-Based Iterative Image Reconstruction via Alternating Direction Total Variation Minimization for Sparse-View CT

    Directory of Open Access Journals (Sweden)

    Bin Yan

    2015-01-01

    Sparse-view imaging is a promising scanning method which can reduce the radiation dose in X-ray computed tomography (CT). Reconstruction algorithms for sparse-view imaging systems are of significant importance. Spatial-domain iterative algorithms for CT image reconstruction suffer from low operational efficiency and high computational requirements. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform is presented in this study along with advanced total variation (TV) regularization for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and a rapid convergence property. Numerical simulations and real data experiments are performed on a parallel beam CT. Experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique using the TV method and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging.

  1. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    Science.gov (United States)

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-07

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) images can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
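
    A sketch of the AwTV penalty in numpy (illustrative, not the authors' exact discretization): anisotropic weights decay exponentially with the local intensity gradients of the previous iterate, so strong edges contribute less to the penalty; delta is an assumed edge-sensitivity parameter.

```python
import numpy as np

def awtv(u, delta=0.002):
    # Adaptive-weighted TV: exponential weights built from the local
    # gradients of u suppress smoothing across sharp edges.
    dx = np.roll(u, -1, axis=1) - u
    dy = np.roll(u, -1, axis=0) - u
    wx = np.exp(-((dx / delta) ** 2))
    wy = np.exp(-((dy / delta) ** 2))
    return np.sum(np.sqrt(wx * dx**2 + wy * dy**2))
```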

  2. Constrained minimization in C++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of possible ways was used: the widely known program FUMILI was reimplemented in the C++ language. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t and simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany).
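
    A compact sketch of the scheme described above, with chi2 and phi as hypothetical user-supplied callables: the inequality φ(θ) ≥ a is rewritten with a slack variable carrying a simple bound, and the remaining equality is enforced by a quadratic penalty.

```python
import numpy as np

def penalized_objective(theta, t, chi2, phi, a, w=1e4):
    # phi(theta) >= a becomes phi(theta) = t together with t >= a;
    # the equality is enforced through a quadratic penalty term.
    t = np.maximum(t, a)                     # simple bound on the slack
    return chi2(theta) + w * (phi(theta) - t) ** 2
```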

  3. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function $f(x): \mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x) = f(x) + g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x): D \subset \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f: \mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \bar{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0 \le g_k(x) \le G_{k-1}(x) - G_{k-1}(x^{k-1})$ for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \to \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
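
    The barrier-function method is one of the particular cases listed above. A toy numpy instance of the SUMMA iteration $G_k(x) = f(x) + g_k(x)$ with $f(x) = \|x - a\|^2$ over $x \ge 0$ and $g_k$ a vanishing log-barrier (step sizes and iteration counts are arbitrary choices):

```python
import numpy as np

def barrier_summa(a, n_outer=15, n_inner=500, lr=1e-3):
    # Minimize ||x - a||^2 over x >= 0 via G_k(x) = f(x) + (1/k) * B(x),
    # with the barrier B(x) = -sum(log x) keeping iterates interior.
    x = np.ones_like(a, dtype=float)         # strictly feasible start
    for k in range(1, n_outer + 1):
        for _ in range(n_inner):
            g = 2.0 * (x - a) - (1.0 / k) / x    # gradient of G_k
            x = np.clip(x - lr * g, 1e-9, None)  # stay in the interior
    return x

print(barrier_summa(np.array([0.5, -1.0])))  # tends toward [0.5, 0.0]
```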

  4. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    Science.gov (United States)

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption of the TV model, the reconstructed images often suffer from over-smoothness on the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio measure and edge-preservation.

  5. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
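
    A numpy sketch of the splitting idea: an existing TV denoiser, passed in unmodified as a callable (for instance a Chambolle-type projection iteration), is alternated with a projection onto the box constraints through an ADMM-style update. The parameter names are illustrative.

```python
import numpy as np

def bound_constrained_tv(f, tv_denoise, lo=0.0, hi=1.0, n_iter=20):
    # Split min TV(u) s.t. lo <= u <= hi into a plain TV-denoising
    # step and a clipping step, coupled by a running dual variable.
    u = f.copy()
    z = np.clip(f, lo, hi)
    d = np.zeros_like(f)
    for _ in range(n_iter):
        u = tv_denoise(z - d)                # existing TV solver, unaltered
        z = np.clip(u + d, lo, hi)           # enforce the bound constraints
        d += u - z                           # dual (Bregman-style) update
    return z
```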

  6. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    Science.gov (United States)

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.

  7. Beam’s-eye-view dosimetrics (BEVD) guided rotational station parameter optimized radiation therapy (SPORT) planning based on reweighted total-variation minimization

    Science.gov (United States)

    Kim, Hojin; Li, Ruijiang; Lee, Rena; Xing, Lei

    2015-03-01

    Conventional VMAT optimizes aperture shapes and weights at uniformly sampled stations, a generalization of the concept of a control point. Recently, rotational station parameter optimized radiation therapy (SPORT) has been proposed to improve plan quality by inserting beams in the regions that demand additional intensity modulation, thus formulating non-uniform beam sampling. This work presents a new rotational SPORT planning strategy based on reweighted total-variation (TV) minimization (min.), using beam’s-eye-view dosimetrics (BEVD) guided beam selection. The convex-programming-based reweighted TV min. yields a simplified fluence map, which facilitates single-aperture selection at each station for single-arc delivery. For rotational arc treatment planning and non-uniform beam angle settings, the mathematical model needs to be modified by an additional penalty term describing the fluence-map similarity and by the determination of appropriate angular weighting factors. The proposed algorithm with the additional penalty term achieves more efficient and deliverable plans than the conventional VMAT and SPORT planning schemes, reducing the dose delivery time by about 5 to 10 s in three clinical cases (one prostate and two head-and-neck (HN) cases with a single and multiple targets). The BEVD guided beam selection provides an effective yet easily computed methodology for selecting angles for denser, non-uniform angular sampling in SPORT planning. Our BEVD guided SPORT treatment schemes improve dose sparing of the femoral heads in the prostate case and of the brainstem, parotid glands and oral cavity in the two HN cases, where the mean dose reduction for those organs ranges from 0.5 to 2.5 Gy. They also increase the conformation number assessing the dose conformity to the target from 0.84, 0.75 and 0.74 to 0.86, 0.79 and 0.80 in the prostate and two HN cases, while preserving the delivery efficiency relative to conventional single-arc VMAT plans.

  8. Beam’s-eye-view dosimetrics (BEVD) guided rotational station parameter optimized radiation therapy (SPORT) planning based on reweighted total-variation minimization

    International Nuclear Information System (INIS)

    Kim, Hojin; Li, Ruijiang; Xing, Lei; Lee, Rena

    2015-01-01

    Conventional VMAT optimizes aperture shapes and weights at uniformly sampled stations, a generalization of the concept of a control point. Recently, rotational station parameter optimized radiation therapy (SPORT) has been proposed to improve plan quality by inserting beams in the regions that demand additional intensity modulation, thus formulating non-uniform beam sampling. This work presents a new rotational SPORT planning strategy based on reweighted total-variation (TV) minimization (min.), using beam’s-eye-view dosimetrics (BEVD) guided beam selection. The convex-programming-based reweighted TV min. yields a simplified fluence map, which facilitates single-aperture selection at each station for single-arc delivery. For rotational arc treatment planning and non-uniform beam angle settings, the mathematical model needs to be modified by an additional penalty term describing the fluence-map similarity and by the determination of appropriate angular weighting factors. The proposed algorithm with the additional penalty term achieves more efficient and deliverable plans than the conventional VMAT and SPORT planning schemes, reducing the dose delivery time by about 5 to 10 s in three clinical cases (one prostate and two head-and-neck (HN) cases with a single and multiple targets). The BEVD guided beam selection provides an effective yet easily computed methodology for selecting angles for denser, non-uniform angular sampling in SPORT planning. Our BEVD guided SPORT treatment schemes improve dose sparing of the femoral heads in the prostate case and of the brainstem, parotid glands and oral cavity in the two HN cases, where the mean dose reduction for those organs ranges from 0.5 to 2.5 Gy. They also increase the conformation number assessing the dose conformity to the target from 0.84, 0.75 and 0.74 to 0.86, 0.79 and 0.80 in the prostate and two HN cases, while preserving the delivery efficiency relative to conventional single-arc VMAT plans.

  9. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the …

  10. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  11. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian, and alternating methods as special cases, where our rates apply.

  12. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
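
    A direct numpy transcription of Chambolle's dual fixed-point iteration for min_u ‖u − f‖²/(2λ) + TV(u), as a sketch (the article linked from this record provides the reference implementation); the step size τ = 1/8 is the value for which convergence is proven.

```python
import numpy as np

def grad(u):
    # forward differences with replicate (Neumann) boundary
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # negative adjoint of grad, so that <grad u, p> = -<u, div p>
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[:, 0], dx[:, 1:-1], dx[:, -1] = px[:, 0], px[:, 1:-1] - px[:, :-2], -px[:, -2]
    dy[0, :], dy[1:-1, :], dy[-1, :] = py[0, :], py[1:-1, :] - py[:-2, :], -py[-2, :]
    return dx + dy

def chambolle_tv(f, lam, tau=0.125, n_iter=200):
    # Fixed-point iteration on the dual field p = (px, py).
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px, py = (px + tau * gx) / denom, (py + tau * gy) / denom
    return f - lam * div(px, py)             # denoised image u
```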

  13. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.

  14. A Comparative Study for Orthogonal Subspace Projection and Constrained Energy Minimization

    National Research Council Canada - National Science Library

    Du, Qian; Ren, Hsuan; Chang, Chein-I

    2003-01-01

    … orthogonal subspace projection (OSP) and constrained energy minimization (CEM). It is shown that they are closely related and essentially equivalent provided that the noise is white with large SNR …

  15. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of the total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration.

  16. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem

    2014-04-18

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges on the concentration-compactness method. In the second part, we will treat orthogonal constrained problems for another class of integrands using the density matrices method. © 2014 Springer Basel.

  17. Total Variation Based Parameter-Free Model for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Sciacchitano, Federica; Dong, Yiqiu; Andersen, Martin Skovgaard

    2017-01-01

    We propose a new two-phase method for the reconstruction of blurred images corrupted by impulse noise. In the first phase, we use a noise detector to identify the pixels that are contaminated by noise, and then, in the second phase, we reconstruct the noisy pixels by solving an equality constrained total variation minimization problem that preserves the exact values of the noise-free pixels. For images that are only corrupted by impulse noise (i.e., not blurred) we apply the semismooth Newton's method to a reduced problem, and if the images are also blurred, we solve the equality constrained reconstruction problem using a first-order primal-dual algorithm. The proposed model improves the computational efficiency (in the denoising case) and has the advantage of being regularization parameter-free. Our numerical results suggest that the method is competitive in terms of its restoration capabilities …

  18. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\prime/p$ and the level $l$.

  19. Total variation-based neutron computed tomography

    Science.gov (United States)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We solve the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the case of significantly low angular sampling using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
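
    A minimal split Bregman sketch for the simpler denoising instance min_u TV(u) + (μ/2)‖u − f‖² (the record's CT problem replaces the identity with the tomographic operator). The u-subproblem is deliberately solved only very inexactly, with a couple of Jacobi sweeps, in the spirit of the error-forgetting remark above; periodic boundaries and all parameter values are illustrative assumptions.

```python
import numpy as np

def divg(px, py):
    # divergence adjoint to forward differences (periodic for brevity)
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def split_bregman_tv(f, mu=10.0, lam=5.0, n_outer=30, n_jacobi=2):
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    for _ in range(n_outer):
        # u-subproblem: (mu*I - lam*Laplacian) u = rhs, solved with a
        # couple of Jacobi sweeps only (a very inexact linear solve).
        rhs = mu * f + lam * divg(bx - dx, by - dy)
        for _ in range(n_jacobi):
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = (rhs + lam * nb) / (mu + 4.0 * lam)
        # d-subproblem: closed-form vector shrinkage of grad(u) + b
        ux = np.roll(u, -1, 1) - u
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt((ux + bx) ** 2 + (uy + by) ** 2)
        s = np.maximum(mag - 1.0 / lam, 0.0) / (mag + 1e-12)
        dx, dy = s * (ux + bx), s * (uy + by)
        bx += ux - dx                        # Bregman variable updates
        by += uy - dy
    return u
```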

  20. Total Variation Depth for Functional Data

    KAUST Repository

    Huang, Huang

    2016-11-15

    There has been extensive work on data depth-based methods for robust multivariate data analysis. Recent developments have moved to infinite-dimensional objects such as functional data. In this work, we propose a new notion of depth, the total variation depth, for functional data. As a measure of depth, its properties are studied theoretically, and the associated outlier detection performance is investigated through simulations. Compared to magnitude outliers, shape outliers are often masked among the rest of samples and harder to identify. We show that the proposed total variation depth has many desirable features and is well suited for outlier detection. In particular, we propose to decompose the total variation depth into two components that are associated with shape and magnitude outlyingness, respectively. This decomposition allows us to develop an effective procedure for outlier detection and useful visualization tools, while naturally accounting for the correlation in functional data. Finally, the proposed methodology is demonstrated using real datasets of curves, images, and video frames.

  1. Design of a minimally constraining, passively supported gait training exoskeleton: ALEX II.

    Science.gov (United States)

    Winfree, Kyle N; Stegall, Paul; Agrawal, Sunil K

    2011-01-01

    This paper discusses the design of a new, minimally constraining, passively supported gait training exoskeleton known as ALEX II. This device builds on the success and extends the features of the ALEX I device developed at the University of Delaware. Both ALEX (Active Leg EXoskeleton) devices have been designed to supply a controllable torque to a subject's hip and knee joint. The current control strategy makes use of an assist-as-needed algorithm. Following a brief review of previous work motivating this redesign, we discuss the key mechanical features of the new ALEX device. A short investigation was conducted to evaluate the effectiveness of the control strategy and impact of the exoskeleton on the gait of six healthy subjects. This paper concludes with a comparison between the subjects' gait both in and out of the exoskeleton. © 2011 IEEE

  2. Topology Optimization for Minimizing the Resonant Response of Plates with Constrained Layer Damping Treatment

    Directory of Open Access Journals (Sweden)

    Zhanpeng Fang

    2015-01-01

    A topology optimization method is proposed to minimize the resonant response of plates with constrained layer damping (CLD) treatment under specified broadband harmonic excitations. The topology optimization problem is formulated and the square of the displacement resonant response in the frequency domain at a specified point is considered as the objective function. Two sensitivity analysis methods are investigated and discussed. The derivative of the modal damping ratio is not considered in the conventional sensitivity analysis method. An improved sensitivity analysis method considering the derivative of the modal damping ratio is developed to improve the computational accuracy of the sensitivity. The evolutionary structural optimization (ESO) method is used to search for the optimal layout of CLD material on plates. Numerical examples and experimental results show that the optimal layout of CLD treatment on the plate from the proposed topology optimization, using either the conventional or the improved sensitivity analysis, can reduce the displacement resonant response. However, the optimization method using the improved sensitivity analysis can produce a higher modal damping ratio and a smaller displacement resonant response than that using the conventional sensitivity analysis.

  3. Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison

    Science.gov (United States)

    Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.

    1996-11-01

    Constrained energy minimization (CEM) has been applied to the mapping of the quantitative areal distribution of the mineral alunite in an approximately 1.8 km2 area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping which requires only the spectrum of the mineral to be mapped. A priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single scattering albedo (SSA) spectra. The radiance data were acquired by the 210 channel, 0.4 micrometers to 2.5 micrometers airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid as surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures and thus spectral mixing analyses must take account of nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The occurrence of alunite is similar, though not identical, in the mineral maps produced with the SSA and AR spectra. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
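
    For reference, the CEM detector itself has a closed form: minimize the average filter output energy wᵀRw subject to wᵀd = 1, which gives w = R⁻¹d / (dᵀR⁻¹d). A small numpy sketch, where the pixel matrix X and the target spectrum d are assumed inputs:

```python
import numpy as np

def cem_detector(X, d):
    # X: (n_pixels, n_bands) spectra; d: (n_bands,) target signature.
    R = X.T @ X / X.shape[0]                 # sample correlation matrix
    rinv_d = np.linalg.solve(R, d)
    w = rinv_d / (d @ rinv_d)                # w = R^-1 d / (d' R^-1 d)
    return X @ w                             # detection scores, ~1 on target
```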

  4. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [Karmanos Cancer Institute]; Duric, Neb [Karmanos Cancer Institute]

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.

  5. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of preserving edge information and suppressing unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are evaluated qualitatively and quantitatively to validate the accuracy, efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
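
    The generalized p-shrinkage mapping referred to above is, in the form popularized by Chartrand, a simple elementwise operation that reduces to ordinary soft thresholding at p = 1. A hedged numpy sketch under that assumption (names illustrative):

        import numpy as np

        def p_shrink(x, lam, p):
            # elementwise max(|x| - lam**(2-p) * |x|**(p-1), 0) * sign(x);
            # p = 1 recovers soft thresholding, p < 1 shrinks large values less
            mag = np.abs(x)
            safe = np.where(mag > 0, mag, 1.0)   # avoid 0**(p-1) at zero
            return np.sign(x) * np.maximum(mag - lam**(2.0 - p) * safe**(p - 1.0), 0.0)

        x = np.linspace(-2.0, 2.0, 9)
        print(p_shrink(x, 0.5, 1.0))   # ordinary soft thresholding
        print(p_shrink(x, 0.5, 0.5))   # sharper, less biased shrinkage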

  6. Stringent tests of constrained Minimal Flavor Violation through ΔF=2 transitions

    International Nuclear Information System (INIS)

    Buras, Andrzej J.; Girrbach, Jennifer

    2013-01-01

    New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box diagram function, which in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP asymmetry S_ψKs and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF=2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_Bs√(B_Bs) and F_Bd√(B_Bd). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, then allows, together with ΔM_s/ΔM_d and S_ψKs, the range of the angle γ in the unitarity triangle to be determined independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows the favorite CMFV values of |V_cb|, |V_ub|, F_Bs√(B_Bs) and F_Bd√(B_Bd) to be determined as functions of S(v) and γ. The |V_cb|^4 dependence of ε_K allows |V_cb| to be determined for a given S(v) and γ with a higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations between F_Bs√(B_Bs) and F_Bd√(B_Bd), |V_cb|, |V_ub| and γ that should be tested in the coming years. Typically F_Bs√(B_Bs) and F_Bd√(B_Bd) have to be lower than their present lattice values, while |V_cb| has to

  7. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    Full Text Available In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution, so the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
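
    The Vogel-Oman fixed point (lagged diffusivity) iteration freezes the TV diffusion coefficient at the current iterate and solves the resulting linear problem. The rough numpy sketch below applies it to plain image denoising, with an inner Jacobi solver chosen for brevity; it illustrates the iteration only, not the paper's dynamic MRI reconstruction.

        import numpy as np

        def tv_denoise_lagged(f, lam=0.15, beta=1e-3, n_outer=15, n_inner=40):
            # Solve u - lam * div( grad(u) / sqrt(|grad(u)|^2 + beta^2) ) = f
            # by freezing the diffusivity at the previous outer iterate and
            # running Jacobi sweeps on the resulting linear system.
            f = np.asarray(f, dtype=float)
            u = f.copy()
            for _ in range(n_outer):
                gy, gx = np.gradient(u)
                c = 1.0 / np.sqrt(gx**2 + gy**2 + beta**2)   # frozen diffusivity
                cp = np.pad(c, 1, mode='edge')
                # average diffusivity on the four edges around each pixel
                wN = 0.5 * (c + cp[:-2, 1:-1]); wS = 0.5 * (c + cp[2:, 1:-1])
                wW = 0.5 * (c + cp[1:-1, :-2]); wE = 0.5 * (c + cp[1:-1, 2:])
                for _ in range(n_inner):
                    up = np.pad(u, 1, mode='edge')           # Neumann boundary
                    nb = (wN * up[:-2, 1:-1] + wS * up[2:, 1:-1] +
                          wW * up[1:-1, :-2] + wE * up[1:-1, 2:])
                    u = (f + lam * nb) / (1.0 + lam * (wN + wS + wW + wE))
            return u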

  8. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem; Markowich, Peter A.; Trabelsi, Saber

    2014-01-01

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that satisfies neither compactness, nor monotonicity, nor symmetry properties. Our approach hinges

  9. Stringent tests of constrained Minimal Flavor Violation through {Delta}F=2 transitions

    Energy Technology Data Exchange (ETDEWEB)

    Buras, Andrzej J. [TUM-IAS, Garching (Germany); Girrbach, Jennifer [TUM, Physik Department, Garching (Germany)

    2013-09-15

    New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box diagram function, which in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP asymmetry S_ψKs and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF=2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_Bs√(B_Bs) and F_Bd√(B_Bd). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, then allows, together with ΔM_s/ΔM_d and S_ψKs, the range of the angle γ in the unitarity triangle to be determined independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows the favorite CMFV values of |V_cb|, |V_ub|, F_Bs√(B_Bs) and F_Bd√(B_Bd) to be determined as functions of S(v) and γ. The |V_cb|^4 dependence of ε_K allows |V_cb| to be determined for a given S(v) and γ with a higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations

  10. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    Science.gov (United States)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and the velocity perturbation is strongly nonlinear, so salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can exploit this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiments on projecting the BP model onto the intersection of the total variation norm and box constraints demonstrate the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and invert the complex salt velocity layer by layer.

  11. Fast magnetic resonance imaging based on high degree total variation

    Science.gov (United States)

    Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng

    2018-04-01

    In order to eliminate the artifacts and "staircase effect" of total variation in compressive sensing MRI, a high-degree total variation model is proposed for dynamic MRI reconstruction. The high-degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iteratively weighted MM algorithm is proposed to solve the resulting convex optimization problem. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm. The results show that the high-degree total variation method achieves a better reconstruction than total variation and total generalized variation, obtaining a higher reconstruction SNR and better structural similarity.
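
    As a rough illustration of what a higher-degree TV penalty measures, the sketch below takes the l1 norm of n-th order finite differences along each axis (degree 1 recovers anisotropic TV). This is only one interpretation of the penalty, not the paper's reconstruction algorithm or its MM solver.

        import numpy as np

        def high_degree_tv(u, degree=2):
            # l1 norm of degree-th order finite differences along each axis;
            # degree=2 penalizes curvature rather than slope, which is why
            # higher-degree TV avoids the staircase effect on smooth ramps.
            return sum(np.abs(np.diff(u, n=degree, axis=ax)).sum()
                       for ax in range(u.ndim))

        ramp = np.add.outer(np.arange(8), np.arange(8)).astype(float)
        print(high_degree_tv(ramp, 1))   # large: first-degree TV penalizes ramps
        print(high_degree_tv(ramp, 2))   # zero: second-degree TV does not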

  12. Constraining non-minimally coupled tachyon fields by the Noether symmetry

    International Nuclear Information System (INIS)

    De Souza, Rudinei C; Kremer, Gilberto M

    2009-01-01

    A model for a homogeneous and isotropic Universe whose gravitational sources are a pressureless matter field and a tachyon field non-minimally coupled to the gravitational field is analyzed. The Noether symmetry is used to find expressions for the potential density and for the coupling function, and it is shown that both must be exponential functions of the tachyon field. Two cosmological solutions are investigated: (i) for the early Universe whose only source of gravitational field is a non-minimally coupled tachyon field which behaves as an inflaton and leads to an exponential accelerated expansion and (ii) for the late Universe whose gravitational sources are a pressureless matter field and a non-minimally coupled tachyon field which plays the role of dark energy and is responsible for the decelerated-accelerated transition period.

  13. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    International Nuclear Information System (INIS)

    Núñez, Darío; Zavala, Jesús; Nellen, Lukas; Sussman, Roberto A; Cabral-Rosetti, Luis G; Mondragón, Myriam

    2008-01-01

    We derive an expression for the entropy of a dark matter halo described using a Navarro–Frenk–White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tan β, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tan β are not favored; only for tan β ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV

  14. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Nunez, Dario; Zavala, Jesus; Nellen, Lukas; Sussman, Roberto A [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (ICN-UNAM), AP 70-543, Mexico 04510 DF (Mexico); Cabral-Rosetti, Luis G [Departamento de Posgrado, Centro Interdisciplinario de Investigacion y Docencia en Educacion Tecnica (CIIDET), Avenida Universidad 282 Pte., Col. Centro, Apartado Postal 752, C. P. 76000, Santiago de Queretaro, Qro. (Mexico); Mondragon, Myriam, E-mail: nunez@nucleares.unam.mx, E-mail: jzavala@nucleares.unam.mx, E-mail: jzavala@shao.ac.cn, E-mail: lukas@nucleares.unam.mx, E-mail: sussman@nucleares.unam.mx, E-mail: lgcabral@ciidet.edu.mx, E-mail: myriam@fisica.unam.mx [Instituto de Fisica, Universidad Nacional Autonoma de Mexico (IF-UNAM), Apartado Postal 20-364, 01000 Mexico DF (Mexico); Collaboration: For the Instituto Avanzado de Cosmologia, IAC

    2008-05-15

    We derive an expression for the entropy of a dark matter halo described using a Navarro-Frenk-White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tan β, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tan β are not favored; only for tan β ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.

  15. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  16. Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation

    Science.gov (United States)

    2012-05-01

    Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise.

  17. Solving the uncalibrated photometric stereo problem using total variation

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2013-01-01

    In this paper we propose a new method to solve the problem of uncalibrated photometric stereo, making very weak assumptions on the properties of the scene to be reconstructed. Our goal is to solve the generalized bas-relief ambiguity (GBR) by performing a total variation regularization of both...

  18. On the Total Variation Distance of Semi-Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2015-01-01

    Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...
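
    Note that the total variation distance here is the probability metric, not the imaging regularizer of most other records in this list; for discrete distributions over the same finite set it is half the l1 distance, as in this small sketch:

        import numpy as np

        def tv_distance(p, q):
            # TV(p, q) = 0.5 * sum_i |p_i - q_i|, a metric with values in [0, 1]
            p, q = np.asarray(p, float), np.asarray(q, float)
            return 0.5 * np.abs(p - q).sum()

        print(tv_distance([0.5, 0.5], [0.9, 0.1]))   # 0.4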

  19. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    International Nuclear Information System (INIS)

    Choudhury, Sayantan

    2014-01-01

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for local type non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence by using a multi-parameter scan I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for the speed of sound 0.02 ≤ c_s ≤ 1 and tensor-to-scalar ratio 10^−22 ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints I fix the lower and upper bounds of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09

  20. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sayantan [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 108 (India)

    2014-04-15

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for local type non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence by using a multi-parameter scan I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for the speed of sound 0.02 ≤ c_s ≤ 1 and tensor-to-scalar ratio 10^−22 ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints I fix the lower and upper bounds of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09.

  1. An algorithm for total variation regularized photoacoustic imaging

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Görner, Torsten; Kunis, Stefan

    2014-01-01

    Recovery of image data from photoacoustic measurements asks for the inversion of the spherical mean value operator. In contrast to direct inversion methods for specific geometries, we consider a semismooth Newton scheme to solve a total variation regularized least squares problem. During the iteration, each matrix vector multiplication is realized in an efficient way using a recently proposed spectral discretization of the spherical mean value operator. All theoretical results are illustrated by numerical experiments.

  2. Adaptive Proximal Point Algorithms for Total Variation Image Restoration

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2015-02-01

    Full Text Available Image restoration is a fundamental problem in various areas of imaging sciences. This paper presents a class of adaptive proximal point algorithms (APPA) with a contraction strategy for total variation image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessarily symmetric. In fact, there is an inner extrapolation in the prediction step, which is followed by a correction step for contraction, and the inner extrapolation is implemented by an adaptive scheme. Using the framework of contraction methods, a global convergence result and a convergence rate of O(1/N could be established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with state-of-the-art algorithms demonstrate that the proposed methods are comparable and promising.

  3. A Total Variation Model Based on the Strictly Convex Modification for Image Denoising

    Directory of Open Access Journals (Sweden)

    Boying Wu

    2014-01-01

    Full Text Available We propose a strictly convex functional in which the regular term consists of the total variation term and an adaptive logarithm-based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system are also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made with the more classical methods of traditional total variation (TV), Perona-Malik (PM), and the more recent D-α-PM method. A further distinction from the other methods is that the proposed algorithm reduces the parameters requiring manual tuning to essentially one.

  4. Noise Removal in Color Images with the Total Variation Method

    Directory of Open Access Journals (Sweden)

    Anny Yuniarti

    2006-01-01

    Full Text Available Multimedia has become a fairly dominant technology, and exchanging information in the form of images is now common practice. Good image quality is essential for presenting information, and images degraded by noise are poorly suited for this purpose, so a method is needed to improve image quality. The method used in this study is the total variation method for noise removal, applied to nonlinear color models, namely Chromaticity-Brightness (CB) and Hue-Saturation-Value (HSV). The total variation filter is said to depend on the image data because its coefficients are obtained by processing the image data with a fixed formula, so the filter mask of each pixel has a different combination of coefficients. The method uses an iterative process to solve the underlying nonlinear equation. Experiments were carried out on 30 images with various types of noise, namely Gaussian, salt-and-pepper and speckle, and the results were compared with median and mean filtering. The experiments show that the total variation method produces better images than the median and mean filter methods.

  5. Despeckling PolSAR Images Based on Relative Total Variation Model

    Science.gov (United States)

    Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.

    2018-04-01

    The relative total variation (RTV) algorithm, which can effectively separate structure information from texture in an image, is employed to extract the main structures of the image. However, applying RTV directly to polarimetric SAR (PolSAR) image filtering does not preserve polarimetric information. A new RTV approach based on the complex Wishart distribution is proposed that takes the polarimetric properties of PolSAR into account. The proposed polarimetric RTV (PolRTV) algorithm can be used for PolSAR image filtering. The L-band Airborne SAR (AIRSAR) San Francisco data are used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.
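
    The structure-texture measure underlying RTV (in the spirit of Xu et al.'s relative total variation) compares the windowed total absolute gradient with the absolute windowed gradient: texture gradients largely cancel when summed, structure gradients do not. A rough grayscale sketch under that assumption, not the Wishart-based PolSAR version proposed above:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def rtv_measure(u, size=5, eps=1e-3):
            # D: windowed sum of |gradient|; L: |windowed sum of gradient|.
            # The ratio D/(L + eps) is large in textured regions and small
            # along coherent structures.
            gy, gx = np.gradient(np.asarray(u, float))
            Dy, Dx = uniform_filter(np.abs(gy), size), uniform_filter(np.abs(gx), size)
            Ly, Lx = np.abs(uniform_filter(gy, size)), np.abs(uniform_filter(gx, size))
            return Dy / (Ly + eps) + Dx / (Lx + eps)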

  6. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: conductivity changes along selected left and right vertical lines are plotted for the ground truth (GT) and for the TV, TGV, and GREIT reconstructions.

  7. Total Variation Regularization for Functions with Values in a Manifold

    KAUST Repository

    Lellmann, Jan; Strekalovskiy, Evgeny; Koetter, Sabrina; Cremers, Daniel

    2013-01-01

    While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories. © 2013 IEEE.

  8. Speckle Noise Reduction via Nonconvex High Total Variation Approach

    Directory of Open Access Journals (Sweden)

    Yulian Wu

    2015-01-01

    Full Text Available We address the problem of speckle noise removal. The classical total variation is extensively used in this field, but it suffers from staircase-like artifacts and the loss of image details. In order to resolve these problems, a nonconvex total generalized variation (TGV) regularization is used to preserve both edges and details of the images. The TGV regularization, which is able to remove the staircase effect, has strong theoretical guarantees by means of its higher order smooth feature. Our method combines the merits of both the TGV method and the nonconvex variational method and avoids their main drawbacks. Furthermore, we develop an efficient algorithm for solving the nonconvex TGV-based optimization problem. We experimentally demonstrate the excellent performance of the technique, both visually and quantitatively.

  9. Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2012-05-01

    Full Text Available Denoising is the problem of removing noise from an image. The most commonly studied case is with additive white Gaussian noise (AWGN, where the observed noisy image f is related to the underlying true image u by f=u+η and η is at each point in space independently and identically distributed as a zero-mean Gaussian random variable. Total variation (TV regularization is a technique that was originally developed for AWGN image denoising by Rudin, Osher, and Fatemi. The TV regularization technique has since been applied to a multitude of other imaging problems, see for example Chan and Shen's book. We focus here on the split Bregman algorithm of Goldstein and Osher for TV-regularized denoising.
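
    A compact numpy sketch of the split Bregman iteration for this anisotropic ROF model, assuming periodic boundaries so the u-subproblem becomes a single FFT solve; the parameter values and names are illustrative, not Goldstein and Osher's reference implementation.

        import numpy as np

        def split_bregman_rof(f, mu=20.0, lam=10.0, n_iter=60):
            # min_u |u_x|_1 + |u_y|_1 + mu/2 * ||u - f||_2^2
            f = np.asarray(f, dtype=float)
            ny, nx = f.shape
            dx = np.zeros_like(f); dy = np.zeros_like(f)   # splitting variables
            bx = np.zeros_like(f); by = np.zeros_like(f)   # Bregman variables
            u = f.copy()

            wx = 2 * np.pi * np.fft.fftfreq(nx)
            wy = 2 * np.pi * np.fft.fftfreq(ny)
            # Fourier symbol of mu*I + lam * (grad^T grad) with periodic BCs
            denom = mu + lam * ((2 - 2 * np.cos(wy))[:, None] +
                                (2 - 2 * np.cos(wx))[None, :])

            grad = lambda v, ax: np.roll(v, -1, axis=ax) - v          # forward diff
            div = lambda vx, vy: ((vx - np.roll(vx, 1, axis=1)) +
                                  (vy - np.roll(vy, 1, axis=0)))      # -adjoint of grad
            shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

            for _ in range(n_iter):
                rhs = mu * f + lam * div(bx - dx, by - dy)
                u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))   # u-subproblem
                ux, uy = grad(u, 1), grad(u, 0)
                dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
                bx, by = bx + ux - dx, by + uy - dy                   # Bregman update
            return u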

  10. Total Variation Regularization for Functions with Values in a Manifold

    KAUST Repository

    Lellmann, Jan

    2013-12-01

    While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories. © 2013 IEEE.

  11. Total variation superiorized conjugate gradient method for image reconstruction

    Science.gov (United States)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA, and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.

  12. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. In contrast with conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.

  13. Three-dimensional total variation norm for SPECT reconstruction

    International Nuclear Information System (INIS)

    Persson, Mikael; Bone, Dianna; Elmqvist, H.

    2001-01-01

    The total variation (TV) norm has been described in the literature as a method for reducing noise in two-dimensional (2D) images. At the same time, the TV norm is very good at recovering edges in images without introducing ringing or edge artefacts. It has also been proposed as a 2D regularisation function in Bayesian reconstruction, implemented in an expectation maximisation (EM) algorithm, and called TV-EM. TV-EM was developed for 2D SPECT imaging, and the algorithm is capable of smoothing noise while maintaining edges without introducing artefacts. We have extended the TV norm to take the third spatial dimension into account and developed an iterative EM algorithm based on the three-dimensional (3D) TV norm, which we call TV3D-EM. This takes into account the correlation between transaxial sections in SPECT due to system resolution. We have compared the 2D and 3D algorithms using images reconstructed from simulated projection data. The phantoms used were a homogeneous sphere and a 3D head phantom based on the Shepp-Logan phantom. The TV3D-EM algorithm yielded somewhat lower noise levels than TV-EM. The noise in TV3D-EM had similar correlation in transaxial and longitudinal sections, which was not the case for TV-EM or any 2D reconstruction method. In particular, longitudinal sections from TV3D-EM were perceived as less noisy when compared to TV-EM. The use of 3D reconstruction should also be advantageous if compensation for distance-dependent collimator blurring is incorporated in the iterative algorithm
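
    The 3D TV norm at the heart of TV3D-EM couples in-plane and longitudinal finite differences into a single voxelwise gradient magnitude. A hedged numpy sketch of the norm itself (not of the EM algorithm); the replicate-boundary convention is an assumption:

        import numpy as np

        def tv_norm_3d(vol, eps=0.0):
            # isotropic 3D TV: sum over voxels of the Euclidean norm of the
            # forward-difference gradient; the last slice along each axis is
            # replicated so boundary differences vanish. A small eps smooths
            # the norm when a differentiable penalty is needed.
            dz = np.diff(vol, axis=0, append=vol[-1:, :, :])
            dy = np.diff(vol, axis=1, append=vol[:, -1:, :])
            dx = np.diff(vol, axis=2, append=vol[:, :, -1:])
            return np.sqrt(dx**2 + dy**2 + dz**2 + eps).sum()

        vol = np.zeros((8, 8, 8)); vol[:, 2:6, 2:6] = 1.0   # a square rod
        print(tv_norm_3d(vol))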

  14. 3D first-arrival traveltime tomography with modified total variation regularization

    Science.gov (United States)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.

  15. Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method

    Directory of Open Access Journals (Sweden)

    Andreas Langer

    2018-01-01

    Full Text Available In this paper, we investigate the usefulness of adding a box-constraint to the minimization of functionals consisting of a data-fidelity term and a total variation regularization term. In particular, we show that in certain applications an additional box-constraint does not affect the solution at all, i.e., the solution is the same whether a box-constraint is used or not. For applications where a box-constraint may influence the solution, we investigate how much it affects the quality of the restoration, especially when the regularization parameter, which weights the importance of the data term and the regularizer, is chosen suitably. In particular, for such applications, we consider the case of a squared L2 data-fidelity term. For computing a minimizer of the respective box-constrained optimization problems, a primal-dual semi-smooth Newton method is presented, which guarantees superlinear convergence.

  16. Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction

    International Nuclear Information System (INIS)

    Hintermüller, Michael; Rautenberg, Carlos N; Hahn, Jooyoung

    2014-01-01

    Variable splitting schemes for the function space version of the image reconstruction problem with total variation regularization (TV-problem) in its primal and pre-dual formulations are considered. For the primal splitting formulation, while existence of a solution cannot be guaranteed, it is shown that quasi-minimizers of the penalized problem are asymptotically related to the solution of the original TV-problem. On the other hand, for the pre-dual formulation, a family of parametrized problems is introduced and a parameter dependent contraction of an associated fixed point iteration is established. Moreover, the theory is validated by numerical tests. Additionally, the augmented Lagrangian approach is studied, details on an implementation on a staggered grid are provided and numerical tests are shown. (paper)

  17. Dynamic re-weighted total variation technique and statistical iterative reconstruction method for x-ray CT metal artifact reduction

    Science.gov (United States)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which affect the doctor's diagnosis of disease. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with the statistical iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides, the DRWTV can suppress artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset, which are a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.

  18. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.

  19. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of these methods use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
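
    The logarithmic-barrier idea generalizes beyond PET: constraints x ≥ 0 are absorbed into the objective as −(1/t) Σ log x_i, and t is increased over a sequence of subproblems. A generic sketch for a nonnegative least-squares toy problem, using a simple damped Newton inner solver instead of the paper's PCG; all names and parameters are illustrative.

        import numpy as np

        def barrier_nnls(A, b, t0=1.0, mu=10.0, n_outer=8, n_newton=15):
            # min 0.5*||Ax - b||^2  s.t. x >= 0, via barrier subproblems
            # min 0.5*||Ax - b||^2 - (1/t) * sum(log(x))  for increasing t
            x = np.ones(A.shape[1])
            t = t0
            for _ in range(n_outer):
                for _ in range(n_newton):
                    g = A.T @ (A @ x - b) - 1.0 / (t * x)          # gradient
                    H = A.T @ A + np.diag(1.0 / (t * x**2))        # Hessian
                    dx = np.linalg.solve(H, -g)
                    step = 1.0
                    while np.any(x + step * dx <= 0):              # stay interior
                        step *= 0.5
                    x = x + step * dx
                t *= mu                                            # tighten barrier
            return x

        rng = np.random.default_rng(0)
        A, b = rng.standard_normal((20, 8)), rng.standard_normal(20)
        x = barrier_nnls(A, b)   # nonnegative approximate least-squares solution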

  20. Combined First and Second Order Total Variation Inpainting using Split Bregman

    KAUST Repository

    Papafitsoros, Konstantinos

    2013-07-12

    In this article we discuss the implementation of the combined first and second order total variation inpainting that was introduced by Papafitsoros and Schönlieb. We describe the algorithm we use (split Bregman) in detail, and we give some examples that indicate the difference between pure first and pure second order total variation inpainting.

  1. Combined First and Second Order Total Variation Inpainting using Split Bregman

    KAUST Repository

    Papafitsoros, Konstantinos; Schoenlieb, Carola Bibiane; Sengul, Bati

    2013-01-01

    In this article we discuss the implementation of the combined first and second order total variation inpainting that was introduced by Papafitsoros and Schönlieb. We describe the algorithm we use (split Bregman) in detail, and we give some examples that indicate the difference between pure first and pure second order total variation inpainting.

  2. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Full Text Available Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as the total variation (TV) regularization and the L1 regularization. In order to better reconstruct the piecewise constant and sparse coefficient distributions, TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method on the spatial space and the finite element method on the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method which is equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computation efficiency. By comparing with other imaging reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.

  3. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    Science.gov (United States)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and the total variation and Besov seminorm for the regularization term. To fully exploit the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. This allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.
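
    The Chambolle-Pock algorithm alternates a dual ascent step, projected onto the pointwise unit ball, with a primal proximal step and an extrapolation. A minimal sketch for the plain ROF denoising special case, not the EPRI forward model with curvelets described above; names and parameters are illustrative.

        import numpy as np

        def chambolle_pock_rof(f, lam=0.1, n_iter=200):
            # min_u TV(u) + 1/(2*lam) * ||u - f||_2^2
            # step sizes satisfy sigma*tau*||grad||^2 <= 1 (||grad||^2 <= 8 in 2D)
            f = np.asarray(f, dtype=float)
            tau = sigma = 1.0 / np.sqrt(8.0)
            u = f.copy(); u_bar = f.copy()
            py = np.zeros_like(f); px = np.zeros_like(f)

            def grad(v):
                gy = np.roll(v, -1, 0) - v; gy[-1] = 0       # forward diff, Neumann
                gx = np.roll(v, -1, 1) - v; gx[:, -1] = 0
                return gy, gx

            for _ in range(n_iter):
                gy, gx = grad(u_bar)
                py, px = py + sigma * gy, px + sigma * gx
                scale = np.maximum(1.0, np.sqrt(py**2 + px**2))  # project |p| <= 1
                py, px = py / scale, px / scale
                # divergence (negative adjoint of grad); last row/col of p stay zero
                div = (py - np.roll(py, 1, 0)) + (px - np.roll(px, 1, 1))
                u_old = u
                u = (u + tau * div + (tau / lam) * f) / (1.0 + tau / lam)
                u_bar = 2 * u - u_old                            # extrapolation
            return u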

  4. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be considered as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect for object images with edges and to construct a fast reconstruction scheme, the total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well-posedness of this optimization problem is studied. The minimizer is sought approximately using an iteration process for a series of optimization problems with the Bregman distance as a penalty term. This iterative reconstruction scheme is essentially a new regularizing scheme with the coupling parameter in the cost functional and the iteration stopping time as two regularizing parameters. We give a choice strategy for the regularizing parameters in terms of the noise level of the measurement data, which yields the optimal error estimate on the iterative solution. The series of optimization problems are solved by alternating iteration with an explicit exact solution, and therefore the amount of computation is greatly reduced. Numerical implementations are given to support our theoretical analysis on the convergence rate and to show the significant reconstruction improvements. (paper)

  5. A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags

    Directory of Open Access Journals (Sweden)

    Angela Hsiang-Ling Chen

    2016-09-01

    Full Text Available Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improves outcomes. Based on assumptions and simplification, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or propose solution procedures that are not easily applicable or even feasible in real-life situations. In this study, we solve a more true-to-life and complex model, the multi-mode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified by relying only on information that is available for every project, regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the lower bound obtained was equal to the best-known optimum.

  6. Fractional-Order Total Variation Image Restoration Based on Primal-Dual Algorithm

    OpenAIRE

    Chen, Dali; Chen, YangQuan; Xue, Dingyu

    2013-01-01

    This paper proposes a fractional-order total variation image denoising algorithm based on the primal-dual method, which provides a much more elegant and effective way of treating problems of algorithm implementation, the ill-posed inverse, convergence rate, and blocky effects. The fractional-order total variation model is introduced by generalizing the first-order model, and the corresponding saddle-point and dual formulations are constructed in theory. In order to guarantee O(1/N²) convergence...

  7. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and reconstruction from clinical data sets is far from real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and a nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with an interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
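
    The Barzilai-Borwein step-size heuristic at the core of GPBB admits a compact generic sketch, shown here for an arbitrary smooth objective and without the nonmonotone line search or the TV-specific machinery of the record; all names are illustrative.

        import numpy as np

        def bb_gradient(grad, x0, n_iter=100, alpha0=1e-2):
            # gradient descent with the BB1 step size:
            # alpha_k = (s^T s) / (s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}
            x = x0.copy()
            g = grad(x)
            alpha = alpha0
            for _ in range(n_iter):
                x_new = x - alpha * g
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                sy = s @ y
                alpha = (s @ s) / sy if sy > 1e-12 else alpha0   # safeguard
                x, g = x_new, g_new
            return x

        # usage on a toy least-squares problem 0.5 * ||Ax - b||^2
        rng = np.random.default_rng(1)
        A, b = rng.random((30, 10)), rng.random(30)
        x = bb_gradient(lambda x: A.T @ (A @ x - b), np.zeros(10))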

  8. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging has been widely used in the clinic as an accurate and fast examination for acute ischemic stroke. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach utilizes the higher-order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed other existing approaches in terms of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach yielded accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in cerebral PCT imaging in the low-mAs case.

  9. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.
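
    In our notation (not necessarily the paper's), one plausible form of the joint model described above, with PET forward operator $K$, Poisson-noisy sinogram $f$, image $u$ and idealized sinogram $v$, is
    $$ \min_{u\ge 0,\ v\ge 0}\; \int_{\Sigma} \left( v - f\log v \right)\,\mathrm{d}\sigma \;+\; \alpha\,\mathrm{TV}(v) \;+\; \beta\,\mathrm{TV}(u) \quad \text{subject to } Ku = v, $$
    where the integral is the Kullback-Leibler fidelity associated with Poisson noise and $\alpha,\beta$ weight the sinogram and image penalties; the split Bregman iterations then alternate between the two coupled TV subproblems.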

  10. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    Science.gov (United States)

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and then transferring gradients from the corresponding visible image to the result. Gradient transfer suffers from the problems of low dynamic range and detail loss because it ignores the intensity from the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that, in both qualitative and quantitative tests, the new algorithm achieves better fusion results with higher computational efficiency than gradient transfer and most state-of-the-art methods.

  11. Radar correlated imaging for extended target by the combination of negative exponential restraint and total variation

    Science.gov (United States)

    Qian, Tingting; Wang, Lianlian; Lu, Guanghua

    2017-07-01

    Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of a complex extended target, which keeps the quality of the recovered result from being truly satisfactory; thus a novel combined negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The alternating direction method of multipliers, with proven convergence, is applied to solve the new problem. Experimental results show that the proposed algorithm achieves efficient high-resolution imaging for extended targets.

  12. Image Restoration Based on the Hybrid Total-Variation-Type Model

    OpenAIRE

    Shi, Baoli; Pang, Zhi-Feng; Yang, Yu-Fei

    2012-01-01

    We propose a hybrid total-variation-type model for the image restoration problem based on combining advantages of the ROF model with the LLT model. Since the two ${L}^{1}$-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method call...

  13. Decision-Based Marginal Total Variation Diffusion for Impulsive Noise Removal in Color Images

    Directory of Open Access Journals (Sweden)

    Hongyao Deng

    2017-01-01

    Full Text Available Impulsive noise removal for color images usually employs the vector median filter, the switching median filter, the total variation L1 method, and their variants. These approaches, however, often introduce excessive smoothing and can result in extensive blurring of visual features, and thus are suitable only for images with low-density noise. A marginal method to reduce impulsive noise that overcomes this limitation is proposed in this paper, based on the following facts: (i) each channel in a color image is contaminated independently, and the contaminating components are independent and identically distributed; (ii) in a natural image the gradients of the different components of a pixel are similar to one another. The method divides components into different categories based on their noise characteristics. If an image is corrupted by salt-and-pepper noise, the components are divided into corrupted and noise-free components; if the image is corrupted by random-valued impulses, the components are divided into corrupted, noise-free, and possibly corrupted components. Components falling into different categories are processed differently. If a component is corrupted, modified total variation diffusion is applied; if it is possibly corrupted, scaled total variation diffusion is applied; otherwise, the component is left unchanged. Simulation results demonstrate its effectiveness.

  14. A Primal-Dual Approach for a Total Variation Wasserstein Flow

    KAUST Repository

    Benning, Martin; Calatroni, Luca; Dü ring, Bertram; Schö nlieb, Carola-Bibiane

    2013-01-01

    We consider a nonlinear fourth-order diffusion equation that arises in denoising of image densities. We propose an implicit time-stepping scheme that employs a primal-dual method for computing the subgradient of the total variation seminorm. The constraint on the dual variable is relaxed by adding a penalty term, depending on a parameter that determines the weight of the penalisation. The paper is furnished with some numerical examples showing the denoising properties of the model considered. © 2013 Springer-Verlag.

  15. Adaptive Second-Order Total Variation: An Approach Aware of Slope Discontinuities

    KAUST Repository

    Lenzen, Frank; Becker, Florian; Lellmann, Jan

    2013-01-01

    Total variation (TV) regularization, originally introduced by Rudin, Osher and Fatemi in the context of image denoising, has become widely used in the field of inverse problems. Two major directions of modifications of the original approach were proposed later on. The first concerns adaptive variants of TV regularization, the second focuses on higher-order TV models. In the present paper, we combine the ideas of both directions by proposing adaptive second-order TV models, including one anisotropic model. Experiments demonstrate that introducing adaptivity results in an improvement of the reconstruction error. © 2013 Springer-Verlag.

  16. A Weighted Difference of Anisotropic and Isotropic Total Variation for Relaxed Mumford-Shah Image Segmentation

    Science.gov (United States)

    2016-05-01

    We incorporate a weighted difference of anisotropic and isotropic total variation (TV) norms into a relaxed formulation of the two-phase Mumford-Shah (MS) model for image segmentation. We show results exceeding those obtained by the MS model when using the standard TV norm to regularize partition boundaries. In particular, examples illustrating...
    [Figure caption fragment: the TV norm does not capture the geometry completely; L1−L2 in (c) does a better job than TV, while L1 in (b) and L1−0.5L2 in (d) capture the squares most...]

  17. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization

    DEFF Research Database (Denmark)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-01-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum... We developed the new transmission processing with total variation (TXTV) method, which introduces scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Results: Comparing MAP-TR and the new TXTV with gold standard CT-based attenuation correction, we found that TXTV has less bias as compared to MAP-TR. We also compared images acquired at the HRRT...

  18. Accelerating cross-validation with total variation and its application to super-resolution imaging.

    Directory of Open Access Journals (Sweden)

    Tomoyuki Obuchi

    Full Text Available We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ1-norm and total variation terms, based on a perturbative expansion that exploits the large dimensionality of both the data and the model. The developed formula allows us to reduce the computational cost of CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.

  19. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization.

    Science.gov (United States)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-09-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum and pons in HRRT brain images has been reported. The two main sources of the problem with MAP-TR are poor bone/soft tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method, which introduces scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Comparing MAP-TR and the new TXTV with gold standard CT-based attenuation correction, we found that TXTV has less bias as compared to MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to the GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of biases. TXTV-based reconstruction is recommended for human brain scans on the HRRT.

  20. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing breaks through the Nyquist sampling theorem and provides a strong theoretical foundation for carrying out sampling and compression of image signals simultaneously. In imaging procedures using compressed sensing theory, not only is the storage space reduced, but the demand on detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of the inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressed reconstruction of two-dimensional images and recover edge information well. To verify the performance and stability of the algorithm, the reconstruction results of the TV-based algorithm are analyzed under different coding modes, and typical reconstruction algorithms are compared under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimum is computed by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
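
    For reference, the constrained TV model that underlies this line of work, with measurement matrix $\Phi$ and measurements $y$ (our generic notation), and the augmented Lagrangian whose alternating minimization the abstract refers to, can be written as
    $$ \min_{x}\ \mathrm{TV}(x) \quad \text{s.t. } \Phi x = y, \qquad \mathcal{L}_{\mu}(x,\lambda) = \mathrm{TV}(x) + \lambda^{\top}(\Phi x - y) + \tfrac{\mu}{2}\,\lVert \Phi x - y \rVert_{2}^{2}, $$
    with the multiplier $\lambda$ updated after each sweep of alternating minimizations over the primal variables.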

  1. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available The super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
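
    For orientation, the Euler-Lagrange equation of a generic TV-regularized reconstruction model $\min_u \tfrac12\lVert Au-y\rVert_2^2+\lambda\,\mathrm{TV}(u)$ (our generic notation; in the video setting $A$ would stack the warping, blurring and downsampling operators of the individual frames) reads
    $$ A^{\top}(Au - y) \;-\; \lambda\, \nabla \cdot \left( \frac{\nabla u}{\lvert \nabla u \rvert_{\varepsilon}} \right) = 0, \qquad \lvert \nabla u \rvert_{\varepsilon} = \sqrt{\lvert \nabla u \rvert^{2} + \varepsilon^{2}}, $$
    and the fixed-point (lagged-diffusivity) iteration freezes $\lvert \nabla u \rvert_{\varepsilon}$ at the previous iterate, leaving a linear system that a preconditioned solver handles efficiently.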

  2. Image Restoration Based on the Hybrid Total-Variation-Type Model

    Directory of Open Access Journals (Sweden)

    Baoli Shi

    2012-01-01

    Full Text Available We propose a hybrid total-variation-type model for the image restoration problem based on combining advantages of the ROF model with the LLT model. Since the two L1-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method called the proximal point method (PPM) is proposed and the convergence of the proposed method is proved. Some numerical results demonstrate the viability and efficiency of the proposed model and methods.

  3. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Full Text Available Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the total variation algorithm has attracted considerable attention due to its ability to handle large piecewise and discontinuous conductivity distributions. In industrial process tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with less than 1% error, and a 4D image for 3D velocity profiling shows an error of 4%.

  4. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  5. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ∼360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  6. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
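
    The forward-backward splitting scheme described in both records above alternates a gradient step on the data-fidelity term with a proximal (TV-denoising) step. Below is a small CPU sketch of ours, not the authors' GPU code: A and At stand for an abstract projection operator and its adjoint, the multigrid component is omitted, and the TV proximal step is approximated by a few iterations of Chambolle's dual projection.

    import numpy as np

    def grad2d(u):
        # Forward differences with Neumann boundary conditions.
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div2d(px, py):
        # Negative adjoint of grad2d.
        d = np.zeros_like(px)
        d[0, :] += px[0, :]; d[1:-1, :] += px[1:-1, :] - px[:-2, :]; d[-1, :] -= px[-2, :]
        d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
        return d

    def tv_prox(f, lam, iters=30, tau=0.125):
        # Chambolle's dual projection for min_u 0.5||u-f||^2 + lam*TV(u);
        # tau <= 1/8 guarantees convergence.
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(iters):
            gx, gy = grad2d(div2d(px, py) - f / lam)
            norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)
            px = (px + tau * gx) / norm
            py = (py + tau * gy) / norm
        return f - lam * div2d(px, py)

    def fbs_reconstruct(A, At, b, x0, lam, step, iters=50):
        # Forward-backward splitting for 0.5||Ax-b||^2 + lam*TV(x);
        # step should satisfy step <= 1/||A||^2 for convergence.
        x = x0.copy()
        for _ in range(iters):
            x = tv_prox(x - step * At(A(x) - b), step * lam)
        return x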

  7. Blind image fusion for hyperspectral imaging with the directional total variation

    Science.gov (United States)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate for possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.

  8. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other simultaneously developed approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.

  9. Robust bladder image registration by redefining data-term in total variational approach

    Science.gov (United States)

    Ali, Sharib; Daul, Christian; Galbrun, Ernest; Amouroux, Marine; Guillemin, François; Blondel, Walter

    2015-03-01

    Cystoscopy is the standard procedure for clinical diagnosis of bladder cancer. Bladder carcinomas in situ are often multifocal and spread over large areas. In vivo localization and follow-up of these tumors and their nearby sites is necessary. However, due to the small field of view (FOV) of cystoscopic video images, urologists cannot easily interpret the scene. Bladder mosaicing using image registration facilitates this interpretation through the visualization of entire lesions with respect to anatomical landmarks. The reference white light (WL) modality is affected by strong variability in terms of texture, illumination conditions and motion blur. Moreover, in the complementary fluorescence light (FL) modality, the texture is visually different from that of the WL. Existing algorithms were developed for a particular modality and scene conditions. This paper proposes a more general on-the-fly image registration approach for dealing with these variability issues in cystoscopy. To do so, we present a novel, robust and accurate image registration scheme by redefining the data-term of the classical total variational (TV) approach. Quantitative results on realistic bladder phantom images are used for verifying the accuracy and robustness of the proposed model. The method is also qualitatively assessed by mosaicing patient data for both WL and FL modalities.

  10. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, which provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, and distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns, better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply this method to fMRI data for the first time, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080

  11. A function space framework for structural total variation regularization with applications in inverse problems

    Science.gov (United States)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  12. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    Science.gov (United States)

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work TV regularization problem is addressed by ADAL which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix vector multiplications and a linear time shrinkage operation. Comparison of experimental results indicate that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
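
    The "linear time shrinkage operation" counted above is ordinary soft-thresholding. For reference, a sketch of ours (not the paper's code), in both the scalar form and the isotropic form used for TV gradient fields:

    import numpy as np

    def shrink(z, kappa):
        # Closed-form minimizer of kappa*|w| + 0.5*(w - z)^2, elementwise.
        return np.sign(z) * np.maximum(np.abs(z) - kappa, 0.0)

    def shrink_iso(zx, zy, kappa):
        # Isotropic variant: shrink the gradient magnitude, keep direction.
        mag = np.sqrt(zx**2 + zy**2)
        scale = np.maximum(mag - kappa, 0.0) / np.maximum(mag, 1e-12)
        return zx * scale, zy * scale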

  13. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  14. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove noise but penalize the gradients corresponding to edges too heavily. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into an optimization problem over a functional involving a fidelity term on the image data plus a regularization term. Infrared image restoration with the TV-L1 model exploits the remote sensing data fully and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive image data computations are performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis of the restored image quality compared to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably and that its performance meets the requirements of real-time image processing.

  15. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied to nonlinear signal reconstruction. For quality checks, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction, such as massive undersampling of the number of projections. Projection matrix errors of up to 1° deviation in projection angle are still within the tolerance level. Single defect pixels produce ring artifacts for each method; however, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low-photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose imaging, especially for daily patient localization in radiation therapy, is possible without changing the current hardware of the imaging system. (paper)

  16. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study

    Science.gov (United States)

    Chen, Yingxuan; Yin, Fang-Fang; Zhang, Yawei; Zhang, You; Ren, Lei

    2018-04-01

    Purpose: compressed sensing reconstruction using total variation (TV) tends to over-smooth the edge information by uniformly penalizing the image gradient. The goal of this study is to develop a novel prior contour based TV (PCTV) method to enhance the edge information in compressed sensing reconstruction for CBCT. Methods: the edge information is extracted from the prior planning-CT via edge detection. The prior CT is first registered with the on-board CBCT reconstructed with the TV method through rigid or deformable registration. The edge contours in the prior CT are then mapped to the CBCT and used as the weight map for TV regularization to enhance edge information in CBCT reconstruction. The PCTV method was evaluated using the extended-cardiac-torso (XCAT) phantom, the physical CatPhan phantom and brain patient data. Results were compared with both the TV and edge-preserving TV (EPTV) methods, which are commonly used for limited-projection CBCT reconstruction. Relative error was used to quantify pixel value differences, and edge cross correlation was defined as the similarity of edge information between the reconstructed images and the ground truth in the quantitative evaluation. Results: compared to TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in the XCAT reconstruction and of complex bony structures in the brain patient CBCT. In the XCAT study using 45 half-fan CBCT projections, compared with the ground truth, relative errors were 1.5%, 0.7% and 0.3% and edge cross correlations were 0.66, 0.72 and 0.78 for TV, EPTV and PCTV, respectively. PCTV is more robust to reduction of the projection number. Edge enhancement was reduced slightly with noisy projections, but PCTV was still superior to the other methods. PCTV can maintain resolution while reducing the noise in the low-mAs CatPhan reconstruction. Low-contrast edges were preserved better with PCTV compared with TV and EPTV. Conclusion: PCTV preserved edge information as well as reduced streak artifacts and noise in low-dose CBCT reconstruction.

  17. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.
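
    In generic form (our notation), the energy being minimized combines a heavy-tailed log-likelihood data term with a TV prior over the pixel grid,
    $$ E(u) \;=\; \sum_{s} -\log p\left(v_s \mid u_s\right) \;+\; \beta \sum_{s\sim t} \lvert u_s - u_t \rvert, $$
    where $v$ is the observed speckled image, $u$ the regularized image, and $s\sim t$ ranges over neighboring pixel pairs; the graph-cut moves discussed above explore large sets of candidate relabelings of $u$ in a single combinatorial step.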

  18. Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise in 3D Reconstruction from Optical Projection Tomography

    Czech Academy of Sciences Publication Activity Database

    Michálek, Jan

    2015-01-01

    Vol. 21, No. 6 (2015), pp. 1602-1615 ISSN 1431-9276 R&D Projects: GA MŠk(CZ) LH13028; GA ČR(CZ) GA13-12412S Institutional support: RVO:67985823 Keywords: optical projection tomography * microscopy * artifacts * total variation * data mismatch Subject RIV: EA - Cell Biology Impact factor: 1.730, year: 2015

  19. Improved total variation-based CT image reconstruction applied to clinical data

    Energy Technology Data Exchange (ETDEWEB)

    Ritschl, Ludwig; Bergner, Frank; Kachelriess, Marc [Institute of Medical Physics (IMP), University of Erlangen-Nuernberg, Henkestr. 91, 91052 Erlangen (Germany); Fleischmann, Christof, E-mail: ludwig.ritschl@imp.uni-erlangen.de [Ziehm Imaging GmbH, Donaustrasse 31, 90451 Nuernberg (Germany)

    2011-03-21

    In computed tomography there are different situations where reconstruction has to be performed with limited raw data. In the past few years it has been shown that algorithms based on compressed sensing theory are able to handle incomplete datasets quite well. As a cost function these algorithms use the l1-norm of the image after it has been transformed by a sparsifying transformation. This yields an inequality-constrained convex optimization problem. Due to the large size of the optimization problem, some heuristic optimization algorithms have been proposed in the past few years. The most popular way is to optimize the raw data and sparsity cost functions separately in an alternating manner. In this paper we follow this strategy and present a new method to adapt these optimization steps. Compared to existing methods which perform similarly, the proposed method needs no a priori knowledge about the raw data consistency. It is ensured that the algorithm converges to the lowest possible value of the raw data cost function while holding the sparsity constraint at a low value. This is achieved by transferring the step-size determination of both optimization procedures into the raw data domain, where they are adapted to each other. To evaluate the algorithm, we process measured clinical datasets. To cover a wide field of possible applications, we focus on the problems of angular undersampling, data loss due to metal implants, limited-view-angle tomography and interior tomography. In all cases the presented method reaches convergence within fewer than 25 iteration steps while using a constant set of algorithm control parameters. The image artifacts caused by incomplete raw data are mostly removed without introducing new effects like staircasing. All scenarios are compared to an existing implementation of the ASD-POCS algorithm, which realizes the step-size adaptation in a different way. Additional prior information as proposed by the PICCS algorithm

  20. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    Science.gov (United States)

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
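
    To make the estimated quantity concrete: on a state space small enough to enumerate, the total variation distance and its decay along a chain can be computed exactly. The toy example below is ours, not the authors' Monte Carlo estimator, whose point is precisely to handle spaces (such as spaces of phylogenetic trees) where this enumeration is impossible.

    import numpy as np

    def tv_distance(mu, nu):
        # Total variation distance between two discrete distributions.
        return 0.5 * np.abs(mu - nu).sum()

    # Two-state chain converging to its stationary distribution pi.
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    pi = np.array([2/3, 1/3])        # satisfies pi @ P == pi
    mu = np.array([1.0, 0.0])        # chain started in state 0
    for n in range(1, 6):
        mu = mu @ P
        print(n, tv_distance(mu, pi))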

  1. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions, which also illustrates with an example how to reason abductively with constrained entailment in a causal or action-oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  2. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin; Guermond, Jean-Luc; Popov, Bojan

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced

  3. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    Science.gov (United States)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
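
    Schematically (our notation), the iteratively reweighted least-squares step replaces the non-differentiable TV term by a weighted quadratic refreshed at each iteration $k$,
    $$ \mathrm{TV}(m) \;\approx\; \sum_i w_i^{(k)} \big[(\nabla m)_i\big]^2, \qquad w_i^{(k)} = \Big( \big[(\nabla m^{(k)})_i\big]^2 + \epsilon^2 \Big)^{-1/2}, $$
    with $\epsilon$ a small focusing parameter, so that each iteration reduces to a regularized least-squares solve; this is the step where the randomized generalized singular value decomposition substitutes for the classical decomposition.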

  4. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  5. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  6. Sequential Total Variation Denoising for the Extraction of Fetal ECG from Single-Channel Maternal Abdominal ECG.

    Science.gov (United States)

    Lee, Kwang Jin; Lee, Boreom

    2016-07-01

    Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR.
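
    The inner building block, 1D total variation denoising, is simple to sketch. The following projected-gradient solver on the dual problem is our own minimal version; the authors' sequential pipeline, R-peak detection and FHR estimation are not reproduced here.

    import numpy as np

    def tv1d_denoise(f, lam, iters=300, tau=0.25):
        # Solves min_u 0.5*||u - f||^2 + lam * sum_i |u[i+1] - u[i]|
        # by projected gradient ascent on the dual (tau <= 1/4).
        p = np.zeros(len(f) - 1)       # dual variable, one per jump
        u = f.copy()
        for _ in range(iters):
            u = f.copy()               # u = f - D^T p
            u[:-1] += p
            u[1:] -= p
            p = np.clip(p + tau * np.diff(u), -lam, lam)
        return u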

  7. Low-dose 4D cone-beam CT via joint spatiotemporal regularization of tensor framelet and nonlocal total variation

    Science.gov (United States)

    Han, Hao; Gao, Hao; Xing, Lei

    2017-08-01

    Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.

  8. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  9. Coherent states in constrained systems

    International Nuclear Information System (INIS)

    Nakamura, M.; Kojima, K.

    2001-01-01

    When quantizing constrained systems, quantum corrections often arise from the non-commutativity encountered when re-ordering constraint operators in products of operators. For bosonic second-class constraints, furthermore, the quantum corrections caused by the uncertainty principle should be taken into account. In order to treat these corrections simultaneously, an alternative projection technique for operators is proposed, introducing the available minimal-uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections are investigated

  10. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm for solving the associated linear programming problem is presented. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.
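
    Schematically, and leaving the paper's exact discretization aside, the model minimizes the total variation of the gradient subject to interpolation of the data:

        \min_u \; \int_\Omega |\nabla(\nabla u)| \, dx
            \quad \text{subject to} \quad u(x_i) = z_i \ \text{at the data points } x_i.

    After discretization, writing each absolute value |t| as t^+ + t^- with t = t^+ - t^- and t^± ≥ 0 yields the linear program that the interior-point algorithm solves.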

  11. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation

    Directory of Open Access Journals (Sweden)

    Gang Wang

    2018-05-01

    Full Text Available With the spread of the coal mine Internet of Things (IoT), mobile measurement devices, such as intelligent mine lamps, generate increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. By taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.

  12. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation.

    Science.gov (United States)

    Wang, Gang; Zhao, Zhikai; Ning, Yongjie

    2018-05-28

    With the spread of the coal mine Internet of Things (IoT), mobile measurement devices, such as intelligent mine lamps, generate increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. By taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
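
    Generically, a compressed sensing scheme of this kind takes a small number of aggregate measurements along the multi-hop routes and reconstructs the full data at the sink by exploiting sparsity (a schematic formulation, not the exact TVS-MH model):

        y = \Phi x \ \ (\text{compressed measurements, } m \ll n),
        \qquad
        \hat{x} = \arg\min_x \ \mathrm{TV}(x)
        \ \ \text{s.t.} \ \ \|\Phi x - y\|_2 \le \epsilon.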

  13. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    Full Text Available For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
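
    For orientation, the classical l1 exact penalty for min f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0 reads

        P_c(x) = f(x) + c \sum_i \max\{0, g_i(x)\} + c \sum_j |h_j(x)|,

    where, for all sufficiently large but finite c, local minimizers of P_c are local minimizers of the constrained problem. The functions proposed in the paper differ in that they add an extra variable controlling the penalty weight, which buys the improved smoothness; the classical form above is shown only as a reference point.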

  14. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing the general constrained optimization problems using evolutionary algorithms. Broadly the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining lots of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  15. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  16. Taxonomic minimalism.

    Science.gov (United States)

    Beattie, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  17. Sparseness- and continuity-constrained seismic imaging

    Science.gov (United States)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to deal with substantial amounts of additive noise (SNR < 0 dB) by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam, carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from NSERC Discovery Grant 22R81254.]
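
    An illustrative form of such a joint objective (our sketch, not the authors' exact functional) is

        \min_c \; \tfrac{1}{2}\|d - K S c\|_2^2 + \alpha \|c\|_1 + \beta \,\mathrm{TV}(S c),

    where d is the data, K the modeling operator, S the synthesis operator of the directional basis (so the reflectivity is m = S c), and α, β weight sparseness on the coefficients against continuity of the reflectivity along reflectors.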

  18. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do... these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problems, we examine, via a design game, the creative communication on an open-ended task in a highly constrained setting. Via an experiment, the relation... between communicative constraints and participants’ perception of dialogue and creativity is examined. Four batches of the game, with students preparing to form semester project groups, were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except...

  19. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  20. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with an over-constrained bandwidth and total variation (TV) regularization to recover a clear image from adaptive optics (AO) corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to be less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.
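
    In outline, such a MAP estimate alternates between the object o and the PSF h in a problem of the following shape (a schematic form; the paper's priors and constraint handling are more specific):

        \min_{o,\,h} \; \tfrac{1}{2}\|i - h * o\|_2^2 + \lambda \,\mathrm{TV}(o)
            \quad \text{s.t.} \quad \hat{h}(f) = 0 \ \text{for} \ |f| > f_c,

    where i is the observed image, * denotes convolution, and f_c is the cutoff frequency of the optical system that over-constrains the PSF bandwidth.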

  1. Approximate error conjugate gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
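
    The idea of the first embodiment can be sketched in a few lines of Python; the helper below is hypothetical (names, signature, and the least-squares setting are ours) and shows an approximate error computed from a subset of rays driving the exact 1D minimization along a conjugate direction:

        import numpy as np

        def subset_cg_step(A, b, x, d, rows):
            # Approximate the error using only a subset of rays (rows of A),
            # then take the minimizing step along the conjugate direction d.
            As, bs = A[rows], b[rows]
            r = As @ x - bs                  # approximate error on the subset
            Ad = As @ d
            alpha = -(r @ Ad) / (Ad @ Ad)    # 1D minimizer along d on the subset
            return x + alpha * d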

  2. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  3. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffuseness reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given

  4. Minimal modification to tribimaximal mixing

    International Nuclear Information System (INIS)

    He Xiaogang; Zee, A.

    2011-01-01

    We explore some ways of minimally modifying the neutrino mixing matrix from tribimaximal, characterized by introducing at most one mixing angle and a CP violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group theoretic considerations, is a simple case with the elements V_α2 of the second column in the mixing matrix equal to 1/√(3). Modifications keeping one of the columns or one of the rows unchanged from tribimaximal mixing all belong to this class of minimal modification. Some of the cases have interesting experimentally testable consequences. In particular, the T2K and MINOS collaborations have recently reported indications of a nonzero θ_13. For the cases we consider, the new data sharply constrain the CP violating phase angle δ, with δ close to 0 (in some cases) and π disfavored.

  5. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q. This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different...

  6. Minimal mirror twin Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Barbieri, Riccardo [Institute of Theoretical Studies, ETH Zurich,CH-8092 Zurich (Switzerland); Scuola Normale Superiore,Piazza dei Cavalieri 7, 56126 Pisa (Italy); Hall, Lawrence J.; Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States)

    2016-11-29

    In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z{sub 2} parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z{sub 2} breaking, can generate the Z{sub 2} breaking in the Higgs sector necessary for the Twin Higgs mechanism. The theory has constrained and correlated signals in Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments, over a region of parameter space where the fine-tuning for the electroweak scale is 10-50%. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z{sub 2} breaking from the vacuum expectation values of B−L breaking fields are also discussed.

  7. WE-EF-207-08: Improve Cone Beam CT Using a Synchronized Moving Grid, An Inter-Projection Sensor Fusion and a Probability Total Variation Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, H; Kong, V; Jin, J [Georgia Regents University Cancer Center, Augusta, GA (Georgia); Ren, L; Zhang, Y; Giles, W [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: To present a cone beam computed tomography (CBCT) system, which uses a synchronized moving grid (SMOG) to reduce and correct scatter, an inter-projection sensor fusion (IPSF) algorithm to estimate the missing information blocked by the grid, and a probability total variation (pTV) algorithm to reconstruct the CBCT image. Methods: A prototype SMOG-equipped CBCT system was developed and used to acquire gridded projections with complementary grid patterns in two neighboring projections. Scatter was reduced by the grid, and the remaining scatter was corrected by measuring it under the grid. An IPSF algorithm was used to estimate the missing information in a projection from data in its two neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was used to reconstruct the initial CBCT image using projections after IPSF processing for pTV. A probability map was generated depending on the confidence of estimation in IPSF for the regions of missing data and penumbra. pTV was finally used to reconstruct the CBCT image for a Catphan phantom, and was compared to the conventional CBCT image without SMOG, to images without IPSF (SMOG + FDK and SMOG + mask-TV), and to an image without pTV (SMOG + IPSF + FDK). Results: The conventional CBCT without SMOG shows apparent scatter-induced cup artifacts. The approaches with SMOG but without IPSF show severe (SMOG + FDK) or additional (SMOG + mask-TV) artifacts, possibly due to using projections with missing data. The two approaches with SMOG + IPSF remove the cup artifacts, and the pTV approach is superior to FDK, substantially reducing the noise. Using the SMOG also halves the imaging dose. Conclusion: The proposed technique is promising in improving CBCT image quality while reducing imaging dose.

  8. WE-EF-207-08: Improve Cone Beam CT Using a Synchronized Moving Grid, An Inter-Projection Sensor Fusion and a Probability Total Variation Reconstruction

    International Nuclear Information System (INIS)

    Zhang, H; Kong, V; Jin, J; Ren, L; Zhang, Y; Giles, W

    2015-01-01

    Purpose: To present a cone beam computed tomography (CBCT) system, which uses a synchronized moving grid (SMOG) to reduce and correct scatter, an inter-projection sensor fusion (IPSF) algorithm to estimate the missing information blocked by the grid, and a probability total variation (pTV) algorithm to reconstruct the CBCT image. Methods: A prototype SMOG-equipped CBCT system was developed and used to acquire gridded projections with complementary grid patterns in two neighboring projections. Scatter was reduced by the grid, and the remaining scatter was corrected by measuring it under the grid. An IPSF algorithm was used to estimate the missing information in a projection from data in its two neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was used to reconstruct the initial CBCT image using projections after IPSF processing for pTV. A probability map was generated depending on the confidence of estimation in IPSF for the regions of missing data and penumbra. pTV was finally used to reconstruct the CBCT image for a Catphan phantom, and was compared to the conventional CBCT image without SMOG, to images without IPSF (SMOG + FDK and SMOG + mask-TV), and to an image without pTV (SMOG + IPSF + FDK). Results: The conventional CBCT without SMOG shows apparent scatter-induced cup artifacts. The approaches with SMOG but without IPSF show severe (SMOG + FDK) or additional (SMOG + mask-TV) artifacts, possibly due to using projections with missing data. The two approaches with SMOG + IPSF remove the cup artifacts, and the pTV approach is superior to FDK, substantially reducing the noise. Using the SMOG also halves the imaging dose. Conclusion: The proposed technique is promising in improving CBCT image quality while reducing imaging dose

  9. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....

  10. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
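
    A brute-force toy version conveys the two-level idea (this is not the authors' estimator, which builds on the maximum rank correlation estimator; the grid search below is exponential in the number of predictors and is for illustration only): candidate weights are ranked first by ordinal fit, with metric fit breaking ties.

        import numpy as np
        from itertools import product
        from scipy.stats import kendalltau

        def oclo_toy(X, y, grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
            # Order first: maximize Kendall's tau between predictions and y;
            # metric second: among tau-optimal weights, minimize squared error.
            best_key, best_w = None, None
            for w in product(grid, repeat=X.shape[1]):
                pred = X @ np.array(w)
                tau, _ = kendalltau(pred, y)
                if np.isnan(tau):            # skip constant predictions
                    continue
                sse = float(np.sum((pred - y) ** 2))
                key = (-tau, sse)            # lexicographic: tau, then SSE
                if best_key is None or key < best_key:
                    best_key, best_w = key, np.array(w)
            return best_w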

  11. Early cosmology constrained

    Energy Technology Data Exchange (ETDEWEB)

    Verde, Licia; Jimenez, Raul [Institute of Cosmos Sciences, University of Barcelona, IEEC-UB, Martí Franquès, 1, E08028 Barcelona (Spain); Bellini, Emilio [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Pigozzo, Cassio [Instituto de Física, Universidade Federal da Bahia, Salvador, BA (Brazil); Heavens, Alan F., E-mail: liciaverde@icc.ub.edu, E-mail: emilio.bellini@physics.ox.ac.uk, E-mail: cpigozzo@ufba.br, E-mail: a.heavens@imperial.ac.uk, E-mail: raul.jimenez@icc.ub.edu [Imperial Centre for Inference and Cosmology (ICIC), Imperial College, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2017-04-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter Ω{sub MR} < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < N {sub eff} < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is r {sub s} = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to r {sub s} = 150 ± 5 Mpc.

  12. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  13. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  14. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  15. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...... complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices....

  16. Constraining the dark side with observations

    International Nuclear Information System (INIS)

    Diez-Tejedor, Alberto

    2007-01-01

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We will work within the Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of one unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we will use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies, we will see that we are restricted to the use of purely kinetic actions, previously studied in cosmology by Scherrer. Finally, we arrive at a simple action which fits both cosmological and astrophysical observations

  17. Constraining the dark side with observations

    Energy Technology Data Exchange (ETDEWEB)

    Diez-Tejedor, Alberto [Dpto. de Fisica Teorica, Universidad del PaIs Vasco, Apdo. 644, 48080, Bilbao (Spain)

    2007-05-15

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We will work within the Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of one unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we will use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies, we will see that we are restricted to the use of purely kinetic actions, previously studied in cosmology by Scherrer. Finally, we arrive at a simple action which fits both cosmological and astrophysical observations.

  18. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    Hamel, D.; Mensah, S.; Boisvert, J.

    1984-03-01

    The concept of pole placement plays an important role in linear, multi-variable, control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical and engineering constraints such as gain limitation and controller structure to be introduced right into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function which must be closed-loop minimized. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated to design controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure on overall performance of a control system

  19. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  20. Multivariable controller for discrete stochastic amplitude-constrained systems

    Directory of Open Access Journals (Sweden)

    Hannu T. Toivonen

    1983-04-01

    Full Text Available A sub-optimal multivariable controller for discrete stochastic amplitude-constrained systems is presented. In this approach the regulator structure is restricted to the class of linear saturated feedback laws. The stationary covariances of the controlled system are evaluated by approximating the stationary probability distribution of the state by a Gaussian distribution. An algorithm for minimizing a quadratic loss function is given, and examples are presented to illustrate the performance of the sub-optimal controller.
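
    The controller class under study (linear feedback passed through an amplitude saturation) is easy to simulate; the sketch below uses invented names and a Gaussian disturbance purely for illustration:

        import numpy as np

        def simulate_saturated_feedback(A, B, K, u_max, x0, n_steps, noise_std=0.1, seed=0):
            # Simulate x+ = A x + B sat(K x) + w for a saturated linear feedback law.
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            traj = [x.copy()]
            for _ in range(n_steps):
                u = np.clip(K @ x, -u_max, u_max)   # amplitude-constrained control
                x = A @ x + B @ u + noise_std * rng.standard_normal(x.size)
                traj.append(x.copy())
            return np.array(traj)

    Sample covariances of such trajectories are what the Gaussian approximation in the paper estimates analytically.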

  1. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level (custodial technicolor) and argue that these models...

  2. Constrained Sypersymmetric Flipped SU (5) GUT Phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John; /CERN /King's Coll. London; Mustafayev, Azar; /Minnesota U., Theor. Phys. Inst.; Olive, Keith A.; /Minnesota U., Theor. Phys. Inst. /Minnesota U. /Stanford U., Phys. Dept. /SLAC

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M{sub in}, above the GUT scale, M{sub GUT}. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino {chi} and the lighter stau {tilde {tau}}{sub 1} is sensitive to M{sub in}, as is the relationship between m{sub {chi}} and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m{sub 1/2}, m{sub 0}) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M{sub in}, as we illustrate for several cases with tan {beta} = 10 and 55. However, these features do not necessarily disappear at large M{sub in}, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  3. Constrained supersymmetric flipped SU(5) GUT phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [CERN, TH Division, PH Department, Geneva 23 (Switzerland); King's College London, Theoretical Physics and Cosmology Group, Department of Physics, London (United Kingdom); Mustafayev, Azar [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Olive, Keith A. [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Stanford University, Department of Physics and SLAC, Palo Alto, CA (United States)

    2011-07-15

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M{sub in}, above the GUT scale, M{sub GUT}. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino {chi} and the lighter stau {tau}{sub 1} is sensitive to M{sub in}, as is the relationship between m{sub {chi}} and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m{sub 1/2},m{sub 0}) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M{sub in}, as we illustrate for several cases with tan {beta}=10 and 55. However, these features do not necessarily disappear at large M{sub in}, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  4. Constrained supersymmetric flipped SU(5) GUT phenomenology

    International Nuclear Information System (INIS)

    Ellis, John; Mustafayev, Azar; Olive, Keith A.

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  5. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  6. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  7. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

    DoD Logistics Initiative - Hexavalent Chromium Minimization; Non-Chrome Primer. Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy.

  8. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N = 4 Super Yang-Mills which breaks to N = 1 via the electroweak interactions. This is a well defined, economical..., between unparticle physics and Minimal Walking Technicolor. We consider also other N = 1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass.

  9. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  10. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  11. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

    2011-01-01

    A Laguerre minimal surface is an immersed surface in ℝ^3 being an extremal of the functional ∫ (H^2/K - 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed.

  12. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
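
    A compact sketch of the iteration, written for a least-squares instance (function and variable names are ours, not the paper's):

        import numpy as np

        def stochastic_heavy_ball(A, b, stepsize=0.1, beta=0.5, n_iter=10_000, seed=0):
            # SGD with a fixed stepsize plus a heavy ball momentum term,
            # for the quadratic loss 0.5 * E_i (a_i^T x - b_i)^2.
            rng = np.random.default_rng(seed)
            n, d = A.shape
            x = np.zeros(d)
            x_prev = x.copy()
            for _ in range(n_iter):
                i = rng.integers(n)                     # sample one data point
                g = (A[i] @ x - b[i]) * A[i]            # stochastic gradient
                x, x_prev = x - stepsize * g + beta * (x - x_prev), x
            return x

    The beta * (x - x_prev) term is the heavy ball amendment; the linear convergence result concerns the expected loss of exactly this kind of fixed-stepsize iteration.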

  13. Geometric Total Variation for Texture Deformation

    DEFF Research Database (Denmark)

    Bespalov, Dmitriy; Dahl, Anders Lindbjerg; Shokoufandeh, Ali

    2010-01-01

    In this work we propose a novel variational method that we intend to use for estimating non-rigid texture deformation. The method is able to capture variation in grayscale images with respect to the geometry of its features. Our experimental evaluations demonstrate that accounting for the geometry... of features in texture images leads to significant improvements in localization of these features, when textures undergo geometrical transformations. Accurate localization of features in the presence of unknown deformations is a crucial property for texture characterization methods, and we intend to exploit...

  14. Total Variation and Tomographic Imaging from Projections

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jørgensen, Jakob Heide

    2011-01-01

    or 3D reconstruction from noisy projections. We demonstrate that for a small signal-to-noise ratio, this new approach allows us to compute better (i.e., more reliable) reconstructions than those obtained by classical methods. This is possible due to the use of the TV reconstruction model, which...
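
    The TV reconstruction model referred to here is, schematically,

        \min_{x \ge 0} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \,\mathrm{TV}(x),

    where A is the projection matrix, b the measured (noisy) projections, and TV(x) the sum of the norms of the discrete spatial gradients; constrained variants instead tighten the data term into an inequality.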

  15. Total Variation Depth for Functional Data

    KAUST Repository

    Huang, Huang; Sun, Ying

    2016-01-01

    that are associated with shape and magnitude outlyingness, respectively. This decomposition allows us to develop an effective procedure for outlier detection and useful visualization tools, while naturally accounting for the correlation in functional data. Finally

  16. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  17. Identification of different geologic units using fuzzy constrained resistivity tomography

    Science.gov (United States)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and geologic separation within the iterative minimization through inversion. First, we developed a MATLAB-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from the borehole information.
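
    The clustering step can be illustrated by the standard fuzzy c-means membership update (a generic sketch in Python; the coupling into the inversion iteration is specific to the paper):

        import numpy as np

        def fcm_memberships(values, centers, m=2.0):
            # Fuzzy c-means memberships of each model cell (e.g. a log-resistivity
            # value) with respect to each cluster center, with fuzzifier m > 1.
            d = np.abs(values[:, None] - centers[None, :]) + 1e-12
            w = d ** (-2.0 / (m - 1.0))
            return w / w.sum(axis=1, keepdims=True)   # rows sum to one

    Assigning each cell to its highest-membership cluster, as the abstract describes, then sharpens the resistivity image toward discrete geologic units.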

  18. Minimizing Exposure at Work

    Science.gov (United States)

    Pesticide Health and Safety Information, Safe Use Practices: Minimizing Exposure at Work. Pennsylvania State University Cooperative Extension; personal protective equipment for working with pesticides.

  19. Minimalism. Clip and Save.

    Science.gov (United States)

    Hubbard, Guy

    2002-01-01

    Provides background information on the art movement called "Minimalism" discussing why it started and its characteristics. Includes learning activities and information on the artist, Donald Judd. Includes a reproduction of one of his art works and discusses its content. (CMK)

  20. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ^3 being an extremal of the functional ∫ (H^2/K - 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  1. Minimal and careful processing

    OpenAIRE

    Nielsen, Thorkild

    2004-01-01

    In several standards, guidelines and publications, organic food processing is strongly associated with "minimal processing" and "careful processing". The term "minimal processing" is nowadays often used in the general food processing industry and described in literature. The term "careful processing" is used more specifically within organic food processing but is not yet clearly defined. The concept of carefulness seems to fit very well with the processing of organic foods, especially if it i...

  2. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    CERN Document Server

    Ellis, John; Olive, Keith A

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, $M_{in}$, above the GUT scale, $M_{GUT}$. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino and the lighter stau is sensitive to $M_{in}$, as is the relationship between the neutralino mass and the masses of the heavier Higgs bosons. For these reasons, prominent features in generic $(m_{1/2}, m_0)$ planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to $M_{in}$, as we illustrate for several cases with tan(beta)...

  3. Scheduling Aircraft Landings under Constrained Position Shifting

    Science.gov (United States)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
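
    The CPS restriction itself is one line of code. The toy enumerator below (illustrative only; the paper's dynamic program avoids this exponential enumeration and scales linearly in the number of aircraft) lists the landing orders allowed when each aircraft may move at most k positions from its FCFS slot:

        from itertools import permutations

        def cps_sequences(n, k):
            # All landing orders in which aircraft i (FCFS position i)
            # lands within k positions of its FCFS slot.
            return [p for p in permutations(range(n))
                    if all(abs(pos - plane) <= k for pos, plane in enumerate(p))]

    For example, cps_sequences(4, 1) returns 5 of the 24 possible orders, showing how sharply CPS prunes the search space.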

  4. Should we still believe in constrained supersymmetry?

    International Nuclear Information System (INIS)

    Balazs, Csaba; Buckley, Andy; Carter, Daniel; Farmer, Benjamin; White, Martin

    2013-01-01

    We calculate partial Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the partial Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with M_0 and M_1/2 below 2 TeV), reducing its posterior odds by approximately two orders of magnitude. This reduction is largely due to substantial Occam factors induced by the LEP and LHC Higgs searches. (orig.)

  5. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM) where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  6. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set of customers. In the VRPTW customers must be serviced within a given time period - a so called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization methods ... of Jörnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time and capacity constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed...
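
    The subproblem both methods share is a shortest path constrained by time windows and capacity, typically solved by a labeling algorithm with dominance pruning. A generic textbook sketch (not the dissertation's implementation; the instance is made up):

    arcs = {                  # u -> list of (v, cost, travel_time, demand_at_v)
        "depot": [("c1", 4, 2, 1), ("c2", 6, 3, 2)],
        "c1": [("c2", 3, 2, 2), ("end", 5, 2, 0)],
        "c2": [("end", 4, 2, 0)],
        "end": [],
    }
    tw = {"depot": (0, 0), "c1": (2, 6), "c2": (4, 10), "end": (0, 20)}
    CAP = 3                   # vehicle capacity

    def dominates(l1, l2):    # label = (cost, time, load)
        return all(a <= b for a, b in zip(l1, l2))

    labels = {n: [] for n in arcs}
    labels["depot"] = [(0, 0, 0)]
    stack = [("depot", (0, 0, 0))]
    while stack:
        u, (c, t, q) = stack.pop()
        for v, dc, dt, dq in arcs[u]:
            nt = max(t + dt, tw[v][0])      # wait until the window opens
            nl = (c + dc, nt, q + dq)
            if nt > tw[v][1] or nl[2] > CAP:
                continue                    # time window or capacity violated
            if any(dominates(l, nl) for l in labels[v]):
                continue                    # pruned by dominance
            labels[v] = [l for l in labels[v] if not dominates(nl, l)] + [nl]
            stack.append((v, nl))
    print(min(labels["end"]))               # cheapest feasible label at the sink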

  7. A Simply Constrained Optimization Reformulation of KKT Systems Arising from Variational Inequalities

    International Nuclear Information System (INIS)

    Facchinei, F.; Fischer, A.; Kanzow, C.; Peng, J.-M.

    1999-01-01

    The Karush-Kuhn-Tucker (KKT) conditions can be regarded as optimality conditions for both variational inequalities and constrained optimization problems. In order to overcome some drawbacks of recently proposed reformulations of KKT systems, we propose casting KKT systems as a minimization problem with nonnegativity constraints on some of the variables. We prove that, under fairly mild assumptions, every stationary point of this constrained minimization problem is a solution of the KKT conditions. Based on this reformulation, a new algorithm for the solution of the KKT conditions is suggested and shown to have some strong global and local convergence properties
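
    The flavor of such a reformulation can be seen on a toy convex quadratic program. The sketch below assembles a merit function from the stationarity residual plus the widely used Fischer-Burmeister NCP function (the paper's merit function differs in detail) and minimizes it subject only to nonnegativity bounds on the multipliers.

    import numpy as np
    from scipy.optimize import minimize

    # Toy QP:  min 0.5*||x||^2 + c.x   s.t.  A x <= b
    c = np.array([-2.0, -5.0])
    A = np.array([[1.0, 2.0], [1.0, -1.0]])
    b = np.array([3.0, 1.0])

    def fb(a, s):             # Fischer-Burmeister: zero iff a,s >= 0 and a*s = 0
        return np.sqrt(a ** 2 + s ** 2) - a - s

    def merit(z):
        x, lam = z[:2], z[2:]
        stat = x + c + A.T @ lam            # gradient of the Lagrangian
        slack = b - A @ x                   # primal feasibility
        return np.sum(stat ** 2) + np.sum(fb(lam, slack) ** 2)

    bounds = [(None, None)] * 2 + [(0, None)] * 2   # only lam is sign-constrained
    res = minimize(merit, np.zeros(4), bounds=bounds, method="L-BFGS-B")
    print("x* =", res.x[:2], " lambda* =", res.x[2:], " merit =", res.fun)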

  8. Waste minimization assessment procedure

    International Nuclear Information System (INIS)

    Kellythorne, L.L.

    1993-01-01

    Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented following a similar format to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used and which were most valuable in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Waste streams were then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal treated as the last viable alternative.

  9. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  10. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
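
    A minimal 1-D illustration of the setting: total variation denoising with a weighted mix of 1-norm and 2-norm data fidelities, the two weights standing in for the noise-model parameters the paper learns. Here they are fixed by hand and the problem is solved by crude subgradient descent, not by the paper's semismooth Newton machinery.

    import numpy as np

    rng = np.random.default_rng(1)
    truth = np.repeat([0.0, 1.0, 0.3], 50)
    f = truth + 0.1 * rng.normal(size=truth.size)     # noisy signal

    lam1, lam2, step = 0.1, 1.0, 0.05                 # fidelity weights, step size
    u = f.copy()
    for _ in range(2000):
        g = lam1 * np.sign(u - f) + lam2 * (u - f)    # mixed-fidelity subgradient
        s = np.sign(np.diff(u))                       # TV subgradient
        g[:-1] -= s
        g[1:] += s
        u -= step * g
    print("TV of the denoised signal:", np.abs(np.diff(u)).sum())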

  11. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  12. Minimal quantization and confinement

    International Nuclear Information System (INIS)

    Ilieva, N.P.; Kalinowskij, Yu.L.; Nguyen Suan Han; Pervushin, V.N.

    1987-01-01

    A ''minimal'' version of the Hamiltonian quantization based on the explicit solution of the Gauss equation and on the gauge-invariance principle is considered. By the example of the one-particle Green function we show that the requirement for gauge invariance leads to relativistic covariance of the theory and to more proper definition of the Faddeev - Popov integral that does not depend on the gauge choice. The ''minimal'' quantization is applied to consider the gauge-ambiguity problem and a new topological mechanism of confinement

  13. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the u...

  14. Minimalism and Speakers’ Intuitions

    Directory of Open Access Journals (Sweden)

    Matías Gariazzo

    2011-08-01

    Full Text Available Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.

  15. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  16. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
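
    The regular-language case in (1) is usually realized by searching the product of the labeled graph with a DFA accepting the language, e.g. with Dijkstra's algorithm. A self-contained sketch on an invented intermodal example (language r*t*r*: at most one contiguous train segment):

    import heapq

    graph = {                      # u -> list of (v, label, cost)
        "s": [("a", "r", 2.0), ("b", "t", 1.0)],
        "a": [("t1", "t", 1.5), ("b", "r", 2.5)],
        "t1": [("b", "t", 0.5)],
        "b": [("g", "r", 1.0)],
        "g": [],
    }
    delta = {(0, "r"): 0, (0, "t"): 1, (1, "t"): 1, (1, "r"): 2, (2, "r"): 2}
    accept = {0, 1, 2}             # DFA for r*t*r*

    def constrained_shortest_path(src, dst):
        dist = {(src, 0): 0.0}
        pq = [(0.0, src, 0)]       # (cost, graph node, DFA state)
        while pq:
            d, u, q = heapq.heappop(pq)
            if u == dst and q in accept:
                return d
            if d > dist.get((u, q), float("inf")):
                continue
            for v, lab, w in graph[u]:
                q2 = delta.get((q, lab))
                if q2 is None:
                    continue       # this label would leave the language
                if d + w < dist.get((v, q2), float("inf")):
                    dist[(v, q2)] = d + w
                    heapq.heappush(pq, (d + w, v, q2))
        return None

    print(constrained_shortest_path("s", "g"))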

  17. Minimal model holography

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R; Gopakumar, Rajesh

    2013-01-01

    We review the duality relating 2D W N minimal model conformal field theories, in a large-N ’t Hooft like limit, to higher spin gravitational theories on AdS 3 . This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Higher spin theories and holography’. (review)

  18. Hazardous waste minimization

    International Nuclear Information System (INIS)

    Freeman, H.

    1990-01-01

    This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect

  19. Minimally invasive distal pancreatectomy

    NARCIS (Netherlands)

    Røsok, Bård I.; de Rooij, Thijs; van Hilst, Jony; Diener, Markus K.; Allen, Peter J.; Vollmer, Charles M.; Kooby, David A.; Shrikhande, Shailesh V.; Asbun, Horacio J.; Barkun, Jeffrey; Besselink, Marc G.; Boggi, Ugo; Conlon, Kevin; Han, Ho Seong; Hansen, Paul; Kendrick, Michael L.; Kooby, David; Montagnini, Andre L.; Palanivelu, Chinnasamy; Wakabayashi, Go; Zeh, Herbert J.

    2017-01-01

    The first International conference on Minimally Invasive Pancreas Resection was arranged in conjunction with the annual meeting of the International Hepato-Pancreato-Biliary Association (IHPBA), in Sao Paulo, Brazil on April 19th 2016. The presented evidence and outcomes resulting from the session

  20. Minimal DBM Substraction

    DEFF Research Database (Denmark)

    David, Alexandre; Håkansson, John; G. Larsen, Kim

    In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone, and thus requires splitting. It is of prime importance to reduce...

  1. The cost-constrained traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
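
    The heuristic observations above (node value versus "neighborhood", randomized repetitions) can be caricatured in a few lines of randomized greedy insertion into a budget-feasible subtour. This illustrates the idea only; it is not the algorithm developed in the thesis, and the instance and selection rule are invented.

    import random

    random.seed(0)
    pts = {i: (random.random(), random.random()) for i in range(10)}
    value = {i: random.randint(1, 10) for i in pts}
    BUDGET = 2.5

    def dist(a, b):
        (x1, y1), (x2, y2) = pts[a], pts[b]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    def tour_cost(t):
        return sum(dist(t[i], t[(i + 1) % len(t)]) for i in range(len(t)))

    def greedy_run():
        tour, cand = [0], set(pts) - {0}
        while cand:
            options = []
            for c in cand:                    # try every insertion position
                for pos in range(len(tour)):
                    t2 = tour[:pos + 1] + [c] + tour[pos + 1:]
                    extra = tour_cost(t2) - tour_cost(tour)
                    if tour_cost(t2) <= BUDGET:
                        options.append((value[c] / (extra + 1e-9), t2, c))
            if not options:
                break
            options.sort(reverse=True, key=lambda o: o[0])
            _, tour, chosen = random.choice(options[:3])   # randomized greedy step
            cand.remove(chosen)
        return tour

    best = max((greedy_run() for _ in range(50)),
               key=lambda t: sum(value[i] for i in t))
    print(best, sum(value[i] for i in best), round(tour_cost(best), 3))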

  2. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Full Text Available Minimally invasive procedures aim to resolve the disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision based on the literature and exemplifying with a case. The case in question describes reconstruction of the intestinal transit with the use of this incision. Male, young, HIV-positive patient in a late postoperative of ileotiflectomy, terminal ileostomy and closing of the ascending colon by an acute perforating abdomen, due to ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. The access to the cavity was made through the orifice resulting from the release of the stoma, with a lateral-lateral ileo-colonic anastomosis with a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction, and hemostasis, in addition to requiring surgeon dexterity – but without the need for investments in technology; moreover, the learning curve is reported as being lower than that for videolaparoscopy. Laparotomy with minimal incision should be considered as a valid and viable option in the treatment of surgical conditions.

  3. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; A. Ryttov, T.

    2007-01-01

    Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general such that the resulting model can be used to represent a generic walking technicolor...

  4. Wavelet library for constrained devices

    Science.gov (United States)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters, more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDA). These constraints can be a combination of: limited memory, slow floating point operations (compared to integer operations, most often as a result of no hardware support) and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDA's. We will demonstrate that HeatWave is suitable for realtime applications with fine control and range to suit transform demands. We shall present experimental results to substantiate these claims. Finally this library is intended to be of real use and applied, hence we considered several well known and common embedded operating system platform differences, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
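
    The kind of arithmetic such constrained devices favor is integer-to-integer lifting, which needs only additions, subtractions and shifts. Below is a generic sketch of a perfectly invertible integer Haar lifting step; it illustrates the technique only and is not the HeatWave API.

    def haar_lift_forward(x):
        # x: list of ints of even length -> (approximations, details)
        s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]
        d = [a - b for a, b in zip(x[::2], x[1::2])]
        return s, d

    def haar_lift_inverse(s, d):
        out = []
        for si, di in zip(s, d):
            a = si + ((di + 1) >> 1)   # undo the rounded average
            out += [a, a - di]
        return out

    x = [10, 12, 14, 200, 3, 3, 7, 8]
    s, d = haar_lift_forward(x)
    assert haar_lift_inverse(s, d) == x    # exact integer reconstruction
    print(s, d)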

  5. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal liability. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations.

  6. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ), reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of partial derivatives of Rv with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015.  https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017.  https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of two sub-populations are obtained, and the bounds for optimal solutions are derived for more than two sub-populations. This is done for general mixing functions and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both R0 and Rv are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
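
    A small sketch of the central objects: for a two-group model, the effective reproduction number Rv can be computed as the spectral radius of a next-generation matrix scaled by the susceptible fractions 1 - p_i, and the gradient used in the optimization can be approximated by finite differences. The matrix and coverages below are illustrative, not the paper's.

    import numpy as np

    K = np.array([[3.0, 0.8],      # next-generation matrix at p = 0
                  [0.6, 2.0]])

    def R_v(p):
        S = np.diag(1.0 - p)       # susceptible fraction in each sub-group
        return np.abs(np.linalg.eigvals(S @ K)).max()

    def grad_R_v(p, h=1e-6):       # forward-difference gradient in p
        g = np.zeros_like(p)
        for i in range(p.size):
            e = np.zeros_like(p)
            e[i] = h
            g[i] = (R_v(p + e) - R_v(p)) / h
        return g

    p = np.array([0.3, 0.5])       # proportions immune
    print("R_v =", R_v(p), " grad =", grad_R_v(p))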

  7. The ZOOM minimization package

    International Nuclear Information System (INIS)

    Fischler, Mark S.; Sachs, D.

    2004-01-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete

  8. Minimizing the Pacman effect

    International Nuclear Information System (INIS)

    Ritson, D.; Chou, W.

    1997-10-01

    The Pacman bunches will experience two deleterious effects: tune shift and orbit displacement. It is known that the tune shift can be compensated by arranging crossing planes 90° relative to each other at successive interaction points (IPs). This paper gives an analytical estimate of the Pacman orbit displacement for a single as well as for two crossings. For the latter, it can be minimized by using equal phase advances from one IP to another. In the LHC, this displacement is in any event small and can be neglected.

  9. Minimally Invasive Parathyroidectomy

    Directory of Open Access Journals (Sweden)

    Lee F. Starker

    2011-01-01

    Full Text Available Minimally invasive parathyroidectomy (MIP is an operative approach for the treatment of primary hyperparathyroidism (pHPT. Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.

  10. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with the 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range ... based linear prediction for modeling and coding of speech.
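
    The baseline 1-norm criterion itself is a linear program. The sketch below solves it with a generic LP solver and then applies the common after-the-fact fix of reflecting unstable roots into the unit circle; the paper's two methods instead build stability into the estimation, so this is only the classical starting point.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    x = np.sin(0.3 * np.arange(200)) + 0.05 * rng.normal(size=200)
    p = 4                                      # prediction order
    N = len(x) - p
    X = np.column_stack([x[p - k - 1:p - k - 1 + N] for k in range(p)])
    y = x[p:]

    # variables z = [a (p), t (N)]: minimize sum(t) s.t. |y - X a| <= t
    c = np.concatenate([np.zeros(p), np.ones(N)])
    A_ub = np.block([[X, -np.eye(N)], [-X, -np.eye(N)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (p + N))
    a = res.x[:p]

    roots = np.roots(np.concatenate([[1.0], -a]))     # poles of 1/A(z)
    unstable = np.abs(roots) > 1
    roots[unstable] = 1.0 / np.conj(roots[unstable])  # reflect into unit circle
    a_stable = -np.poly(roots)[1:].real
    print("1-norm optimal:", a, "\nstabilized:   ", a_stable)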

  11. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  12. Minimal conformal model

    Energy Technology Data Exchange (ETDEWEB)

    Helmboldt, Alexander; Humbert, Pascal; Lindner, Manfred; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2016-07-01

    The gauge hierarchy problem is one of the crucial drawbacks of the standard model of particle physics (SM) and thus has triggered model building over the last decades. Its most famous solution is the introduction of low-scale supersymmetry. However, without any significant signs of supersymmetric particles at the LHC to date, it makes sense to devise alternative mechanisms to remedy the hierarchy problem. One such mechanism is based on classically scale-invariant extensions of the SM, in which both the electroweak symmetry and the (anomalous) scale symmetry are broken radiatively via the Coleman-Weinberg mechanism. Apart from giving an introduction to classically scale-invariant models, the talk presents our results on obtaining a theoretically consistent minimal extension of the SM, which reproduces the correct low-scale phenomenology.

  13. Minimal Reducts with Grasp

    Directory of Open Access Journals (Sweden)

    Iris Iddaly Mendez Gurrola

    2011-03-01

    Full Text Available The proper detection of a patient's level of dementia is important to offer the suitable treatment. The diagnosis is based on certain criteria, reflected in the clinical examinations. From these examinations emerge the limitations and the degree in which each patient is in. In order to reduce the total number of limitations to be evaluated, we used rough set theory; this theory has been applied in areas of artificial intelligence such as decision analysis, expert systems, knowledge discovery, and classification with multiple attributes. In our case this theory is applied to find the minimal limitations set, or reduct, that generates the same classification as considering all the limitations. To fulfill this purpose we developed a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm.

  14. Minimally extended SILH

    International Nuclear Information System (INIS)

    Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

    2017-03-01

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  15. Minimally extended SILH

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica

    2017-03-15

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  16. Constraining Calcium Production in Novae

    Science.gov (United States)

    Tiwari, Pranjal; C. Fry, C. Wrede Team; A. Chen, J. Liang Collaboration; S. Bishop, T. Faestermann, D. Seiler Collaboration; R. Hertenberger, H. Wirth Collaboration

    2017-09-01

    Calcium is an element that can be produced by thermonuclear reactions in the hottest classical novae. There are discrepancies between the abundance of calcium observed in novae and expectations based on astrophysical models. Unbound states 1 MeV above the proton threshold affect the production of calcium in nova models because they act as resonances in the 38K(p, γ)39Ca reaction. This work describes an experiment to measure the energies of the excited states of 39Ca. We will bombard a thin target of 40Ca with a beam of 22 MeV deuterons, resulting in tritons and 39Ca. We will use a Q3D magnetic spectrograph from the MLL in Garching, Germany to momentum-analyze the tritons to observe the excitation energies of the resulting 39Ca states. Simulations have been run to determine the optimal spectrograph settings. We decided to use a chemically stable target composed of CaF2; doing so resulted in an extra contaminant, fluorine, which is dealt with by measuring the background from a LiF target. These simulations have led to settings and targets that will result in the observation of the 39Ca states of interest with minimal interference from contaminants. Preliminary results from this experiment will be presented. Natural Sciences and Engineering Research Council of Canada and U.S. National Science Foundation.

  17. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio for a total of h parameters, two of which are constrained and correlated.

  18. Constrained bidirectional propagation and stroke segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Mori, S; Gillespie, W; Suen, C Y

    1983-03-01

    A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.

  19. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  20. Client's Constraining Factors to Construction Project Management

    African Journals Online (AJOL)

    factors as a significant system that constrains project management success of public and ... finance for the project and prompt payment for work executed; clients .... consideration of the loading patterns of these variables, the major factor is ...

  1. On the origin of constrained superfields

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-05-06

    In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.

  2. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ^0_1, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces an upper limit on m(χ^0_1), but the scalar mass m_0 is poorly constrained. In the wino-LSP case, m_3/2 is constrained to about 900 TeV and m(χ^0_1) to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m_3/2 has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m(χ^0_1) is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2)_μ, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ^0_1 contributes only a fraction of the cold DM density, future LHC E_T-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B_s,d → μ+μ-) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  3. Minimal Marking: A Success Story

    Science.gov (United States)

    McNeilly, Anne

    2014-01-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

  4. Swarm robotics and minimalism

    Science.gov (United States)

    Sharkey, Amanda J. C.

    2007-09-01

    Swarm Robotics (SR) is closely related to Swarm Intelligence, and both were initially inspired by studies of social insects. Their guiding principles are based on their biological inspiration and take the form of an emphasis on decentralized local control and communication. Earlier studies went a step further in emphasizing the use of simple reactive robots that only communicate indirectly through the environment. More recently SR studies have moved beyond these constraints to explore the use of non-reactive robots that communicate directly, and that can learn and represent their environment. There is no clear agreement in the literature about how far such extensions of the original principles could go. Should there be any limitations on the individual abilities of the robots used in SR studies? Should knowledge of the capabilities of social insects lead to constraints on the capabilities of individual robots in SR studies? There is a lack of explicit discussion of such questions, and researchers have adopted a variety of constraints for a variety of reasons. A simple taxonomy of swarm robotics is presented here with the aim of addressing and clarifying these questions. The taxonomy distinguishes subareas of SR based on the emphases and justifications for minimalism and individual simplicity.

  5. Minimal dilaton model

    Directory of Open Access Journals (Sweden)

    Oda Kin-ya

    2013-05-01

    Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of the particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for a logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model in which the LHC signals are explained by an extra singlet scalar of the mass around 125 GeV that slightly mixes with the SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavors such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

  6. Resource Constrained Planning of Multiple Projects with Separable Activities

    Science.gov (United States)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource constrained planning problem of multiple projects with separable activities. This problem provides a plan to process the activities considering resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods for improvement of computational efficiency: obtaining an initial solution with a minimum slack time rule, estimating a lower bound considering both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. Especially as the number of planning projects increases, the average computational time and the number of searched nodes are reduced.

  7. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells.

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-08-09

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs.

  8. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-01-01

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs. PMID:28773795

  9. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in term of both efficiency and efficacy.
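
    For orientation, the classical splitting that MSM generalizes: writing A = L + D + U (strictly lower, diagonal, strictly upper parts), successive over-relaxation updates one coordinate at a time using already-updated entries. A toy demonstration on a small symmetric positive definite system:

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 5.0]])
    b = np.array([1.0, 2.0, 3.0])
    omega = 1.2                        # relaxation parameter

    x = np.zeros(3)
    for _ in range(100):               # forward sweeps
        for i in range(3):
            sigma = A[i] @ x - A[i, i] * x[i]   # uses freshly updated entries
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    print(x, " residual:", np.linalg.norm(A @ x - b))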

  10. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in term of both efficiency and efficacy.

  11. Exploring the Metabolic and Perceptual Correlates of Self-Selected Walking Speed under Constrained and Un-Constrained Conditions

    Directory of Open Access Journals (Sweden)

    David T Godsiff, Shelly Coe, Charlotte Elsworth-Edelsten, Johnny Collett, Ken Howells, Martyn Morris, Helen Dawes

    2018-03-01

    Full Text Available Mechanisms underpinning self-selected walking speed (SSWS) are poorly understood. The present study investigated the extent to which SSWS is related to metabolism, energy cost, and/or perceptual parameters during both normal and artificially constrained walking. Fourteen participants with no pathology affecting gait were tested under standard conditions. Subjects walked on a motorized treadmill at speeds derived from their SSWS as a continuous protocol. RPE scores (CR10) and expired air to calculate energy cost (J.kg-1.m-1) and carbohydrate (CHO) oxidation rate (J.kg-1.min-1) were collected during minutes 3-4 at each speed. Eight individuals were re-tested under the same conditions within one week with a hip and knee-brace to immobilize their right leg. Deflections in RPE scores (CR10) and CHO oxidation rate (J.kg-1.min-1) were not related to SSWS (five and three people had deflections in the defined range of SSWS in constrained and unconstrained conditions, respectively) (p > 0.05). Constrained walking elicited a higher energy cost (J.kg-1.m-1) and slower SSWS (p < 0.05). SSWS did not occur at a minimum energy cost (J.kg-1.m-1) in either condition; however, the size of the minimum energy cost to SSWS disparity was the same (Froude {Fr} = 0.09) in both conditions (p = 0.36). Perceptions of exertion can modify walking patterns and therefore SSWS and metabolism/energy cost are not directly related. Strategies which minimize perceived exertion may enable faster walking in people with altered gait, as our findings indicate they should self-optimize to the same extent under different conditions.

  12. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  13. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  14. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  15. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    In this paper, we investigate the properties of immersed minimal surfaces inside symmetric space associated to a subloci of Hitchin component: $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system

  16. Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization

    International Nuclear Information System (INIS)

    Zhang Xiaomeng; Wang Jing; Xing Lei

    2011-01-01

    Purpose: The streak artifacts caused by metal implants have long been recognized as a problem that limits various applications of CT imaging. In this work, the authors propose an iterative metal artifact reduction algorithm based on constrained optimization. Methods: After the shape and location of metal objects in the image domain is determined automatically by the binary metal identification algorithm and the segmentation of "metal shadows" in projection domain is done, constrained optimization is used for image reconstruction. It minimizes a predefined function that reflects a priori knowledge of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available metal-shadow-excluded projection data, with image non-negativity enforced. The minimization problem is solved through the alternation of projection-onto-convex-sets and the steepest gradient descent of the objective function. The constrained optimization algorithm is evaluated with a penalized smoothness objective. Results: The study shows that the proposed method is capable of significantly reducing metal artifacts, suppressing noise, and improving soft-tissue visibility. It outperforms FBP-type, ART and EM methods and yields artifact-free images. Conclusions: Constrained optimization is an effective way to deal with CT reconstruction with embedded metal objects. Although the method is presented in the context of metal artifacts, it is applicable to general "missing data" image reconstruction problems.
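
    The structure of the algorithm (a projection enforcing consistency with the metal-shadow-excluded data, a steepest-descent step on the penalized-smoothness objective, and a nonnegativity projection) can be mimicked on a toy linear inverse problem; the operator, sizes and step lengths below are illustrative rather than a CT geometry.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 64
    truth = np.clip(np.sin(np.linspace(0, 3, n)), 0, None)
    M = rng.normal(size=(40, n)) / np.sqrt(n)        # toy measurement operator
    keep = np.ones(40, dtype=bool)
    keep[15:22] = False                              # excluded "metal shadow" rows
    Mk, yk = M[keep], (M @ truth)[keep]

    x = np.zeros(n)
    tau = 1.0 / np.linalg.norm(Mk, 2) ** 2           # data-consistency step size
    for _ in range(200):
        x = x + tau * Mk.T @ (yk - Mk @ x)           # POCS-like data step
        d = np.diff(x)                               # smoothness penalty gradient
        g = np.zeros(n)
        g[:-1] -= 2 * d
        g[1:] += 2 * d
        x = np.clip(x - 0.1 * g, 0, None)            # descend, then nonnegativity
    print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))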

  17. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
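
    The heart of the speed-up is that right-hand sides sharing the same passive (unconstrained) variable set can be solved with a single factorization. The sketch below shows only this grouping step for given passive sets; the full algorithm of Van Benthem and Keenan embeds it in an active-set loop with feasibility checks, and all names here are illustrative.

        import numpy as np

        def grouped_ls(A, B, passive):
            # Solve min ||A[:, p] x - b||_2 for every column b of B,
            # factorizing A[:, p] once per *distinct* passive set p
            # instead of once per column (the combinatorial trick).
            X = np.zeros((A.shape[1], B.shape[1]))
            for p in {tuple(col) for col in passive.T}:
                if not any(p):
                    continue
                cols = np.all(passive.T == np.array(p), axis=1)
                idx = np.array(p)
                sol, *_ = np.linalg.lstsq(A[:, idx], B[:, cols], rcond=None)
                X[np.ix_(idx, cols)] = sol
            return X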

  18. On Tree-Constrained Matchings and Generalizations

    NARCIS (Netherlands)

    S. Canzar (Stefan); K. Elbassioni; G.W. Klau (Gunnar); J. Mestre

    2011-01-01

    We consider the following \textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum weight matching $\mathcal{M}$ between nodes of the two trees, such that ...

  19. Constrained systems described by Nambu mechanics

    International Nuclear Information System (INIS)

    Lassig, C.C.; Joshi, G.C.

    1996-01-01

    Using the framework of Nambu's generalised mechanics, we obtain a new description of constrained Hamiltonian dynamics, involving the introduction of another degree of freedom in phase space, and the necessity of defining the action integral on a world sheet. We also discuss the problem of quantizing Nambu mechanics. (authors). 5 refs

  20. Client's constraining factors to construction project management ...

    African Journals Online (AJOL)

    This study analyzed client-related factors that constrain the project management success of public and private sector construction in Nigeria. Issues that concern clients in any project cannot be overlooked, as they are the owners and the initiators of project proposals. It is assumed that success, failure or abandonment of ...

  1. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations

  2. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...

  3. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  4. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  5. Neutron Powder Diffraction and Constrained Refinement

    DEFF Research Database (Denmark)

    Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.

    1977-01-01

    The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement is...

  6. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of the accretion disk around constant Ricci curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat-space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak-field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong-field accretion disk phenomenon.

  7. Chance constrained uncertain classification via robust optimization

    NARCIS (Netherlands)

    Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.

    2011-01-01

    This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out

  8. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...

  9. Neuroevolutionary Constrained Optimization for Content Creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution...... and survival tasks and are also visually appealing....

  10. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    tribpo

    J. Astrophys. Astr. (2000) 21, 299–302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam* & V. Krishan†, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. *e-mail: mangalam@iiap.ernet.in †e-mail: vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...

  11. Guidelines for mixed waste minimization

    International Nuclear Information System (INIS)

    Owens, C.

    1992-02-01

    Currently, there is no commercial mixed waste disposal available in the United States, and storage and treatment options for commercial mixed waste are limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of these management limitations. This document provides a guide to mixed waste minimization.

  12. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
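
    For the distribution-based benchmarks mentioned above, the standard formulas relate the SEM to the scale's standard deviation and reliability, and the minimally detectable change to the SEM; the numbers below are a made-up example.

        import math

        def sem(sd, reliability):
            # standard error of measurement
            return sd * math.sqrt(1.0 - reliability)

        def mdc95(sd, reliability):
            # minimally detectable change at 95% confidence (test-retest)
            return 1.96 * math.sqrt(2.0) * sem(sd, reliability)

        # e.g. a scale with SD = 10 points and reliability (ICC) = 0.91:
        print(sem(10, 0.91))    # 3.0 points
        print(mdc95(10, 0.91))  # about 8.3 points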

  13. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996

  14. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  15. Minimal Webs in Riemannian Manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2008-01-01

    For a given combinatorial graph $G$ a {\\it geometrization} $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1-$dimensional manifold with an associated metric $g$. In this paper we are concerned with {\\it minimal isometric immersions} of geometrized graphs $(G, g......)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\\em{minimal webs}}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which...... are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...

  16. Security-Constrained Unit Commitment in AC Microgrids Considering Stochastic Price-Based Demand Response and Renewable Generation

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad

    2018-01-01

    In this paper, a stochastic model for scheduling of AC security-constrained unit commitment associated with demand response (DR) actions is developed in an islanded residential microgrid. The proposed model maximizes the expected profit of the microgrid operator and minimizes the total customers...

  17. Minimal Poems Written in 1979 Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  18. Minimal Flavour Violation and Beyond

    CERN Document Server

    Isidori, Gino

    2012-01-01

    We review the formulation of the Minimal Flavour Violation (MFV) hypothesis in the quark sector, as well as some "variations on a theme" based on smaller flavour symmetry groups and/or less minimal breaking terms. We also review how these hypotheses can be tested in B decays and by means of other flavour-physics observables. The phenomenological consequences of MFV are discussed both in general terms, employing a general effective theory approach, and in the specific context of the Minimal Supersymmetric extension of the SM.

  19. Minimizing waste in environmental restoration

    International Nuclear Information System (INIS)

    Thuot, J.R.; Moos, L.

    1996-01-01

    Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized; however, there are significant areas where waste and cost can be reduced by careful planning and execution. Waste reduction can occur in three ways: beneficial reuse or recycling, segregation of waste types, and reducing generation of secondary waste

  20. Minimizing waste in environmental restoration

    International Nuclear Information System (INIS)

    Moos, L.; Thuot, J.R.

    1996-01-01

    Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized. In fact, however, there are three significant areas where waste and cost can be reduced. Waste reduction can occur in three ways: beneficial reuse or recycling; segregation of waste types; and reducing generation of secondary waste. This paper will discuss several examples of reuse, recycling, segregation, and secondary waste reduction at ANL restoration programs

  1. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scans. However, the image quality delivered by conventional filtered back-projection (FBP) then usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), in the scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. The approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. An alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrate that the proposed TV_MP with appropriate control parameters ensures not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method.
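
    The following sketch shows one plausible reading of the TV_MP update: a descent step on a smoothed TV term followed by a relaxation toward the local median (the auxiliary image m). The step sizes, the 3x3 median window, and the omitted data-fidelity step are our assumptions, not the published algorithm.

        import numpy as np
        from scipy.ndimage import median_filter

        def tv_grad(x, eps=1e-8):
            # gradient of a smoothed isotropic total-variation term
            dx = np.diff(x, axis=0, append=x[-1:, :])
            dy = np.diff(x, axis=1, append=x[:, -1:])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            g = -(dx + dy) / mag
            g[1:, :] += (dx / mag)[:-1, :]
            g[:, 1:] += (dy / mag)[:, :-1]
            return g

        def tv_mp_step(x, step=0.2, beta=0.1):
            # one TV descent step, then a pull toward the local median image m
            x = x - step * tv_grad(x)
            m = median_filter(x, size=3)
            return (1.0 - beta) * x + beta * m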

  2. Sludge minimization technologies - an overview

    Energy Technology Data Exchange (ETDEWEB)

    Oedegaard, Hallvard

    2003-07-01

    The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. In many cases the cost of sludge treatment exceeds the cost of treating the liquid stream, so the focus on and interest in sludge minimization is steadily increasing. The paper gives an overview of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may result from reduced production of sludge and/or from disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on the stirred ball mill, high-pressure homogenizer and ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)

  3. Wilson loops in minimal surfaces

    International Nuclear Information System (INIS)

    Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

    1999-01-01

    The AdS/CFT correspondence suggests that the Wilson loop of the large N gauge theory with N = 4 supersymmetry in 4 dimensions is described by a minimal surface in AdS_5 × S^5. The authors examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which the authors call BPS loops, whose expectation values are free from ultraviolet divergence. They formulate the loop equation for such loops. To the extent that they have checked, the minimal surface in AdS_5 × S^5 gives a solution of the equation. The authors also discuss the zig-zag symmetry of the loop operator. In the N = 4 gauge theory, they expect the zig-zag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. They show how this is realized for the minimal surface

  4. Classical strings and minimal surfaces

    International Nuclear Information System (INIS)

    Urbantke, H.

    1986-01-01

    Real Lorentzian forms of some complex or complexified Euclidean minimal surfaces are obtained as an application of H.A. Schwarz' solution to the initial value problem or a search for surfaces admitting a group of Poincare transformations. (Author)

  5. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type $D_N$ for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  6. Wilson loops and minimal surfaces

    International Nuclear Information System (INIS)

    Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

    1999-01-01

    The AdS-CFT correspondence suggests that the Wilson loop of the large N gauge theory with N = 4 supersymmetry in four dimensions is described by a minimal surface in AdS_5 × S^5. We examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which we call BPS loops, whose expectation values are free from ultraviolet divergence. We formulate the loop equation for such loops. To the extent that we have checked, the minimal surface in AdS_5 × S^5 gives a solution of the equation. We also discuss the zigzag symmetry of the loop operator. In the N = 4 gauge theory, we expect the zigzag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. We will show how this is realized for the minimal surface.

  7. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.

  8. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  9. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  10. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first-order model to model higher-order constraints by the use of an alphabet extension. We present an iterative method that, based on a set of conditional probabilities, can help in choosing the large number of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  11. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial...... reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals...... are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...
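
    A back-of-the-envelope bound for the first scenario: without any constraints, reordering n labelled packets can convey at most log2(n!) bits, and the constraints imposed by the primary system can only lower this. A tiny computation:

        import math

        def reordering_capacity_bits(n):
            # log2(n!) bits per block of n distinguishable packets
            return math.lgamma(n + 1) / math.log(2.0)

        for n in (4, 16, 64):
            print(n, round(reordering_capacity_bits(n), 1), "bits")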

  12. Q-deformed systems and constrained dynamics

    International Nuclear Information System (INIS)

    Shabanov, S.V.

    1993-01-01

    It is shown that quantum theories of the q-deformed harmonic oscillator and one-dimensional free q-particle (a free particle on the 'quantum' line) can be obtained by the canonical quantization of classical Hamiltonian systems with commutative phase-space variables and a non-trivial symplectic structure. In the framework of this approach, classical dynamics of a particle on the q-line coincides with the one of a free particle with friction. It is argued that q-deformed systems can be treated as ordinary mechanical systems with the second-class constraints. In particular, second-class constrained systems corresponding to the q-oscillator and q-particle are given. A possibility of formulating q-deformed systems via gauge theories (first-class constrained systems) is briefly discussed. (orig.)

  13. Minimal string theory is logarithmic

    International Nuclear Information System (INIS)

    Ishimoto, Yukitaka; Yamaguchi, Shun-ichi

    2005-01-01

    We study the simplest examples of minimal string theory whose worldsheet description is the unitary (p,q) minimal model coupled to two-dimensional gravity (Liouville field theory). In the Liouville sector, we show that four-point correlation functions of 'tachyons' exhibit logarithmic singularities, and that the theory turns out to be logarithmic. The relation with Zamolodchikov's logarithmic degenerate fields is also discussed. Our result holds for generic values of (p,q)

  14. Annual Waste Minimization Summary Report

    International Nuclear Information System (INIS)

    Haworth, D.M.

    2011-01-01

    This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year 2010. The NNSA/NSO Pollution Prevention Program establishes a process to reduce the volume and toxicity of waste generated by NNSA/NSO activities and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment.

  15. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Online Constrained Model-based Reinforcement Learning. Benjamin van Niekerk (School of Computer Science, University of the Witwatersrand, South Africa), Andreas Damianou (Amazon.com, Cambridge, UK), Benjamin Rosman (Council for Scientific and Industrial Research, and School...). Using direct multiple shooting (Bock and Plitt, 1984), the constrained control problem can be transformed into a structured non-linear program (NLP). First, the time horizon [t_0, t_0 + T] is partitioned into N equal subintervals [t_k, t_{k+1}] for k = 0, ...
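
    A minimal sketch of the discretization step quoted above, assuming an unforced dynamics f and given shooting nodes s (the control parametrization and the surrounding NLP are omitted):

        import numpy as np
        from scipy.integrate import solve_ivp

        def shooting_defects(f, s, t0, T, N):
            # integrate f over the N equal subintervals from each node s[k];
            # the NLP drives the defects s[k+1] - x(t_{k+1}; s[k]) to zero
            ts = np.linspace(t0, t0 + T, N + 1)
            return np.array([s[k + 1] - solve_ivp(f, (ts[k], ts[k + 1]),
                                                  s[k], rtol=1e-8).y[:, -1]
                             for k in range(N)])

        # toy check with a harmonic oscillator: consistent nodes -> tiny defects
        f = lambda t, x: np.array([x[1], -x[0]])
        nodes = np.array([[np.cos(t), -np.sin(t)]
                          for t in np.linspace(0.0, 2 * np.pi, 5)])
        print(shooting_defects(f, nodes, 0.0, 2 * np.pi, 4))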

  16. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays g̃ → q q̄ χ and g̃ → g χ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e⁺e⁻ colliders in the search for supersymmetry can be directly compared. (orig.)

  17. Cosmicflows Constrained Local UniversE Simulations

    Science.gov (United States)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h⁻¹ Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s⁻¹, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h⁻¹ Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h⁻¹ Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.

  18. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  19. Statistical mechanics of budget-constrained auctions

    OpenAIRE

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution,...

  20. Constraining neutron star matter with Quantum Chromodynamics

    CERN Document Server

    Kurkela, Aleksi; Schaffner-Bielich, Jurgen; Vuorinen, Aleksi

    2014-01-01

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount --- or even presence --- of quark matter inside the stars.

  1. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To meet this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  2. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of non-dense stellar clusters
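
    For the Poisson/Kullback-Leibler setting described here, the classical instance of such an alternating scheme is blind Richardson-Lucy, which alternates multiplicative updates of the image and of the point spread function; the sketch below (iteration counts and the simple constraint set are our choices, and y, x, h are assumed padded to one common shape) illustrates the structure, not the authors' specific algorithm.

        import numpy as np
        from scipy.signal import fftconvolve

        def blind_rl(y, x, h, outer=20, inner=5):
            # y: observed counts; x: image estimate; h: PSF estimate
            conv = lambda a, b: np.maximum(fftconvolve(a, b, mode="same"), 1e-12)
            flip = lambda a: a[::-1, ::-1]
            for _ in range(outer):
                for _ in range(inner):   # image step, PSF held fixed
                    x = x * fftconvolve(y / conv(x, h), flip(h), mode="same")
                for _ in range(inner):   # PSF step, image held fixed
                    h = h * fftconvolve(y / conv(x, h), flip(x), mode="same")
                h = np.clip(h, 0.0, None)
                h /= h.sum()             # projection onto the PSF constraint set
            return x, h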

  3. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system becomes challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. The deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. Results obtained from the SOS algorithm offer better solutions than the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA). Economic analysis is carried out to quantify the economic benefits of energy loss minimization over the life span of RDGs.
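
    The paper's optimizer is SOS with an embedded power flow; purely to make the energy-loss objective concrete, here is a toy brute-force placement search on a 5-bus series feeder with invented loads and resistances (not the paper's test system or method):

        import numpy as np

        def feeder_losses(loads_kw, dg_bus, dg_kw, r_ohm, v_kv=12.66):
            # crude I^2 R losses on a radial feeder: power for buses k..end
            # flows through branch k; DG output offsets demand at its bus
            inj = np.array(loads_kw, dtype=float)
            if dg_bus is not None:
                inj[dg_bus] -= dg_kw
            flow_kw = np.cumsum(inj[::-1])[::-1]
            i_a = flow_kw / (np.sqrt(3.0) * v_kv)     # amps (kW/kV = A)
            return float(np.sum(i_a**2 * r_ohm) / 1e3)  # kW of losses

        loads = [100, 60, 90, 120, 60]        # invented bus loads (kW)
        r = [0.10, 0.10, 0.15, 0.10, 0.20]    # invented branch resistances (ohm)
        best = min(range(5), key=lambda b: feeder_losses(loads, b, 200, r))
        print("lowest-loss DG bus:", best)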

  4. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown the potential for better results, such as a prior image constrained compressed sensing algorithm. While a pre-full-scan of the same patient is not always available, massive well-reconstructed images of different patients can be easily obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve the image quality by using the prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and total variation constraint is proposed for the reconstruction procedure and a flexible method is adopted for a good solution. Numerical simulations on both the phantom and real clinical patient images were taken to validate our algorithm. Promising results are shown for limited angle problems. (paper)
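
    A stripped-down sketch of the feature constraint: plain PCA standing in for robust principal component analysis, and a soft pull toward the feature space instead of the full bi-criterion program (all names here are ours):

        import numpy as np

        def feature_subspace(training, k):
            # training: vectorized well-reconstructed images, one per column
            mu = training.mean(axis=1, keepdims=True)
            U, _, _ = np.linalg.svd(training - mu, full_matrices=False)
            return U[:, :k], mu.ravel()

        def pull_toward_features(x, U, mu, weight=0.5):
            # relax the current estimate toward its projection onto the
            # low-dimensional feature space, a soft version of the constraint
            # on the distance between the image and the space
            proj = mu + U @ (U.T @ (x - mu))
            return (1.0 - weight) * x + weight * proj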

  5. Minimal but non-minimal inflation and electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia); Racioppi, Antonio [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia)

    2016-10-07

    We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet, that plays the rôle of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10^-3, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  6. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  7. Operator approach to solutions of the constrained BKP hierarchy

    International Nuclear Information System (INIS)

    Shen, Hsin-Fu; Lee, Niann-Chern; Tu, Ming-Hsien

    2011-01-01

    The operator formalism to the vector k-constrained BKP hierarchy is presented. We solve the Hirota bilinear equations of the vector k-constrained BKP hierarchy via the method of neutral free fermion. In particular, by choosing suitable group element of O(∞), we construct rational and soliton solutions of the vector k-constrained BKP hierarchy.

  8. Topological gravity with minimal matter

    International Nuclear Information System (INIS)

    Li Keke

    1991-01-01

    Topological minimal matter, obtained by twisting the minimal N = 2 superconformal field theory, is coupled to two-dimensional topological gravity. The free field formulation of the coupled system allows explicit representations of the BRST charge, physical operators and their correlation functions. The contact terms of the physical operators may be evaluated by extending the argument used in a recent solution of topological gravity without matter. The consistency of the contact terms in correlation functions implies recursion relations which coincide with the Virasoro constraints derived from the multi-matrix models. Topological gravity with minimal matter thus provides the field theoretic description for the multi-matrix models of two-dimensional quantum gravity. (orig.)

  9. Minimal Marking: A Success Story

    Directory of Open Access Journals (Sweden)

    Anne McNeilly

    2014-11-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires dramatically more student engagement, resulted in more successful learning outcomes for surface-level knowledge acquisition than the more traditional approach of "teacher-corrects-all." Results suggest it would be effective, not just for grammar, punctuation, and word usage, the objective here, but for any material that requires rote-memory learning, such as the Associated Press or Canadian Press style rules used by news publications across North America.

  10. Non-minimal inflation revisited

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Shafizadeh, Somayeh

    2010-01-01

    We reconsider an inflationary model in which the inflaton field is non-minimally coupled to gravity. We study the parameter space of the model up to the second (and in some cases third) order of the slow-roll parameters. We calculate inflation parameters in both the Jordan and Einstein frames, and the results are compared between these two frames and also with observations. Using the recent observational data from combined WMAP5+SDSS+SNIa datasets, we study constraints imposed on our model parameters, especially the non-minimal coupling ξ.

  11. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...

  12. Harm minimization among teenage drinkers

    DEFF Research Database (Denmark)

    Jørgensen, Morten Hulvej; Curtis, Tine; Christensen, Pia Haudrup

    2007-01-01

    AIM: To examine strategies of harm minimization employed by teenage drinkers. DESIGN, SETTING AND PARTICIPANTS: Two periods of ethnographic fieldwork were conducted in a rural Danish community of approximately 2000 inhabitants. The fieldwork included 50 days of participant observation among 13....... In regulating the social context of drinking they relied on their personal experiences more than on formalized knowledge about alcohol and harm, which they had learned from prevention campaigns and educational programmes. CONCLUSIONS: In this study we found that teenagers may help each other to minimize alcohol...

  13. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigating and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift free navigation is achieved with respect to the environment.

  14. Incomplete Dirac reduction of constrained Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Chandre, C., E-mail: chandre@cpt.univ-mrs.fr

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.
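
    Numerically, the construction amounts to one pseudoinverse. A toy check with a single second-class pair φ₁ = q, φ₂ = p, where the Dirac-type bracket of q with p must vanish:

        import numpy as np

        def dirac_bracket(pb_fg, pb_f_phi, pb_phi_g, C):
            # {f,g}* = {f,g} - {f,phi_a} C+_{ab} {phi_b,g},  C_ab = {phi_a,phi_b};
            # the Moore-Penrose pseudoinverse C+ ignores the null directions
            # that first-class constraints contribute to C
            return pb_fg - pb_f_phi @ np.linalg.pinv(C) @ pb_phi_g

        C = np.array([[0.0, 1.0], [-1.0, 0.0]])       # {q,p} = 1
        print(dirac_bracket(1.0,
                            np.array([0.0, 1.0]),     # {q, phi_a}
                            np.array([1.0, 0.0]),     # {phi_b, p}
                            C))                       # -> 0.0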

  15. Capturing Hotspots For Constrained Indoor Movement

    DEFF Research Database (Denmark)

    Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua

    2013-01-01

    Finding the hotspots in large indoor spaces is very important for identifying overloaded locations, security, crowd management, and indoor navigation and guidance. The tracking data coming from indoor tracking are huge in volume and not readily usable for finding hotspots. This paper presents a graph......-based model for constrained indoor movement that can map the tracking records into mapping records which represent the entry and exit times of an object in a particular location. Then it discusses the hotspot extraction technique from the mapping records....
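
    A bare-bones version of the mapping step (the field names and flat record layout are our own; the paper's model additionally exploits the indoor graph topology):

        from collections import defaultdict

        def mapping_records(track):
            # track: iterable of (time, object_id, location) readings
            out, cur = [], {}
            for t, obj, loc in sorted(track):
                if obj in cur and cur[obj][0] == loc:
                    continue                 # object stayed where it was
                if obj in cur:               # object left its previous location
                    prev_loc, t_in = cur[obj]
                    out.append((obj, prev_loc, t_in, t))
                cur[obj] = (loc, t)
            return out                       # (object, location, entry, exit)

        def dwell_time(records):
            # total time spent per location; rank descending to get hotspots
            total = defaultdict(float)
            for _, loc, t_in, t_out in records:
                total[loc] += t_out - t_in
            return dict(total)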

  16. Quantization of soluble classical constrained systems

    International Nuclear Information System (INIS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-01-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which does neither require Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way

  17. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which does neither require Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  18. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-C4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-C2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-C4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938

  19. Restoration ecology: two-sex dynamics and cost minimization.

    Directory of Open Access Journals (Sweden)

    Ferenc Molnár

    Full Text Available We model a spatially detailed, two-sex population dynamics to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.

  20. Restoration ecology: two-sex dynamics and cost minimization.

    Science.gov (United States)

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model a spatially detailed, two-sex population dynamics to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.
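
    Whether an introduced cluster survives the strong Allee effect is the crux of the cost question above. A minimal single-sex sketch (illustrative parameters, not the paper's two-sex model) integrates u_t = D u_xx + r u(u - a)(1 - u) explicitly and reports whether a rectangular initial cluster spreads or collapses:

        import numpy as np

        D, r, a = 1.0, 1.0, 0.3            # diffusion, growth, Allee threshold
        L, n, dt, steps = 100.0, 512, 0.01, 20000
        dx = L / n
        x = np.linspace(0, L, n)
        u = np.zeros(n)
        u[np.abs(x - L / 2) < 10.0] = 1.0  # rectangular cluster of width 20

        for _ in range(steps):
            # Periodic boundaries via roll, for simplicity.
            lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
            u += dt * (D * lap + r * u * (u - a) * (1.0 - u))

        print("restored" if u.mean() > 0.5 else "extinct")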

  1. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Full Text Available Hospital outpatient departments operate by selling fixed period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full time and part time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department) optimally. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full time and part time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the appointment scheduling decision to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. We then use the chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single level programming in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.
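
    When the fuzzy parameters are triangular and "chance" is measured by credibility (an assumption here, following Liu-style credibility theory; the abstract only names a chance operator), the crisp transformation has a closed form. A sketch:

        def credibility_leq(x, a, b, c):
            # Cr{xi <= x} for a triangular fuzzy variable xi = (a, b, c).
            if x <= a: return 0.0
            if x <= b: return (x - a) / (2.0 * (b - a))
            if x <= c: return (c + x - 2.0 * b) / (2.0 * (c - b))
            return 1.0

        def crisp_bound(alpha, a, b, c):
            # Smallest K with Cr{xi <= K} >= alpha (inverse of the above).
            if alpha <= 0.5:
                return a + 2.0 * alpha * (b - a)
            return 2.0 * b - c + 2.0 * alpha * (c - b)

        # Fuzzy demand (80, 100, 130); require Cr{demand <= capacity} >= 0.9.
        K = crisp_bound(0.9, 80.0, 100.0, 130.0)
        print(K, credibility_leq(K, 80.0, 100.0, 130.0))  # 124.0 0.9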

  2. Constraining the break of spatial diffeomorphism invariance with Planck data

    Science.gov (United States)

    Graef, L. L.; Benetti, M.; Alcaniz, J. S.

    2017-07-01

    The current most accepted paradigm for early universe cosmology, the inflationary scenario, shows good agreement with the recent Cosmic Microwave Background (CMB) and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a large range of red tensor tilt values, favoring the blue ones, which are not predicted by the minimal inflationary models. Recently, it has been shown that the assumption of spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences. Among them is the possibility of generating a blue tensor spectrum, which can recover the specific consistency relation of String Gas Cosmology for a certain choice of parameters. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis, assuming as reference an extended ΛCDM+tensor perturbation model, which considers a power-law tensor spectrum parametrized in terms of the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = −8n_t, we obtain strong evidence in favor of the reference model, whereas if such relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation and String Gas Cosmology.

  3. Constraining the break of spatial diffeomorphism invariance with Planck data

    Energy Technology Data Exchange (ETDEWEB)

    Graef, L.L.; Benetti, M.; Alcaniz, J.S., E-mail: leilagraef@on.br, E-mail: micolbenetti@on.br, E-mail: alcaniz@on.br [Departamento de Astronomia, Observatório Nacional, R. Gen. José Cristino, 77—São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2017-07-01

    The current most accepted paradigm for early universe cosmology, the inflationary scenario, shows good agreement with the recent Cosmic Microwave Background (CMB) and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a large range of red tensor tilt values, favoring the blue ones, which are not predicted by the minimal inflationary models. Recently, it has been shown that the assumption of spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences. Among them is the possibility of generating a blue tensor spectrum, which can recover the specific consistency relation of String Gas Cosmology for a certain choice of parameters. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis, assuming as reference an extended ΛCDM+tensor perturbation model, which considers a power-law tensor spectrum parametrized in terms of the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = −8n_t, we obtain strong evidence in favor of the reference model, whereas if such relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation and String Gas Cosmology.
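
    The parametrization above is simple to state concretely: a power-law tensor spectrum whose tilt is tied to r by the consistency relation. A small sketch (the scalar amplitude and pivot scale follow common Planck conventions and are assumptions here):

        # Power-law tensor spectrum P_t(k) = r * A_s * (k / k_pivot)**n_t,
        # with the inflation consistency relation n_t = -r / 8.
        A_s, k_pivot = 2.1e-9, 0.05   # scalar amplitude, pivot scale (1/Mpc)
        r = 0.06                      # tensor-to-scalar ratio (illustrative)
        n_t = -r / 8.0                # consistency relation r = -8 n_t

        def P_t(k):
            return r * A_s * (k / k_pivot) ** n_t

        print(P_t(0.002), P_t(0.05))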

  4. Bulk diffusion in a kinetically constrained lattice gas

    Science.gov (United States)

    Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone

    2018-03-01

    In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.
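
    Restricted to a finite-dimensional space of test functions, the variational formula becomes a quadratic form in the coefficients, so the minimization reduces to one linear solve and the minimum is an upper bound on the diffusion coefficient. A schematic sketch (random positive-definite data stands in for the Gram matrix and vector that would come from equilibrium averages of the lattice gas):

        import numpy as np

        rng = np.random.default_rng(1)
        k = 6                                # dimension of the test-function space
        M = rng.normal(size=(k, k))
        A = M @ M.T + k * np.eye(k)          # Gram matrix (positive definite)
        b = rng.normal(size=k)
        d0 = 1.0                             # bare (mean-field) coefficient

        # F(c) = d0 - 2 b.c + c.A.c is minimized at c* = A^{-1} b,
        # giving the upper bound D <= d0 - b.A^{-1}.b.
        c_star = np.linalg.solve(A, b)
        D_upper = d0 - b @ c_star
        print(D_upper)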

  5. Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging.

    Directory of Open Access Journals (Sweden)

    Xingjian Yu

    Full Text Available In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data acquired over durations ranging from 10 seconds to minutes, selected according to some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition and, moreover, cannot provide separate information about the structure and the variations in shape and metabolic rate, which play a major role in improving contrast visualization for some diagnostic applications. This paper presents a novel low rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low rank component while variations between successive frames are abstracted as a sparse component. The resulting minimization problem, involving the nuclear norm and the l1 norm, can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets.
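
    The low rank + sparse split at the heart of SLCR is the robust-PCA decomposition min ||L||_* + lam*||S||_1 subject to L + S = M. A toy sketch using a basic augmented-Lagrangian iteration (the paper applies a linearized alternating direction method to the full PET forward model; the parameter defaults below follow common robust-PCA practice and are assumptions):

        import numpy as np

        def svt(X, tau):
            # Singular value thresholding: prox of the nuclear norm.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def shrink(X, tau):
            # Soft thresholding: prox of the l1 norm.
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def rpca(M, iters=200):
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n))
            mu = 0.25 * m * n / np.abs(M).sum()
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            for _ in range(iters):
                L = svt(M - S + Y / mu, 1.0 / mu)
                S = shrink(M - L + Y / mu, lam / mu)
                Y += mu * (M - L - S)
            return L, S

        # Toy usage: a rank-1 "background" plus one sparse spike.
        M = np.outer(np.ones(40), np.linspace(0.1, 1.0, 60))
        M[5, 7] += 5.0
        L, S = rpca(M)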

  6. Isoperimetric inequalities for minimal graphs

    International Nuclear Information System (INIS)

    Pacelli Bessa, G.; Montenegro, J.F.

    2007-09-01

    Based on Markvorsen and Palmer's work on mean exit time and isoperimetric inequalities, we establish slightly better isoperimetric inequalities and mean exit time estimates for minimal graphs in N x R. We also prove isoperimetric inequalities for submanifolds of Hadamard spaces with tamed second fundamental form. (author)

  7. A Defense of Semantic Minimalism

    Science.gov (United States)

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  8. Torsional Rigidity of Minimal Submanifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, Vicente

    2006-01-01

    We prove explicit upper bounds for the torsional rigidity of extrinsic domains of minimal submanifolds $P^m$ in ambient Riemannian manifolds $N^n$ with a pole $p$. The upper bounds are given in terms of the torsional rigidities of corresponding Schwarz symmetrizations of the domains in warped...

  9. The debate on minimal deterrence

    International Nuclear Information System (INIS)

    Arbatov, A.; Karp, R.C.; Toth, T.

    1993-01-01

    Revitalization of debates on minimal nuclear deterrence at the present time is induced by the end of the Cold War and a number of unilateral and bilateral actions by the great powers to curtail the nuclear arms race and reduce nuclear weapons arsenals

  10. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-01-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). The Waste Minimization Policy field has undergone continuous change since its formal inception in the 1984 HSWA legislation. The first LLNL WMPP, Revision A, is dated March 1985. A series of informal revisions was made on approximately a semi-annual basis. This Revision 2 is the third formal issuance of the WMPP document. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this new policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. In response to these policies, DOE has revised and issued implementation guidance for DOE Order 5400.1, Waste Minimization Plan and Waste Reduction Reporting of DOE Hazardous, Radioactive, and Radioactive Mixed Wastes, final draft January 1990. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements. 3 figs., 4 tabs

  11. Minimizing TLD-DRD differences

    International Nuclear Information System (INIS)

    Riley, D.L.; McCoy, R.A.; Connell, W.D.

    1987-01-01

    When substantial differences exist in exposures recorded by TLDs and DRDs, it is often necessary to perform an exposure investigation to reconcile the difference. In working with several operating plants, the authors have observed a number of causes for these differences. This paper outlines these observations and discusses procedures that can be used to minimize them

  12. Acquiring minimally invasive surgical skills

    NARCIS (Netherlands)

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room.

  13. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Full Text Available Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science, three main approaches to framework change were detected in the humanist tradition: (1) in both the pre-theoretical and theoretical domains, changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton); (2) changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard); (3) between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, and provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern that is neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral rather than dialectical and therefore does not correspond to position three.

  14. Fringe instability in constrained soft elastic layers.

    Science.gov (United States)

    Lin, Shaoting; Cohen, Tal; Zhang, Teng; Yuk, Hyunwoo; Abeyaratne, Rohan; Zhao, Xuanhe

    2016-11-04

    Soft elastic layers with top and bottom surfaces adhered to rigid bodies are abundant in biological organisms and engineering applications. As the rigid bodies are pulled apart, the stressed layer can exhibit various modes of mechanical instabilities. In cases where the layer's thickness is much smaller than its length and width, the dominant modes that have been studied are the cavitation, interfacial and fingering instabilities. Here we report a new mode of instability which emerges if the thickness of the constrained elastic layer is comparable to or smaller than its width. In this case, the middle portion along the layer's thickness elongates nearly uniformly while the constrained fringe portions of the layer deform nonuniformly. When the applied stretch reaches a critical value, the exposed free surfaces of the fringe portions begin to undulate periodically without debonding from the rigid bodies, giving the fringe instability. We use experiments, theory and numerical simulations to quantitatively explain the fringe instability and derive scaling laws for its critical stress, critical strain and wavelength. We show that in a force controlled setting the elastic fingering instability is associated with a snap-through buckling that does not exist for the fringe instability. The discovery of the fringe instability will not only advance the understanding of mechanical instabilities in soft materials but also have implications for biological and engineered adhesives and joints.

  15. Resource Constrained Project Scheduling Subject to Due Dates: Preemption Permitted with Penalty

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2014-01-01

    Full Text Available Extensive research has been carried out on the resource constrained project scheduling problem. However, few studies have addressed problems in which a setup cost must be incurred if activities are preempted. In this research, we investigate the resource constrained project scheduling problem with the objective of minimizing the total project cost, considering earliness-tardiness and preemption penalties. A mixed integer programming formulation is proposed for the problem. The resulting problem is NP-hard, so we try to obtain a satisfying solution using a simulated annealing (SA) algorithm. The efficiency of the proposed algorithm is tested on 150 randomly generated examples. Statistical comparison in terms of computational times and objective function values indicates that the proposed algorithm is efficient and effective.
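
    The SA component is a standard Metropolis-style loop; a generic sketch (cost and neighbor below are placeholders for the schedule objective with earliness-tardiness and preemption penalties and its perturbation moves):

        import math, random

        def simulated_annealing(cost, neighbor, x0, T0=10.0, alpha=0.995, iters=5000):
            x, fx = x0, cost(x0)
            best, fbest, T = x, fx, T0
            for _ in range(iters):
                y = neighbor(x)
                fy = cost(y)
                # Accept improvements always, worse moves with Boltzmann probability.
                if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                T *= alpha                    # geometric cooling
            return best, fbest

        # Toy usage: minimize (x - 7)^2 over integers.
        print(simulated_annealing(lambda x: (x - 7) ** 2,
                                  lambda x: x + random.choice([-1, 1]), 0))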

  16. Null Space Integration Method for Constrained Multibody Systems with No Constraint Violation

    International Nuclear Information System (INIS)

    Terze, Zdravko; Lefeber, Dirk; Muftic, Osman

    2001-01-01

    A method for integrating the equations of motion of constrained multibody systems with no constraint violation is presented. A mathematical model, shaped as a differential-algebraic system of index 1, is transformed into a system of ordinary differential equations using the null-space projection method. The equations of motion are set in a non-minimal form. During integration, violations of the constraints are corrected by solving the constraint equations at the position and velocity levels, utilizing the metric of the system's configuration space and applying a projective criterion to the coordinate partitioning method. The method is applied to the dynamic simulation of a 3D constrained biomechanical system. The simulation results are evaluated by comparing them to the values of characteristic parameters obtained by kinematic analysis of the analyzed motion based on measured kinematics data
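
    The position-level correction amounts to projecting the state back onto the constraint manifold C(q) = 0 after each step. A minimal sketch with a single algebraic constraint and a Euclidean minimum-norm correction (the method additionally weights the correction with the configuration-space metric, which this sketch omits):

        import numpy as np

        def C(q):                      # toy constraint: |q| = 1
            return np.array([q @ q - 1.0])

        def J(q):                      # constraint Jacobian dC/dq
            return 2.0 * q.reshape(1, -1)

        def project(q, tol=1e-12, max_iter=20):
            for _ in range(max_iter):
                c = C(q)
                if np.linalg.norm(c) < tol:
                    break
                Jq = J(q)
                # Minimum-norm Newton correction: dq = -J^T (J J^T)^{-1} c.
                q = q - Jq.T @ np.linalg.solve(Jq @ Jq.T, c)
            return q

        print(project(np.array([1.2, -0.3, 0.4])))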

  17. Constrained reaction volume approach for studying chemical kinetics behind reflected shock waves

    KAUST Repository

    Hanson, Ronald K.

    2013-09-01

    We report a constrained-reaction-volume strategy for conducting kinetics experiments behind reflected shock waves, achieved in the present work by staged filling in a shock tube. Using hydrogen-oxygen ignition experiments as an example, we demonstrate that this strategy eliminates the possibility of non-localized (remote) ignition in shock tubes. Furthermore, we show that this same strategy can also effectively eliminate or minimize pressure changes due to combustion heat release, thereby enabling quantitative modeling of the kinetics throughout the combustion event using a simple assumption of specified pressure and enthalpy. We measure temperature and OH radical time-histories during ethylene-oxygen combustion behind reflected shock waves in a constrained reaction volume and verify that the results can be accurately modeled using a detailed mechanism and a specified pressure and enthalpy constraint. © 2013 The Combustion Institute.

  18. Hazardous waste minimization tracking system

    International Nuclear Information System (INIS)

    Railan, R.

    1994-01-01

    Under RCRA sections 3002(b) and 3005(h), hazardous waste generators and owners/operators of treatment, storage, and disposal facilities (TSDFs) are required to certify that they have a program in place to reduce the volume or quantity and toxicity of hazardous waste to the degree determined to be economically practicable. In many cases there are environmental, as well as economic, benefits for agencies that pursue pollution prevention options. Several state governments have already enacted waste minimization legislation (e.g., the Massachusetts Toxic Use Reduction Act of 1989, and the Oregon Toxic Use Reduction Act and Hazardous Waste Reduction Act, July 2, 1989). About twenty-six other states have established legislation that will mandate some type of waste minimization program and/or facility planning. The need to address the HAZMIN (Hazardous Waste Minimization) Program at government agencies and private industries has prompted us to identify the importance of managing the HAZMIN Program and tracking various aspects of the program, as well as the progress made in this area. "WASTE" is a tracking system which can be used and modified to maintain the information related to a Hazardous Waste Minimization Program in a manageable fashion. This program maintains, modifies, and retrieves information related to hazardous waste minimization and recycling, and provides automated report generating capabilities. It has a built-in menu, which can be printed either in part or in full. There are instructions on preparing the Annual Waste Report and the Annual Recycling Report. The program is very user friendly. It is available on 3.5 inch or 5 1/4 inch floppy disks. A computer with 640K memory is required

  19. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  20. Finding a minimally informative Dirichlet prior distribution using least squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  1. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
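
    One way to realize the least-squares idea (a sketch under assumptions, not the papers' exact objective): fix the marginal means at the specified alpha-factors, so the Dirichlet marginals are Beta(mean_i * a0, (1 - mean_i) * a0), and choose the precision a0 by least squares so the marginal 95th percentiles approach chosen targets:

        import numpy as np
        from scipy.stats import beta
        from scipy.optimize import minimize

        means = np.array([0.95, 0.04, 0.01])    # illustrative alpha-factor means
        targets = np.array([0.99, 0.10, 0.05])  # illustrative 95th-percentile targets

        def loss(log_a0):
            a0 = np.exp(log_a0[0])
            q95 = beta.ppf(0.95, means * a0, (1.0 - means) * a0)
            return np.sum((q95 - targets) ** 2)

        a0 = np.exp(minimize(loss, [np.log(5.0)]).x[0])
        print("alpha =", means * a0)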

  2. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is growing interest from both the regulatory authorities and the nuclear industry to stimulate the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter in optimization problems. This paper presents an approach for using PRA models in conducting the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or burden is to be minimized while the risk or performance is constrained to be at a given level, or vice versa. The paper first addresses the problem formulation, where the objective function and constraints that apply in the constrained optimization of TIs based on risk and cost models at the system level are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA in order to allow optimizing TIs under constraints. A case study is then performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms for the constrained optimization of TIs; a great benefit is also expected from using this approach to solve other engineering optimization problems. However, care must be taken in using genetic algorithms in constrained optimization problems, as concluded in this paper
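
    A steady-state GA replaces one (worst) individual per iteration instead of a whole generation. A compact sketch with stand-in cost and risk models and a penalty handling of the risk constraint (all models, bounds and parameters illustrative):

        import random

        def cost(T): return sum(1000.0 / t for t in T)   # testing burden
        def risk(T): return sum(1e-6 * t for t in T)     # unavailability proxy
        RISK_MAX, ITERS, POP = 1e-3, 2000, 30

        def fitness(T):
            return cost(T) + max(0.0, risk(T) - RISK_MAX) * 1e9  # penalty

        pop = [[random.uniform(100, 1000) for _ in range(4)] for _ in range(POP)]
        for _ in range(ITERS):
            p1, p2 = random.sample(pop, 2)
            # Uniform crossover plus Gaussian mutation, clamped to bounds.
            child = [min(1000.0, max(100.0, random.choice(g) + random.gauss(0, 10)))
                     for g in zip(p1, p2)]
            worst = max(range(POP), key=lambda i: fitness(pop[i]))
            if fitness(child) < fitness(pop[worst]):
                pop[worst] = child          # steady-state replacement
        best = min(pop, key=fitness)
        print(best, cost(best), risk(best))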

  3. Minimalism and the Pragmatic Frame

    Directory of Open Access Journals (Sweden)

    Ana Falcato

    2016-02-01

    Full Text Available In the debate between literalism and contextualism in semantics, Kent Bach's project is often taken to stand on the latter side of the divide. In this paper I argue that this is a misleading assumption and justify this claim by contrasting Bach's assessment of the theoretical eliminability of the minimal propositions arguably expressed by well-formed sentences with standard minimalist views, and by further contrasting his account of the division of interpretative processes ascribable to the semantics and pragmatics of a language with a parallel analysis carried out by the most radical opponent of semantic minimalism, i.e., by occasionalism. If my analysis proves right, the sum of its conclusions amounts to a rejection of Bach's main dichotomies.

  4. Principle of minimal work fluctuations.

    Science.gov (United States)

    Xiao, Gaoyang; Gong, Jiangbin

    2015-08-01

    Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality ⟨e^{−βW}⟩ = e^{−βΔF}, a change in the fluctuations of e^{−βW} may impact how rapidly the statistical average of e^{−βW} converges towards the theoretical value e^{−βΔF}, where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e^{−βW}. In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e^{−βW}, where W is the quantum work defined by two energy measurements at the beginning and at the end of the process. In the classical domain, where the classical work protocol is realizable by an adiabatic process, the classical adiabatic process also yields the minimal fluctuations in e^{−βW}. Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].
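
    The classical statement is easy to check numerically. For a harmonic oscillator whose stiffness is raised from k0 to k1, a sudden quench and an adiabatic ramp (captured through the adiabatic invariant E/omega) share the same free energy difference, but the adiabatic protocol gives smaller fluctuations of e^{−βW}. A sketch with illustrative parameters:

        import numpy as np

        rng = np.random.default_rng(0)
        beta, k0, k1, N = 1.0, 1.0, 4.0, 10**6
        x = rng.normal(0.0, 1.0 / np.sqrt(beta * k0), N)   # equilibrium positions
        p = rng.normal(0.0, 1.0 / np.sqrt(beta), N)        # momenta (unit mass)

        W_sudden = 0.5 * (k1 - k0) * x**2                  # instantaneous quench
        E0 = 0.5 * p**2 + 0.5 * k0 * x**2
        W_adiab = E0 * (np.sqrt(k1 / k0) - 1.0)            # E/omega conserved

        for W in (W_sudden, W_adiab):
            g = np.exp(-beta * W)
            print(g.mean(), g.var())   # means agree (Jarzynski); variances differ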

  5. Optimizing Processes to Minimize Risk

    Science.gov (United States)

    Loyd, David

    2017-01-01

    NASA, like other hazardous industries, has suffered catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you can't eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.

  6. Minimal Length, Measurability and Gravity

    Directory of Open Access Journals (Sweden)

    Alexander Shalyt-Margolin

    2016-03-01

    Full Text Available The present work is a continuation of the author's previous papers on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are then used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.

  7. [Minimally invasive coronary artery surgery].

    Science.gov (United States)

    Zalaquett, R; Howard, M; Irarrázaval, M J; Morán, S; Maturana, G; Becker, P; Medel, J; Sacco, C; Lema, G; Canessa, R; Cruz, F

    1999-01-01

    There is a growing interest in performing a left internal mammary artery (LIMA) graft to the left anterior descending coronary artery (LAD) on a beating heart through minimally invasive access to the chest cavity. To report the experience with minimally invasive coronary artery surgery. Analysis of 11 patients aged 48 to 79 years with single vessel disease who, between 1996 and 1997, had a LIMA graft to the LAD performed through a minimally invasive left anterior mediastinotomy, without cardiopulmonary bypass. A 6 to 10 cm left parasternal incision was made. The LIMA to LAD anastomosis was done after pharmacological heart rate and blood pressure control and a period of ischemic preconditioning. Graft patency was confirmed intraoperatively by standard Doppler techniques. Patients were followed for a mean of 11.6 months (7-15 months). All patients were extubated in the operating room and transferred out of the intensive care unit on the next morning. Seven patients were discharged on the third postoperative day. Duplex scanning confirmed graft patency in all patients before discharge; in two patients, it was confirmed additionally by arteriography. There was no hospital mortality, no perioperative myocardial infarction and no bleeding problems. During follow up, ten patients were free of angina, in functional class I and pleased with the surgical and cosmetic results. One patient developed atypical angina in the seventh postoperative month and selective arteriography confirmed stenosis of the anastomosis. A successful angioplasty of the original LAD lesion was carried out. A minimally invasive left anterior mediastinotomy is a good surgical access for performing a successful LIMA to LAD graft without cardiopulmonary bypass, allowing a shorter hospital stay and earlier postoperative recovery. However, a larger experience and a longer follow up are required to define its role in the treatment of coronary artery disease.

  8. Minimal massive 3D gravity

    International Nuclear Information System (INIS)

    Bergshoeff, Eric; Merbis, Wout; Hohm, Olaf; Routh, Alasdair J; Townsend, Paul K

    2014-01-01

    We present an alternative to topologically massive gravity (TMG) with the same ‘minimal’ bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new ‘minimal massive gravity’ has both a positive energy graviton and positive central charges for the asymptotic AdS-boundary conformal algebra. (paper)

  9. Construction schedules slack time minimizing

    Science.gov (United States)

    Krzemiński, Michał

    2017-07-01

    The article presents two original models for minimizing the downtime of work brigades. The models have been developed for construction schedules executed using the uniform work method. Application of flow shop models is possible and useful for the implementation of large projects which can be divided into plots. The article also presents a condition describing which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the validity of the work on the newly developed models.

  10. Acquiring minimally invasive surgical skills

    OpenAIRE

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room. This thesis has led to an enlarged insight in the organization of surgical skills training during residency training of surgical medical specialists.

  11. Scheduling of resource-constrained projects

    CERN Document Server

    Klein, Robert

    2000-01-01

    Project management has become a widespread instrument enabling organizations to efficiently master the challenges of steadily shortening product life cycles, global markets and decreasing profit margins. With projects increasing in size and complexity, their planning and control represents one of the most crucial management tasks. This is especially true for scheduling, which is concerned with establishing execution dates for the sub-activities to be performed in order to complete the project. The ability to manage projects where resources must be allocated between concurrent projects or even sub-activities of a single project requires the use of commercial project management software packages. However, the results yielded by the solution procedures included are often rather unsatisfactory. Scheduling of Resource-Constrained Projects develops more efficient procedures, which can easily be integrated into software packages by incorporated programming languages, and thus should be of great interest for practiti...

  12. Constrained mathematics evaluation in probabilistic logic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Arlin Cooper, J

    1998-06-01

    A challenging problem in mathematically processing uncertain operands is that constraints inherent in the problem definition can require computations that are difficult to implement. Examples of possible constraints are that the sum of the probabilities of partitioned possible outcomes must be one, and that repeated appearances of the same variable must all have the same value. The latter, called the 'repeated variable problem', is addressed in this paper in order to show how interval-based probabilistic evaluation of Boolean logic expressions, such as those describing the outcomes of fault trees and event trees, can be facilitated in a way that can be readily implemented in software. We illustrate techniques that can be used to transform complex constrained problems into trivial problems in most tree logic expressions, and into tractable problems in most other cases.
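
    A toy illustration of the repeated variable problem: naive interval arithmetic on f(p) = p(1 - p) treats the two appearances of p as independent and overstates the range, while constraining them to be the same variable recovers the true envelope:

        lo, hi = 0.2, 0.6

        # Naive: p and (1 - p) varied independently over their intervals.
        cands = [a * b for a in (lo, hi) for b in (1 - lo, 1 - hi)]
        naive = (min(cands), max(cands))                   # (0.08, 0.48)

        # Constrained: the same p in both factors; scan the single variable.
        grid = [lo + (hi - lo) * i / 1000 for i in range(1001)]
        vals = [p * (1 - p) for p in grid]
        exact = (min(vals), max(vals))                     # (0.16, 0.25)
        print(naive, exact)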

  13. Constraining dark sectors with monojets and dijets

    International Nuclear Information System (INIS)

    Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai; McCullough, Matthew

    2015-03-01

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  14. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted cKP_{K+1,M}) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. © 1997 American Institute of Physics

  15. Quantum cosmology of classically constrained gravity

    International Nuclear Information System (INIS)

    Gabadadze, Gregory; Shang Yanwen

    2006-01-01

    In [G. Gabadadze, Y. Shang, hep-th/0506040] we discussed a classically constrained model of gravity. This theory contains known solutions of General Relativity (GR), and admits solutions that are absent in GR. Here we study cosmological implications of some of these new solutions. We show that a spatially flat de Sitter universe can be created from 'nothing'. This universe has boundaries, and its total energy equals zero. Although the probability to create such a universe is exponentially suppressed, it favors initial conditions suitable for inflation. We then discuss a finite-energy solution with a nonzero cosmological constant and zero space-time curvature. There is no tunneling suppression to fluctuate into this state. We show that for a positive cosmological constant this state is unstable: it can rapidly transition to a de Sitter universe, providing a new unsuppressed channel for inflation. For a negative cosmological constant the space-time flat solution is stable.

  16. Multiple Clustering Views via Constrained Projections

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Assent, Ira; Bailey, James

    2012-01-01

    Clustering, the grouping of data based on mutual similarity, is often used as one of the principal tools to analyze and understand data. Unfortunately, most conventional techniques aim at finding only a single clustering over the data. For many practical applications, especially those involving high dimensional data, it is common to see that the data can be grouped in different yet meaningful ways. This gives rise to the recently emerging research area of discovering alternative clusterings. In this preliminary work, we propose a novel framework to generate multiple clustering views. The framework relies on a constrained data projection approach by which we ensure that a newly found alternative clustering is not only qualitatively strong but also distinctively different from a reference clustering solution. We demonstrate the potential of the proposed framework using both synthetic...

  17. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  18. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang; Yang, Yijun; Pottmann, Helmut; Mitra, Niloy J.

    2011-01-01

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.
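
    Concretely, the first-order approximant (tangent space) at a feasible mesh is the null space of the constraint Jacobian. A small sketch with a toy constraint system; the planarity/circumcircle constraints of the papers are assembled the same way, one row per face constraint:

        import numpy as np

        def jacobian(C, x, eps=1e-6):
            # Central-difference Jacobian of the constraint map C.
            m, n = len(C(x)), len(x)
            Jm = np.zeros((m, n))
            for j in range(n):
                dx = np.zeros(n); dx[j] = eps
                Jm[:, j] = (C(x + dx) - C(x - dx)) / (2 * eps)
            return Jm

        C = lambda x: np.array([x @ x - 1.0, x[0] - x[1]])   # toy constraints
        x0 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)        # feasible point
        U, s, Vt = np.linalg.svd(jacobian(C, x0))
        tangent = Vt[np.sum(s > 1e-9):]   # orthonormal basis of the tangent space
        print(tangent)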

  19. Constrained vertebrate evolution by pleiotropic genes.

    Science.gov (United States)

    Hu, Haiyang; Uesaka, Masahiro; Guo, Song; Shimai, Kotaro; Lu, Tsai-Ming; Li, Fang; Fujimoto, Satoko; Ishikawa, Masato; Liu, Shiping; Sasagawa, Yohei; Zhang, Guojie; Kuratani, Shigeru; Yu, Jr-Kai; Kusakabe, Takehiro G; Khaitovich, Philipp; Irie, Naoki

    2017-11-01

    Despite morphological diversification of chordates over 550 million years of evolution, their shared basic anatomical pattern (or 'bodyplan') remains conserved by unknown mechanisms. The developmental hourglass model attributes this to phylum-wide conserved, constrained organogenesis stages that pattern the bodyplan (the phylotype hypothesis); however, there has been no quantitative testing of this idea with a phylum-wide comparison of species. Here, based on data from early-to-late embryonic transcriptomes collected from eight chordates, we suggest that the phylotype hypothesis would be better applied to vertebrates than chordates. Furthermore, we found that vertebrates' conserved mid-embryonic developmental programmes are intensively recruited to other developmental processes, and the degree of the recruitment positively correlates with their evolutionary conservation and essentiality for normal development. Thus, we propose that the intensively recruited genetic system during vertebrates' organogenesis period imposed constraints on diversification through pleiotropy, which ultimately led to the common anatomical pattern observed in vertebrates.

  20. Constraining Lyman continuum escape using Machine Learning

    Science.gov (United States)

    Giri, Sambit K.; Zackrisson, Erik; Binggeli, Christian; Pelckmans, Kristiaan; Cubo, Rubén; Mellema, Garrelt

    2018-05-01

    The James Webb Space Telescope (JWST) will observe the rest-frame ultraviolet/optical spectra of galaxies from the epoch of reionization (EoR) in unprecedented detail. While escaping into the intergalactic medium, hydrogen-ionizing (Lyman continuum; LyC) photons from the galaxies will contribute to the bluer end of the UV slope and make nebular emission lines less prominent. We present a method to constrain leakage of the LyC photons using the spectra of high redshift (z ≳ 6) galaxies. We simulate JWST/NIRSpec observations of galaxies at z = 6-9 by matching the fluxes of galaxies observed in the Frontier Fields observations of the galaxy cluster MACS-J0416. Our method predicts the escape fraction f_esc with a mean absolute error Δf_esc ≈ 0.14. The method also predicts the redshifts of the galaxies with an error.

  1. Statistical mechanics of budget-constrained auctions

    International Nuclear Information System (INIS)

    Altarelli, F; Braunstein, A; Realpe-Gomez, J; Zecchina, R

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise

  2. Constraining Dark Sectors with Monojets and Dijets

    CERN Document Server

    Chala, Mikael; McCullough, Matthew; Nardini, Germano; Schmidt-Hoberg, Kai

    2015-01-01

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simpli...

  3. Statistical mechanics of budget-constrained auctions

    Science.gov (United States)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  4. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in the background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error of estimators based on the proposed algorithm matches, and usually dominates, that of more elaborate maximum likelihood estimators, at a fraction of the computational effort
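
    The effect of the constraint itself is easy to demonstrate: plain least squares deconvolution produces the negative lobes mentioned above, while non-negativity-constrained least squares suppresses them. A toy sketch (this illustrates the constraint only, not the paper's fast approximation scheme):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        n = 80
        x_true = np.zeros(n); x_true[25:30] = 1.0; x_true[50:60] = 0.5
        kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)   # Gaussian blur
        A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same")
                      for i in range(n)]).T
        y = A @ x_true + 0.02 * rng.normal(size=n)

        x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)   # unconstrained: negative lobes
        x_nn, _ = nnls(A, y)                           # non-negativity enforced
        print(x_ls.min(), x_nn.min())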

  5. Constraining dark sectors with monojets and dijets

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); McCullough, Matthew [European Organization for Nuclear Research (CERN), Geneva (Switzerland). Theory Div.

    2015-03-15

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  6. Hard exclusive meson production to constrain GPDs

    Energy Technology Data Exchange (ETDEWEB)

    Wolbeek, Johannes ter; Fischer, Horst; Gorzellik, Matthias; Gross, Arne; Joerg, Philipp; Koenigsmann, Kay; Malm, Pasquale; Regali, Christopher; Schmidt, Katharina; Sirtl, Stefan; Szameitat, Tobias [Physikalisches Institut, Albert-Ludwigs-Universitaet Freiburg, Freiburg im Breisgau (Germany); Collaboration: COMPASS Collaboration

    2014-07-01

    The concept of Generalized Parton Distributions (GPDs) combines the two-dimensional spatial information, given by form factors, with the longitudinal momentum information from the PDFs. Thus, GPDs provide a three-dimensional 'tomography' of the nucleon. Furthermore, according to Ji's sum rule, the GPDs H and E enable access to the total angular momenta of quarks, antiquarks and gluons. While H can be approached using the electroproduction cross section, hard exclusive meson production off a transversely polarized target can help to constrain the GPD E. At the COMPASS experiment at CERN, two periods of data taking were performed in 2007 and 2010, using a longitudinally polarized 160 GeV/c muon beam and a transversely polarized NH₃ target. This talk introduces the data analysis of the process μ + p → μ' + p' + V, and recent results are presented.

  7. Constraining the ensemble Kalman filter for improved streamflow forecasting

    Science.gov (United States)

    Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James

    2018-05-01

    Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also
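
    As a minimal sketch of the 'typical' mass-constrained implementation described above, the code below performs one stochastic EnKF analysis step and then projects each updated ensemble member back into [0, capacity]. Flux constraints, which would additionally bound the state increments, are omitted; all variable names, sizes, and values are illustrative assumptions.

```python
# One stochastic EnKF analysis step with "mass" constraints enforced by
# projecting updated ensemble members back into physical bounds.
import numpy as np

def constrained_enkf_step(X, H, y, r_var, capacity, rng):
    """X: (n_state, n_ens) ensemble of model states (e.g. storage volumes);
    H: (n_state,) linear operator mapping a state to the scalar observation;
    y: observed value; r_var: observation error variance."""
    n_state, n_ens = X.shape
    Y = H @ X                                 # (n_ens,) predicted observations
    x_dev = X - X.mean(axis=1, keepdims=True)
    y_dev = Y - Y.mean()
    P_xy = x_dev @ y_dev / (n_ens - 1)        # state-observation covariance
    P_yy = y_dev @ y_dev / (n_ens - 1) + r_var
    K = P_xy / P_yy                           # Kalman gain (scalar obs)
    perturbed_obs = y + rng.normal(0.0, np.sqrt(r_var), n_ens)
    X_a = X + np.outer(K, perturbed_obs - Y)  # unconstrained update
    return np.clip(X_a, 0.0, capacity)        # project onto mass constraints

rng = np.random.default_rng(1)
X = rng.uniform(0, 80, size=(3, 20))          # 3 stores, 20 ensemble members
H = np.array([0.5, 0.3, 0.2])
X_a = constrained_enkf_step(X, H, y=40.0, r_var=4.0, capacity=100.0, rng=rng)
print(X_a.mean(axis=1), X_a.min(), X_a.max())
```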

  8. Chance-constrained programming approach to natural-gas curtailment decisions

    Energy Technology Data Exchange (ETDEWEB)

    Guldmann, J M

    1981-10-01

    This paper presents a modeling methodology for the determination of optimal-curtailment decisions by a gas-distribution utility during a chronic gas-shortage situation. Based on the end-use priority approach, a linear-programming model is formulated, that reallocates the available gas supply among the utility's customers while minimizing fuel switching, unemployment, and utility operating costs. This model is then transformed into a chance-constrained program in order to account for the weather-related variability of the gas requirements. The methodology is applied to the East Ohio Gas Company. 16 references, 2 figures, 3 tables.
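
    A hedged sketch of the chance-constrained idea: with Gaussian weather-driven gas requirements, a constraint P(allocation >= demand) >= 0.95 has the deterministic equivalent allocation >= mu + z(0.95)*sigma, which folds back into a linear program. The customer classes, costs, and supply below are invented and are not the East Ohio Gas model.

```python
# Toy deterministic equivalent of a chance-constrained curtailment LP:
# each stochastic demand is tightened to its 95% quantile, then we
# minimize the weighted shortfall subject to a supply limit.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

mu = np.array([40.0, 25.0, 10.0])        # mean gas requirement per class
sigma = np.array([5.0, 4.0, 2.0])        # weather-related std deviation
alpha = 0.95
demand_q = mu + norm.ppf(alpha) * sigma  # quantile-tightened requirements

supply = 90.0
penalty = np.array([1.0, 3.0, 10.0])     # cost of curtailing each class
# Variables x = [alloc(3), shortfall(3)]; minimize penalty . shortfall
# s.t. alloc_i + shortfall_i >= demand_q_i and sum(alloc) <= supply.
c = np.concatenate([np.zeros(3), penalty])
A_ub = np.block([[-np.eye(3), -np.eye(3)],
                 [np.ones((1, 3)), np.zeros((1, 3))]])
b_ub = np.concatenate([-demand_q, [supply]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
print(res.x[:3], res.x[3:])              # allocations and curtailments
```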

  9. Precision measurements, dark matter direct detection and LHC Higgs searches in a constrained NMSSM

    International Nuclear Information System (INIS)

    Bélanger, G.; Hugonie, C.; Pukhov, A.

    2009-01-01

    We reexamine the constrained version of the Next-to-Minimal Supersymmetric Standard Model with semi-universal parameters at the GUT scale (CNMSSM). We include constraints from collider searches for Higgs and susy particles, the upper bound on the relic density of dark matter, measurements of the muon anomalous magnetic moment and of B-physics observables, as well as direct searches for dark matter. We then study the prospects for direct detection of dark matter in large scale detectors and comment on the prospects for discovery of heavy Higgs states at the LHC

  10. Constraining composite Higgs models using LHC data

    Science.gov (United States)

    Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar

    2018-03-01

    We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5_L-5_R representation, to what we call here the `extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the `extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, as compared to f ≥ 1 TeV for the minimal 5_L-5_R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong sector coset SO(6)/SO(5) for fermion-embedding up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.

  11. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-offs in new product development. It starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. This is then followed by a review of different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  12. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal

  13. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  14. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
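
    A minimal sketch of simulating a reflected stochastic differential equation in one dimension, with an impassable barrier at x = 0 standing in for a shoreline. The Euler-Maruyama-with-mirroring scheme and all parameter values are illustrative assumptions, not the authors' inference machinery.

```python
# Euler-Maruyama simulation of a 1-D reflected SDE: any proposed step
# that crosses the barrier at x = 0 is mirrored back into the domain.
import numpy as np

def reflected_sde_path(x0, drift, diffusion, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        step = drift * dt + diffusion * np.sqrt(dt) * rng.normal()
        proposal = x[k] + step
        x[k + 1] = abs(proposal)        # reflect across the barrier at 0
    return x

rng = np.random.default_rng(42)
path = reflected_sde_path(x0=1.0, drift=-0.5, diffusion=1.0,
                          dt=0.01, n_steps=1000, rng=rng)
print(path.min() >= 0.0)                # the path never crosses the barrier
```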

  15. Resource Management in Constrained Dynamic Situations

    Science.gov (United States)

    Seok, Jinwoo

    Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. The goal, therefore, is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments

  16. Explaining evolution via constrained persistent perfect phylogeny

    Science.gov (United States)

    2014-01-01

    Background The perfect phylogeny is an often used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as, for example, in haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of e.g. back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is, the fact that a character is allowed to return to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to

  17. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-05-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). New legislation at the federal level is being introduced. Passage will result in new EPA regulations and also DOE orders. At the state level the Hazardous Waste Reduction and Management Review Act of 1989 was signed by the Governor. DHS is currently promulgating regulations to implement the new law. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally-harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this policy new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements

  18. Symmetry breaking for drag minimization

    Science.gov (United States)

    Roper, Marcus; Squires, Todd M.; Brenner, Michael P.

    2005-11-01

    For locomotion at high Reynolds numbers, drag minimization favors fore-aft asymmetric slender shapes with blunt noses and sharp trailing edges. On the other hand, in an inertialess fluid the drag experienced by a body is independent of whether it travels forward or backward through the fluid, so there is no advantage to having a single preferred swimming direction. In fact numerically determined minimum drag shapes are known to exhibit almost no fore-aft asymmetry even at moderate Re. We show that asymmetry persists, albeit extremely weakly, down to vanishingly small Re, scaling asymptotically as Re^3. The need to minimize drag to maximize speed for a given propulsive capacity gives one possible mechanism for the increasing asymmetry in the body plans seen in nature, as organisms increase in size and swimming speed from bacteria like E. coli up to pursuit predator fish such as tuna. If it is the dominant mechanism, then this signature scaling will be observed in the shapes of motile micro-organisms.

  19. Grain Yield Observations Constrain Cropland CO2 Fluxes Over Europe

    Science.gov (United States)

    Combe, M.; de Wit, A. J. W.; Vilà-Guerau de Arellano, J.; van der Molen, M. K.; Magliulo, V.; Peters, W.

    2017-12-01

    Carbon exchange over croplands plays an important role in the European carbon cycle over daily to seasonal time scales. A better description of this exchange in terrestrial biosphere models—most of which currently treat crops as unmanaged grasslands—is needed to improve atmospheric CO2 simulations. In the framework we present here, we model gross European cropland CO2 fluxes with a crop growth model constrained by grain yield observations. Our approach follows a two-step procedure. In the first step, we calculate day-to-day crop carbon fluxes and pools with the WOrld FOod STudies (WOFOST) model. A scaling factor of crop growth is optimized regionally by minimizing the difference between the final grain carbon pool and crop yield observations from the Statistical Office of the European Union. In a second step, we re-run our WOFOST model for the full European 25 × 25 km gridded domain using the optimized scaling factors. We combine our optimized crop CO2 fluxes with a simple soil respiration model to obtain the net cropland CO2 exchange. We assess our model's ability to represent cropland CO2 exchange using 40 years of observations at seven European FluxNet sites and compare it with carbon fluxes produced by a typical terrestrial biosphere model. We conclude that our new model framework provides a more realistic and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal and serve as a new cropland component in the CarbonTracker Europe inverse model.
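
    A hedged sketch of the first calibration step: tune a scalar crop-growth scaling factor so that the simulated final grain carbon pool matches an observed yield. The function simulate_grain_carbon is a hypothetical stand-in for a WOFOST run, and all numbers are invented.

```python
# Calibrate a scalar growth scaling factor against an observed yield by
# minimizing the squared misfit of the simulated final grain carbon pool.
from scipy.optimize import minimize_scalar

def simulate_grain_carbon(scale):
    # Placeholder for a WOFOST simulation: grain carbon responds
    # nonlinearly (and saturates) as growth is scaled up.
    return 320.0 * scale / (1.0 + 0.4 * scale)

observed_yield_c = 260.0                     # gC m-2, invented value

def misfit(scale):
    return (simulate_grain_carbon(scale) - observed_yield_c) ** 2

result = minimize_scalar(misfit, bounds=(0.1, 5.0), method="bounded")
print(result.x)                              # optimized scaling factor
```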

  20. Depletion mapping and constrained optimization to support managing groundwater extraction

    Science.gov (United States)

    Fienen, Michael N.; Bradbury, Kenneth R.; Kniffin, Maribeth; Barlow, Paul M.

    2018-01-01

    Groundwater models often serve as management tools to evaluate competing water uses including ecosystems, irrigated agriculture, industry, municipal supply, and others. Depletion potential mapping—showing the model-calculated potential impacts that wells have on stream baseflow—can form the basis for multiple potential management approaches in an oversubscribed basin. Specific management approaches can include scenarios proposed by stakeholders, systematic changes in well pumping based on depletion potential, and formal constrained optimization, which can be used to quantify the tradeoff between water use and stream baseflow. We consider variables such as the maximum reduction allowed at each well, as well as various groupings of wells (using, for example, K-means clustering based on spatial proximity and depletion potential). These approaches provide a potential starting point and guidance for resource managers and stakeholders to make decisions about groundwater management in a basin, spreading responsibility in different ways. We illustrate these approaches in the Little Plover River basin in central Wisconsin, United States—home to a rich agricultural tradition, with farmland and urban areas both in close proximity to a groundwater-dependent trout stream. Groundwater withdrawals have reduced baseflow supplying the Little Plover River below a legally established minimum. The techniques in this work were developed in response to engaged stakeholders with various interests and goals for the basin. They sought to develop a collaborative management plan at a watershed scale that restores the flow rate in the river in a manner that incorporates principles of shared governance and results in effective and minimally disruptive changes in groundwater extraction practices.
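
    As a sketch of the well-grouping step mentioned above, the code below clusters wells with K-means on standardized spatial coordinates plus depletion potential. The feature choice, cluster count, and all data are assumptions for illustration.

```python
# Group wells by spatial proximity and depletion potential with K-means,
# so pumping reductions could later be assigned per group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
xy = rng.uniform(0, 10_000, size=(30, 2))        # well locations (m)
depletion = rng.uniform(0, 1, size=(30, 1))      # depletion potential
features = StandardScaler().fit_transform(np.hstack([xy, depletion]))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
for g in range(4):
    print(f"group {g}: {np.sum(labels == g)} wells")
```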

  1. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, both in recovering the blurring operator and the true image, makes the problem particularly difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.

  2. Bound constrained quadratic programming via piecewise quadratic functions

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun; Pinar, M. C.

    1999-01-01

    We consider the strictly convex quadratic programming problem with bounded variables. A dual problem is derived using Lagrange duality. The dual problem is the minimization of an unconstrained, piecewise quadratic function. It involves a lower bound of λ₁, the smallest eigenvalue of a symmetric, positive definite matrix, and is solved by Newton iteration with line search. The paper describes the algorithm and its implementation, including estimation of λ₁, how to get a good starting point for the iteration, and up- and downdating of the Cholesky factorization. Results of extensive...
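
    The paper works through the piecewise quadratic dual with a Newton iteration; as a simpler, hedged illustration of the primal problem min ½xᵀHx + cᵀx subject to l ≤ x ≤ u, the sketch below applies projected gradient descent to a tiny synthetic instance. It is a stand-in, not the authors' dual method.

```python
# Projected gradient descent for a bound-constrained convex QP:
# take a gradient step, then clip back into the box [l, u].
import numpy as np

def projected_gradient_qp(H, c, l, u, x0, step, n_iter=500):
    x = x0.copy()
    for _ in range(n_iter):
        grad = H @ x + c
        x = np.clip(x - step * grad, l, u)   # gradient step, then project
    return x

H = np.array([[4.0, 1.0], [1.0, 3.0]])       # symmetric positive definite
c = np.array([-8.0, -6.0])
l, u = np.zeros(2), np.array([1.5, 1.5])
x_star = projected_gradient_qp(H, c, l, u, x0=np.zeros(2), step=0.1)
print(x_star)                                # hits the upper bound on x[0]
```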

  3. Likelihood analysis of the next-to-minimal supergravity motivated model

    International Nuclear Information System (INIS)

    Balazs, Csaba; Carter, Daniel

    2009-01-01

    In anticipation of data from the Large Hadron Collider (LHC) and the potential discovery of supersymmetry, we calculate the odds of the next-to-minimal version of the popular supergravity motivated model (NmSuGra) being discovered at the LHC to be 4:3 (57%). We also demonstrate that viable regions of the NmSuGra parameter space outside the LHC reach can be covered by upgraded versions of dark matter direct detection experiments, such as super-CDMS, at 99% confidence level. Due to the similarities of the models, we expect very similar results for the constrained minimal supersymmetric standard model (CMSSM).

  4. Joint Chance-Constrained Dynamic Programming

    Science.gov (United States)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.

  5. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu

    2010-05-07

    This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip spectrum corner wave numbers (k_c) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of k_c by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency domain inversion method. These tests reveal that due to smoothing constraints used to stabilize the inversion process, k_c tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between k_c and the peak ground acceleration (PGA), using a k^-2 kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that k_c follows a lognormal distribution, with similar standard deviations for both methods.
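
    A small sketch of the distributional claim at the end of the abstract: fit a lognormal to a sample of corner wave numbers k_c and read off its median and log-space spread. The sample below is synthetic, not the 30 screened rupture models.

```python
# Fit a lognormal distribution to corner wave number estimates.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(3)
kc_samples = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=30)  # 1/km

shape, loc, scale = lognorm.fit(kc_samples, floc=0)  # floc=0: pure lognormal
print(f"median k_c ~ {scale:.3f}, log-space std ~ {shape:.2f}")
```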

  6. Technologies for a greenhouse-constrained society

    International Nuclear Information System (INIS)

    Kuliasha, M.A.; Zucker, A.; Ballew, K.J.

    1992-01-01

    This conference explored how three technologies might help society adjust to life in a greenhouse-constrained environment. Technology experts and policy makers from around the world met June 11--13, 1991, in Oak Ridge, Tennessee, to address questions about how energy efficiency, biomass, and nuclear technologies can mitigate the greenhouse effect and to explore energy production and use in countries in various stages of development. The conference was organized by Oak Ridge National Laboratory and sponsored by the US Department of Energy. Energy efficiency, biomass, and nuclear energy are potential substitutes for fossil fuels that might help slow or even reverse the global warming changes that may result from mankind's thirst for energy. Many other conferences have questioned whether the greenhouse effect is real and what reductions in greenhouse gas emissions might be necessary to avoid serious ecological consequences; this conference studied how these reductions might actually be achieved. For these conference proceedings, individual papers are processed separately for the Energy Data Base

  7. Electricity in a Climate-Constrained World

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    After experiencing a historic drop in 2009, electricity generation reached a record high in 2010, confirming the close linkage between economic growth and electricity usage. Unfortunately, CO2 emissions from electricity have also resumed their growth: Electricity remains the single-largest source of CO2 emissions from energy, with 11.7 billion tonnes of CO2 released in 2010. The imperative to 'decarbonise' electricity and improve end-use efficiency remains essential to the global fight against climate change. The IEA’s Electricity in a Climate-Constrained World provides an authoritative resource on progress to date in this area, including statistics related to CO2 and the electricity sector across ten regions of the world (supply, end-use and capacity additions). It also presents topical analyses on the challenge of rapidly curbing CO2 emissions from electricity. Looking at policy instruments, it focuses on emissions trading in China, using energy efficiency to manage electricity supply crises and combining policy instruments for effective CO2 reductions. On regulatory issues, it asks whether deregulation can deliver decarbonisation and assesses the role of state-owned enterprises in emerging economies. And from technology perspectives, it explores the rise of new end-uses, the role of electricity storage, biomass use in Brazil, and the potential of carbon capture and storage for ‘negative emissions’ electricity supply.

  8. When ethics constrains clinical research: trial design of control arms in "greater than minimal risk" pediatric trials.

    Science.gov (United States)

    de Melo-Martín, Inmaculada; Sondhi, Dolan; Crystal, Ronald G

    2011-09-01

    For more than three decades clinical research in the United States has been explicitly guided by the idea that ethical considerations must be central to research design and practice. In spite of the centrality of this idea, attempting to balance the sometimes conflicting values of advancing scientific knowledge and protecting human subjects continues to pose challenges. Possible conflicts between the standards of scientific research and those of ethics are particularly salient in relation to trial design. Specifically, the choice of a control arm is an aspect of trial design in which ethical and scientific issues are deeply entwined. Although ethical quandaries related to the choice of control arms may arise when conducting any type of clinical trials, they are conspicuous in early phase gene transfer trials that involve highly novel approaches and surgical procedures and have children as the research subjects. Because of children's and their parents' vulnerabilities, in trials that investigate therapies for fatal, rare diseases affecting minors, the scientific and ethical concerns related to choosing appropriate controls are particularly significant. In this paper we use direct gene transfer to the central nervous system to treat late infantile neuronal ceroid lipofuscinosis to illustrate some of these ethical issues and explore possible solutions to real and apparent conflicts between scientific and ethical considerations.

  9. Laparoscopic colonic resection in inflammatory bowel disease: minimal surgery, minimal access and minimal hospital stay.

    LENUS (Irish Health Repository)

    Boyle, E

    2008-11-01

    Laparoscopic surgery for inflammatory bowel disease (IBD) is technically demanding but can offer improved short-term outcomes. The introduction of minimally invasive surgery (MIS) as the default operative approach for IBD, however, may have inherent learning curve-associated disadvantages. We hypothesise that the establishment of MIS as the standard operative approach does not increase patient morbidity as assessed in the initial period of its introduction into a specialised unit, and that it confers earlier postoperative gastrointestinal recovery and reduced hospitalisation compared with conventional open resection.

  10. Higgs decays to dark matter: Beyond the minimal model

    International Nuclear Information System (INIS)

    Pospelov, Maxim; Ritz, Adam

    2011-01-01

    We examine the interplay between Higgs mediation of dark-matter annihilation and scattering on one hand and the invisible Higgs decay width on the other, in a generic class of models utilizing the Higgs portal. We find that, while the invisible width of the Higgs to dark matter is now constrained for a minimal singlet scalar dark matter particle by experiments such as XENON100, this conclusion is not robust within more generic examples of Higgs mediation. We present a survey of simple dark matter scenarios with m_DM < m_h/2 and Higgs portal mediation, where direct-detection signatures are suppressed, while the Higgs width is still dominated by decays to dark matter.

  11. Effective theory of flavor for Minimal Mirror Twin Higgs

    Science.gov (United States)

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-01

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ɛ^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ɛ'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ɛ'/ɛ, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ɛ'/ɛ, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  12. Minimally invasive aortic valve replacement

    DEFF Research Database (Denmark)

    Foghsgaard, Signe; Schmidt, Thomas Andersen; Kjaergard, Henrik K

    2009-01-01

    In this descriptive prospective study, we evaluate the outcomes of surgery in 98 patients who were scheduled to undergo minimally invasive aortic valve replacement. These patients were compared with a group of 50 patients who underwent scheduled aortic valve replacement through a full sternotomy...... operations were completed as mini-sternotomies, 4 died later of noncardiac causes. The aortic cross-clamp and perfusion times were significantly different across all groups (P replacement...... is an excellent operation in selected patients, but its true advantages over conventional aortic valve replacement (other than a smaller scar) await evaluation by means of randomized clinical trial. The "extended mini-aortic valve replacement" operation, on the other hand, is a risky procedure that should...

  13. Minimization over randomly selected lines

    Directory of Open Access Journals (Sweden)

    Ismet Sahin

    2013-07-01

    This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usually performed in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
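
    A minimal sketch of the mutation operator described above: choose a random line through the current point, interpolate the cost function with a quadratic at three points on the line, and move to the quadratic's extremum. The step size, the safeguard for non-convex fits, and the simple acceptance test are my assumptions; the full method also includes the crossover operator and a population.

```python
# Random-line quadratic-interpolation mutation for a scalar cost function.
import numpy as np

def line_quadratic_mutation(f, x, rng, h=0.5):
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)                 # random unit direction
    t = np.array([-h, 0.0, h])
    y = np.array([f(x + ti * d) for ti in t])
    a, b, _ = np.polyfit(t, y, 2)          # fit y = a t^2 + b t + c
    if a <= 0:                             # no interior minimum: take best sample
        return x + t[np.argmin(y)] * d
    return x - (b / (2 * a)) * d           # jump to the parabola's vertex

rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rng = np.random.default_rng(0)
x = np.array([-1.0, 1.0])
for _ in range(500):
    candidate = line_quadratic_mutation(rosenbrock, x, rng)
    if rosenbrock(candidate) < rosenbrock(x):   # simple acceptance test
        x = candidate
print(x, rosenbrock(x))
```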

  14. Strategies to Minimize Antibiotic Resistance

    Directory of Open Access Journals (Sweden)

    Sang Hee Lee

    2013-09-01

    Antibiotic resistance can be reduced by using antibiotics prudently based on guidelines of antimicrobial stewardship programs (ASPs) and various data such as pharmacokinetic (PK) and pharmacodynamic (PD) properties of antibiotics, diagnostic testing, antimicrobial susceptibility testing (AST), clinical response, and effects on the microbiota, as well as by new antibiotic developments. The controlled use of antibiotics in food animals is another cornerstone among efforts to reduce antibiotic resistance. All major resistance-control strategies recommend education for patients, children (e.g., through schools and day care), the public, and relevant healthcare professionals (e.g., primary-care physicians, pharmacists, and medical students) regarding unique features of bacterial infections and antibiotics, prudent antibiotic prescribing as a positive construct, and personal hygiene (e.g., handwashing). The problem of antibiotic resistance can be minimized only by concerted efforts of all members of society for ensuring the continued efficiency of antibiotics.

  15. A minimally invasive smile enhancement.

    Science.gov (United States)

    Peck, Fred H

    2014-01-01

    Minimally invasive dentistry refers to a wide variety of dental treatments. On the restorative aspect of dental procedures, direct resin bonding can be a very conservative treatment option for the patient. When tooth structure does not need to be removed, the patient benefits. Proper treatment planning is essential to determine how conservative the restorative treatment will be. This article describes the diagnosis, treatment options, and procedural techniques in the restoration of 4 maxillary anterior teeth with direct composite resin. The procedural steps are reviewed with regard to placing the composite and the variety of colors needed to ensure a natural result. Finishing and polishing of the composite are critical to ending with a natural looking dentition that the patient will be pleased with for many years.

  16. A discretized algorithm for the solution of a constrained, continuous ...

    African Journals Online (AJOL)

    A discretized algorithm for the solution of a constrained, continuous quadratic control problem. ... The results obtained show that the Discretized constrained algorithm (DCA) is much more accurate and more efficient than some of these techniques, particularly the FSA. Journal of the Nigerian Association of Mathematical ...

  17. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_{M/B}(N/B)) I/Os for triangulating N points in the plane, where

  18. Waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-01-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision

  19. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL]

    2014-01-01

    We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
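
    For context on the PAPR constraint itself, the sketch below computes the peak-to-average power ratio of a single OFDM symbol, formed as the oversampled IFFT of QPSK subcarrier coefficients. The subcarrier count, modulation, and oversampling factor are illustrative; this is not the paper's waveform optimizer.

```python
# PAPR of one OFDM symbol: peak instantaneous power over mean power,
# with zero-padded IFFT oversampling to approximate the analog peak.
import numpy as np

rng = np.random.default_rng(5)
n_subcarriers, oversample = 64, 4
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], n_subcarriers) / np.sqrt(2)

spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
spectrum[:n_subcarriers] = symbols            # zero-padded for oversampling
signal = np.fft.ifft(spectrum) * np.sqrt(len(spectrum))

power = np.abs(signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.2f} dB")
```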

  20. Regional Responses to Constrained Water Availability

    Science.gov (United States)

    Cui, Y.; Calvin, K. V.; Hejazi, M. I.; Clarke, L.; Kim, S. H.; Patel, P.

    2017-12-01

    There have been many concerns about water as a constraint to agricultural production, electricity generation, and many other human activities in the coming decades. Nevertheless, how different countries/economies would respond to such constraints has not been explored. Here, we examine how countries and basins respond to binding water availability constraints at the water basin level and across a wide range of socioeconomic, climate and energy technology scenarios. Specifically, we look at the change in water withdrawals between energy, land-use and other sectors within an integrated framework, by using the Global Change Assessment Model (GCAM) that also endogenizes water use and allocation decisions based on costs. We find that, when water is taken into account as part of the production decision-making, countries/basins in general fall into three different categories, depending on the change of water withdrawals and water re-allocation between sectors. First, water is not a constraining factor for most of the basins. Second, advances in water-saving cooling technologies for electricity generation are sufficient to reduce water withdrawals to meet binding water availability constraints, such as in China and the EU-15. Third, water-saving in the electricity sector alone is not sufficient and thus cannot compensate for the lowered water availability in the binding case; for example, many basins in Pakistan, the Middle East and India have to largely reduce irrigated water withdrawals by either switching to rain-fed agriculture or reducing production. The dominant response strategy for individual countries/basins is quite robust across the range of alternate scenarios that we test. The relative size of water withdrawals between energy and agriculture sectors is one of the most important factors that affect the dominant mechanism.

  1. Constraining Cosmic Evolution of Type Ia Supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ≈0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ≈3% in the optical and growing toward the ultraviolet. The difference between the maximum-light low and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  2. Laterally constrained inversion for CSAMT data interpretation

    Science.gov (United States)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method for CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better compared to the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while comparison with a global searching algorithm of simulated annealing (SA) in the watershed shows that both methods deliver similarly good results, but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
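
    A hedged sketch of the Jacobian preconditioning idea: rescale each column (one model parameter) so that all parameters contribute with comparable sensitivity. Column-norm scaling is my assumed choice of weighting; the authors' exact weighting matrix is not reproduced here.

```python
# Balance parameter sensitivities by scaling Jacobian columns to unit norm.
import numpy as np

def balance_jacobian(J, eps=1e-12):
    col_norms = np.linalg.norm(J, axis=0)
    W = 1.0 / np.maximum(col_norms, eps)   # diagonal weighting matrix
    return J * W[None, :], W               # J_balanced = J @ diag(W)

rng = np.random.default_rng(2)
# Synthetic Jacobian with wildly different parameter sensitivities.
J = rng.normal(size=(40, 6)) * np.array([1.0, 1e3, 1e-2, 5.0, 1e2, 1e-1])
J_bal, W = balance_jacobian(J)
print(np.linalg.norm(J_bal, axis=0))       # all columns now have unit norm
# An update dp computed with J_bal maps back to parameters as dm = W * dp.
```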

  3. Data-Driven Security-Constrained OPF

    DEFF Research Database (Denmark)

    Thams, Florian; Halilbasic, Lejla; Pinson, Pierre

    2017-01-01

    In this paper we unify electricity market operations with power system security considerations. Using data-driven techniques, we address both small signal stability and steady-state security, derive tractable decision rules in the form of line flow limits, and incorporate the resulting constraints in market clearing algorithms. Our goal is to minimize redispatching actions, and instead allow the market to determine the most cost-efficient dispatch while considering all security constraints. To maintain tractability of our approach we perform our security assessment offline, examining large datasets ... considerations, while being less conservative than current approaches. Our approach can be scalable for large systems, accounts explicitly for power system security, and enables the electricity market to identify a cost-efficient dispatch avoiding redispatching actions. We demonstrate the performance of our ...

  4. Auction dynamics: A volume constrained MBO scheme

    Science.gov (United States)

    Jacobs, Matt; Merkurjev, Ekaterina; Esedoǧlu, Selim

    2018-02-01

    We show how auction algorithms, originally developed for the assignment problem, can be utilized in Merriman, Bence, and Osher's threshold dynamics scheme to simulate multi-phase motion by mean curvature in the presence of equality and inequality volume constraints on the individual phases. The resulting algorithms are highly efficient and robust, and can be used in simulations ranging from minimal partition problems in Euclidean space to semi-supervised machine learning via clustering on graphs. In the case of the latter application, numerous experimental results on benchmark machine learning datasets show that our approach exceeds the performance of current state-of-the-art methods, while requiring a fraction of the computation time.

  5. Cyclone Simulation via Action Minimization

    Science.gov (United States)

    Plotkin, D. A.; Weare, J.; Abbot, D. S.

    2016-12-01

    A postulated impact of climate change is an increase in intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a nearly identical temperature as the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus results in more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for

  6. A Heuristic Algorithm for the Constrained Single-Source Problem with Constrained Customers

    Directory of Open Access Journals (Sweden)

    S. A. Raisi Dehkordi

    2012-09-01

    The Fermat-Weber location problem is to find a point in R^n that minimizes the sum of the weighted Euclidean distances from m given points in R^n. In this paper we consider the Fermat-Weber problem of one new facility with respect to n unknown customers in order to minimize the sum of transportation costs between this facility and the customers. We assumed that each customer is located in a nonempty convex closed bounded subset of R^n.
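
    For the plain (unconstrained) Fermat-Weber problem, Weiszfeld's classical fixed-point iteration is the standard solver; a minimal sketch with invented data follows. The paper's variant, with each customer confined to a convex set, would need an additional projection step not shown here.

```python
# Weiszfeld iteration: minimize the weighted sum of Euclidean distances
# to fixed points by repeatedly taking a distance-weighted average.
import numpy as np

def weiszfeld(points, weights, n_iter=200, eps=1e-12):
    x = np.average(points, axis=0, weights=weights)   # start at centroid
    for _ in range(n_iter):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, eps)                        # avoid division by zero
        w = weights / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

points = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
weights = np.array([1.0, 1.0, 2.0])
print(weiszfeld(points, weights))
```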

  7. Minimalism through intraoperative functional mapping.

    Science.gov (United States)

    Berger, M S

    1996-01-01

    Intraoperative stimulation mapping may be used to avoid unnecessary risk to functional regions subserving language and sensori-motor pathways. Based on the data presented here, language localization is variable in the entire population, with certainty existing only for the inferior frontal region responsible for motor speech. Anatomical landmarks such as the anterior temporal tip for temporal lobe language sites and the posterior aspect of the lateral sphenoid wing for the frontal lobe language zones are unreliable in avoiding postoperative aphasias. Thus, individual mapping to identify essential language sites has the greatest likelihood of avoiding permanent deficits in naming, reading, and motor speech. In a similar approach, motor and sensory pathways from the cortex and underlying white matter may be reliably stimulated and mapped in both awake and asleep patients. Although these techniques require additional operative time and nominally priced equipment, the result is often gratifying, as postoperative morbidity has been greatly reduced in the process of incorporating these surgical strategies. The patient's quality of life is improved in terms of seizure control, with or without antiepileptic drugs. This avoids having to perform a second costly operative procedure, which is routinely done when extraoperative stimulation and recording is done via subdural grids. In addition, an aggressive tumor resection at the initial operation lengthens the time to tumor recurrence and often obviates the need for a subsequent reoperation. Thus, intraoperative functional mapping may best be described as a surgical technique that results in "minimalism in the long term".

  8. Against explanatory minimalism in psychiatry

    Directory of Open Access Journals (Sweden)

    Tim eThornton

    2015-12-01

    The idea that psychiatry contains, in principle, a series of levels of explanation has been criticised not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell’s criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation respectively and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein’s Zettel. But attention to the context of Wittgenstein’s remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

  9. Against Explanatory Minimalism in Psychiatry.

    Science.gov (United States)

    Thornton, Tim

    2015-01-01

    The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation, respectively, and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

  10. Robotic assisted minimally invasive surgery

    Directory of Open Access Journals (Sweden)

    Palep Jaydeep

    2009-01-01

    Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced with the AESOP (a voice-controlled camera holder), a prototype of which was used clinically in 1993 before being marketed in 1994, with US FDA clearance, as the first surgical robot. Since then many robot prototypes, such as the Endoassist (Armstrong Healthcare Ltd., High Wycombe, Bucks, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany), have been developed to extend the functions of the robot and increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and the EndoWrist®. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3D vision, motion scaling, intuitive movements, visual immersion and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

  11. KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR

    Directory of Open Access Journals (Sweden)

    John A. Mercer

    2005-06-01

    Full Text Available It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played the critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated, with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from the analysis because their velocity ranges were not similar between conditions, leaving six subjects. The slope of the impact force vs. velocity relationship differed between conditions (PSL: 0.178 ± 0.16 BW/m·s-1; SL2.5: -0.003 ± 0.14 BW/m·s-1; p < 0.05), as did the slope of the impact attenuation vs. velocity relationship (PSL: 5.12 ± 2.88 %/m·s-1; SL2.5: 1.39 ± 1.51 %/m·s-1; p < 0.05). Stride length was an important factor
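
    The slope-based comparison described above is easy to reproduce. The following sketch, with invented stand-in data, fits a line of best fit per subject and condition and compares the slopes with a paired t-test; the variable names and synthetic numbers are illustrative only, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in data: 6 subjects, trials over a range of speeds (m/s).
psl_trials, sl25_trials = [], []
for _ in range(6):
    v = np.linspace(2.5, 5.5, 8)
    psl_trials.append((v, 1.5 + 0.18 * v + rng.normal(0, 0.05, v.size)))
    sl25_trials.append((v, 2.0 + 0.00 * v + rng.normal(0, 0.05, v.size)))

slope = lambda v, f: stats.linregress(v, f).slope    # line-of-best-fit slope
psl_slopes = [slope(v, f) for v, f in psl_trials]
sl25_slopes = [slope(v, f) for v, f in sl25_trials]

t, p = stats.ttest_rel(psl_slopes, sl25_slopes)      # paired comparison across subjects
print(f"t = {t:.2f}, p = {p:.4f}")
```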

  12. Inversion of Love wave phase velocity using smoothness-constrained least-squares technique; Heikatsuka seiyakutsuki saisho jijoho ni yoru love ha iso sokudo no inversion

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, S [Nippon Geophysical Prospecting Co. Ltd., Tokyo (Japan)

    1996-10-01

    The smoothness-constrained least-squares technique with ABIC minimization was applied to the inversion of the phase velocity of surface waves in geophysical exploration, to confirm its usefulness. Since this study aimed mainly at the applicability of the technique, Love waves were used, which are easier to treat theoretically than Rayleigh waves. Stable successive approximation solutions could be obtained by repeated improvement of the S-wave velocity model, and an objective model with high reliability could be determined. By contrast, for inversion with simple minimization of the residual sum of squares, stable solutions could also be obtained by repeated improvement, but the judgment of convergence was very hard without the smoothness constraint, which might leave the obtained model in a state of over-fitting. Applicability of the technique to Rayleigh waves will be investigated. 8 refs.
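
    The following is a minimal sketch of the smoothness-constrained least-squares idea with an ABIC-style choice of the trade-off parameter. The forward matrix, the data, and the particular ABIC expression are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def second_difference(m):
    """Roughening operator penalizing curvature of the layered model."""
    L = np.zeros((m - 2, m))
    for i in range(m - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def abic(G, d, L, lam):
    """Solve the penalized problem and score lam with an ABIC-style criterion."""
    n = len(d)
    A = G.T @ G + lam * (L.T @ L)
    x = np.linalg.solve(A, G.T @ d)
    s = np.sum((d - G @ x) ** 2) + lam * np.sum((L @ x) ** 2)
    return n * np.log(s) - L.shape[0] * np.log(lam) + np.linalg.slogdet(A)[1], x

rng = np.random.default_rng(1)
m_true = np.sin(np.linspace(0, np.pi, 30))      # a smooth "S-wave model"
G = rng.normal(size=(40, 30))                   # stand-in forward operator
d = G @ m_true + rng.normal(0, 0.1, 40)         # noisy phase-velocity data
L = second_difference(30)

lams = np.logspace(-4, 4, 50)
best = min(lams, key=lambda lam: abic(G, d, L, lam)[0])
print("ABIC-selected smoothness weight:", best)
```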

  13. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss and the voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss and reduction of the voltage sag problem, and economical factors, like installation and maintenance cost of the DGs. The proposed formulation is inspired by the idea that optimum placement of the DGs can help reduce and mitigate voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are obtained simultaneously. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, radial and networked (the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies, where the effectiveness of the proposed algorithm is aptly demonstrated.
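
    As a rough illustration of this kind of formulation, the toy sketch below evolves per-bus DG sizes with a genetic algorithm against an invented weighted objective. A real study would evaluate loss and sag with a power-flow solver; the operators used here (truncation selection, one-point crossover, point mutation) are generic choices, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
N_BUS, POP, GENS = 34, 60, 200
SIZES = np.array([0.0, 0.5, 1.0, 1.5])   # MW options per bus; 0 = no DG

def fitness(chrom):
    dg = SIZES[chrom]
    loss = 10.0 / (1.0 + dg.sum())                 # toy: losses fall as DG capacity rises
    sag = 5.0 / (1.0 + (dg > 0).sum())             # toy: more DG sites, fewer sag problems
    cost = 0.8 * dg.sum() + 0.2 * (dg > 0).sum()   # toy: capacity plus installation cost
    return -(loss + sag + cost)                    # GA maximizes fitness

pop = rng.integers(0, len(SIZES), size=(POP, N_BUS))
for _ in range(GENS):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[-POP // 2:]]                  # keep the best half
    cuts = rng.integers(1, N_BUS, POP // 2)
    kids = np.array([np.r_[parents[i % len(parents), :c],
                           parents[(i + 1) % len(parents), c:]]
                     for i, c in enumerate(cuts)])              # one-point crossover
    mutate = rng.random(kids.shape) < 0.02
    kids[mutate] = rng.integers(0, len(SIZES), mutate.sum())    # point mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("DG buses:", np.flatnonzero(best), "sizes (MW):", SIZES[best][best > 0])
```

    The weighted sum is only one way to scalarize the multi-objective problem; Pareto-based selection is a common alternative.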

  14. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as in the minimal case in the slow-roll limit. Armed with this result, we study some concrete non-minimal inflationary models, including chaotic inflation and natural inflation, in which the inflaton is non-minimally coupled to gravity. We find that non-minimal coupling inflation can be eternal in some regions of parameter space.

  15. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since onomatopoeia characters have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected-component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke widths of the connected components were calculated from the altitudes of the triangles generated by the constrained Delaunay triangulation. The experimental results demonstrated the effectiveness of the proposed method.
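
    A rough sketch of the stroke-width idea follows, using SciPy's *unconstrained* Delaunay triangulation as a stand-in (a constrained triangulation honoring character-contour edges, as in the paper, would need a dedicated library such as `triangle`). The point cloud is synthetic.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
# Synthetic "stroke": points scattered along a curve of roughly constant width.
t = np.sort(rng.uniform(0, 10, 300))
pts = np.c_[t, np.sin(t) + rng.uniform(-0.15, 0.15, t.size)]

tri = Delaunay(pts)
widths = []
for ia, ib, ic in tri.simplices:
    a, b, c = pts[ia], pts[ib], pts[ic]
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    longest = max(np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c))
    widths.append(2.0 * area / longest)   # triangle altitude over its longest edge

# Near-constant altitudes across a connected component suggest a character stroke.
print("median stroke-width estimate:", round(float(np.median(widths)), 3))
```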

  16. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L·‖ on R^n under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
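
    In symbols, and up to details suppressed in the abstract, the pair of problems whose parameters the paper relates can be written as follows; the explicit formula for D is the standard I-divergence and is added here for readability.

```latex
\min_{u}\ \|L u\| \;\; \text{s.t.} \;\; D(b, Hu) \le \tau
\qquad\longleftrightarrow\qquad
\min_{u}\ D(b, Hu) + \lambda\,\|L u\|,
\qquad
D(b, v) = \sum_i \Bigl( b_i \log \tfrac{b_i}{v_i} - b_i + v_i \Bigr).
```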

  17. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ɛr = 0.022 fm and a standard deviation σr = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to-leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to-leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  18. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
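
    For the binary-output case described above, the second-order maximum noise entropy model is a logistic function of a quadratic form in the stimulus, so it can be fit as logistic regression on first- and second-order stimulus features. The sketch below does this on synthetic data; it illustrates the model class, not the paper's fitting procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
S = rng.normal(size=(5000, 2))                               # two stimulus dimensions
q = 1.2 * S[:, 0] - 0.8 * S[:, 1] + 0.9 * S[:, 0] * S[:, 1]  # quadratic drive
y = (rng.random(5000) < 1 / (1 + np.exp(-q))).astype(int)    # binary responses

# Features [s1, s2, s1^2, s1*s2, s2^2] turn the quadratic model into a
# plain logistic regression.
X2 = PolynomialFeatures(degree=2, include_bias=False).fit_transform(S)
model = LogisticRegression(max_iter=1000).fit(X2, y)
print("recovered weights:", model.coef_.round(2))
```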

  19. Network constrained wind integration on Vancouver Island

    International Nuclear Information System (INIS)

    Maddaloni, Jesse D.; Rowe, Andrew M.; Kooten, G. Cornelis van

    2008-01-01

    The aim of this study is to determine the costs and carbon emissions associated with operating a hydro-dominated electricity generation system (Vancouver Island, Canada) with varying degrees of wind penetration. The focus is to match the wind resource, system demand and abilities of extant generating facilities on a temporal basis, resulting in an operating schedule that minimizes system cost over a given period. This is performed by taking the perspective of a social planner who desires to find the lowest-cost mix of new and existing generation facilities. Unlike other studies, this analysis considers variable efficiency for thermal and hydro-generators, resulting in a fuel cost that varies with respect to generator part load. Since this study and others have shown that wind power may induce a large variance on existing dispatchable generators, forcing more frequent operation at reduced part load, inclusion of increased fuel cost at part load is important when investigating wind integration, as it can significantly reduce the economic benefits of utilizing low-cost wind. Results indicate that the introduction of wind power may reduce system operating costs, but this depends heavily on whether the capital cost of the wind farm is considered. For the Vancouver Island mix with its large hydro component, operating cost was reduced by a maximum of 15% at a wind penetration of 50%, with a negligible reduction in operating cost when the wind farm capital cost was included.

  20. An Equivalent Emission Minimization Strategy for Causal Optimal Control of Diesel Engines

    Directory of Open Access Journals (Sweden)

    Stephan Zentner

    2014-02-01

    Full Text Available One of the main challenges during the development of operating strategies for modern diesel engines is the reduction of the CO2 emissions, while complying with ever more stringent limits for the pollutant emissions. The inherent trade-off between the emissions of CO2 and pollutants renders a simultaneous reduction difficult. Therefore, an optimal operating strategy is sought that yields minimal CO2 emissions, while holding the cumulative pollutant emissions at the allowed level. Such an operating strategy can be obtained offline by solving a constrained optimal control problem. However, the final-value constraint on the cumulated pollutant emissions prevents this approach from being adopted for causal control. This paper proposes a framework for causal optimal control of diesel engines. The optimization problem can be solved online when the constrained minimization of the CO2 emissions is reformulated as an unconstrained minimization of the CO2 emissions and the weighted pollutant emissions (i.e., equivalent emissions). However, the weighting factors are not known a priori. A method for the online calculation of these weighting factors is proposed. It is based on the Hamilton–Jacobi–Bellman (HJB) equation and a physically motivated approximation of the optimal cost-to-go. A case study shows that the causal control strategy defined by the online calculation of the equivalence factor and the minimization of the equivalent emissions is only slightly inferior to the non-causal offline optimization, while being applicable to online control.
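
    The causal recipe sketched in the abstract reduces, per time step, to choosing the control that minimizes CO2 plus lambda times the pollutant emissions, with lambda adapted online. The toy sketch below uses invented engine maps and a simple integral-style update in place of the paper's HJB-based calculation of the equivalence factor.

```python
import numpy as np

controls = np.linspace(0.0, 1.0, 21)        # abstract engine setting
co2 = lambda u: 1.0 - 0.4 * u               # invented map: higher u saves CO2...
nox = lambda u: 0.2 + 0.9 * u ** 2          # ...but produces more NOx

budget_rate = 0.35                          # allowed NOx per step
lam, gain = 1.0, 0.01                       # equivalence factor and adaptation gain
cum_nox = 0.0
for k in range(1, 1001):
    # Unconstrained minimization of the "equivalent emissions" at this step.
    u = min(controls, key=lambda uc: co2(uc) + lam * nox(uc))
    cum_nox += nox(u)
    # Raise lambda when over the cumulative budget, lower it when under.
    lam = max(0.0, lam + gain * (cum_nox - budget_rate * k))
print(f"lambda = {lam:.2f}, mean NOx = {cum_nox / 1000:.3f} (budget {budget_rate})")
```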

  1. Westinghouse Hanford Company waste minimization actions

    International Nuclear Information System (INIS)

    Greenhalgh, W.O.

    1988-09-01

    Companies that generate hazardous waste materials are now required by national regulations to establish a waste minimization program. Accordingly, in FY88 the Westinghouse Hanford Company formed a waste minimization team organization. The purpose of the team is to assist the company in its efforts to minimize the generation of waste, train personnel on waste minimization techniques, document successful waste minimization effects, track dollar savings realized, and to publicize and administer an employee incentive program. A number of significant actions have been successful, resulting in the savings of materials and dollars. The team itself has been successful in establishing some worthwhile minimization projects. This document briefly describes the waste minimization actions that have been successful to date. 2 refs., 26 figs., 3 tabs

  2. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    Science.gov (United States)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as the compressed version of the high-resolution (HR) image. Dictionary training and sparse recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image can then be reconstructed as the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality, since edges and other sharp details are preserved.
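
    The sparse-recovery skeleton looks roughly as follows, with the paper's improved TV minimization replaced by a plain ISTA / soft-thresholding step for brevity, and a random unit-norm dictionary standing in for one trained by K-SVD.

```python
import numpy as np

rng = np.random.default_rng(5)
n_atoms, patch = 64, 36
D = rng.normal(size=(patch, n_atoms))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms

a_true = np.zeros(n_atoms); a_true[[3, 17, 40]] = [1.0, -0.7, 0.5]
y = D @ a_true + rng.normal(0, 0.01, patch)      # observed (LR-derived) patch

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
a = np.zeros(n_atoms)
step = 1.0 / np.linalg.norm(D, 2) ** 2           # 1 / Lipschitz constant
for _ in range(300):                             # ISTA iterations
    a = soft(a - step * D.T @ (D @ a - y), step * 0.02)

x_hat = D @ a                                    # reconstructed HR patch
print("nonzeros found:", np.flatnonzero(np.abs(a) > 1e-3))
```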

  3. Carbon-constrained scenarios. Final report

    International Nuclear Information System (INIS)

    2009-05-01

    This report provides the results of the study entitled 'Carbon-Constrained Scenarios' that was funded by FONDDRI from 2004 to 2008. The study was achieved in four steps: (i) investigating the stakes of a strong carbon constraint for the industries participating in the study, not only looking at the internal decarbonization potential of each industry but also exploring the potential shifts of the demand for industrial products; (ii) developing a hybrid modelling platform based on a tight dialog between the sectoral energy model POLES and the macro-economic model IMACLIM-R, in order to achieve a consistent assessment of the consequences of an economy-wide carbon constraint on energy-intensive industrial sectors, while taking into account technical constraints, barriers to the deployment of new technologies and general economic equilibrium effects; (iii) producing several scenarios up to 2050 with different sets of hypotheses concerning the driving factors for emissions, in particular the development styles; (iv) establishing an iterative dialog between researchers and industry representatives on the results of the scenarios so as to improve them, but also to facilitate the understanding and the appropriate use of these results by the industrial partners. This report provides the results of the different scenarios computed in the course of the project. It is a partial synthesis of the work that has been accomplished and of the numerous exchanges that this study has induced between modellers and stakeholders. The first part was written in April 2007 and describes the first reference scenario and the first mitigation scenario designed to achieve stabilization at 450 ppm CO2 at the end of the 21st century. This scenario has been called 'mimetic' because it has been built on the assumption that the ambitious climate policy would coexist with a progressive convergence of development paths toward the current paradigm of industrialized countries: urban sprawl, general

  4. FXR agonist activity of conformationally constrained analogs of GW 4064.

    Science.gov (United States)

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y; Caldwell, Richard D; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; McFadyen, Robert B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Bruce Wisely, G

    2009-08-15

    Two series of conformationally constrained analogs of the FXR agonist GW 4064 1 were prepared. Replacement of the metabolically labile stilbene with either benzothiophene or naphthalene rings led to the identification of potent full agonists 2a and 2g.

  5. Automated Precision Maneuvering and Landing in Extreme and Constrained Environments

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous, precise maneuvering and landing in extreme and constrained environments is a key enabler for future NASA missions. Missions to map the interior of a...

  6. Security constrained optimal power flow by modern optimization tools

    African Journals Online (AJOL)

    Security constrained optimal power flow by modern optimization tools. International Journal of Engineering, Science and Technology.

  7. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-07-01

    An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a hermitian symmetric space and a constrained KP Lax formulation, and we show that these approaches are equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy, and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  8. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  9. Synthesis of conformationally constrained peptidomimetics using multicomponent reactions

    NARCIS (Netherlands)

    Scheffelaar, R.; Klein Nijenhuis, R.A.; Paravidino, M.; Lutz, M.; Spek, A.L.; Ehlers, A.W.; de Kanter, F.J.J.; Groen, M.B.; Orru, R.V.A.; Ruijter, E.

    2009-01-01

    A novel modular synthetic approach toward constrained peptidomimetics is reported. The approach involves a highly efficient three-step sequence including two multicomponent reactions, thus allowing unprecedented diversification of both the peptide moieties and the turn-inducing scaffold. The

  10. Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

    National Research Council Canada - National Science Library

    Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E

    2004-01-01

    .... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...

  11. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...
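
    The core idea, modeling capacity per timestep and searching in (node, time) space, can be sketched in a few lines. The network, capacities, and the waiting rule below are invented for illustration and are not the authors' planner.

```python
import heapq

edges = {  # node -> [(neighbor, travel_time, capacity_per_step)]
    'A': [('B', 1, 2), ('C', 2, 5)],
    'B': [('D', 1, 1)],
    'C': [('D', 1, 5)],
    'D': [],
}
used = {('A', 'B', 0): 2}  # flow already assigned: A->B is full at t = 0

def earliest_arrival(src, dst, t0=0, horizon=20):
    """Dijkstra-style search over (node, time) states with capacity checks."""
    pq, seen = [(t0, src)], set()
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if (u, t) in seen or t > horizon:
            continue
        seen.add((u, t))
        heapq.heappush(pq, (t + 1, u))              # waiting in place is allowed
        for v, tt, cap in edges[u]:
            if used.get((u, v, t), 0) < cap:        # respect residual capacity
                heapq.heappush(pq, (t + tt, v))
    return None

print("earliest arrival A->D:", earliest_arrival('A', 'D'))  # detour or wait -> 3
```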

  12. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image-updating subproblem. During the image-updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
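
    A highly simplified sketch of the alternating scheme follows: a sparse-coding step (here the l1 proximal map, i.e. soft-thresholding, against a fixed dictionary) alternates with a gradient step on the data-fidelity term. The random matrix A stands in for the CT projector, the dictionary is the identity, and the balancing-principle parameter choice is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 64
A = rng.normal(size=(48, n)) / np.sqrt(48)       # stand-in for the CT projector
x_true = np.zeros(n); x_true[20:30] = 1.0        # piecewise-constant "image"
b = A @ x_true + rng.normal(0, 0.01, 48)         # noisy, underdetermined data

D = np.eye(n)                                    # identity "dictionary" stand-in
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, mu, lam = np.zeros(n), 0.5, 0.05
for _ in range(500):
    a = soft(D.T @ x, lam)                       # sparse-coding subproblem (l1 prox)
    grad = A.T @ (A @ x - b) + mu * (x - D @ a)  # image-updating subproblem
    x = x - 0.1 * grad                           # small gradient step
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```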

  13. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael

    2015-01-01

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  14. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with performance competitive with, and computation time lower than, state-of-the-art methods; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, is easy to implement and quick but has insufficient performance compared with metric learning-based methods. Since it simply adds a function into the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive with that of state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. The experimental evaluation also demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle and showed that our constrained k-means algorithm functions correctly as a weak learner of boosting.
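
    The constraint-checking assignment step that distinguishes plain constrained k-means from the unconstrained algorithm can be sketched as below (COP-KMEANS style); the boosting and kernel-learning layers of the article are not reproduced.

```python
import numpy as np

def violates(i, c, assign, must_link, cannot_link):
    """True if putting point i into cluster c breaks a pairwise constraint."""
    for a, b in must_link:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] not in (-1, c):
            return True
    for a, b in cannot_link:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] == c:
            return True
    return False

def cop_kmeans(X, k, must_link, cannot_link, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    assign = np.full(len(X), -1)
    for _ in range(iters):
        assign[:] = -1
        for i in rng.permutation(len(X)):
            for c in np.argsort(np.linalg.norm(X[i] - centers, axis=1)):
                if not violates(i, c, assign, must_link, cannot_link):
                    assign[i] = c            # nearest *feasible* cluster
                    break
        for c in range(k):                   # recompute non-empty centers
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return assign

rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))]
print(cop_kmeans(X, 2, must_link=[(0, 1)], cannot_link=[(0, 20)]))
```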

  15. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  16. Free and constrained symplectic integrators for numerical general relativity

    International Nuclear Information System (INIS)

    Richter, Ronny; Lubich, Christian

    2008-01-01

    We consider symplectic time integrators in numerical general relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively (1+1)-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild spacetime. We compare symplectic and non-symplectic integrators for free evolution, showing very different numerical behavior for nearly-conserved quantities in the perturbed Minkowski problem. Further we compare free and constrained evolution, demonstrating in our examples that enforcing the momentum constraints can turn an unstable free evolution into a stable constrained evolution. This is demonstrated in the stabilization of a perturbed Minkowski problem with Dirac gauge, and in the suppression of the propagation of boundary instabilities into the interior of the domain in Schwarzschild spacetime
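
    For a separable Hamiltonian H(q, p) = p^2/2 + V(q), the Stoermer-Verlet step mentioned above is the familiar kick-drift-kick scheme. The sketch below checks its long-time energy behavior on a harmonic oscillator, a toy stand-in for the ADM-like systems of the paper.

```python
def verlet(q, p, force, dt, steps):
    """Symplectic Stoermer-Verlet integration: kick-drift-kick."""
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)   # half kick
        q = q + dt * p                # drift
        p = p + 0.5 * dt * force(q)   # half kick
    return q, p

# Harmonic oscillator test: V(q) = q^2/2, force = -q. Energy stays bounded.
q, p = 1.0, 0.0
q, p = verlet(q, p, lambda q: -q, dt=0.1, steps=1000)
print("energy after 1000 steps:", 0.5 * p**2 + 0.5 * q**2)
```

    The constrained RATTLE variant adds, at each step, a projection of (q, p) back onto the constraint manifold via Lagrange multipliers.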

  17. [Minimally invasive approach for cervical spondylotic radiculopathy].

    Science.gov (United States)

    Ding, Liang; Sun, Taicun; Huang, Yonghui

    2010-01-01

    To summarize recent minimally invasive approaches for cervical spondylotic radiculopathy (CSR), the recent literature at home and abroad concerning minimally invasive approaches for CSR was reviewed and summarized. There are at present two classes of minimally invasive techniques for CSR: percutaneous puncture techniques and endoscopic techniques. The degenerated intervertebral disc can be resected or dissolved by percutaneous puncture techniques if the CSR is caused by mild or moderate intervertebral disc herniation. Cervical microendoscopic discectomy and foraminotomy is an effective minimally invasive approach which can provide a clear view. The endoscopic techniques are suitable to treat CSR caused by foraminal osteophytes, lateral disc herniations, local ligamentum flavum thickening and spondylotic foraminal stenosis. The minimally invasive procedures have the advantages of simple handling, minimal invasiveness and a low incidence of complications, but the scope of indications is relatively narrow at present.

  18. Corporate tax minimization and stock price reactions

    OpenAIRE

    Blaufus, Kay; Möhlmann, Axel; Schwäbe, Alexander

    2016-01-01

    Tax minimization strategies may lead to significant tax savings, which could, in turn, increase firm value. However, such strategies are also associated with significant costs, such as expected penalties and planning, agency, and reputation costs. The overall impact of firms' tax minimization strategies on firm value is, therefore, unclear. To investigate whether corporate tax minimization increases firm value, we analyze the stock price reaction to news concerning corporate tax avoidance or ...

  19. Safety control and minimization of radioactive wastes

    International Nuclear Information System (INIS)

    Wang Jinming; Rong Feng; Li Jinyan; Wang Xin

    2010-01-01

    Compared with the developed countries, the safety control and minimization of radwastes in China are under-developed. Research on measures for the safety control and minimization of radwastes is very important for the safety control of radwastes and for the reduction of treatment and disposal costs and environmental radiation hazards. This paper systematically discusses the safety control and minimization of the radwastes produced in the nuclear fuel cycle, nuclear technology applications and the decommissioning of nuclear facilities, and provides some measures and methods for the safety control and minimization of radwastes. (authors)

  20. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  1. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows taking into account explicitly the different objectives and constraints involved in the problem, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The solution proposed is assessed to study its implementation in two Spanish hospitals.
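
    A one-line reformulation is at the heart of such schemes: a chance constraint P(stock >= 0) >= 1 - epsilon on Gaussian demand becomes a deterministic safety-stock level through the demand quantile. The sketch below shows this on an invented order-up-to policy, which is far simpler than the receding-horizon controller of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
mu_d, sd_d = 40.0, 12.0                 # assumed daily demand statistics
eps = 0.05                              # tolerated stock-out probability
# Chance constraint P(stock >= 0) >= 1 - eps made deterministic via a quantile:
target = mu_d + norm.ppf(1 - eps) * sd_d

stock, stockout_days = target, 0
for day in range(365):
    stock -= rng.normal(mu_d, sd_d)     # random daily demand
    stockout_days += stock < 0
    stock = target                      # order-up-to policy (replenish overnight)
print(f"order-up-to level {target:.1f}, stock-out days: {stockout_days}/365")
```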

  2. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    Science.gov (United States)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is sensitive only to shear wave speed, not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computational resources and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.
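
    The polarization direction itself is commonly measured as the dominant eigenvector of the three-component covariance matrix in a window around the arrival. The sketch below shows this step on synthetic data; it is background to the method, not the authors' full procedure relating the angle to wave speed.

```python
import numpy as np

rng = np.random.default_rng(10)
true_dir = np.array([0.8, 0.5, 0.33]); true_dir /= np.linalg.norm(true_dir)
wave = rng.normal(size=200)                                # P-wave "wavelet"
X = np.outer(wave, true_dir) + 0.05 * rng.normal(size=(200, 3))  # Z, N, E traces

C = np.cov(X.T)                                            # 3x3 covariance
w, V = np.linalg.eigh(C)
pol = V[:, -1]                                             # dominant eigenvector
incidence = np.degrees(np.arccos(abs(pol[0])))             # angle from vertical (Z)
print("apparent incidence angle:", round(incidence, 1), "deg")
```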

  3. Constrained Quadratic Programming and Neurodynamics-Based Solver for Energy Optimization of Biped Walking Robots

    Directory of Open Access Journals (Sweden)

    Liyang Wang

    2017-01-01

    Full Text Available The application of biped robots is always hampered by their high energy consumption. This paper makes a contribution by optimizing the joint torques to decrease the energy consumption without changing the biped gaits. In this work, a constrained quadratic programming (QP) problem for energy optimization is formulated. A neurodynamics-based solver is presented to solve the QP problem. Differing from the existing literature, the proposed neurodynamics-based energy optimization (NEO) strategy minimizes the energy consumption and simultaneously guarantees the following three important constraints: (i) the force-moment equilibrium equation of biped robots, (ii) the friction applied by each leg on the ground to hold the biped robot without slippage or tipping over, and (iii) the physical limits of the motors. Simulations demonstrate that the proposed strategy is effective for energy-efficient biped walking.
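
    Neurodynamic QP solvers are, in essence, continuous-time flows whose equilibria are the constrained minimizers. The sketch below discretizes a projected gradient flow for a box-constrained QP (torque limits only); the equality and friction constraints of the paper are omitted and the problem data are random.

```python
import numpy as np

rng = np.random.default_rng(11)
M = rng.normal(size=(4, 4)); Q = M.T @ M + np.eye(4)   # positive definite Hessian
c = rng.normal(size=4)
lo, hi = -np.ones(4), np.ones(4)                       # box: torque limits

x, dt = np.zeros(4), 0.01
for _ in range(5000):                                  # discretized dynamics
    x = np.clip(x - dt * (Q @ x + c), lo, hi)          # gradient flow + projection
print("stationary point:", x.round(3))                 # KKT point of the box QP
```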

  4. How CMB and large-scale structure constrain chameleon interacting dark energy

    International Nuclear Information System (INIS)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-01-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  5. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

    Full Text Available In this paper, a sensitive constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain state estimation. In this approach, sensitive buses along with zero injection buses (ZIBs) are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are identified from the mean of the bus voltages as the load is increased consistently by up to 50%, and are ranked in order to place PMUs. Sensitive constrained optimal PMU allocation under single-line and no-line contingencies is considered in the observability analysis to ensure protection and control of the power system under abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMU allocations. This paper presents optimal allocation of PMUs at sensitive buses with zero-injection modeling, considering cost criteria and redundancy, to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on the IEEE 14-, 30- and 57-bus systems, and the results obtained are compared with traditional and other state estimation methods available in the literature to demonstrate the effectiveness of the proposed method.
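
    Stripped of the sensitivity ranking and ZIB modeling, the core placement problem is a small integer program: cover every bus with a PMU at itself or at a neighbor. The sketch below solves it for the standard IEEE 14-bus topology with SciPy's MILP interface.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# IEEE 14-bus branch list (1-indexed buses).
edges = [(1,2),(1,5),(2,3),(2,4),(2,5),(3,4),(4,5),(4,7),(4,9),(5,6),
         (6,11),(6,12),(6,13),(7,8),(7,9),(9,10),(9,14),(10,11),(12,13),(13,14)]
n = 14
A = np.eye(n)                       # a PMU observes its own bus...
for i, j in edges:
    A[i-1, j-1] = A[j-1, i-1] = 1   # ...and every adjacent bus

res = milp(c=np.ones(n),                              # minimize the PMU count
           constraints=LinearConstraint(A, lb=np.ones(n)),  # every bus observed
           integrality=np.ones(n),                    # integer decision variables
           bounds=Bounds(0, 1))                       # binary placement
print("PMU buses:", np.flatnonzero(res.x > 0.5) + 1)
```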

  6. Minimizing electrode contamination in an electrochemical cell

    Science.gov (United States)

    Kim, Yu Seung; Zelenay, Piotr; Johnston, Christina

    2014-12-09

    An electrochemical cell assembly that is expected to prevent or at least minimize electrode contamination includes one or more getters that trap a component or components leached from a first electrode and prevents or at least minimizes them from contaminating a second electrode.

  7. Matthew Arnold and Minimal Competency Testing.

    Science.gov (United States)

    Tuman, Myron C.

    1979-01-01

    Presents arguments by Robert Lowe and Matthew Arnold on the 19th century British "Payment by Results" Plan, whereby schools received funds for students who passed minimal competency tests. Emphasizes that the Victorian experience produced acrimonious teachers with low morale and encourages contemporary minimal testing advocates not to…

  8. Minimally processed fruit salad enriched with Lactobacillus ...

    African Journals Online (AJOL)

    2015-06-17

    Minimal processing promotes browning of some vegetal tissues due to cell membrane disruption, which results in the release of oxidative enzymes. This study evaluated the efficiency of citric acid, ascorbic acid, sodium metabisulfite and L-cysteine hydrochloride to retard enzymatic browning of minimally.

  9. The minimal manual: is less really more?

    NARCIS (Netherlands)

    Lazonder, Adrianus W.; van der Meij, Hans

    1993-01-01

    Carroll, Smith-Kerker, Ford and Mazur-Rimetz (The minimal manual, Human-Computer Interaction, 3, 123-153, 1987) have introduced the minimal manual as an alternative to standard self-instruction manuals. While their research indicates strong gains, only a few attempts have been made to validate

  10. Y-12 Plant waste minimization strategy

    International Nuclear Information System (INIS)

    Kane, M.A.

    1987-01-01

    The 1984 Amendments to the Resource Conservation and Recovery Act (RCRA) mandate that waste minimization be a major element of hazardous waste management. In response to this mandate and the increasing costs for waste treatment, storage, and disposal, the Oak Ridge Y-12 Plant developed a waste minimization program to encompass all types of wastes. Thus, waste minimization has become an integral part of the overall waste management program. Unlike traditional approaches, waste minimization focuses on controlling waste at the beginning of production instead of the end. This approach includes: (1) substituting nonhazardous process materials for hazardous ones, (2) recycling or reusing waste effluents, (3) segregating nonhazardous waste from hazardous and radioactive waste, and (4) modifying processes to generate less waste or less toxic waste. An effective waste minimization program must provide the appropriate incentives for generators to reduce their waste and provide the necessary support mechanisms to identify opportunities for waste minimization. This presentation focuses on the Y-12 Plant's strategy to implement a comprehensive waste minimization program. This approach consists of four major program elements: (1) promotional campaign, (2) process evaluation for waste minimization opportunities, (3) waste generation tracking system, and (4) information exchange network. The presentation also examines some of the accomplishments of the program and issues which need to be resolved

  11. Making the Most of Minimalism in Music.

    Science.gov (United States)

    Geiersbach, Frederick J.

    1998-01-01

    Describes the minimalist movement in music. Discusses generations of minimalist musicians and, in general, the minimalist approach. Considers various ways that minimalist strategies can be integrated into the music classroom focusing on (1) minimalism and (2) student-centered composition and principles of minimalism for use with elementary band…

  12. The relative volume growth of minimal submanifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2002-01-01

    The volume growth of certain well-defined subsets of minimal submanifolds in riemannian spaces is compared with the volume growth of balls and spheres in space forms of constant curvature.

  13. Specialized minimal PDFs for optimized LHC calculations

    NARCIS (Netherlands)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct

  14. Minimally processed fruit salad enriched with Lactobacillus ...

    African Journals Online (AJOL)

    Minimal processing promotes browning of some vegetal tissues due to cell membrane disruption, which results in the release of oxidative enzymes. This study evaluated the efficiency of citric acid, ascorbic acid, sodium metabisulfite and L-cysteine hydrochloride to retard enzymatic browning of minimally processed fruit ...

  15. Constrained Local UniversE Simulations: a Local Group factory

    Science.gov (United States)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  16. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    RNA polymerase (RNAP) and the DNA template must rotate relative to each other during transcription elongation. In the cell, however, the components of the transcription apparatus may be subject to rotary constraints. For instance, the DNA is divided into topological domains that are delineated...... of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...... constrained. We conclude that transcription of a natural bacterial gene may proceed with high efficiency despite the fact that newly synthesized RNA is entangled around the template in the narrow confines of torsionally constrained supercoiled DNA....

  17. Theories of minimalism in architecture: Post scriptum

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2012-01-01

    Full Text Available Owing to a period of intensive development in the last decade of the XX century, the architectural phenomenon called Minimalism in Architecture came to be remembered as the Style of the Nineties, characterized, morphologically speaking, by simplicity and formal reduction. Simultaneously with its development in practice, several dominant interpretative models established themselves on a theoretical level. The new millennium and time distance bring new problems; therefore this paper discusses specific theorizations related to Minimalism in Architecture that can bear the designation of post scriptum, because their development starts after the constitutional period of architectural minimalist discourse. In XXI century theories, the problem of defining minimalism remains an important topic, approached by theorists through resolving on the axis: Modernism - Minimal Art - Postmodernism - Minimalism in Architecture. With regard to this, the analyzed texts can be categorized in two groups: (1) texts of an affirmative nature and historical-associative approach, in which minimalism is identified, in an idealizing manner, with anything that is simple and reduced, relying mostly on existing hypotheses; (2) critically oriented texts, in which authors reconsider the adequacy of the very term 'minimalism' in the context of architecture and take a metacritical attitude towards previous texts.

  18. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    for the discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions...... in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation...

  19. Value, Cost, and Sharing: Open Issues in Constrained Clustering

    Science.gov (United States)

    Wagstaff, Kiri L.

    2006-01-01

    Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.

  20. Technology applications for radioactive waste minimization

    International Nuclear Information System (INIS)

    Devgun, J.S.

    1994-01-01

    The nuclear power industry has achieved one of the most successful examples of waste minimization. The annual volume of low-level radioactive waste shipped for disposal per reactor has decreased to approximately one-fifth the volume about a decade ago. In addition, the curie content of the total waste shipped for disposal has decreased. This paper will discuss the regulatory drivers and economic factors for waste minimization and describe the application of technologies for achieving waste minimization for low-level radioactive waste with examples from the nuclear power industry

  1. Graphical approach for multiple values logic minimization

    Science.gov (United States)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.
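
    The optical multiple-valued setting is beyond a few lines of code, but the flavor of Karnaugh-style two-level minimization can be shown for the ordinary binary case with SymPy; the minterms and don't-cares below are arbitrary choices for illustration.

```python
from sympy import symbols
from sympy.logic import SOPform

w, x, y, z = symbols("w x y z")
minterms = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1],
            [1, 0, 1, 1], [1, 1, 1, 1]]      # input rows where f = 1
dontcares = [[0, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 1]]  # free to group
print(SOPform([w, x, y, z], minterms, dontcares))  # a minimal sum-of-products
```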

  2. Minimal covariant observables identifying all pure states

    Energy Technology Data Exchange (ETDEWEB)

    Carmeli, Claudio, E-mail: claudio.carmeli@gmail.com [D.I.M.E., Università di Genova, Via Cadorna 2, I-17100 Savona (Italy); I.N.F.N., Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku (Finland); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); I.N.F.N., Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)

    2013-09-02

    It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has minimally 4d−4 outcomes or slightly less (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have minimal number of outcomes. It is shown that the existence of this kind of observables depends on the dimension of the Hilbert space.

  3. Minimal free resolutions over complete intersections

    CERN Document Server

    Eisenbud, David

    2016-01-01

    This book introduces a theory of higher matrix factorizations for regular sequences and uses it to describe the minimal free resolutions of high syzygy modules over complete intersections. Such resolutions have attracted attention ever since the elegant construction of the minimal free resolution of the residue field by Tate in 1957. The theory extends the theory of matrix factorizations of a non-zero divisor, initiated by Eisenbud in 1980, which yields a description of the eventual structure of minimal free resolutions over a hypersurface ring. Matrix factorizations have had many other uses in a wide range of mathematical fields, from singularity theory to mathematical physics.

  4. Transience and capacity of minimal submanifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2003-01-01

    We prove explicit lower bounds for the capacity of annular domains of minimal submanifolds P-m in ambient Riemannian spaces N-n with sectional curvatures bounded from above. We characterize the situations in which the lower bounds for the capacity are actually attained. Furthermore, we apply these bounds to prove that Brownian motion defined on a complete minimal submanifold is transient when the ambient space is a negatively curved Hadamard-Cartan manifold. The proof stems directly from the capacity bounds and also covers the case of minimal submanifolds of dimension m > 2 in Euclidean spaces.

  5. Non-minimal Wu-Yang monopole

    International Nuclear Information System (INIS)

    Balakin, A.B.; Zayats, A.E.

    2007-01-01

    We discuss new exact spherically symmetric static solutions to non-minimally extended Einstein-Yang-Mills equations. The obtained solution to the Yang-Mills subsystem is interpreted as a non-minimal Wu-Yang monopole solution. We focus on the analysis of two classes of exact solutions to the gravitational field equations. Solutions of the first class are of the Reissner-Nordström type, i.e., they are characterized by horizons and by a singularity at the origin. Solutions of the second class are regular. Horizons and singularities of a new, non-minimal type are indicated.

  6. Assessment of LANL waste minimization plan

    International Nuclear Information System (INIS)

    Davis, K.D.; McNair, D.A.; Jennrich, E.A.; Lund, D.M.

    1991-04-01

    The objective of this report is to evaluate the Los Alamos National Laboratory (LANL) Waste Minimization Plan to determine if it meets applicable internal (DOE) and regulatory requirements. The intent of the effort is to assess the higher-level elements of the documentation to determine if they have been addressed, rather than the detailed mechanics of the program's implementation. The requirement for a Waste Minimization Plan is based on several DOE Orders as well as environmental laws and regulations. Table 2-1 provides a list of the major documents or regulations that require waste minimization efforts. The table also summarizes the applicable requirements.

  7. Constraining the Surface Energy Balance of Snow in Complex Terrain

    Science.gov (United States)

    Lapo, Karl E.

    Physically-based snow models form the basis of our understanding of current and future water and energy cycles, especially in mountainous terrain. These models are poorly constrained and widely diverge from each other, demonstrating a poor understanding of the surface energy balance. This research aims to improve our understanding of the surface energy balance in regions of complex terrain by improving our confidence in existing observations and our knowledge of remotely sensed irradiances (Chapter 1), critically analyzing the representation of boundary layer physics within land models (Chapter 2), and utilizing relatively novel observations in the diagnosis of model performance (Chapter 3). This research has improved the understanding of the literal and metaphorical boundary between the atmosphere and land surface. Solar irradiances are difficult to observe in regions of complex terrain, as observations are subject to harsh conditions not found in other environments. Quality control methods were developed to handle these unique conditions and facilitated an analysis of estimated solar irradiances over mountainous environments. Errors in the estimated solar irradiance are caused by misrepresenting the effect of clouds over regions of topography and regularly exceed the range of observational uncertainty (up to 80 W m^-2) in all regions examined. Uncertainty in the solar irradiance estimates was especially pronounced when averaging over high-elevation basins, with monthly differences between estimates of up to 80 W m^-2. These findings can inform the selection of a method for estimating the solar irradiance and suggest several avenues of future research for improving existing methods. Further research probed the relationship between the land surface and atmosphere as it pertains to the stable boundary layers that commonly form over snow-covered surfaces. Stable conditions are difficult to represent, especially for low wind speed

  8. Viability of minimal left–right models with discrete symmetries

    Directory of Open Access Journals (Sweden)

    Wouter Dekens

    2014-12-01

    We provide a systematic study of minimal left–right models that are invariant under P, C, and/or CP transformations. Due to the high degree of symmetry, such models are quite predictive in the amount and pattern of CP violation they can produce or accommodate at lower energies. Using current experimental constraints, some of the models can already be excluded. For this purpose we provide an overview of the experimental constraints on the different left–right symmetric models, considering bounds from colliders, meson mixing, and low-energy observables such as beta decay and electric dipole moments. The features of the various Yukawa and Higgs sectors are discussed in detail. In particular, we give the Higgs potentials for each case, discuss the possible vacua, and investigate the amount of fine-tuning present in these potentials. It turns out that all left–right models with P, C, and/or CP symmetry have a high degree of fine-tuning, unless supplemented with mechanisms to suppress certain parameters. The models that are symmetric under both P and C are not in accordance with present observations, whereas the models with either P, C, or CP symmetry cannot be excluded by data yet. To further constrain and discriminate between the models, measurements of B-meson observables at LHCb and B-factories will be especially important, while measurements of the EDMs of light nuclei in particular could provide complementary tests of the LRMs.

  9. Probing gravitational non-minimal coupling with dark energy surveys

    International Nuclear Information System (INIS)

    Geng, Chao-Qiang; Lee, Chung-Chi; Wu, Yi-Peng

    2017-01-01

    We investigate observational constraints on a specific one-parameter extension of the minimal quintessence model, where the quintessence field acquires a quadratic coupling to the scalar curvature through a coupling constant ξ. The value of ξ is highly suppressed in typical tracker models if the late-time cosmic acceleration is driven at field values near the Planck scale. We test ξ in a second class of models in which the field value today becomes a free model parameter. We use the combined data from type-Ia supernovae, the cosmic microwave background, baryon acoustic oscillations, the matter power spectrum, and weak lensing measurements, and find a best-fit value ξ > 0.289, where ξ = 0 is excluded outside the 95% confidence region. The effective gravitational constant G_eff subject to the hint of a non-zero ξ is constrained to -0.003 < 1 - G_eff/G < 0.033 at the same confidence level on cosmological scales, and it can be narrowed down to 1 - G_eff/G < 2.2 x 10^-5 when combined with Solar System tests. (orig.)
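
    For orientation, a quadratic non-minimal coupling of a quintessence field to the scalar curvature is conventionally written as an action term of the generic form

        S \supset \int d^4x \, \sqrt{-g} \, \frac{\xi}{2} \, \phi^2 R ,

    so that ξ = 0 recovers the minimal quintessence model; normalization and sign conventions vary between papers, and the authors' exact expression may differ.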

  10. Probing gravitational non-minimal coupling with dark energy surveys

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing (China); National Tsing Hua University, Department of Physics, Hsinchu (China); National Center for Theoretical Sciences, Hsinchu (China); Lee, Chung-Chi [National Center for Theoretical Sciences, Hsinchu (China); Wu, Yi-Peng [Academia Sinica, Institute of Physics, Taipei (China)

    2017-03-15

    We investigate observational constraints on a specific one-parameter extension of the minimal quintessence model, where the quintessence field acquires a quadratic coupling to the scalar curvature through a coupling constant ξ. The value of ξ is highly suppressed in typical tracker models if the late-time cosmic acceleration is driven at field values near the Planck scale. We test ξ in a second class of models in which the field value today becomes a free model parameter. We use the combined data from type-Ia supernovae, the cosmic microwave background, baryon acoustic oscillations, the matter power spectrum, and weak lensing measurements, and find a best-fit value ξ > 0.289, where ξ = 0 is excluded outside the 95% confidence region. The effective gravitational constant G_eff subject to the hint of a non-zero ξ is constrained to -0.003 < 1 - G_eff/G < 0.033 at the same confidence level on cosmological scales, and it can be narrowed down to 1 - G_eff/G < 2.2 x 10^-5 when combined with Solar System tests. (orig.)

  11. Waste minimization and pollution prevention awareness plan

    Energy Technology Data Exchange (ETDEWEB)

    1991-05-31

    The purpose of this plan is to document the Lawrence Livermore National Laboratory (LLNL) Waste Minimization and Pollution Prevention Awareness Program. The plan specifies those activities and methods that are or will be employed to reduce the quantity and toxicity of wastes generated at the site. The intent of this plan is to respond to and comply with DOE's policy and guidelines concerning the need for pollution prevention. The Plan is composed of a LLNL Waste Minimization and Pollution Prevention Awareness Program Plan and, as attachments, Program- and Department-specific waste minimization plans. This format reflects the fact that waste minimization is considered a line management responsibility and is to be addressed by each of the Programs and Departments. 14 refs.

  12. On "minimally curved spacetimes" in general relativity

    OpenAIRE

    Dadhich, Naresh

    1997-01-01

    We consider a spacetime corresponding to a uniform relativistic potential, analogous to the Newtonian potential, as an example of a "minimally curved spacetime". We also consider a radially symmetric analogue of the Rindler spacetime of uniform proper acceleration relative to infinity.

  13. Discrete Curvatures and Discrete Minimal Surfaces

    KAUST Repository

    Sun, Xiang

    2012-01-01

    This thesis presents an overview of some approaches to compute Gaussian and mean curvature on discrete surfaces and discusses discrete minimal surfaces. The variety of applications of differential geometry in visualization and shape design leads ...

  14. Waste minimization and pollution prevention awareness plan

    International Nuclear Information System (INIS)

    1991-01-01

    The purpose of this plan is to document the Lawrence Livermore National Laboratory (LLNL) Waste Minimization and Pollution Prevention Awareness Program. The plan specifies those activities and methods that are or will be employed to reduce the quantity and toxicity of wastes generated at the site. The intent of this plan is to respond to and comply with DOE's policy and guidelines concerning the need for pollution prevention. The Plan is composed of a LLNL Waste Minimization and Pollution Prevention Awareness Program Plan and, as attachments, Program- and Department-specific waste minimization plans. This format reflects the fact that waste minimization is considered a line management responsibility and is to be addressed by each of the Programs and Departments. 14 refs

  15. Minimal classical communication and measurement complexity for ...

    Indian Academy of Sciences (India)

    Minimal classical communication and measurement complexity for quantum ... Keywords: entanglement; teleportation; secret sharing; information splitting. Affiliations: Ahmedabad 380 009, India; Birla Institute of Technology and Science, Pilani 333 031, India ...

  16. A minimal architecture for joint action

    DEFF Research Database (Denmark)

    Vesper, Cordula; Butterfill, Stephen; Knoblich, Günther

    2010-01-01

    What kinds of processes and representations make joint action possible? In this paper we suggest a minimal architecture for joint action that focuses on representations, action monitoring and action prediction processes, as well as ways of simplifying coordination. The architecture spells out minimal requirements for an individual agent to engage in a joint action. We discuss existing evidence in support of the architecture as well as open questions that remain to be empirically addressed. In addition, we suggest possible interfaces between the minimal architecture and other approaches to joint action. The minimal architecture has implications for theorizing about the emergence of joint action, for human-machine interaction, and for understanding how coordination can be facilitated by exploiting relations between multiple agents' actions and between actions and the environment.

  17. Providing intraosseous anesthesia with minimal invasion.

    Science.gov (United States)

    Giffin, K M

    1994-08-01

    A new variation of intraosseous anesthesia--crestal anesthesia--that is rapid, site-specific and minimally invasive is presented. The technique uses alveolar crest nutrient canals for anesthetic delivery without penetrating either bone or periodontal ligament.

  18. Minimal genera of open 4-manifolds

    OpenAIRE

    Gompf, Robert E.

    2013-01-01

    We study exotic smoothings of open 4-manifolds using the minimal genus function and its analog for end homology. While traditional techniques in open 4-manifold smoothing theory give no control of minimal genera, we make progress by using the adjunction inequality for Stein surfaces. Smoothings can be constructed with much more control of these genus functions than the compact setting seems to allow. As an application, we expand the range of 4-manifolds known to have exotic smoothings (up to ...

  19. Gravitino problem in minimal supergravity inflation

    Directory of Open Access Journals (Sweden)

    Fuminori Hasegawa

    2017-04-01

    We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  20. Gravitino problem in minimal supergravity inflation

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Fuminori [Institute for Cosmic Ray Research, The University of Tokyo, Kashiwa, Chiba 277-8582 (Japan); Mukaida, Kyohei [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Nakayama, Kazunori [Department of Physics, Faculty of Science, The University of Tokyo, Bunkyo-ku, Tokyo 133-0033 (Japan); Terada, Takahiro, E-mail: terada@kias.re.kr [School of Physics, Korea Institute for Advanced Study (KIAS), Seoul 02455 (Korea, Republic of); Yamada, Yusuke [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94305 (United States)

    2017-04-10

    We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  1. Flattening the inflaton potential beyond minimal gravity

    Directory of Open Access Journals (Sweden)

    Lee Hyun Min

    2018-01-01

    We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.

  2. Minimally Invasive Surgery in Thymic Malignancies

    Directory of Open Access Journals (Sweden)

    Wentao FANG

    2018-04-01

    Surgery is the most important therapy for thymic malignancies. The last decade has seen increasing adoption of minimally invasive surgery (MIS) for thymectomy. MIS for early-stage thymoma patients has been shown to yield similar oncological results while helping to minimize surgical trauma, improve postoperative recovery, and reduce incisional pain. Meanwhile, with advances in surgical techniques, patients with locally advanced thymic tumors, preoperative induction therapies, or recurrent disease may also benefit from MIS in selected cases.

  3. Minimal Function Graphs are not Instrumented

    DEFF Research Database (Denmark)

    Mycroft, Alan; Rosendahl, Mads

    1992-01-01

    The minimal function graph semantics of Jones and Mycroft is a standard denotational semantics modified to include only `reachable' parts of a program. We show that it may be expressed directly in terms of the standard semantics without the need for instrumentation at the expression level and, in doing so, bring out a connection with strictness. This also makes it possible to prove a stronger theorem of correctness for the minimal function graph semantics.

  4. Minimal Dark Matter in the sky

    International Nuclear Information System (INIS)

    Panci, P.

    2016-01-01

    We discuss some theoretical and phenomenological aspects of the Minimal Dark Matter (MDM) model proposed in 2006, a framework valued for its minimality and yet its predictivity. We first critically review the theoretical requirements of MDM, pointing out generalizations of this framework. We then review the phenomenology of the originally proposed fermionic hyperchargeless electroweak quintuplet, showing its main γ-ray tests.

  5. Coherent Structures and Entropy in Constrained, Modulationally Unstable, Nonintegrable Systems

    International Nuclear Information System (INIS)

    Rumpf, Benno; Newell, Alan C.

    2001-01-01

    Many studies have shown that nonintegrable systems with modulational instabilities constrained by more than one conservation law exhibit universal long time behavior involving large coherent structures in a sea of small fluctuations. We show how this behavior can be explained in detail by simple thermodynamic arguments

  6. Invariant set computation for constrained uncertain discrete-time systems

    NARCIS (Netherlands)

    Athanasopoulos, N.; Bitsoris, G.

    2010-01-01

    In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the ...

  7. Applications of a Constrained Mechanics Methodology in Economics

    Science.gov (United States)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the…

  8. Excision technique in constrained formulations of Einstein equations: collapse scenario

    International Nuclear Information System (INIS)

    Cordero-Carrión, I; Vasset, N; Novak, J; Jaramillo, J L

    2015-01-01

    We present a new excision technique used in constrained formulations of Einstein equations to deal with black holes in numerical simulations. We show the applicability of this scheme in several scenarios. In particular, we present the dynamical evolution of the collapse of a neutron star to a black hole, using the CoCoNuT code and this excision technique. (paper)

  9. Constraining the evolution of the Hubble Parameter using cosmic chronometers

    Science.gov (United States)

    Dickinson, Hugh

    2017-08-01

    Substantial investment is being made in space- and ground-based missions with the goal of revealing the nature of the observed cosmic acceleration, one of the most important unsolved problems in cosmology today. We propose here to constrain the evolution of the Hubble parameter [H(z)] between 1.3 ... fundamental nature of dark energy.
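
    The cosmic-chronometer technique referred to here rests on the standard differential-age relation (a textbook identity, not quoted from the proposal):

        H(z) = -\frac{1}{1+z} \frac{dz}{dt} \approx -\frac{1}{1+z} \frac{\Delta z}{\Delta t} ,

    so measuring the age difference Δt between passively evolving galaxies separated by a small redshift interval Δz yields H at that redshift without assuming a cosmological model.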

  10. Nonmonotonic Skeptical Consequence Relation in Constrained Default Logic

    Directory of Open Access Journals (Sweden)

    Mihaiela Lupea

    2010-12-01

    This paper presents a study of the nonmonotonic consequence relation which models the skeptical reasoning formalised by constrained default logic. The nonmonotonic skeptical consequence relation is defined using the sequent calculus axiomatic system. We study the formal properties desirable for a good nonmonotonic relation: supraclassicality, cut, cautious monotony, cumulativity, absorption, distribution.

  11. Extended shadow test approach for constrained adaptive testing

    NARCIS (Netherlands)

    Veldkamp, Bernard P.; Ariel, A.

    2002-01-01

    Several methods have been developed for use in constrained adaptive testing. Item pool partitioning, multistage testing, and testlet-based adaptive testing are methods that perform well for specific cases of adaptive testing. The weighted deviation model and the Shadow Test approach can be more ...

  12. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with adjacent resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by ...

  13. Evaluation of constrained mobility for programmability in network management

    NARCIS (Netherlands)

    Bohoris, C.; Liotta, A.; Pavlou, G.; Ambler, A.P.; Calo, S.B.; Kar, G.

    2000-01-01

    In recent years, a significant amount of research work has addressed the use of code mobility in network management. In this paper, we first introduce three aspects of code mobility and argue that constrained mobility offers a natural and easy approach to network management programmability. While ...

  14. Testing a Constrained MPC Controller in a Process Control Laboratory

    Science.gov (United States)

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by fourth-year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time.
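
    As a sketch of what such a constrained MPC formulation looks like, the snippet below solves a toy receding-horizon problem with cvxpy. The first-order plant, bounds, and horizon are illustrative assumptions and do not describe the actual DPHE model.

        # Toy constrained MPC (illustrative plant, not the DPHE).
        import cvxpy as cp
        import numpy as np

        A, B = np.array([[0.9]]), np.array([[0.1]])  # assumed first-order plant
        N = 10                                       # prediction horizon
        x0, x_ref = np.array([20.0]), 50.0           # initial state, setpoint

        x = cp.Variable((1, N + 1))
        u = cp.Variable((1, N))
        cost, constr = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += cp.square(x[0, k + 1] - x_ref) + 0.01 * cp.square(u[0, k])
            constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                       u[0, k] >= 0, u[0, k] <= 100]  # actuator limits
        cp.Problem(cp.Minimize(cost), constr).solve()
        print(u.value[0, 0])  # apply the first move, then re-solve next step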

  15. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    ... of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...

  16. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    A Gauss weighted function is introduced to constrain the tomography system in the new method. It can resolve the ... the research focus in the fields of space geodesy and ... the development of GNSS such as GPS, Glonass, Galileo and Compass, as these ...

  17. Constrained variational calculus for higher order classical field theories

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Cedric M; De Leon, Manuel; De Diego, David Martín, E-mail: cedricmc@icmat.e, E-mail: mdeleon@icmat.e, E-mail: david.martin@icmat.e [Instituto de Ciencias Matematicas, CSIC-UAM-UC3M-UCM, Serrano 123, 28006 Madrid (Spain)

    2010-11-12

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular the geometrical description of optimal control theory for partial differential equations.

  18. Constrained variational calculus for higher order classical field theories

    International Nuclear Information System (INIS)

    Campos, Cedric M; De Leon, Manuel; De Diego, David Martín

    2010-01-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular the geometrical description of optimal control theory for partial differential equations.

  19. Bounds on the capacity of constrained two-dimensional codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2000-01-01

    Bounds on the capacity of constrained two-dimensional (2-D) codes are presented. The bounds of Calkin and Wilf apply to first-order symmetric constraints. The bounds are generalized in a weaker form to higher order and nonsymmetric constraints. Results are given for constraints specified by run-length ...

  20. The balance of payment-constrained economic growth in Ethiopia ...

    African Journals Online (AJOL)

    The objective of this paper is to empirically test the validity of the simplified version of the balance-of-payments-constrained economic growth model for Ethiopia during the period 1971-2008. According to the model, economies only grow at a pace allowed by the constraints imposed by the requirement of balance of payment ...
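
    The simplified model being tested is usually stated as Thirlwall's rule; in the standard textbook notation (our gloss, not the paper's),

        y_B = \frac{x}{\pi} ,

    where y_B is the balance-of-payments-constrained growth rate, x the growth rate of export volumes, and π the income elasticity of demand for imports.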

  1. Toward cognitively constrained models of language processing: A review

    NARCIS (Netherlands)

    Vogelzang, Margreet; Mills, Anne C.; Reitter, David; van Rij, Jacolien; Hendriks, Petra; van Rijn, Hedderik

    2017-01-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained ...

  2. The Balance of Payment-Constrained Economic Growth in Ethiopia ...

    African Journals Online (AJOL)

    ... financial liberalization and export promotion strategy necessarily lead to better growth performance. Rather, one should consider not only exports of goods and services, but also the income elasticity of imports. The balance-of-payments-constrained growth model postulates that the rate of growth in any country is ...

  3. Evaluating potentialities and constraints of Problem Based Learning curriculum

    DEFF Research Database (Denmark)

    Guerra, Aida

    2013-01-01

    This paper presents a research design to evaluate the potentialities and constraints of a Problem Based Learning (PBL) curriculum with a view to future changes. The PBL literature lacks examples of how to evaluate and analyse established PBL learning environments to address new challenges posed. The research design ... in the curriculum and a means to choose cases for further case study (third phase).

  4. Binary classification posed as a quadratically constrained quadratic ...

    Indian Academy of Sciences (India)

    Binary classification is posed as a quadratically constrained quadratic problem and solved using the proposed method. Each class in the binary classification problem is modeled as a multidimensional ellipsoid to form a quadratic constraint in the problem. Particle swarms help in determining the optimal hyperplane or ...
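
    To make the ellipsoidal modelling concrete, each class i can be described by a quadratic constraint of the standard form (our notation, not necessarily the paper's)

        (x - c_i)^\top P_i (x - c_i) \le 1 , \qquad P_i \succeq 0 ,

    with center c_i and shape matrix P_i; the classifier is then found by optimizing a quadratic objective subject to such constraints.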

  5. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    Exploiting the continuous smoothness of the variations of ionospheric electron density (IED) among neighbouring voxels, a Gauss weighted function is introduced to constrain the tomography system in the new method. It can resolve the dependence on the initial values for those voxels without any GPS rays traversing them ...
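
    One common way to realize such a Gauss-weighted smoothness constraint is to append rows H x ≈ 0 to the tomography system G x = d, each row driving a voxel toward the Gaussian-weighted average of its neighbours. The sketch below is a generic construction under our own assumptions (2-D grid, choice of sigma and radius), not the authors' code.

        import numpy as np

        def gaussian_smoothing_rows(nx, ny, sigma=1.0, radius=2):
            # Row r forces voxel r toward the Gaussian-weighted average of
            # its neighbours within `radius` on an nx-by-ny grid (H @ x ~ 0).
            H = np.zeros((nx * ny, nx * ny))
            for i in range(nx):
                for j in range(ny):
                    r = i * ny + j
                    H[r, r] = 1.0
                    weights = {}
                    for di in range(-radius, radius + 1):
                        for dj in range(-radius, radius + 1):
                            ii, jj = i + di, j + dj
                            if (di, dj) != (0, 0) and 0 <= ii < nx and 0 <= jj < ny:
                                weights[ii * ny + jj] = np.exp(
                                    -(di**2 + dj**2) / (2.0 * sigma**2))
                    total = sum(weights.values())
                    for col, w in weights.items():
                        H[r, col] = -w / total
            return H

        # Stack [G; lam * H] against [d; 0] and solve in the least-squares
        # sense, e.g. with np.linalg.lstsq.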

  6. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    2008-01-01

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with Adjacent Resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by ...

  7. Constrained relationship agency as the risk factor for intimate ...

    African Journals Online (AJOL)

    We used structural equation modelling to identify and measure constrained relationship agency (CRA) as a latent variable, and then tested the hypothesis that CRA plays a significant role in the pathway between IPV and transactional sex. After controlling for CRA, receiving more material goods from a sexual partner was ...

  8. Chance-constrained optimization of demand response to price signals

    DEFF Research Database (Denmark)

    Dorini, Gianluca Fabio; Pinson, Pierre; Madsen, Henrik

    2013-01-01

    ... within a recursive least squares (RLS) framework, using data measurable at the grid level, in an adaptive fashion. Optimal price signals are generated by embedding the FIR models within a chance-constrained optimization framework. The objective is to keep the price signal as unchanged as possible from ...
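
    For reference, under a Gaussian uncertainty model a single chance constraint admits the standard deterministic second-order-cone reformulation (a textbook identity, not specific to this paper): for a ~ N(ā, Σ),

        \Pr\{ a^\top x \le b \} \ge 1 - \varepsilon
        \iff
        \bar{a}^\top x + \Phi^{-1}(1 - \varepsilon) \, \lVert \Sigma^{1/2} x \rVert_2 \le b ,

    where Φ is the standard normal CDF; the reformulated constraint is convex for ε ≤ 1/2.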

  9. Solution of a Complex Least Squares Problem with Constrained Phase.

    Science.gov (United States)

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
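
    Bydder's paper derives a direct method; the numpy sketch below solves the same problem only by brute force, scanning the common phase and solving a real least-squares problem for each candidate. It is our illustration, not the paper's algorithm.

        import numpy as np

        def phase_constrained_lstsq(A, b, n_phi=360):
            # Minimize ||A @ (m * exp(1j*phi)) - b|| over real m and a
            # single phase phi. For fixed phi the normal equations read
            #   Re(A^H A) m = Re(exp(-1j*phi) A^H b).
            AhA_re = (A.conj().T @ A).real
            Ahb = A.conj().T @ b
            best = (np.inf, None, None)
            for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
                m = np.linalg.solve(AhA_re, (np.exp(-1j * phi) * Ahb).real)
                r = np.linalg.norm(A @ (m * np.exp(1j * phi)) - b)
                if r < best[0]:
                    best = (r, m, phi)
            return best  # (residual, real vector m, common phase phi)

        # Scanning phi over [0, pi) suffices: shifting phi by pi only flips
        # the sign of m.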

  10. Constrained Quantum Mechanics: Chaos in Non-Planar Billiards

    Science.gov (United States)

    Salazar, R.; Tellez, G.

    2012-01-01

    We illustrate some of the techniques to identify chaos signatures at the quantum level using as guiding examples some systems where a particle is constrained to move on a radial symmetric, but non-planar, surface. In particular, two systems are studied: the case of a cone with an arbitrary contour or "dunce hat billiard" and the rectangular…

  11. Constrained control of a once-through boiler with recirculation

    DEFF Research Database (Denmark)

    Trangbæk, K

    2008-01-01

    There is an increasing need to operate power plants at low load for longer periods of time. When a once-through boiler operates at a sufficiently low load, recirculation is introduced, significantly altering the control structure. This paper illustrates the possibilities for using constrained control...

  12. On the Integrated Job Scheduling and Constrained Network Routing Problem

    DEFF Research Database (Denmark)

    Gamst, Mette

    This paper examines the NP-hard problem of scheduling a number of jobs on a finite set of machines such that the overall profit of executed jobs is maximized. Each job demands a number of resources, which must be sent to the executing machine via constrained paths. Furthermore, two resource demand ...

  13. Effective Teaching of Economics: A Constrained Optimization Problem?

    Science.gov (United States)

    Hultberg, Patrik T.; Calonge, David Santandreu

    2017-01-01

    One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…

  14. Vacuum expectation values in a scalar constrained theory

    International Nuclear Information System (INIS)

    Alonso, F.; Julve, J.; Tiemblo, A.

    1985-01-01

    A class of finite Green functions in the context of a scalar constrained theory is studied. In a particular model the one-point GFs show that the vacuum expectation values for some fields vanish while one of them remains finite, a feature exhibited by the Goldstone and Higgs fields respectively. (orig.)

  15. Lagrangian formalism for constrained systems. 2. Gauge symmetries

    International Nuclear Information System (INIS)

    Pyatov, P.N.

    1990-01-01

    Using the Lagrangian formalism for constrained systems, all gauge symmetries peculiar to a given Lagrangian system are constructed, and the relation between them and the constraints is established. Besides, the question of the possible dependence of gauge transformations on accelerations and other higher-order time derivatives of the coordinates is clarified. 14 refs

  16. Factors constraining accessibility and usage of information among ...

    African Journals Online (AJOL)

    Various factors may negatively impact on information acquisition and utilisation. To improve understanding of the determinants of information acquisition and utilisation, this study investigated the factors constraining accessibility and usage of poultry management information in three rural districts of Tanzania. The findings ...

  17. Constrained Geocast to Support Cooperative Adaptive Cruise Control (CACC) Merging

    NARCIS (Netherlands)

    Klein Wolterink, W.; Heijenk, Geert; Karagiannis, Georgios

    2010-01-01

    In this paper we introduce a new geocasting concept to target vehicles based on where they will be in the near future, instead of their current position. We refer to this concept as constrained geocast. This may be useful in situations where vehicles have interdependencies based on (future)

  18. A survey on classical minimal surface theory

    CERN Document Server

    Meeks, William H

    2012-01-01

    Meeks and Pérez present a survey of recent spectacular successes in classical minimal surface theory. The classification of minimal planar domains in three-dimensional Euclidean space provides the focus of the account. The proof of the classification depends on the work of many currently active leading mathematicians, thus making contact with much of the most important results in the field. Through the telling of the story of the classification of minimal planar domains, the general mathematician may catch a glimpse of the intrinsic beauty of this theory and the authors' perspective of what is happening at this historical moment in a very classical subject. This book includes an updated tour through some of the recent advances in the theory, such as Colding-Minicozzi theory, minimal laminations, the ordering theorem for the space of ends, conformal structure of minimal surfaces, minimal annular ends with infinite total curvature, the embedded Calabi-Yau problem, local pictures on the scale of curvature and t...

  19. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method, based on Second Order Cone Programming (SOCP) with a maximum correction weight constraint, a maximum residual response constraint, and a weight splitting constraint, has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.
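
    A minimal cvxpy rendering of the min-max idea, with random influence coefficients standing in for measured data; all names and bounds are illustrative, and the paper's SOCP additionally handles weight splitting.

        # Min-max balancing sketch (illustrative data, not the paper's rotors).
        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 8, 2                        # vibration sensors, balance planes
        C = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # influence
        v0 = rng.normal(size=m) + 1j * rng.normal(size=m)  # initial vibration

        w = cp.Variable(n, complex=True)   # correction weights (amp/phase)
        t = cp.Variable()                  # bound on the worst residual
        resid = v0 + C @ w
        cons = [cp.abs(resid[i]) <= t for i in range(m)]
        cons += [cp.abs(w[k]) <= 0.5 for k in range(n)]  # max correction weight
        cp.Problem(cp.Minimize(t), cons).solve()
        print(t.value, w.value)            # worst residual and the weights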

  20. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    International Nuclear Information System (INIS)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry

    2015-01-01

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the value of this approach for highly precise and efficient calculations in preparing diabatic states and in the computational setup of systems in complex environments.