WorldWideScience

Sample records for conditional constrained minimization

  1. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall'Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  2. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  3. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  4. Constrained minimization in C ++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of the possible approaches was used. The widely known program FUMILI was reimplemented in the C++ language. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Jülich, Germany).
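
    A minimal sketch of the slack-variable-plus-penalty device described in this abstract, in Python rather than the authors' C++; the linear model, the data, φ, and all numbers are invented for illustration and are not the FUMILI code:

```python
import numpy as np
from scipy.optimize import minimize

# Fit a line to data subject to phi(theta) >= a, handled as in the abstract:
# introduce a slack variable t with phi(theta) = t (equality, enforced by a
# quadratic penalty) plus the simple bound t >= a.

def phi(theta):
    return theta[0] + theta[1]            # example constraint function

a = 1.5                                    # lower bound; binds for this data
x_data = np.linspace(0, 1, 20)
y_data = 0.3 + 0.9 * x_data               # synthetic "measurements"

def objective(z, mu):
    theta, t = z[:2], z[2]
    residuals = y_data - (theta[0] + theta[1] * x_data)
    chi2 = np.sum(residuals**2)
    penalty = mu * (phi(theta) - t)**2    # quadratic penalty on the equality
    return chi2 + penalty

z0 = np.array([0.0, 0.0, a])
res = minimize(objective, z0, args=(1e4,),
               bounds=[(None, None), (None, None), (a, None)])  # t >= a
print(res.x[:2], phi(res.x[:2]))          # fitted parameters satisfy phi >= a
```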

  5. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): ℝ^J → ℝ, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ ℝ^J → ℝ_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: ℝ^J → ℝ over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k−1}(x) − G_{k−1}(x^{k−1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results
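
    As a concrete instance of the scheme above, the following sketch runs the SUMMA iteration with the proximal auxiliary function g_k(x) = (λ/2)‖x − x^{k−1}‖², one of the particular cases the abstract lists (proximal minimization); the quadratic f and all constants are illustrative assumptions:

```python
import numpy as np

# SUMMA-style iteration: minimize G_k = f + g_k exactly at each step, with
# g_k(x) = (lam/2)||x - x^{k-1}||^2. For a convex quadratic f each
# subproblem has a closed-form solution.

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD Hessian of f
b = np.array([1.0, -1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

lam = 1.0
x = np.zeros(2)
I = np.eye(2)
for k in range(50):
    # argmin_x f(x) + (lam/2)||x - x_prev||^2  =>  (A + lam I) x = b + lam x_prev
    x = np.linalg.solve(A + lam * I, b + lam * x)

print(x, np.linalg.solve(A, b))  # the f(x^k) sequence decreases toward f at the minimizer
```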

  6. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  7. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  8. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits placed upon the CMSSM by collider experiments and cosmological observations, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where all previous literature found only one. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
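
    The abstract's point about fixed point iteration versus the shooting method can be illustrated on a one-dimensional toy self-consistency condition; nothing here is the CMSSM spectrum code, and g and the starting values are invented:

```python
import numpy as np
from scipy.optimize import brentq

# A condition x = g(x) with two solutions. Fixed-point iteration converges
# only where |g'(x)| < 1, so it misses the second solution; root-finding on
# the residual F(x) = g(x) - x (the 1D analogue of shooting) finds both.

def g(x):
    return x**2          # fixed points: x = 0 (|g'| = 0) and x = 1 (|g'| = 2)

x = 0.9                  # even when starting nearby, iteration leaves x = 1
for _ in range(50):
    x = g(x)
print("fixed-point iteration:", x)      # converges to 0, misses 1

F = lambda x: g(x) - x
roots = [brentq(F, -0.5, 0.5), brentq(F, 0.5, 1.5)]
print("residual root-finding:", roots)  # recovers both solutions
```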

  9. A Comparative Study for Orthogonal Subspace Projection and Constrained Energy Minimization

    National Research Council Canada - National Science Library

    Du, Qian; Ren, Hsuan; Chang, Chein-I

    2003-01-01

    ...: orthogonal subspace projection (OSP) and constrained energy minimization (CEM). It is shown that they are closely related and essentially equivalent provided that the noise is white with large SNR...

  10. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimension and seeking the minimizer of the total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration.
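
    A hedged sketch of the subspace idea on a simplified stand-in problem: minimizing the Rayleigh quotient of a fixed symmetric matrix rather than the nonlinear KS functional; the matrix H, the subspace choice and the dimensions are assumptions, not the authors' algorithm:

```python
import numpy as np

# Project onto a small subspace spanned by the current iterate and its
# residual, solve the projected problem exactly: the subspace minimizer
# supplies both the search direction and the optimal step length.

rng = np.random.default_rng(5)
n = 100
M = rng.normal(size=(n, n))
H = (M + M.T) / 2                       # stand-in for a fixed Hamiltonian

x = rng.normal(size=n); x /= np.linalg.norm(x)
for _ in range(100):
    r = H @ x - (x @ H @ x) * x         # residual (Rayleigh-quotient gradient)
    V, _ = np.linalg.qr(np.column_stack([x, r]))   # orthonormal 2D subspace
    Hs = V.T @ H @ V                    # projected 2x2 eigenproblem
    w, U = np.linalg.eigh(Hs)
    x = V @ U[:, 0]                     # minimizer within the subspace
print(x @ H @ x, np.linalg.eigvalsh(H)[0])   # iterate vs exact lowest eigenvalue
```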

  11. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem

    2014-04-18

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges on the concentration-compactness method. In the second part, we will treat orthogonal constrained problems for another class of integrands using the density matrices method. © 2014 Springer Basel.

  12. Exploring the Metabolic and Perceptual Correlates of Self-Selected Walking Speed under Constrained and Un-Constrained Conditions

    Directory of Open Access Journals (Sweden)

    David T Godsiff, Shelly Coe, Charlotte Elsworth-Edelsten, Johnny Collett, Ken Howells, Martyn Morris, Helen Dawes

    2018-03-01

    Full Text Available Mechanisms underpinning self-selected walking speed (SSWS) are poorly understood. The present study investigated the extent to which SSWS is related to metabolism, energy cost, and/or perceptual parameters during both normal and artificially constrained walking. Fourteen participants with no pathology affecting gait were tested under standard conditions. Subjects walked on a motorized treadmill at speeds derived from their SSWS as a continuous protocol. RPE scores (CR10) and expired air, used to calculate energy cost (J·kg⁻¹·m⁻¹) and carbohydrate (CHO) oxidation rate (J·kg⁻¹·min⁻¹), were collected during minutes 3-4 at each speed. Eight individuals were re-tested under the same conditions within one week with a hip and knee brace immobilizing their right leg. Deflections in RPE scores (CR10) and CHO oxidation rate (J·kg⁻¹·min⁻¹) were not related to SSWS (five and three people had deflections in the defined range of SSWS in constrained and unconstrained conditions, respectively; p > 0.05). Constrained walking elicited a higher energy cost (J·kg⁻¹·m⁻¹) and slower SSWS (p < 0.05). SSWS did not occur at a minimum energy cost (J·kg⁻¹·m⁻¹) in either condition; however, the size of the minimum energy cost to SSWS disparity was the same (Froude number Fr = 0.09) in both conditions (p = 0.36). Perceptions of exertion can modify walking patterns, and therefore SSWS and metabolism/energy cost are not directly related. Strategies which minimize perceived exertion may enable faster walking in people with altered gait, as our findings indicate they should self-optimize to the same extent under different conditions.

  13. Storage of RF photons in minimal conditions

    Science.gov (United States)

    Cromières, J.-P.; Chanelière, T.

    2018-02-01

    We investigate the minimal conditions needed to coherently store an RF pulse in a material medium. We choose a commercial quartz crystal as the memory support because it is a widely available component with a high Q-factor. Pulse storage is obtained by dynamically varying the light-matter coupling with an analog switch. This parametric driving of the quartz dynamics can alternatively be interpreted as a stopped-light experiment. We obtain an efficiency of 26%, a storage time of 209 μs and a time-to-bandwidth product of 98 by optimizing the pulse temporal shape. The coherent character of the storage is demonstrated. Our goal is to connect different types of memories in the RF and optical domains for quantum information processing. Our motivation is essentially fundamental.

  14. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.

  15. Wormholes minimally violating the null energy condition

    Energy Technology Data Exchange (ETDEWEB)

    Bouhmadi-López, Mariam [Departamento de Física, Universidade da Beira Interior, 6200 Covilhã (Portugal); Lobo, Francisco S N; Martín-Moruno, Prado, E-mail: mariam.bouhmadi@ehu.es, E-mail: fslobo@fc.ul.pt, E-mail: pmmoruno@fc.ul.pt [Centro de Astronomia e Astrofísica da Universidade de Lisboa, Campo Grande, Edifício C8, 1749-016 Lisboa (Portugal)

    2014-11-01

    We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted “the little sibling of the big rip”, where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyse the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalization of the equation of state considered above and analyse the respective stability regions. In particular, we obtain a specific wormhole solution with an asymptotic behaviour corresponding to a global monopole.

  16. Nonlinear Chance Constrained Problems: Optimality Conditions, Regularization and Solvers

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Branda, Martin

    2016-01-01

    Vol. 170, No. 2 (2016), pp. 419-436. ISSN 0022-3239. R&D Projects: GA ČR GA15-00735S. Institutional support: RVO:67985556. Keywords: Chance constrained programming * Optimality conditions * Regularization * Algorithms * Free MATLAB codes. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 1.289, year: 2016. http://library.utia.cas.cz/separaty/2016/MTR/adam-0460909.pdf

  17. Design of a minimally constraining, passively supported gait training exoskeleton: ALEX II.

    Science.gov (United States)

    Winfree, Kyle N; Stegall, Paul; Agrawal, Sunil K

    2011-01-01

    This paper discusses the design of a new, minimally constraining, passively supported gait training exoskeleton known as ALEX II. This device builds on the success and extends the features of the ALEX I device developed at the University of Delaware. Both ALEX (Active Leg EXoskeleton) devices have been designed to supply a controllable torque to a subject's hip and knee joint. The current control strategy makes use of an assist-as-needed algorithm. Following a brief review of previous work motivating this redesign, we discuss the key mechanical features of the new ALEX device. A short investigation was conducted to evaluate the effectiveness of the control strategy and impact of the exoskeleton on the gait of six healthy subjects. This paper concludes with a comparison between the subjects' gait both in and out of the exoskeleton. © 2011 IEEE

  18. Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem

    Directory of Open Access Journals (Sweden)

    Xiaomei Zhang

    2012-01-01

    Full Text Available We present some sufficient global optimality conditions for a special cubic minimization problem with box constraints or binary constraints by extending the global subdifferential approach proposed by V. Jeyakumar et al. (2006). The present conditions generalize the results developed in the work of V. Jeyakumar et al., where a quadratic minimization problem with box constraints or binary constraints was considered. In addition, a special diagonal matrix is constructed, which is used to provide a convenient method for justifying the proposed sufficient conditions. Then, the reformulation of the sufficient conditions follows. It is worth noting that this reformulation is also applicable to the quadratic minimization problem with box or binary constraints considered in the works of V. Jeyakumar et al. (2006) and Y. Wang et al. (2010). Finally, some examples demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain nonconvex cubic minimization problems.

  19. Topology Optimization for Minimizing the Resonant Response of Plates with Constrained Layer Damping Treatment

    Directory of Open Access Journals (Sweden)

    Zhanpeng Fang

    2015-01-01

    Full Text Available A topology optimization method is proposed to minimize the resonant response of plates with constrained layer damping (CLD) treatment under specified broadband harmonic excitations. The topology optimization problem is formulated and the square of the displacement resonant response in the frequency domain at a specified point is taken as the objective function. Two sensitivity analysis methods are investigated and discussed. The derivative of the modal damping ratio is not considered in the conventional sensitivity analysis method. An improved sensitivity analysis method considering the derivative of the modal damping ratio is developed to improve the computational accuracy of the sensitivity. The evolutionary structural optimization (ESO) method is used to search for the optimal layout of CLD material on plates. Numerical examples and experimental results show that the optimal layout of CLD treatment on the plate from the proposed topology optimization, using either the conventional or the improved sensitivity analysis, can reduce the displacement resonant response. However, the optimization method using the improved sensitivity analysis produces a higher modal damping ratio and a smaller displacement resonant response than that using the conventional sensitivity analysis.

  20. Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison

    Science.gov (United States)

    Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.

    1996-11-01

    Constrained energy minimization (CEM) has been applied to mapping the quantitative areal distribution of the mineral alunite in an approximately 1.8 km² area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping which requires only the spectrum of the mineral to be mapped; a priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single-scattering albedo (SSA) spectra. The radiance data were acquired by the 210-channel, 0.4 μm to 2.5 μm airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid: surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures, and thus spectral mixing analyses must take account of nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The alunite maps produced with the SSA and AR spectra are similar, though not identical. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
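
    For reference, the CEM filter itself has a one-line closed form, w = R⁻¹d / (dᵀR⁻¹d), where R is the sample correlation matrix of the pixel spectra and d is the target spectrum. A small synthetic demonstration follows; the spectra, band count and implant fraction are invented, not the Cuprite data:

```python
import numpy as np

# CEM: the filter output w.x is constrained to 1 on the target spectrum d
# while the average output energy over the scene is minimized.

rng = np.random.default_rng(0)
n_bands, n_pixels = 50, 2000
background = rng.normal(0.3, 0.05, (n_pixels, n_bands))
d = np.abs(np.sin(np.linspace(0, 3, n_bands)))   # invented target spectrum
X = background.copy()
X[:100] += 0.5 * d                               # implant the target in 100 pixels

R = X.T @ X / n_pixels                           # sample correlation matrix
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d @ Rinv_d)                        # CEM filter

scores = X @ w
print(scores[:100].mean(), scores[100:].mean())  # target vs background response
```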

  1. Minimization under entropy conditions, with applications in lower bound problems

    International Nuclear Information System (INIS)

    Toft, Joachim

    2004-01-01

    We minimize the functional f ↦ ∫ a f dμ under the entropy condition E(f) = −∫ f log f dμ ≥ E, ∫ f dμ = 1 and f ≥ 0, where E ∈ ℝ is fixed. We prove that the minimum is attained for f = e^{−sa}/∫ e^{−sa} dμ, where s ∈ ℝ is chosen such that E(f) = E. We apply the result to minimization problems in pseudodifferential calculus, where we minimize the harmonic oscillator
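
    A quick numerical check of the stated minimizer on a discrete probability measure, with s found by root-finding on the entropy condition; the measure, the function a, and the target entropy E are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# f = e^{-s a} / integral of e^{-s a} dmu, with s chosen so that the
# entropy condition E(f) = -integral of f log f dmu = E holds.

rng = np.random.default_rng(1)
w = np.full(100, 1.0 / 100)           # weights of the discrete measure mu
a = rng.uniform(0.0, 2.0, 100)        # the function a being integrated

def gibbs(s):
    g = np.exp(-s * a)
    return g / np.sum(g * w)          # density f = e^{-sa} / int e^{-sa} dmu

def entropy(f):
    return -np.sum(f * np.log(f) * w)

E = -0.1                              # target entropy (here E(f) <= 0, maximal at f = 1)
s_star = brentq(lambda s: entropy(gibbs(s)) - E, 0.0, 100.0)
f = gibbs(s_star)
print(np.sum(a * f * w), entropy(f))  # minimal value of int a f dmu, achieved entropy E
```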

  2. Stringent tests of constrained Minimal Flavor Violation through ΔF=2 transitions

    International Nuclear Information System (INIS)

    Buras, Andrzej J.; Girrbach, Jennifer

    2013-01-01

    New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box diagram function that in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP-asymmetry S_{ψK_S} and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF = 2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, allows then together with ΔM_s / ΔM_d and S_{ψK_S} to determine the range of the angle γ in the unitarity triangle independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows to determine the favorite CMFV values of |V_cb|, |V_ub|, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}) as functions of S(v) and γ. The |V_cb|⁴ dependence of ε_K allows to determine |V_cb| for a given S(v) and γ with a higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations between F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}), |V_cb|, |V_ub| and γ that should be tested in the coming years. Typically F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}) have to be lower than their present lattice values, while |V_cb| has to

  3. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem; Markowich, Peter A.; Trabelsi, Saber

    2014-01-01

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges

  4. Stringent tests of constrained Minimal Flavor Violation through {Delta}F=2 transitions

    Energy Technology Data Exchange (ETDEWEB)

    Buras, Andrzej J. [TUM-IAS, Garching (Germany); Girrbach, Jennifer [TUM, Physik Department, Garching (Germany)

    2013-09-15

    New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box diagram function that in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP-asymmetry S_{ψK_S} and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF = 2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, allows then together with ΔM_s / ΔM_d and S_{ψK_S} to determine the range of the angle γ in the unitarity triangle independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows to determine the favorite CMFV values of |V_cb|, |V_ub|, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}) as functions of S(v) and γ. The |V_cb|⁴ dependence of ε_K allows to determine |V_cb| for a given S(v) and γ with a higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations

  5. Varietal improvement of irrigated rice under minimal water conditions

    International Nuclear Information System (INIS)

    Abdul Rahim Harun; Marziah Mahmood; Sobri Hussein

    2010-01-01

    Varietal improvement of irrigated rice under minimal water conditions is a research project under the Program Research of Sustainable Production of High Yielding Irrigated Rice under Minimal Water Input (IRPA-01-01-03-0000/PR0068/0504). Several agencies were involved in this project, such as the Malaysian Nuclear Agency (MNA), the Malaysian Agricultural Research and Development Institute (MARDI), Universiti Putra Malaysia (UPM) and the Ministry of Agriculture (MOA). The project started in early 2004 with an approved IRPA fund of RM 275,000.00 for 3 years. The main objective of the project is to generate superior genotypes for minimal water requirements through induced mutation techniques. The cultivated rice Oryza sativa cv. MR219, treated with gamma radiation at 300 and 400 Gray, was used in the experiment. Two hundred grams of M2 seeds from each dose were screened under minimal water stress in a greenhouse at MARDI Seberang Perai. Five hundred panicles with well-filled grains were selected for paddy field screening under a simulated precise water-stress regime. Thirty-eight potential lines with the required adaptive traits were selected in M3. After several series of selection, 12 promising mutant lines were observed to be tolerant to minimal water stress, and two promising mutant lines, designated MR219-4 and MR219-9, were selected for further testing under several stress environments. (author)

  6. Constraining non-minimally coupled tachyon fields by the Noether symmetry

    International Nuclear Information System (INIS)

    De Souza, Rudinei C; Kremer, Gilberto M

    2009-01-01

    A model for a homogeneous and isotropic Universe whose gravitational sources are a pressureless matter field and a tachyon field non-minimally coupled to the gravitational field is analyzed. The Noether symmetry is used to find expressions for the potential density and for the coupling function, and it is shown that both must be exponential functions of the tachyon field. Two cosmological solutions are investigated: (i) for the early Universe whose only source of gravitational field is a non-minimally coupled tachyon field which behaves as an inflaton and leads to an exponential accelerated expansion and (ii) for the late Universe whose gravitational sources are a pressureless matter field and a non-minimally coupled tachyon field which plays the role of dark energy and is responsible for the decelerated-accelerated transition period.

  7. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    International Nuclear Information System (INIS)

    Núñez, Darío; Zavala, Jesús; Nellen, Lukas; Sussman, Roberto A; Cabral-Rosetti, Luis G; Mondragón, Myriam

    2008-01-01

    We derive an expression for the entropy of a dark matter halo described using a Navarro–Frenk–White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tan β, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tan β are not favored; only for tan β ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.

  8. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Nunez, Dario; Zavala, Jesus; Nellen, Lukas; Sussman, Roberto A [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (ICN-UNAM), AP 70-543, Mexico 04510 DF (Mexico); Cabral-Rosetti, Luis G [Departamento de Posgrado, Centro Interdisciplinario de Investigacion y Docencia en Educacion Tecnica (CIIDET), Avenida Universidad 282 Pte., Col. Centro, Apartado Postal 752, C. P. 76000, Santiago de Queretaro, Qro. (Mexico); Mondragon, Myriam, E-mail: nunez@nucleares.unam.mx, E-mail: jzavala@nucleares.unam.mx, E-mail: jzavala@shao.ac.cn, E-mail: lukas@nucleares.unam.mx, E-mail: sussman@nucleares.unam.mx, E-mail: lgcabral@ciidet.edu.mx, E-mail: myriam@fisica.unam.mx [Instituto de Fisica, Universidad Nacional Autonoma de Mexico (IF-UNAM), Apartado Postal 20-364, 01000 Mexico DF (Mexico); Collaboration: For the Instituto Avanzado de Cosmologia, IAC

    2008-05-15

    We derive an expression for the entropy of a dark matter halo described using a Navarro-Frenk-White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tan β, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tan β are not favored; only for tan β ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.

  9. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Full Text Available Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.

  10. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization

    International Nuclear Information System (INIS)

    Sidky, Emil Y; Pan Xiaochuan

    2008-01-01

    An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories
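
    A schematic, much-reduced version of the ASD-POCS loop on a 1D compressive-sensing toy problem; the matrix A merely stands in for the cone-beam projector, and the step-size schedule and iteration counts are guesses rather than the authors' tuned values:

```python
import numpy as np

# Alternate (i) POCS steps pushing toward data consistency plus
# non-negativity with (ii) steepest descent on the image TV, with the TV
# step size adapted to the size of the POCS update.

rng = np.random.default_rng(2)
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0        # piecewise-constant object
A = rng.normal(size=(32, n)) / np.sqrt(n)        # stand-in "projection" matrix
b = A @ x_true

def tv_grad(x, eps=1e-8):
    d = np.diff(x)
    s = d / np.sqrt(d**2 + eps)                  # smoothed sign of finite differences
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

x = np.zeros(n)
beta = 1.0                                       # TV step-size scale (guessed schedule)
for it in range(200):
    x_prev = x.copy()
    for i in range(A.shape[0]):                  # one ART sweep (POCS onto data)
        ai = A[i]
        x += (b[i] - ai @ x) / (ai @ ai) * ai
    x = np.maximum(x, 0.0)                       # POCS onto the non-negativity set
    dp = np.linalg.norm(x - x_prev) + 1e-12      # size of the POCS update
    for _ in range(10):                          # TV steepest-descent steps
        g = tv_grad(x)
        x -= beta * dp * g / (np.linalg.norm(g) + 1e-12)
    beta *= 0.995                                # slowly relax the TV steps

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```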

  11. Portfolio balancing and risk adjusted values under constrained budget conditions

    International Nuclear Information System (INIS)

    MacKay, J.A.; Lerche, I.

    1996-01-01

    For a given hydrocarbon exploration opportunity, the influences of value, cost, success probability and corporate risk tolerance provide an optimal working interest that should be taken in the opportunity in order to maximize the risk adjusted value. When several opportunities are available, but when the total budget is insufficient to take optimal working interest in each, an analytic procedure is given for optimizing the risk adjusted value of the total portfolio; the relevant working interests are also derived based on a cost exposure constraint. Several numerical illustrations are provided to exhibit the use of the method under different budget conditions, and with different numbers of available opportunities. When value, cost, success probability, and risk tolerance are uncertain for each and every opportunity, the procedure is generalized to allow determination of probable optimal risk adjusted value for the total portfolio and, at the same time, the range of probable working interest that should be taken in each opportunity is also provided. The result is that the computations of portfolio balancing can be done quickly in either deterministic or probabilistic manners on a small calculator, thereby providing rapid assessments of opportunities and their worth to a corporation. (Author)

  12. Minimization for conditional simulation: Relationship to optimal transport

    Science.gov (United States)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
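
    A sketch of one such minimization-based conditional sample in the linear-Gaussian setting, where the text notes the mapping is exact; this follows the randomized-maximum-likelihood / perturbed-observation recipe, with invented covariances and observation operator:

```python
import numpy as np

# One RML sample: draw a prior sample and a perturbed observation, then
# minimize ||G m - d0||^2_{C_d^{-1}} + ||m - m0||^2_{C_m^{-1}}.

rng = np.random.default_rng(3)
m_dim, d_dim = 4, 2
C_m = np.eye(m_dim)                    # prior covariance
C_d = 0.1 * np.eye(d_dim)              # observation-error covariance
G = rng.normal(size=(d_dim, m_dim))    # linear observation operator
m_prior = np.zeros(m_dim)
m_true = rng.multivariate_normal(m_prior, C_m)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(d_dim), C_d)

def rml_sample():
    m0 = rng.multivariate_normal(m_prior, C_m)   # prior draw
    d0 = rng.multivariate_normal(d_obs, C_d)     # perturbed observation
    # quadratic cost has closed-form minimizer via the normal equations
    H = G.T @ np.linalg.solve(C_d, G) + np.linalg.inv(C_m)
    rhs = G.T @ np.linalg.solve(C_d, d0) + np.linalg.solve(C_m, m0)
    return np.linalg.solve(H, rhs)

samples = np.array([rml_sample() for _ in range(2000)])
print(samples.mean(axis=0))            # approximates the posterior mean here
```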

  13. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R_0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9) reported an increase of 70% in R_0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (R_v), which consists of partial derivatives of R_v with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of n = 2 sub-populations are obtained, and bounds for the optimal solutions are derived for n > 2 sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both R_0 and R_v are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.

  14. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  15. Uniqueness conditions for constrained three-way factor decompositions with linearly dependent loadings

    NARCIS (Netherlands)

    Stegeman, Alwin; De Almeida, Andre L. F.

    2009-01-01

    In this paper, we derive uniqueness conditions for a constrained version of the parallel factor (Parafac) decomposition, also known as canonical decomposition (Candecomp). Candecomp/Parafac (CP) decomposes a three-way array into a prespecified number of outer product arrays. The constraint is that

  16. HIFU scattering by the ribs: constrained optimisation with a complex surface impedance boundary condition

    Science.gov (United States)

    Gélat, P.; ter Haar, G.; Saffari, N.

    2014-04-01

    High intensity focused ultrasound (HIFU) enables highly localised, non-invasive tissue ablation, and its efficacy has been demonstrated in the treatment of a range of cancers, including those of the kidney, prostate and breast. HIFU offers the ability to treat deep-seated tumours locally, and potentially has fewer side effects than more established treatment modalities such as resection, chemotherapy and ionising radiation. There remain, however, a number of significant challenges which currently hinder its widespread clinical application. One of these challenges is the need to transmit sufficient energy through the ribcage to ablate tissue at the required foci whilst minimising the formation of side lobes and sparing healthy tissue. Ribs both absorb and reflect ultrasound strongly. This sometimes results in overheating of bone and overlying tissue during treatment, leading to skin burns. Successful treatment of a patient with tumours in the upper abdomen therefore requires a thorough understanding of the way acoustic and thermal energy is deposited. Previously, a boundary element (BE) approach based on a Generalised Minimal Residual (GMRES) implementation of the Burton-Miller formulation was developed to predict the field of a multi-element HIFU array scattered by human ribs, the topology of which was obtained from CT scan data [1]. Dissipative mechanisms inside the propagating medium have since been implemented, together with a complex surface impedance condition at the surface of the ribs. A reformulation of the boundary element equations as a constrained optimisation problem was carried out to determine the complex surface velocities of a multi-element HIFU array which generated the acoustic pressure field that best fitted a required acoustic pressure distribution in a least-squares sense. This was done whilst ensuring that an acoustic dose rate parameter at the surface of the ribs was kept below a specified threshold. The methodology was tested at an

  17. Learning a constrained conditional random field for enhanced segmentation of fallen trees in ALS point clouds

    Science.gov (United States)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2018-06-01

    In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.

  18. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    International Nuclear Information System (INIS)

    Choudhury, Sayantan

    2014-01-01

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales, which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for the local type of non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence, by using a multi-parameter scan, I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for a speed of sound 0.02 ≤ c_s ≤ 1 and a tensor-to-scalar ratio 10⁻²² ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints, I fix the lower and upper bounds of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09.

  19. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sayantan [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 108 (India)

    2014-04-15

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales, which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for the local type of non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence, by using a multi-parameter scan, I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for a speed of sound 0.02 ≤ c_s ≤ 1 and a tensor-to-scalar ratio 10⁻²² ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints, I fix the lower and upper bounds of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09.

  20. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    Energy Technology Data Exchange (ETDEWEB)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
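
    The core linear-algebra step, finding the null space of the constrained generator, can be shown on a toy three-state fast subsystem; the generator entries and the slow propensities below are invented for illustration:

```python
import numpy as np

# The null space of the constrained fast generator Q yields the conditional
# equilibrium of the fast variables, which is then used to average the
# slow-reaction propensities into an entry of the effective generator.

Q = np.array([[-2.0, 2.0, 0.0],       # toy generator (rows sum to zero)
              [1.0, -3.0, 2.0],
              [0.0, 4.0, -4.0]])

# Stationary distribution: left null vector of Q (null space of Q^T)
vals, vecs = np.linalg.eig(Q.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals))])
pi = pi / pi.sum()

slow_propensity = np.array([0.0, 1.0, 3.0])  # hypothetical slow rates per fast state
effective_rate = pi @ slow_propensity         # averaged effective rate
print(pi, effective_rate)
```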

  1. Minimization of heat slab nodes with higher order boundary conditions

    International Nuclear Information System (INIS)

    Solbrig, C.W.

    1992-01-01

    The accuracy of a numerical solution can be limited by the numerical approximation to the boundary conditions rather than the accuracy of the equations which describe the interior. The study presented in this paper compares the results from two different numerical formulations of the convective boundary condition on the face of a heat transfer slab. The standard representation of the boundary condition in a test problem yielded an unacceptable error even when the heat transfer slab was partitioned into over 300 nodes. A higher order boundary condition representation was obtained by using a second order approximation for the first derivative at the boundary and combining it with the general equation used for inner nodes. This latter formulation produced reasonable results when as few as ten nodes were used
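
    A small sketch of the comparison described above, for a 1D slab with a heat source, a fixed temperature at one face and a convective condition at the other; all material constants are arbitrary, and since the exact solution is quadratic the second-order boundary stencil should be essentially exact:

```python
import numpy as np

# Steady slab: k u'' = -q on (0, L), u(0) = 0, -k u'(L) = h u(L).
# Compare a first-order vs a second-order one-sided derivative in the
# convective boundary condition.

k, h, q, L, N = 1.0, 2.0, 1.0, 1.0, 9             # ten nodes, echoing the text
dx = L / N
x = np.linspace(0.0, L, N + 1)
u_exact = -q * x**2 / (2 * k) + (q * L + h * q * L**2 / (2 * k)) / (k + h * L) * x

def solve(second_order):
    A = np.zeros((N + 1, N + 1)); b = np.zeros(N + 1)
    A[0, 0] = 1.0                                  # u(0) = 0
    for j in range(1, N):
        A[j, j-1:j+2] = [1.0, -2.0, 1.0]           # interior: u'' = -q/k
        b[j] = -q * dx**2 / k
    if second_order:   # -(3u_N - 4u_{N-1} + u_{N-2})/(2 dx) = (h/k) u_N
        A[N, N-2:N+1] = [1.0, -4.0, 3.0 + 2.0 * dx * h / k]
    else:              # -(u_N - u_{N-1})/dx = (h/k) u_N
        A[N, N-1:N+1] = [-1.0, 1.0 + dx * h / k]
    return np.linalg.solve(A, b)

for flag in (False, True):
    err = np.max(np.abs(solve(flag) - u_exact))
    print("second order:" if flag else "first order: ", err)
```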

  2. Remaining useful life prediction based on noisy condition monitoring signals using constrained Kalman filter

    International Nuclear Information System (INIS)

    Son, Junbo; Zhou, Shiyu; Sankavaram, Chaitanya; Du, Xinyu; Zhang, Yilu

    2016-01-01

    In this paper, a statistical prognostic method to predict the remaining useful life (RUL) of individual units based on noisy condition monitoring signals is proposed. The prediction accuracy of existing data-driven prognostic methods depends on the capability of accurately modeling the evolution of condition monitoring (CM) signals. Therefore, it is inevitable that the RUL prediction accuracy depends on the amount of random noise in the CM signals. When signals are contaminated by a large amount of random noise, RUL prediction even becomes infeasible in some cases. To mitigate this issue, a robust RUL prediction method based on a constrained Kalman filter is proposed. The proposed method models the CM signals subject to a set of inequality constraints so that satisfactory prediction accuracy can be achieved regardless of the noise level of the signal evolution. The advantageous features of the proposed RUL prediction method are demonstrated by both a numerical study and a case study with real-world data from automotive lead-acid batteries. - Highlights: • A computationally efficient constrained Kalman filter is proposed. • The proposed filter is integrated into an online failure prognosis framework. • A set of proper constraints significantly improves the failure prediction accuracy. • Promising results are reported in the application of battery failure prognosis.
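
    A minimal sketch of a projection-type constrained Kalman filter, one common way to impose inequality constraints and presumably in the spirit of (though not identical to) the authors' formulation; the two-state degradation model and noise levels are invented:

```python
import numpy as np

# State [level, rate] with the level observed; after each standard Kalman
# update, the state is projected onto the constraint set, here rate >= 0.

rng = np.random.default_rng(4)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # degradation level grows by the rate
H = np.array([[1.0, 0.0]])               # observe the degradation level only
Q = 1e-4 * np.eye(2); R = np.array([[0.05]])

x = np.array([0.0, 0.05]); P = np.eye(2)
true = np.array([0.0, 0.03])
for t in range(100):
    true = F @ true
    z = H @ true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    # predict
    x = F @ x; P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    x[1] = max(x[1], 0.0)                # projection onto the inequality constraint

print(x, true)                            # filtered state vs. truth
```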

  3. Adler's Zero Condition and a Minimally Symmetric Higgs Boson.

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Long ago, Coleman, Callan, Wess and Zumino (CCWZ) constructed the nonlinear sigma model lagrangian based on a general coset G/H. I discuss how the CCWZ lagrangian can be (re)derived using only IR data, by imposing Adler's zero condition in conjunction with the unbroken symmetry group H. Applying the technique to the case of composite Higgs models allows one to derive a universal lagrangian for all models where the Higgs arises as a pseudo-Nambu-Goldstone boson, up to symmetry-breaking effects.

  4. Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities

    International Nuclear Information System (INIS)

    Flemming, Jens; Hofmann, Bernd

    2011-01-01

    In this paper, we elucidate the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions already known in Hilbert spaces to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances.

  5. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thus extracting spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering, based on a second-order conditional random field. Moreover, the globally optimal inference of each pixel's classified posterior probability can be obtained using loopy belief propagation. Experiments show that the proposed algorithm can effectively maintain the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.

  6. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be computed easily and quickly through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The proposed method is qualitatively and quantitatively evaluated on simulated and real data to validate its efficiency and feasibility. Overall, it exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
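
    One common form of the generalized p-shrinkage mapping (due to Chartrand; it reduces to ordinary soft-thresholding at p = 1) is sketched below; the exact parametrization used in the paper may differ.

```python
# Generalized p-shrinkage, applied elementwise.
import numpy as np

def p_shrink(x, lam, p):
    """shrink(x) = sign(x) * max(|x| - lam^(2-p) * |x|^(p-1), 0)."""
    mag = np.abs(x)
    shrunk = np.maximum(mag - lam ** (2.0 - p) * (mag + 1e-12) ** (p - 1.0),
                        0.0)  # small epsilon avoids 0**negative warnings
    return np.sign(x) * shrunk

x = np.linspace(-2, 2, 9)
print(p_shrink(x, lam=0.5, p=1.0))   # ordinary soft thresholding
print(p_shrink(x, lam=0.5, p=0.5))   # sparser, less bias for large |x|
```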

  7. Constraining reconnection region conditions using imaging and spectroscopic analysis of a coronal jet

    Science.gov (United States)

    Brannon, Sean; Kankelborg, Charles

    2017-08-01

    Coronal jets typically appear as thin, collimated structures in EUV and X-ray wavelengths, and are understood to be initiated by magnetic reconnection in the lower corona or upper chromosphere. Plasma that is heated and accelerated upward into coronal jets may therefore carry indirect information on conditions in the reconnection region and current sheet located at the jet base. On 2017 October 14, the Interface Region Imaging Spectrograph (IRIS) and Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO/AIA) observed a series of jet eruptions originating from NOAA AR 12599. The jet structure has a length-to-width ratio that exceeds 50, and remains remarkably straight throughout its evolution. Several times during the observation, bright blobs of plasma are seen to erupt upward, ascending and subsequently descending along the structure. These blobs are cotemporal with footpoint and arcade brightenings, which we believe indicates multiple episodes of reconnection at the base of the structure. Through imaging and spectroscopic analysis of jet and footpoint plasma we determine a number of properties, including the line-of-sight inclination, the temperature and density structure, and the lift-off velocities and accelerations of jet eruptions. We use these properties to constrain the geometry of the jet structure and the conditions in the reconnection region.

  8. Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization

    DEFF Research Database (Denmark)

    Rose, Sean; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    Purpose: The authors develop and investigate iterative image reconstruction algorithms based on data-discrepancy minimization with a total-variation (TV) constraint. The various algorithms are derived with different data-discrepancy measures reflecting the maximum likelihood (ML) principle. Simulations demonstrate the iterative algorithms and the resulting image statistical properties for low-dose CT data acquired with sparse projection view angle sampling. Of particular interest is to quantify the improvement of image statistical properties by use of the ML data fidelity term. Methods: An incremental algorithm framework is developed for this purpose. The instances of the incremental algorithms are derived for solving optimization problems including a data fidelity objective function combined with a constraint on the image TV. For the data fidelity term the authors compare application…

  9. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
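
    In the finite state-action case, the linear-programming (occupation-measure) formulation that the paper studies in Borel spaces takes the concrete form sketched below; the transition kernel, costs, and budget here are synthetic.

```python
# LP formulation of a constrained discounted MDP (finite case):
#   minimize  sum_{s,a} rho(s,a) c(s,a)
#   s.t.  sum_a rho(s',a) - gamma * sum_{s,a} P(s'|s,a) rho(s,a) = mu(s'),
#         sum_{s,a} rho(s,a) d(s,a) <= budget,   rho >= 0.
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, :] = P(.|s, a)
c = rng.random((nS, nA))                        # cost to be minimized
d = rng.random((nS, nA))                        # constrained secondary cost
mu = np.full(nS, 1.0 / nS)                      # initial distribution

# Balance (occupation-measure) equality constraints.
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = float(sp == s) - gamma * P[s, a, sp]

# Feasible budget: 10% above the minimal attainable discounted d-cost.
aux = linprog(d.ravel(), A_eq=A_eq, b_eq=mu, bounds=(0, None))
budget = 1.1 * aux.fun

res = linprog(c.ravel(), A_ub=d.ravel()[None, :], b_ub=[budget],
              A_eq=A_eq, b_eq=mu, bounds=(0, None))
rho = res.x.reshape(nS, nA)                     # occupation measure
policy = rho / rho.sum(axis=1, keepdims=True)   # stationary (randomized)
print("optimal constrained stationary policy:\n", policy.round(3))
```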

  10. Right-Left Approach and Reaching Arm Movements of 4-Month Infants in Free and Constrained Conditions

    Science.gov (United States)

    Morange-Majoux, Francoise; Dellatolas, Georges

    2010-01-01

    Recent theories on the evolution of language (e.g. Corballis, 2009) emphasize the interest of early manifestations of manual laterality and manual specialization in human infants. In the present study, left- and right-hand movements towards a midline object were observed in 24 infants aged 4 months in a constrained condition, in which the hands…

  11. Inclusions in diamonds constrain thermo-chemical conditions during Mesozoic metasomatism of the Kaapvaal cratonic mantle

    Science.gov (United States)

    Weiss, Yaakov; Navon, Oded; Goldstein, Steven L.; Harris, Jeff W.

    2018-06-01

    Fluid/melt inclusions in diamonds, which were encapsulated during a metasomatic event and over a short period of time, are isolated from their surrounding mantle, offering the opportunity to constrain changes in the sub-continental lithospheric mantle (SCLM) that occurred during individual thermo-chemical events, as well as the composition of the fluids involved and their sources. We have analyzed a suite of 8 microinclusion-bearing diamonds from the Group I De Beers Pool kimberlites, South Africa, using FTIR, EPMA and LA-ICP-MS. Seven of the diamonds trapped incompatible-element-enriched saline high-density fluids (HDFs), carry peridotitic mineral microinclusions, and hold substitutional nitrogen almost exclusively in A-centers. This low aggregation state of nitrogen indicates a short mantle residence time and/or low ambient mantle temperature for these diamonds. A short residence time is favored because elevated thermal conditions prevailed in the South African lithosphere during and following the Karoo flood basalt volcanism at ∼180 Ma; thus the saline metasomatism must have occurred close to the time of kimberlite eruption at ∼85 Ma. Another diamond encapsulated incompatible-element-enriched silicic HDFs and has 25% of its nitrogen content residing in B-centers, implying formation during an earlier and different metasomatic event that likely relates to the Karoo magmatism at ca. 180 Ma. Thermometry of mineral microinclusions in the diamonds carrying saline HDFs, based on Mg-Fe exchange between garnet-orthopyroxene (Opx)/clinopyroxene (Cpx)/olivine and on the Opx-Cpx thermometer, yields temperatures between 875 and 1080 °C at 5 GPa. These temperatures overlap with conditions recorded by touching inclusion pairs in diamonds from the De Beers Pool kimberlites, which represent the ambient mantle conditions just before eruption, and are altogether 150-250 °C lower than the P-T gradients recorded by peridotite xenoliths from the same locality. Oxygen fugacity (fO2

  12. Analyses of an air conditioning system with entropy generation minimization and entransy theory

    International Nuclear Information System (INIS)

    Wu Yan-Qiu; Cai Li; Wu Hong-Juan

    2016-01-01

    In this paper, based on the generalized heat transfer law, an air conditioning system is analyzed with entropy generation minimization and the entransy theory. Taking the coefficient of performance (COP) and the heat flow rate Q_out released into the room as the optimization objectives, we discuss the applicability of entropy generation minimization and entransy theory to the optimizations. Five numerical cases are presented. Combining the numerical results and theoretical analyses, we conclude that the optimization applicability of the two theories is conditional. If Q_out is the optimization objective, a larger entransy increase rate always leads to a larger Q_out, while a smaller entropy generation rate does not. If we take COP as the optimization objective, neither entropy generation minimization nor the concept of entransy increase is always applicable. Furthermore, we find that the concept of entransy dissipation is not applicable for the discussed cases.

  13. Strain development in a filled epoxy resin curing under constrained and unconstrained conditions as assessed by Fibre Bragg Grating sensors

    Directory of Open Access Journals (Sweden)

    2007-04-01

    The influence of adhesion to the mould wall on the released strain of a highly filled anhydride-cured epoxy resin (EP), hardened in an aluminium mould under constrained and unconstrained conditions, was investigated. The shrinkage-induced strain was measured by a fibre-optic sensing technique: Fibre Bragg Grating (FBG) sensors were embedded into the curing EP placed in a cylindrical mould cavity. The cure-induced strain signals were detected in both the vertical and horizontal directions during isothermal curing at 75 °C for 1000 minutes. A large difference in the strain signal between the two directions was detected for the different adhesion conditions. Under the non-adhering condition the horizontal and vertical strain-time traces were practically identical, resulting in a final compressive strain of about 3200 ppm, which is evidence of free, isotropic shrinkage. Under the constrained condition, however, horizontal shrinkage in the EP was prevented by its adhesion to the mould wall, so the curing material shrank preferentially in the vertical direction. This resulted in much higher released compressive strain signals in the vertical (10430 ppm) than in the horizontal (2230 ppm) direction. The constrained-cured EP resins are under internal stresses. Qualitative information on the residual stress state in the moulding was deduced by exploiting the birefringence of the EP.

  14. Conditioned pain modulation is minimally influenced by cognitive evaluation or imagery of the conditioning stimulus

    Directory of Open Access Journals (Sweden)

    Bernaba M

    2014-11-01

    Mario Bernaba, Kevin A Johnson, Jiang-Ti Kong, Sean Mackey; Stanford Systems Neuroscience and Pain Laboratory, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA. Purpose: Conditioned pain modulation (CPM) is an experimental approach for probing endogenous analgesia by which one painful stimulus (the conditioning stimulus) may inhibit the perceived pain of a subsequent stimulus (the test stimulus). Animal studies suggest that CPM is mediated by a spino-bulbo-spinal loop using objective measures such as neuronal firing. In humans, pain ratings are often used as the end point. Because pain self-reports are subject to cognitive influences, we tested whether cognitive factors would impact CPM results in healthy humans. Methods: We conducted a within-subject, crossover study of healthy adults to determine the extent to which CPM is affected by (1) threatening and reassuring evaluation and (2) imagery alone of a cold conditioning stimulus. We used a heat stimulus individualized to 5/10 on a visual analog scale as the test stimulus and computed the magnitude of CPM by subtracting the postconditioning rating from the baseline pain rating of the heat stimulus. Results: We found that although evaluation can increase the pain rating of the conditioning stimulus, it did not significantly alter the magnitude of CPM. We also found that imagery of cold pain alone did not result in a statistically significant CPM effect. Conclusion: Our results suggest that CPM is primarily dependent on sensory input and that the cortical processes of evaluation and imagery have little impact on CPM. These findings lend support for CPM as a useful tool for probing endogenous analgesia through subcortical mechanisms. Keywords: conditioned pain modulation, endogenous analgesia, evaluation, imagery, cold pressor test, CHEPS, contact heat-evoked potential stimulator

  15. A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags

    Directory of Open Access Journals (Sweden)

    Angela Hsiang-Ling Chen

    2016-09-01

    Modeling and optimizing organizational processes, such as those represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improves outcomes. Based on assumptions and simplifications, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks, mainly because many of the proposed approaches are either based on unrealistic or oversimplified scenarios or propose solution procedures that are not easily applicable, or even feasible, in real-life situations. In this study, we solve a more true-to-life and complex model, the multi-mode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified by its reliance only on information that is available for every project regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between the lower and upper bounds. In addition, in many instances the lower bound obtained was equal to the best-known optimum.

  16. Using finite element modelling to examine the flow process and temperature evolution in HPT under different constraining conditions

    International Nuclear Information System (INIS)

    Pereira, P H R; Langdon, T G; Figueiredo, R B; Cetlin, P R

    2014-01-01

    High-pressure torsion (HPT) is a metal-working technique used to impose severe plastic deformation on disc-shaped samples under high hydrostatic pressures. Different HPT facilities have been developed, and they may be divided into three distinct categories depending upon the configuration of the anvils and the restriction imposed on the lateral flow of the samples. In the present paper, finite element simulations were performed to compare the flow process and the temperature, strain and hydrostatic stress distributions under unconstrained, quasi-constrained and constrained conditions. It is shown that there are distinct strain distributions in the samples depending on the facility configuration, and a similar trend in the temperature rise of the HPT workpieces.

  17. Local climatic conditions constrain soil yeast diversity patterns in Mediterranean forests, woodlands and scrub biome.

    Science.gov (United States)

    Yurkov, Andrey M; Röhl, Oliver; Pontes, Ana; Carvalho, Cláudia; Maldonado, Cristina; Sampaio, José Paulo

    2016-02-01

    Soil yeasts represent a poorly known fraction of the soil microbiome due to limited ecological surveys. Here, we provide the first comprehensive inventory of cultivable soil yeasts in a Mediterranean ecosystem, the leading biodiversity hotspot for vascular plants and vertebrates in Europe. We isolated and identified soil yeasts from forested sites of Serra da Arrábida Natural Park (Portugal), representing the Mediterranean forests, woodlands and scrub biome. Cultivation experiments and the subsequent species richness estimations suggest the highest species richness values reported to date, totalling 57 and 80 yeast taxa, respectively. These values far exceed those reported for other forest soils in Europe. Furthermore, we assessed the response of yeast diversity to microclimatic environmental factors in biotopes composed of the same plant species but showing a gradual change from humid broadleaf forests to dry maquis. We observed that forest properties constrained by precipitation level had a strong impact on yeast diversity and community structure; lower precipitation resulted in an increased number of rare species and decreased evenness values. In conclusion, the structure of soil yeast communities mirrors the environmental factors that affect aboveground phytocenoses, aboveground biomass and plant projective cover.

  18. Off-wall boundary conditions for turbulent flows obtained from buffer-layer minimal flow units

    Science.gov (United States)

    Garcia-Mayoral, Ricardo; Pierce, Brian; Wallace, James

    2012-11-01

    There is strong evidence that the transport processes in the buffer region of wall-bounded turbulence are common across various flow configurations, even in the embryonic turbulence of transition (Park et al., Phys. Fluids 24). We use this premise to develop off-wall boundary conditions for turbulent simulations. Boundary conditions are constructed from DNS databases using periodic minimal flow units and reduced-order modeling. The DNS data were taken from a channel at Reτ = 400 and a zero-pressure-gradient transitional boundary layer (Sayadi et al., submitted to J. Fluid Mech.). Both types of boundary conditions were first tested on a DNS of the core of the channel flow, with the aim of extending their application to LES and to spatially evolving flows. 2012 CTR Summer Program.

  19. Optimizing cutting conditions on sustainable machining of aluminum alloy to minimize power consumption

    Science.gov (United States)

    Nur, Rusdi; Suyuti, Muhammad Arsyad; Susanto, Tri Agus

    2017-06-01

    Aluminum is widely utilized in the industrial sector. It has several advantages, including good flexibility and formability, high corrosion resistance, and high electrical and heat conductivity. Despite these characteristics, however, pure aluminum is rarely used because it lacks strength, so most of the aluminum used in industry is in alloy form. Sustainable machining can be considered to link the transformation of input materials and energy/power demand into finished goods. Machining processes are responsible for environmental effects owing to their power consumption. The cutting conditions have been optimized to minimize the cutting power, which is the power consumed for cutting. This paper presents an experimental study of sustainable machining of an Al-11%Si base alloy operated without any cooling system to assess the capacity for reducing power consumption. The cutting force was measured and the cutting power calculated; both were analyzed and modeled using a central composite design (CCD). The results indicate that the cutting speed affects machining performance and that optimum cutting conditions have to be determined, while sustainable machining can be pursued in terms of minimizing power consumption and cutting force. The model developed in this study can be used for process evaluation and optimization to determine optimal cutting conditions for the performance of the whole process.
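
    The kind of response-surface model used with a central composite design can be sketched as a full quadratic fit of cutting power in cutting speed and feed rate; the data, variable ranges, and coefficients below are synthetic stand-ins, not the paper's measurements.

```python
# Illustrative CCD-style response-surface fit by ordinary least squares.
import numpy as np

rng = np.random.default_rng(4)
v = rng.uniform(100, 300, 20)        # cutting speed, m/min (assumed range)
f = rng.uniform(0.05, 0.3, 20)       # feed rate, mm/rev (assumed range)
power = 50 + 0.8 * v + 900 * f + 2.5 * v * f + rng.normal(0, 5, 20)  # synthetic

# Full quadratic model: 1, v, f, v*f, v^2, f^2
X = np.column_stack([np.ones_like(v), v, f, v * f, v**2, f**2])
beta, *_ = np.linalg.lstsq(X, power, rcond=None)
print("fitted quadratic coefficients:", beta.round(3))
# The fitted surface can then be minimized over the feasible cutting window
# to pick conditions that minimize power consumption.
```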

  20. Maintaining reduced noise levels in a resource-constrained neonatal intensive care unit by operant conditioning.

    Science.gov (United States)

    Ramesh, A; Denzil, S B; Linda, R; Josephine, P K; Nagapoornima, M; Suman Rao, P N; Swarna Rekha, A

    2013-03-01

    To evaluate the efficacy of operant conditioning in sustaining reduced noise levels in the neonatal intensive care unit (NICU). Quasi-experimental study on quality of care. Level III NICU of a teaching hospital in south India. 26 staff employed in the NICU (7 doctors, 13 nursing staff and 6 nursing assistants). Operant conditioning of staff activity for 6 months; this method uses positive and negative reinforcement to condition the staff to modify noise-generating activities. Noise levels in A-weighted decibels [dB(A)], which account for noise audible to human ears, were compared before conditioning with levels at 18 and 24 months after conditioning. Operant conditioning for 6 months sustains the reduced noise levels to within 62 dB in the ventilator room (95% CI: 60.4-62.2) and isolation room (95% CI: 55.8-61.5). In the preterm room, noise can be maintained within 52 dB (95% CI: 50.8-52.6). This effect is statistically significant in all the rooms at 18 months (P = 0.001). At 24 months post conditioning there is a significant rebound of noise levels by 8.6, 6.7 and 9.9 dB in the ventilator, isolation and preterm rooms, respectively (P = 0.001). Operant conditioning for 6 months was effective in sustaining reduced noise levels: at 18 months post conditioning, the noise levels were maintained within 62 dB(A), 60 dB(A) and 52 dB(A) in the ventilator, isolation and preterm rooms, respectively. Conditioning needs to be repeated at 12 months in the ventilator room and at 18 months in the other rooms.

  1. Minimizing the ill-conditioning in the analysis by gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Halisson Alberdan C.; Melo, Silvio de Barros; Dantas, Carlos; Lima, Emerson Alexandre; Silva, Ricardo Martins; Moreira, Icaro Valgueiro M., E-mail: hacc@cin.ufpe.br, E-mail: sbm@cin.ufpe.br, E-mail: rmas@cin.ufpe.br, E-mail: ivmm@cin.ufpe.br, E-mail: ccd@ufpe.br, E-mail: eal@cin.ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Meric, Ilker, E-mail: lker.Meric@ift.uib.no [University Of Bergen (Norway)

    2015-07-01

    A non-invasive method that can be employed for elemental analysis is Prompt-Gamma Neutron Activation Analysis. The aim is to estimate the mass fractions of the different constituent elements present in an unknown sample, basing the estimates on the energies of all the photopeaks in its spectrum. Two difficulties arise in this approach: the constituents are unknown, and the composite spectrum of the unknown sample is a nonlinear combination of the spectra of its constituents (which are called libraries). An iterative method that has become popular is Monte Carlo Library Least Squares. One limitation of this method is that the amount of noise present in the spectra is not negligible, and the magnitude differences in the photon counting produce bad conditioning in the covariance matrix employed by the least squares method, affecting the numerical stability of the method. A method for minimizing the numerical instability provoked by noisy spectra is proposed: corresponding parts of different spectra are selected so as to minimize the condition number of the resulting covariance matrix. This idea is supported by the assumption that the unknown spectrum is a linear combination of its constituents' spectra, and by the fact that the number of constituents is small (typically five of them). The selection of spectrum parts is done through Greedy Randomized Adaptive Search Procedures (GRASP), where the cost function is the condition number that derives from the covariance matrix produced out of the selected parts. A QR factorization is also applied to the final covariance matrix to further reduce its condition number, transferring part of the bad conditioning to the basis conversion matrix. (author)
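
    A toy version of the GRASP-based segment selection might look as follows: greedily pick channel windows, from a randomized candidate list, whose library sub-spectra yield a covariance matrix with the smallest condition number. The segment definition, covariance model, and library spectra here are assumptions for illustration only.

```python
# GRASP-style selection of spectrum segments minimizing the condition number.
import numpy as np

rng = np.random.default_rng(5)
n_lib, n_chan, seg = 5, 400, 20
libs = rng.random((n_lib, n_chan))                 # toy library spectra
segments = [slice(i, i + seg) for i in range(0, n_chan, seg)]

def cond_of(chosen):
    cols = np.hstack([libs[:, s] for s in chosen])  # n_lib x (#channels)
    return np.linalg.cond(cols @ cols.T)            # covariance conditioning

chosen, remaining = [], list(segments)
for _ in range(6):                                  # pick 6 segments
    scores = [(cond_of(chosen + [s]), s) for s in remaining]
    scores.sort(key=lambda t: t[0])
    rcl = scores[: max(1, len(scores) // 4)]        # restricted candidate list
    _, pick = rcl[rng.integers(len(rcl))]           # randomized greedy choice
    chosen.append(pick); remaining.remove(pick)
print("condition number of selected covariance:", cond_of(chosen))
```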

  3. Minimal conditions for the existence of a Hawking-like flux

    International Nuclear Information System (INIS)

    Barcelo, Carlos; Liberati, Stefano; Sonego, Sebastiano; Visser, Matt

    2011-01-01

    We investigate the minimal conditions that an asymptotically flat general relativistic spacetime must satisfy in order for a Hawking-like Planckian flux of particles to arrive at future null infinity. We demonstrate that there is no requirement that any sort of horizon form anywhere in the spacetime. We find that the irreducible core requirement is encoded in an approximately exponential 'peeling' relationship between affine coordinates on past and future null infinity. As long as a suitable adiabaticity condition holds, then a Planck-distributed Hawking-like flux will arrive at future null infinity with temperature determined by the e-folding properties of the outgoing null geodesics. The temperature of the Hawking-like flux can slowly evolve as a function of time. We also show that the notion of peeling of null geodesics is distinct from the usual notion of 'inaffinity' used in Hawking's definition of surface gravity.

  4. Minimally Disruptive Medicine: A Pragmatically Comprehensive Model for Delivering Care to Patients with Multiple Chronic Conditions

    Directory of Open Access Journals (Sweden)

    Aaron L. Leppin

    2015-01-01

    An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients' lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice.

  5. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact, in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.

  6. Conditional long-term survival following minimally invasive robotic mitral valve repair: a health services perspective.

    Science.gov (United States)

    Efird, Jimmy T; Griffin, William F; Gudimella, Preeti; O'Neal, Wesley T; Davies, Stephen W; Crane, Patricia B; Anderson, Ethan J; Kindell, Linda C; Landrine, Hope; O'Neal, Jason B; Alwair, Hazaim; Kypson, Alan P; Nifong, Wiley L; Chitwood, W Randolph

    2015-09-01

    Conditional survival is defined as the probability of surviving an additional number of years beyond those already survived. The aim of this study was to compute conditional survival in patients who received a robotically assisted, minimally invasive mitral valve repair procedure (RMVP). Patients who received RMVP with an annuloplasty band from May 2000 through April 2011 were included. A 5- and 10-year conditional survival model was computed using a multivariable product-limit method. Non-smoking men (≤65 years) who presented in sinus rhythm had a 96% probability of surviving at least 10 years if they survived their first year following surgery. In contrast, recent female smokers (>65 years) with preoperative atrial fibrillation had only an 11% probability of surviving beyond 10 years if alive one year post-surgery. In the context of an increasingly managed healthcare environment, conditional survival provides useful information for patients needing to make important treatment decisions, physicians seeking to select patients most likely to benefit long-term following RMVP, and hospital administrators needing to comparatively assess the life-course economic value of high-tech surgical procedures.
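
    The underlying computation is simple: conditional survival is CS(t | s) = S(s + t) / S(s), the probability of surviving t further years given survival to year s. A minimal sketch with a synthetic survival curve follows (the study's multivariable product-limit model is not reproduced).

```python
# Conditional survival from a (synthetic) survival curve.
import numpy as np

years = np.arange(0, 11)
S = 0.97 ** years * 0.99            # synthetic all-cause survival curve S(t)

def conditional_survival(S, s, t):
    """P(survive s + t years | already survived s years)."""
    return S[s + t] / S[s]

print("P(10-year survival | survived 1 year) =",
      round(conditional_survival(S, 1, 9), 3))
```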

  7. Proposed minimal diagnostic criteria for myelodysplastic syndromes (MDS) and potential pre-MDS conditions.

    Science.gov (United States)

    Valent, Peter; Orazi, Attilio; Steensma, David P; Ebert, Benjamin L; Haase, Detlef; Malcovati, Luca; van de Loosdrecht, Arjan A; Haferlach, Torsten; Westers, Theresia M; Wells, Denise A; Giagounidis, Aristoteles; Loken, Michael; Orfao, Alberto; Lübbert, Michael; Ganser, Arnold; Hofmann, Wolf-Karsten; Ogata, Kiyoyuki; Schanz, Julie; Béné, Marie C; Hoermann, Gregor; Sperr, Wolfgang R; Sotlar, Karl; Bettelheim, Peter; Stauder, Reinhard; Pfeilstöcker, Michael; Horny, Hans-Peter; Germing, Ulrich; Greenberg, Peter; Bennett, John M

    2017-09-26

    Myelodysplastic syndromes (MDS) comprise a heterogeneous group of myeloid neoplasms characterized by peripheral cytopenia, dysplasia, and a variable clinical course with about a 30% risk of transforming to secondary acute myeloid leukemia (AML). In the past 15 years, diagnostic evaluations, prognostication, and treatment of MDS have improved substantially. However, with the discovery of molecular markers and the advent of novel targeted therapies, new challenges have emerged in the complex field of MDS. For example, MDS-related molecular lesions may be detectable in healthy individuals and increase in prevalence with age. Other patients exhibit persistent cytopenia of unknown etiology without dysplasia. Although these conditions are potential pre-phases of MDS, they may also transform into other bone marrow neoplasms. Recently identified molecular, cytogenetic, and flow-based parameters may aid in the delineation and prognostication of these conditions. However, no generally accepted integrated classification and no related criteria are as yet available. In an attempt to address this challenge, an international consensus group discussed these issues in a working conference in July 2016. The outcomes of this conference are summarized in the present article, which includes criteria and a proposal for the classification of pre-MDS conditions as well as updated minimal diagnostic criteria for MDS. Moreover, we propose diagnostic standards to delineate between 'normal', pre-MDS, and MDS. These standards and criteria should facilitate diagnostic and prognostic evaluations in clinical studies as well as in clinical practice.

  8. Buoyancy-driven mean flow in a long channel with a hydraulically constrained exit condition

    Science.gov (United States)

    Grimm, Th.; Maxworthy, T.

    1999-11-01

    Convection plays a major role in a variety of natural hydrodynamic systems. Those in which convection drives exchange flows through a lateral contraction and/or over a sill form a special class, with typical examples being the Red and Mediterranean Seas, the Persian Gulf, and the fjords that indent many coastlines. The present work focuses on the spatial distribution and scaling of the density difference between the inflowing and outflowing fluid layers. Using a long water-filled channel, fitted with buoyancy sources at its upper surface, experiments were conducted to investigate the influence of the geometry of the strait and the channel as well as the magnitude of the buoyancy flux. Two different scaling laws, one by Phillips (1966) and one by Maxworthy (1994, 1997), were compared with the experimental results. It has been shown that a scaling law of the form g' = k·B0^(2/3)·x/h^(4/3) best describes the distribution of the observed density difference along the channel, where g' is the reduced gravity, B0 is the buoyancy flux, x the distance from the closed end of the channel, h its height at the open end (sill), and k a constant that depends on the details of the channel geometry and flow conditions. This result holds for the experimental results and appears to be valid for a number of natural systems as well.

  9. Constraining the thermal conditions of impact environments through integrated low-temperature thermochronometry and numerical modeling

    Science.gov (United States)

    Kelly, N. M.; Marchi, S.; Mojzsis, S. J.; Flowers, R. M.; Metcalf, J. R.; Bottke, W. F., Jr.

    2017-12-01

    Impacts have a significant physical and chemical influence on the surface conditions of a planet. The cratering record is used to understand a wide array of impact processes, such as the evolution of the impact flux through time. However, the relationship between impactor size and a resulting impact crater remains controversial (e.g., Bottke et al., 2016). Likewise, small variations in the impact velocity are known to significantly affect the thermal-mechanical disturbances in the aftermath of a collision. Development of more robust numerical models for impact cratering has implications for how we evaluate the disruptive capabilities of impact events, including the extent and duration of thermal anomalies, the volume of ejected material, and the resulting landscape of impacted environments. To address uncertainties in crater scaling relationships, we present an approach and methodology that integrates numerical modeling of the thermal evolution of terrestrial impact craters with low-temperature, (U-Th)/He thermochronometry. The approach uses time-temperature (t-T) paths of crust within an impact crater, generated from numerical simulations of an impact. These t-T paths are then used in forward models to predict the resetting behavior of (U-Th)/He ages in the mineral chronometers apatite and zircon. Differences between the predicted and measured (U-Th)/He ages from a modeled terrestrial impact crater can then be used to evaluate parameters in the original numerical simulations, and refine the crater scaling relationships. We expect our methodology to additionally inform our interpretation of impact products, such as lunar impact breccias and meteorites, providing robust constraints on their thermal histories. In addition, the method is ideal for sample return mission planning - robust "prediction" of ages we expect from a given impact environment enhances our ability to target sampling sites on the Moon, Mars or other solar system bodies where impacts have strongly

  10. Topologically protected qubits as minimal Josephson junction arrays with non-trivial boundary conditions: A proposal

    Energy Technology Data Exchange (ETDEWEB)

    Cristofano, Gerardo; Marotta, Vincenzo [Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' , and INFN, Sezione di Napoli, Via Cintia, Complesso Universitario M. Sant' Angelo, 80126 Napoli (Italy); Naddeo, Adele [Dipartimento di Fisica ' E.R. Caianiello' , Universita degli Studi di Salerno and CNISM, Unita di Ricerca di Salerno, Via Salvador Allende, 84081 Baronissi (Italy)], E-mail: naddeo@sa.infn.it; Niccoli, Giuliano [Theoretical Physics Group, DESY, NotkeStrasse 85, 22603 Hamburg (Germany)

    2008-11-17

    Recently a one-dimensional closed ladder of Josephson junctions has been studied [G. Cristofano, V. Marotta, A. Naddeo, G. Niccoli, Phys. Lett. A 372 (2008) 2464] within a twisted conformal field theory (CFT) approach [G. Cristofano, G. Maiella, V. Marotta, Mod. Phys. Lett. A 15 (2000) 1679; G. Cristofano, G. Maiella, V. Marotta, G. Niccoli, Nucl. Phys. B 641 (2002) 547] and shown to develop the phenomenon of flux fractionalization [G. Cristofano, V. Marotta, A. Naddeo, G. Niccoli, Eur. Phys. J. B 49 (2006) 83]. That led us to predict the emergence of a topological order in such a system [G. Cristofano, V. Marotta, A. Naddeo, J. Stat. Mech.: Theory Exp. (2005) P03006]. In this Letter we analyze the ground states and the topological properties of fully frustrated Josephson junction arrays (JJA) arranged in a Corbino disk geometry for a variety of boundary conditions. In particular minimal configurations of fully frustrated JJA are considered and shown to exhibit the properties needed in order to build up a solid state qubit, protected from decoherence. The stability and transformation properties of the ground states of the JJA under adiabatic magnetic flux changes are analyzed in detail in order to provide a tool for the manipulation of the proposed qubit.

  11. Florida Red Tide and Human Health: A Pilot Beach Conditions Reporting System to Minimize Human Exposure

    Science.gov (United States)

    Kirkpatrick, Barbara; Currier, Robert; Nierenberg, Kate; Reich, Andrew; Backer, Lorraine C.; Stumpf, Richard; Fleming, Lora; Kirkpatrick, Gary

    2008-01-01

    With over 50% of the US population living in coastal counties, the ocean and coastal environments have substantial impacts on coastal communities. While many of the impacts are positive, such as tourism and recreation opportunities, there are also negative impacts, such as exposure to harmful algal blooms (HABs) and waterborne pathogens. Recent advances in environmental monitoring and weather prediction may allow us to forecast these potential adverse effects and thus mitigate the negative impact of coastal environmental threats. One example of the need to mitigate adverse environmental impacts occurs on Florida's west coast, which experiences annual blooms, or periods of exuberant growth, of the toxic dinoflagellate Karenia brevis. K. brevis produces a suite of potent neurotoxins called brevetoxins. Wind and wave action can break up the cells, releasing toxin that can then become part of the marine aerosol or sea spray. Brevetoxins in the aerosol cause respiratory irritation in people who inhale them. In addition, asthmatics who inhale the toxins report increased upper and lower airway symptoms and experience measurable changes in pulmonary function. Real-time reporting of the presence or absence of these toxic aerosols will allow asthmatics and local coastal residents to make informed decisions about their personal exposures, thus adding to their quality of life. A system to protect public health that combines information collected by an Integrated Ocean Observing System (IOOS) has been designed and implemented in Sarasota and Manatee Counties, Florida. This system is based on real-time reports from lifeguards at the eight public beaches. The lifeguards provide periodic subjective reports of the amount of dead fish on the beach, the apparent level of respiratory irritation among beach-goers, water color, wind direction, surf condition, and the beach warning flag they are flying. A key component in the design of the observing system was an easy reporting

  12. Influence of boundary conditions on the existence and stability of minimal surfaces of revolution made of soap films

    Science.gov (United States)

    Salkin, Louis; Schmit, Alexandre; Panizza, Pascal; Courbin, Laurent

    2014-09-01

    Because of surface tension, soap films seek the shape that minimizes their surface energy and thus their surface area. This mathematical postulate allows one to predict the existence and stability of simple minimal surfaces. After briefly recalling classical results obtained in the case of symmetric catenoids that span two circular rings with the same radius, we discuss the role of boundary conditions on such shapes, working with two rings having different radii. We then investigate the conditions of existence and stability of other shapes that include two portions of catenoids connected by a planar soap film, and half-symmetric catenoids, for which we introduce a method of observation. We report a variety of experimental results, including metastability, a hysteretic evolution of the shape taken by a soap film, explained using simple physical arguments. Working by analogy with the theory of phase transitions, we conclude by discussing universal behaviors of the studied minimal surfaces in the vicinity of their existence thresholds.
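
    The classical existence threshold for the symmetric catenoid can be checked numerically: a catenoid r(z) = c·cosh(z/c) spans two rings of radius R separated by h only while h <= max over c of 2c·arccosh(R/c). The short script below recovers the familiar critical ratio h/R ≈ 1.3255.

```python
# Numerical check of the symmetric-catenoid existence threshold.
import numpy as np

R = 1.0
c = np.linspace(1e-4, R - 1e-9, 200_000)   # neck radii to scan
h = 2 * c * np.arccosh(R / c)              # max separation each c supports
print("critical separation h/R =", h.max() / R)   # ~1.3255
```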

  13. Characterization of inclusions in terrestrial impact formed zircon: Constraining the formation conditions of Hadean zircon from Jack Hills, Western Australia

    Science.gov (United States)

    Faltys, J. P.; Wielicki, M. M.; Sizemore, T. M.

    2017-12-01

    , associated with impact-formed zircon; however, if certain populations of the Jack Hills record appear to share inclusion assemblages with impact-formed zircon, this could provide a tool to constrain the frequency and timing of large impactors on early Earth and their possible effects on conditions conducive to the origin of life.

  14. Modelling the flooding capacity of a Polish Carpathian river: A comparison of constrained and free channel conditions

    Science.gov (United States)

    Czech, Wiktoria; Radecki-Pawlik, Artur; Wyżga, Bartłomiej; Hajdukiewicz, Hanna

    2016-11-01

    The gravel-bed Biała River, Polish Carpathians, was heavily affected by channelization and channel incision in the twentieth century. Not only were these impacts detrimental to the ecological state of the river, but they also adversely modified the conditions of floodwater retention and flood wave passage. Therefore, a few years ago an erodible corridor was delimited in two sections of the Biała to enable restoration of the river. In these sections, short, channelized reaches located in the vicinity of bridges alternate with longer, unmanaged channel reaches, which either avoided channelization or in which the channel has widened after the channelization scheme ceased to be maintained. Effects of these alternating channel morphologies on the conditions for flood flows were investigated in a study of 10 pairs of neighbouring river cross sections with constrained and freely developed morphology. Discharges of particular recurrence intervals were determined for each cross section using an empirical formula. The morphology of the cross sections together with data about channel slope and roughness of particular parts of the cross sections were used as input data to the hydraulic modelling performed with the one-dimensional steady-flow HEC-RAS software. The results indicated that freely developed cross sections, usually with multithread morphology, are typified by significantly lower water depth but larger width and cross-sectional flow area at particular discharges than single-thread, channelized cross sections. They also exhibit significantly lower average flow velocity, unit stream power, and bed shear stress. The pattern of differences in the hydraulic parameters of flood flows apparent between the two types of river cross sections varies with the discharges of different frequency, and the contrasts in hydraulic parameters between unmanaged and channelized cross sections are most pronounced at low-frequency, high-magnitude floods. However, because of the deep

  15. Optimization of the conditions for the precipitation of thorium oxalate. II. Minimization of the product losses

    International Nuclear Information System (INIS)

    Pazukhin, E.M.; Smirnova, E.A.; Krivokhatskii, A.S.; Pazukhina, Yu.L.; Kiselev, P.P.

    1987-01-01

    The precipitation of thorium as a poorly soluble oxalate was investigated. An equation relating the concentrations of the metal and nitric acid in the initial solution and the amount of precipitant required to minimize the product losses was derived. A graphical solution of the equation is presented for the case where the precipitant is oxalic acid at a concentration of 0.78 M

  16. Evolution of quality characteristics of minimally processed asparagus during storage in different lighting conditions.

    Science.gov (United States)

    Sanz, S; Olarte, C; Ayala, F; Echávarri, J F

    2009-08-01

    The effect of different types of lighting (white, green, red, and blue light) on minimally processed asparagus during storage at 4 °C was studied. The gas concentrations in the packages, pH, mesophilic counts, and weight loss were also determined. Lighting caused an increase in physiological activity. Asparagus stored under lighting achieved atmospheres with higher CO(2) and lower O(2) content than samples kept in the dark. This activity increase explains the greater deterioration experienced by samples stored under lighting, which clearly affected texture and especially color, accelerating the appearance of greenish hues in the tips and reddish-brown hues in the spears. Exposure to light had a negative effect on the quality parameters of the asparagus and it caused a significant reduction in shelf life. Hence, the 11 d shelf life of samples kept in the dark was reduced to only 3 d in samples kept under red and green light, and to 7 d in those kept under white and blue light. However, quality indicators such as the color of the tips and texture showed significantly better behavior under blue light than with white light, which allows us to state that it is better to use this type of light or blue-tinted packaging film for the display of minimally processed asparagus to consumers.

  17. CAROTENOID RETENTION IN MINIMALLY PROCESSED BIOFORTIFIED GREEN CORN STORED UNDER RETAIL MARKETING CONDITIONS

    Directory of Open Access Journals (Sweden)

    Natália Alves Barbosa

    2015-08-01

    Storing processed food products can cause alterations in their chemical composition. Thus, the objective of this study was to evaluate carotenoid retention in the kernels of minimally processed normal and vitamin A precursor (proVA)-biofortified green corn ears that were packaged in polystyrene trays covered with commercial film or in multilayered polynylon packaging material and stored. Throughout the storage period, the carotenoids were extracted from the corn kernels using organic solvents and quantified using HPLC. A complete factorial design including three factors (cultivar, packaging and storage period) was applied for the analysis. The green kernels of maize cultivars BRS1030 and BRS4104 exhibited similar carotenoid profiles, with zeaxanthin being the main carotenoid. Higher concentrations of the carotenoids lutein, β-cryptoxanthin, and β-carotene, of the total carotenoids, and of the total vitamin A precursor carotenoids were detected in the green kernels of the biofortified BRS4104 maize. The packaging method did not affect carotenoid retention in the kernels of minimally processed green corn ears during the storage period.

  18. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  19. Effects of common chronic medical conditions on psychometric tests used to diagnose minimal hepatic encephalopathy

    DEFF Research Database (Denmark)

    Lauridsen, M M; Poulsen, L; Rasmussen, C K

    2016-01-01

    Many chronic medical conditions are accompanied by cognitive disturbances, but these have only to a very limited extent been psychometrically quantified. An exception is liver cirrhosis, where hepatic encephalopathy is an inherent risk and mild forms are diagnosed by psychometric tests. The preferred diagnostic test battery in cirrhosis is often the Continuous Reaction Time (CRT) and the Portosystemic Encephalopathy (PSE) tests, but the effect of other medical conditions on these is not known. We aimed to examine the effects of common chronic (non-cirrhosis) medical conditions on the CRT and PSE tests. We…

  20. Prediction of sonic flow conditions at drill bit nozzles to minimize complications in UBD

    Energy Technology Data Exchange (ETDEWEB)

    Guo, B.; Ghalambor, A. [Louisiana Univ., Lafayette, LA (United States); Al-Bemani, A.S. [Sultan Qaboos Univ. (Oman)

    2002-06-01

    Sonic flow at drill bit nozzles can complicate underbalanced drilling (UBD) operations, and should be considered when choosing bit nozzles and fluid injection rates. The complications stem from pressure discontinuity and temperature drop at the nozzle. UBD refers to drilling operations where the drilling fluid pressures in the borehole are maintained at less than the pore pressure in the formation rock in the open-hole section. UBD has become a popular drilling method because it offers minimal lost circulation and reduces formation damage. This paper presents an analytical model for calculating the critical pressure ratio where two-phase sonic flow occurs. In particular, it describes how Sachdeva's two-phase choke model can be used to estimate the critical pressure ratio at nozzles that cause sonic flow. The critical pressure ratio charts can be coded in spreadsheets. The critical pressure ratio depends on the in-situ volumetric gas content, or gas-liquid ratio, which depends on gas injection and pressure. 6 refs., 2 tabs., 5 figs.

  1. Conditions to minimize soft single biomolecule deformation when imaging with atomic force microscopy.

    Science.gov (United States)

    Godon, Christian; Teulon, Jean-Marie; Odorico, Michael; Basset, Christian; Meillan, Matthieu; Vellutini, Luc; Chen, Shu-Wen W; Pellequer, Jean-Luc

    2017-03-01

    A recurrent question when imaging soft biomolecules using atomic force microscopy (AFM) is the putative deformation of molecules, leading to a bias in recording true topographical surfaces. Deformation of biomolecules comes from three sources: sample instability, adsorption to the imaging substrate, and crushing under tip pressure. To disentangle these causes, we measured the maximum height of a well-known biomolecule, the tobacco mosaic virus (TMV), under eight different experimental conditions, positing that the maximum height value is a specific indicator of sample deformations. Six basic AFM experimental factors were tested: imaging in air (AIR) versus in liquid (LIQ), imaging on flat minerals (MICA) versus flat organic surfaces (self-assembled monolayers, SAM), and imaging forces with oscillating tapping mode (TAP) versus PeakForce tapping (PFT). The results show that the most critical parameter in accurately measuring the height of TMV in air is the substrate. In a liquid environment, regardless of the substrate, the most critical parameter is the imaging mode. Most importantly, the expected TMV height values were obtained when imaging with the PeakForce tapping mode, either in liquid or in air, on the condition that self-assembled monolayers were used as the substrate. This study unambiguously explains previous poor results of imaging biomolecules on mica in air and suggests alternative methodologies for depositing soft biomolecules on well-organized self-assembled monolayers.

  2. Mental skills training effectively minimizes operative performance deterioration under stressful conditions: Results of a randomized controlled study.

    Science.gov (United States)

    Anton, N E; Beane, J; Yurco, A M; Howley, L D; Bean, E; Myers, E M; Stefanidis, D

    2018-02-01

    Stress can negatively impact surgical performance, but mental skills may help. We hypothesized that a comprehensive mental skills curriculum (MSC) would minimize resident performance deterioration under stress. Twenty-four residents were stratified and then randomized to receive mental skills and FLS training (MSC group) or only FLS training (control group). Laparoscopic suturing skill was assessed on a live porcine model with and without external stressors. Outcomes were compared with t-tests. Twenty-three residents completed the study. The groups were similar at baseline. There were no differences in suturing at posttest or transfer test under normal conditions. Both groups experienced significantly decreased performance when stress was applied, but the MSC group significantly outperformed controls under stress. This MSC enabled residents to perform significantly better than controls in the simulated OR under unexpected stressful conditions. These findings support the use of psychological skills as an integral part of surgical resident training.

  3. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    International Nuclear Information System (INIS)

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates the effect of incorrectly modeling the failure process of minimally repaired systems operating under random environmental conditions on the costs of periodic replacement maintenance. The motivation for this paper is a recently published paper in which a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the time to first failure, and it produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the time to first failure are first highlighted. Then, the correct expressions for the expected cost and the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences of incorrectly modeling the failure process can be.
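
    For the standard minimal-repair model with a power-law (Weibull) intensity, the long-run cost rate under periodic replacement and its optimizer have closed forms, as the sketch below verifies numerically. The cost figures and intensity parameters are illustrative assumptions; the paper's random-environment model is richer than this.

```python
# Periodic replacement with minimal repair, power-law intensity
# lambda(t) = (beta/eta) * (t/eta)**(beta-1):
#   expected failures in [0, T] = (T/eta)**beta,
#   cost rate C(T) = (c_r + c_m * (T/eta)**beta) / T,
#   minimized at T* = eta * (c_r / (c_m * (beta - 1)))**(1/beta).
import numpy as np
from scipy.optimize import minimize_scalar

beta, eta = 2.5, 1000.0        # intensity parameters (assumed)
c_r, c_m = 500.0, 80.0         # replacement / minimal-repair costs (assumed)

cost_rate = lambda T: (c_r + c_m * (T / eta) ** beta) / T
res = minimize_scalar(cost_rate, bounds=(1.0, 10 * eta), method="bounded")
T_closed = eta * (c_r / (c_m * (beta - 1))) ** (1 / beta)
print(f"numeric T* = {res.x:.1f}, closed-form T* = {T_closed:.1f}")
```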

  4. A Simply Constrained Optimization Reformulation of KKT Systems Arising from Variational Inequalities

    International Nuclear Information System (INIS)

    Facchinei, F.; Fischer, A.; Kanzow, C.; Peng, J.-M.

    1999-01-01

    The Karush-Kuhn-Tucker (KKT) conditions can be regarded as optimality conditions for both variational inequalities and constrained optimization problems. In order to overcome some drawbacks of recently proposed reformulations of KKT systems, we propose casting KKT systems as a minimization problem with nonnegativity constraints on some of the variables. We prove that, under fairly mild assumptions, every stationary point of this constrained minimization problem is a solution of the KKT conditions. Based on this reformulation, a new algorithm for the solution of the KKT conditions is suggested and shown to have some strong global and local convergence properties.
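    A minimal numerical sketch of this kind of reformulation (not the authors' exact merit function): the KKT system of a small inequality-constrained quadratic program is written as a least-squares merit function in (x, λ), with nonnegativity of λ as the only remaining simple constraint, and handed to a bound-constrained solver.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative reformulation of the KKT system of
#   min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 <= 2
# as a merit function in (x, lam) with lam >= 0 as the only constraint,
# using the Fischer-Burmeister function phi(a, b) = sqrt(a^2+b^2) - a - b.
def merit(z):
    x, lam = z[:2], z[2]
    grad_L = np.array([2*(x[0]-1) + lam, 2*(x[1]-2) + lam])  # stationarity
    g = x[0] + x[1] - 2.0                                    # g(x) <= 0
    phi = np.hypot(lam, -g) - lam - (-g)                     # complementarity
    return np.dot(grad_L, grad_L) + phi**2

res = minimize(merit, np.zeros(3), method="L-BFGS-B",
               bounds=[(None, None), (None, None), (0, None)])
print(res.x)   # approx [0.5, 1.5, 1.0]: x* = (0.5, 1.5), lam* = 1
```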

  5. Operant Conditioning: A Minimal Components Requirement in Artificial Spiking Neurons Designed for Bio-Inspired Robot’s Controller

    Directory of Open Access Journals (Sweden)

    André Cyr

    2014-07-01

    We demonstrate the operant conditioning (OC) learning process within a basic bio-inspired robot controller paradigm, using an artificial spiking neural network (ASNN) with a minimal component count as the artificial brain. In biological agents, OC results in behavioral changes that are learned from the consequences of previous actions, using progressive prediction adjustment triggered by reinforcers. In a robotics context, virtual and physical robots may benefit from a similar learning skill when facing unknown environments with no supervision. In this work, we demonstrate that a simple ASNN can efficiently realise many OC scenarios. The elementary learning kernel that we describe relies on a few critical neurons and synaptic links, and on the integration of habituation and spike-timing dependent plasticity (STDP) as learning rules. Using four tasks of incremental complexity, our experimental results show that such a minimal set of neural components may be sufficient to implement many OC procedures. Hence, with the described bio-inspired module, OC can be implemented in a wide range of robot controllers, including those with limited computational resources.
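    For readers unfamiliar with the second learning rule named above, a generic pair-based STDP weight update looks like the following sketch (textbook amplitudes and time constants; the paper's exact kernel and its habituation rule may differ).

```python
import numpy as np

# Generic pair-based STDP sketch (illustrative constants, not the paper's).
A_plus, A_minus = 0.05, 0.055   # potentiation / depression amplitudes
tau_plus = tau_minus = 20.0     # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fires before post -> potentiate
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post fires before pre -> depress
        return -A_minus * np.exp(dt / tau_minus)

w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (70, 90)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
    print(f"pair ({t_pre}, {t_post}) ms -> w = {w:.3f}")
```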

  6. Second-Order Necessary Optimality Conditions for Some State-Constrained Control Problems of Semilinear Elliptic Equations

    International Nuclear Information System (INIS)

    Casas, E.; Troeltzsch, F.

    1999-01-01

    In this paper we are concerned with some optimal control problems governed by semilinear elliptic equations. The case of a boundary control is studied. We consider pointwise constraints on the control and a finite number of equality and inequality constraints on the state. The goal is to derive first- and second-order optimality conditions satisfied by locally optimal solutions of the problem.

  7. Optimization of Vacuum Impregnation with Calcium Lactate of Minimally Processed Melon and Shelf-Life Study in Real Storage Conditions.

    Science.gov (United States)

    Tappi, Silvia; Tylewicz, Urszula; Romani, Santina; Siroli, Lorenzo; Patrignani, Francesca; Dalla Rosa, Marco; Rocculi, Pietro

    2016-10-05

    Vacuum impregnation (VI) is a processing operation that permits the impregnation of porous fruit and vegetable tissues, with a faster and more homogeneous penetration of active compounds compared to classical diffusion processes. The objective of this research was to investigate the impact of VI treatment with the addition of calcium lactate on qualitative parameters of minimally processed melon during storage. To this aim, the work was divided into two parts. Initially, the process parameters were optimized in order to choose the VI conditions best suited to improving the texture characteristics of minimally processed melon; these conditions were then used to impregnate melons for a shelf-life study in real storage conditions. On the basis of a 2³ factorial design, the effects of calcium lactate (CaLac) concentration between 0% and 5% and of minimum pressure (P) between 20 and 60 MPa were evaluated on color and texture. Processing parameters corresponding to 5% CaLac concentration and 60 MPa minimum pressure were chosen for the storage study, during which the modifications of the main qualitative parameters were evaluated. Despite the high variability of the raw material, results showed that VI allowed a better maintenance of texture during storage. Nevertheless, other quality traits were negatively affected by the application of vacuum. Impregnated products showed a darker and more translucent appearance on account of the alteration of the structural properties. Moreover, microbial shelf-life was reduced to 4 days, compared to the 7 days obtained for control and dipped samples. © 2016 Institute of Food Technologists®.

  8. THE PREDICTION OF pH BY GIBBS FREE ENERGY MINIMIZATION IN THE SUMP SOLUTION UNDER LOCA CONDITION OF PWR

    Directory of Open Access Journals (Sweden)

    HYOUNGJU YOON

    2013-02-01

    It is required that the pH of the sump solution be above 7.0 to retain iodine in the liquid phase and remain within material compatibility constraints under LOCA conditions in a PWR. The pH of the sump solution can be determined by conventional chemical equilibrium constants or by the minimization of Gibbs free energy. The latter method, implemented in a computer code called SOLGASMIX-PV, is more convenient than the former since various chemical components can easily be treated under LOCA conditions. In this study, the SOLGASMIX-PV code was modified to accommodate the acidic and basic materials produced by radiolysis reactions and to calculate the pH of the sump solution. When the computed pH was compared with that measured in the ORNL experiment to verify the reliability of the modified code, the difference between the two values was within 0.3 pH units. Finally, two cases were calculated, for SKN 3&4 and UCN 1&2. As a result, the pH of the sump solution was between 7.02 and 7.45 for SKN 3&4, and between 8.07 and 9.41 for UCN 1&2. Furthermore, it was found that the radiolysis reactions have an insignificant effect on pH because the relative concentrations of HCl, HNO3, and Cs are very low.
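    For intuition about the quantity being computed, a toy stand-in is sketched below: conventional charge-balance equilibrium for a single strong acid and strong base, rather than the full Gibbs-energy minimization over many species that SOLGASMIX-PV performs. All concentrations are illustrative.

```python
import math
from scipy.optimize import brentq

# Toy sump pH from charge balance: [H+] + [Na+] = [OH-] + [Cl-], 25 C.
Kw = 1e-14

def ph(c_na, c_cl):
    """pH of a strong-base (Na+) / strong-acid (Cl-) mixture in water."""
    f = lambda h: h + c_na - Kw / h - c_cl   # net charge as function of [H+]
    h = brentq(f, 1e-14, 1.0)                # bracketed root find
    return -math.log10(h)

print(ph(c_na=1e-3, c_cl=1e-4))   # net base -> alkaline, ~10.95
print(ph(c_na=1e-4, c_cl=1e-3))   # net (e.g. radiolytic) acid -> ~3.05
```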

  9. Female infidelity is constrained by El Niño conditions in a long-lived bird.

    Science.gov (United States)

    Kiere, Lynna Marie; Drummond, Hugh

    2016-07-01

    Explaining the remarkable variation in socially monogamous females' extrapair (EP) behaviour revealed by decades of molecular paternity testing remains an important challenge. One hypothesis proposes that restrictive environmental conditions (e.g. extreme weather, food scarcity) limit females' resources and increase EP behaviour costs, forcing females to reduce EP reproductive behaviours. For the first time, we tested this hypothesis by directly quantifying within-pair and EP behaviours rather than inferring behaviour from paternity. We evaluated whether warmer sea surface temperatures depress total pre-laying reproductive behaviours, and particularly EP behaviours, in socially paired female blue-footed boobies (Sula nebouxii). Warm waters in the Eastern Pacific are associated with El Niño Southern Oscillation and lead to decreased food availability and reproductive success in this and other marine predators. With warmer waters, females decreased their neighbourhood attendance, total copulation frequency and laying probability, suggesting that they contend with restricted resources by prioritizing self-maintenance and committing less to reproduction, sometimes abandoning the attempt altogether. Females were also less likely to participate in EP courtship and copulations, but when they did, rates of these behaviours were unaffected by water temperature. Females' neighbourhood attendance, total copulation frequency and EP courtship probability responded to temperature differences at the between-season scale, and neighbourhood attendance and EP copulation probability were affected by within-season fluctuations. Path analysis indicated that decreased EP participation was not attributable to reduced female time available for EP activities. Together, our results suggest that immediate time and energy constraints were not the main factors limiting females' infidelity. Our study shows that El Niño conditions depress female boobies' EP participation and total reproductive behaviour.

  10. Stochastic risk-averse coordinated scheduling of grid integrated energy storage units in transmission constrained wind-thermal systems within a conditional value-at-risk framework

    International Nuclear Information System (INIS)

    Hemmati, Reza; Saboori, Hedayat; Saboori, Saeid

    2016-01-01

    In recent decades, wind power resources have been increasingly integrated into power systems. Besides confirmed benefits, the utilization of a large share of this volatile source in the generation portfolio has confronted system operators with new challenges in terms of uncertainty management. It has been shown that energy storage systems are capable of handling the projected uncertainty concerns. Risk-neutral methods have been proposed in the previous literature to schedule storage units considering wind resource uncertainty. Ignoring the risk of cost distributions with undesirable properties may result in experiencing high costs, with high probability, in some unfavorable scenarios. In order to control the risk of the operator's decisions, this paper proposes a new risk-constrained two-stage stochastic programming model to make optimal decisions on energy storage and thermal units in a transmission-constrained hybrid wind-thermal power system. The risk-aversion procedure is explicitly formulated using the conditional value-at-risk measure, which possesses distinguished features compared to other risk measures. The proposed model is a mixed integer linear program considering the transmission network, thermal unit dynamics, and storage device constraints. The simulation results demonstrate that taking the risk of the problem into account affects scheduling decisions considerably, depending on the level of risk-aversion. - Highlights: • Risk of the operation decisions is handled by using risk-averse programming. • Conditional value-at-risk is used as the risk measure. • The optimal risk level is obtained from a cost/benefit analysis. • The proposed model is a two-stage stochastic mixed integer linear program. • Unit commitment is integrated with ESSs and wind power penetration.
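    The risk-aversion machinery can be illustrated on a toy one-bus, single-period version of such a model: the sketch below uses the standard Rockafellar-Uryasev linearization of conditional value-at-risk in a scenario LP. All numbers are made up, and the paper's unit-commitment binaries, storage dynamics and network constraints are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Toy risk-averse dispatch: choose day-ahead thermal output g; wind w_s is
# uncertain; any shortfall is bought at a balancing price. Objective mixes
# expected cost and CVaR_alpha via the Rockafellar-Uryasev linearization.
d, c_g, c_b = 100.0, 20.0, 80.0                 # demand, costs ($/MWh)
w = np.array([60.0, 45.0, 30.0, 15.0, 5.0])     # wind scenarios (MWh)
N, alpha, beta = len(w), 0.9, 0.5               # beta weights CVaR vs mean

# variables: x = [g, eta, p_1..p_N, u_1..u_N] (p = shortfall, u = tail excess)
n = 2 + 2 * N
c = np.zeros(n)
c[0] = (1 - beta) * c_g                         # expected thermal cost
c[1] = beta                                     # eta term of CVaR
c[2:2+N] = (1 - beta) * c_b / N                 # expected balancing cost
c[2+N:] = beta / ((1 - alpha) * N)              # CVaR tail averages

A, bvec = [], []
for s in range(N):                              # p_s >= d - g - w_s
    row = np.zeros(n); row[0] = -1; row[2+s] = -1
    A.append(row); bvec.append(w[s] - d)
for s in range(N):                              # u_s >= cost_s - eta
    row = np.zeros(n); row[0] = c_g; row[1] = -1
    row[2+s] = c_b; row[2+N+s] = -1
    A.append(row); bvec.append(0.0)

bounds = [(0, None), (None, None)] + [(0, None)] * (2 * N)
res = linprog(c, A_ub=np.array(A), b_ub=np.array(bvec), bounds=bounds)
print(f"risk-averse schedule: g = {res.x[0]:.1f} MWh")
```

    Raising beta shifts the schedule toward covering the worst wind scenarios, which is the qualitative effect the abstract describes.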

  11. An embodied biologically constrained model of foraging: from classical and operant conditioning to adaptive real-world behavior in DAC-X.

    Science.gov (United States)

    Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J

    2015-12-01

    Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  13. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Directory of Open Access Journals (Sweden)

    Thadeous J Kacmarczyk

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  14. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    Science.gov (United States)

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  15. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Science.gov (United States)

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  16. How word-beginnings constrain the pronunciations of word-ends in the reading aloud of English: the phenomena of head- and onset-conditioning

    Directory of Open Access Journals (Sweden)

    Anastasia Ulicheva

    2015-12-01

    Background. A word whose body is pronounced in different ways in different words is body-inconsistent. When we take the unit that precedes the vowel into account in the calculation of body-consistency, the proportion of English words that are body-inconsistent is considerably reduced at the level of corpus analysis, prompting the question of whether humans actually use such head/onset-conditioning when they read. Methods. Four metrics for head/onset-constrained body-consistency were calculated: by the last grapheme of the head, by the last phoneme of the onset, by place and manner of articulation of the last phoneme of the onset, and by manner of articulation of the last phoneme of the onset. Since these were highly correlated, principal component analysis was performed on them. Results. Two of the four resulting principal components explained significant variance in reading-aloud reaction times, beyond regularity and body-consistency. Discussion. Humans read head/onset-conditioned words faster than would be predicted based on their body-consistency and regularity alone. We conclude that humans are sensitive to the dependency between word-beginnings and word-ends when they read aloud, and that this dependency is phonological in nature, rather than orthographic.

  17. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  18. Optimal replacement of residential air conditioning equipment to minimize energy, greenhouse gas emissions, and consumer cost in the US

    International Nuclear Information System (INIS)

    De Kleine, Robert D.; Keoleian, Gregory A.; Kelly, Jarod C.

    2011-01-01

    A life cycle optimization of the replacement of residential central air conditioners (CACs) was conducted in order to identify replacement schedules that minimized three separate objectives: life cycle energy consumption, greenhouse gas (GHG) emissions, and consumer cost. The analysis was conducted for the time period of 1985-2025 for Ann Arbor, MI and San Antonio, TX. Using annual sales-weighted efficiencies of residential CAC equipment, the tradeoff between potential operational savings and the burdens of producing new, more efficient equipment was evaluated. The optimal replacement schedule for each objective was identified for each location and service scenario. In general, minimizing energy consumption required frequent replacement (4-12 replacements), minimizing GHG required fewer replacements (2-5 replacements), and minimizing cost required the fewest replacements (1-3 replacements) over the time horizon. Scenario analysis of different federal efficiency standards, regional standards, and Energy Star purchases were conducted to quantify each policy's impact. For example, a 16 SEER regional standard in Texas was shown to either reduce primary energy consumption 13%, GHGs emissions by 11%, or cost by 6-7% when performing optimal replacement of CACs from 2005 or before. The results also indicate that proper servicing should be a higher priority than optimal replacement to minimize environmental burdens. - Highlights: → Optimal replacement schedules for residential central air conditioners were found. → Minimizing energy required more frequent replacement than minimizing consumer cost. → Significant variation in optimal replacement was observed for Michigan and Texas. → Rebates for altering replacement patterns are not cost effective for GHG abatement. → Maintenance levels were significant in determining the energy and GHG impacts.
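    The underlying decision problem can be sketched as a small dynamic program: in each year, keep the installed unit or replace it with the current model year, trading the embodied energy of a new unit against operating savings from improving efficiency. All numbers below are placeholders, not the study's data.

```python
from functools import lru_cache

# Illustrative replacement DP (assumed efficiency trajectory and energies).
YEARS = list(range(2005, 2026))
LOAD = 20.0          # annual cooling demand (arbitrary units)
E_EMBODIED = 8.0     # primary energy to make + install one unit, GJ (assumed)

def seer(model_year):                  # assumed efficiency improvement
    return 10.0 + 0.25 * (model_year - 2005)

def op_energy(model_year):             # annual primary energy use, GJ (assumed)
    return LOAD / seer(model_year) * 10.0

@lru_cache(None)
def best(t, vintage):
    """Min total energy from year index t onward with a unit of `vintage`."""
    if t == len(YEARS):
        return 0.0, ()
    year = YEARS[t]
    keep_cost, keep_plan = best(t + 1, vintage)       # option 1: keep
    keep = (op_energy(vintage) + keep_cost, keep_plan)
    rep_cost, rep_plan = best(t + 1, year)            # option 2: replace now
    rep = (E_EMBODIED + op_energy(year) + rep_cost, (year,) + rep_plan)
    return min(keep, rep)

total, plan = best(0, 2005)
print(f"minimum life-cycle energy: {total:.1f} GJ; replace in {plan}")
```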

  19. Power Conditioning And Distribution Units For 50V Platforms A Flexible And Modular Concept Allowing To Deal With Time Constraining Programs

    Science.gov (United States)

    Lempereur, V.; Liegeois, B.; Deplus, N.

    2011-10-01

    In the frame of its medium-power Power Conditioning and Distribution Unit (PCDU) product family, Thales Alenia Space ETCA is currently developing Power Conditioning Unit (PCU) and PCDU products for 50V platform applications. These developments are performed under very schedule-constraining programs. This challenge can be met thanks to the modular PCDU concept, which allows a common heritage to be shared at the mechanical and thermal level as well as at the level of electrical functions. The first medium-power PCDU application was developed for the Herschel-Planck PCDU and re-used in several other missions (e.g. the GlobalStar2 PCDU, of which we are producing more than 26 units). Based on this heritage, a development plan based on an Electrical Model (EM) (avoiding an Electrical Qualification Model, EQM) can be proposed when the mechanical qualification of the concept covers the environment required in new projects. This first level of heritage reduces the development schedule and activities. In addition, development is also optimized thanks to the re-use of functions designed and qualified in the Herschel-Planck PCDU. This covers internal TM/TC management inside the PCDU, based on a centralized scheduler and an internal high-speed serial bus. Finally, thanks to the common architecture of several 50V platforms, based on a fully regulated bus, the S3R (Sequential Switching Shunt Regulator) concept and one (or two) Li-Ion battery(ies), a common PCU/PCDU architecture has allowed the development of modules or functions that are used in several applications. These achievements are discussed with particular emphasis on PCDU architecture trade-offs allowing flexibility of the proposed technical solutions (w.r.t. mono/bi-battery configurations, SA inner capacitance value, output power needs...). Pros and cons of sharing concepts and designs between several applications on 50V platforms are also discussed.

  20. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  1. Experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO2 lasers under conditions of minimal roughness

    International Nuclear Information System (INIS)

    Golyshev, A A; Malikov, A G; Orishich, A M; Shulyatyev, V B

    2014-01-01

    The results of an experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO2 lasers are generalised. The dependence of roughness of the cut surface on the cutting parameters is investigated, and the conditions under which the surface roughness is minimal are formulated. It is shown that for both types of lasers these conditions can be expressed in the same way in terms of the dimensionless variables – the Péclet number Pe and the output power Q of laser radiation per unit thickness of the cut sheet – and take the form of the similarity laws: Pe = const, Q = const. The optimal values of Pe and Q are found. We have derived empirical expressions that relate the laser power and cutting speed with the thickness of the cut sheet under the condition of minimal roughness in the case of cutting by means of radiation from fibre and CO2 lasers. (laser technologies)
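    Applied as a recipe, the similarity laws fix the cutting speed and power once the sheet thickness is known. The sketch below assumes Pe = vh/κ and Q = P/h (the paper's exact normalizations may differ) and uses placeholder values for the optima, so v scales as 1/h and P scales as h.

```python
# Scaling sketch from the similarity laws Pe = const, Q = const.
# Assumptions: Pe = v*h/kappa (cutting speed v, sheet thickness h, thermal
# diffusivity kappa), Q = P/h (laser power P). PE_OPT and Q_OPT are
# placeholders standing in for the reported optimal values.
KAPPA = 8e-6        # m^2/s, approximate for low-carbon steel
PE_OPT = 0.5        # dimensionless placeholder
Q_OPT = 1.0e6       # W/m placeholder

def min_roughness_setting(h_m):
    """Cutting speed (m/s) and power (W) for sheet thickness h_m (m)."""
    v = PE_OPT * KAPPA / h_m
    P = Q_OPT * h_m
    return v, P

for h in (0.003, 0.005, 0.010):
    v, P = min_roughness_setting(h)
    print(f"h = {h*1e3:4.1f} mm -> v = {v*60:6.3f} m/min, P = {P/1e3:4.1f} kW")
```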

  2. Preliminary design of the internal geometry in a minimally invasive left ventricular assist device under pulsatile-flow conditions.

    Science.gov (United States)

    Smith, P Alex; Wang, Yaxin; Metcalfe, Ralph W; Sampaio, Luiz C; Timms, Daniel L; Cohn, William E; Frazier, O H

    2018-03-01

    A minimally invasive, partial-assist, intra-atrial blood pump has been proposed, which would unload the left ventricle with a flow path from the left atrium to the arterial system. Flow modulation is a common strategy for ensuring washout in the pump, but it can increase power consumption because it is typically achieved through motor-speed variation. However, if a pump's performance curve had the proper gradient, flow modulation could be realized passively. To achieve this goal, we propose a pump performance operating curve as an alternative to the more standard operating point. Mean-line theory was employed to generate an initial set of geometries that were then tested on a hydraulic test rig at ~20,000 r/min. Experimental results show that the intra-atrial blood pump performed below the operating region; however, it was determined that a smaller hub diameter and a longer chord length bring the performance of the intra-atrial blood pump device closer to the operating curve. We found that it is possible to shape the pump performance curve for specifically targeted gradients over the operating region through geometric variations inside the pump.

  3. A highly stable minimally processed plant-derived recombinant acetylcholinesterase for nerve agent detection in adverse conditions.

    Science.gov (United States)

    Rosenberg, Yvonne J; Walker, Jeremy; Jiang, Xiaoming; Donahue, Scott; Robosky, Jason; Sack, Markus; Lees, Jonathan; Urban, Lori

    2015-08-13

    Although recent innovations in transient plant systems have enabled production of gram quantities of protein in 1-2 weeks, very few have been translated into applications due to technical challenges and high downstream processing costs. Here we report high-level production, using a Nicotiana benthamiana/p19 system, of an engineered recombinant human acetylcholinesterase (rAChE) that is highly stable in a minimally processed leaf extract. Lyophilized clarified extracts withstand prolonged storage at 70 °C and, upon reconstitution, can be used in several devices to detect organophosphate (OP) nerve agents and pesticides on surfaces at temperatures ranging from 0 °C to 50 °C. The recent use of sarin in Syria highlights the urgent need for nerve agent detection and the countermeasures necessary for preparedness and emergency responses. Bypassing cumbersome and expensive downstream processes has enabled us to fully exploit the speed, low cost and scalability of transient production systems, resulting in the first successful implementation of plant-produced rAChE in a commercial biotechnology product.

  4. Fear conditioning and shock intensity: the choice between minimizing the stress induced and reducing the number of animals used

    NARCIS (Netherlands)

    Pietersen, C.Y.; Bosker, F.J.; Postema, F.; Den Boer, J.A.

    2006-01-01

    Many fear conditioning studies use electric shock as the aversive stimulus. The intensity of shocks varies throughout the literature. In this study, shock intensities ranging from 0 to 1.5 mA were used, and the effects on the rats were assessed by both behavioural and biochemical stress parameters.

  5. Fear conditioning and shock intensity : the choice between minimizing the stress induced and reducing the number of animals used

    NARCIS (Netherlands)

    Pietersen, CY; Bosker, FJ; Postema, F; den Boer, JA

    Many fear conditioning studies use electric shock as the aversive stimulus. The intensity of shocks varies throughout the literature. In this study, shock intensities ranging from 0 to 1.5 mA were used, and the effects on the rats were assessed by both behavioural and biochemical stress parameters.

  6. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
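    A compact two-stage sketch in the spirit of OCLO (the published algorithm, which builds on the maximum rank correlation estimator, differs in detail): first maximize Kendall's τ over the direction of the coefficient vector, then recover scale and intercept by ordinary least squares, which is all that remains once the ordinal fit has fixed the direction.

```python
import numpy as np
from scipy.stats import kendalltau
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Toy data with a few gross outliers (fat tails)
n = 60
X = rng.normal(size=(n, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=n)
y[:3] += 25.0                                   # extreme scores

# Stage 1: maximize ordinal fit (Kendall's tau) over the direction b.
# tau is invariant to positive scaling, so normalize b inside the objective.
def neg_tau(b):
    b = b / (np.linalg.norm(b) + 1e-12)
    return -kendalltau(X @ b, y)[0]

res = differential_evolution(neg_tau, bounds=[(-1, 1), (-1, 1)], seed=0)
b = res.x / (np.linalg.norm(res.x) + 1e-12)

# Stage 2: conditional least squares. tau fixed only the direction, so fit
# scale and intercept of y on X @ b by ordinary regression.
z = X @ b
A = np.column_stack([z, np.ones(n)])
scale, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print("direction:", np.round(b, 3), " scale:", round(scale, 2))
```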

  7. Establishment of minimal positive-control conditions to ensure brain safety during rapid development of emergency vaccines.

    Science.gov (United States)

    Baek, Hyekyung; Kim, Kwang Ho; Park, Min Young; Kim, Kyeongryun; Ko, Bokyeong; Seo, Hyung Seok; Kim, Byoung Soo; Hahn, Tae-Wook; Yi, Sun Shin

    2017-08-31

    With the increase in international human and material exchanges, contagious and infectious epidemics are occurring. One of the effective methods of epidemic inhibition is the rapid development and supply of vaccines. Considering the safety of the brain during vaccine development is very important. However, manuals for brain safety assays for new vaccines are not uniform or effective globally. Therefore, the aim of this study is to establish a positive-control protocol for an effective brain safety test to enhance rapid vaccine development. The blood-brain barrier's tight junctions provide selective defense of the brain; however, it is possible to destroy these important microstructures by administering lipopolysaccharides (LPSs), thereby artificially increasing the permeability of brain parenchyma. In this study, test conditions are established so that the degree of brain penetration or brain destruction of newly developed vaccines can be quantitatively identified. The most effective conditions were suggested by measuring time-dependent expressions of tight junction biomarkers (zonula occludens-1 [ZO-1] and occludin) in two types of mice (C57BL/6 and ICR) following exposure to two types of LPS (Salmonella and Escherichia). In the future, we hope that use of the developed positive-control protocol will help speed up the determination of brain safety of novel vaccines.

  8. Experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO2 lasers under conditions of minimal roughness

    Energy Technology Data Exchange (ETDEWEB)

    Golyshev, A A; Malikov, A G; Orishich, A M; Shulyatyev, V B [S.A. Khristianovich Institute of Theoretical and Applied Mechanics, Siberian Branch, Russian Academy of Sciences, Novosibirsk (Russian Federation)

    2014-10-31

    The results of an experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO2 lasers are generalised. The dependence of roughness of the cut surface on the cutting parameters is investigated, and the conditions under which the surface roughness is minimal are formulated. It is shown that for both types of lasers these conditions can be expressed in the same way in terms of the dimensionless variables – the Péclet number Pe and the output power Q of laser radiation per unit thickness of the cut sheet – and take the form of the similarity laws: Pe = const, Q = const. The optimal values of Pe and Q are found. We have derived empirical expressions that relate the laser power and cutting speed with the thickness of the cut sheet under the condition of minimal roughness in the case of cutting by means of radiation from fibre and CO2 lasers. (laser technologies)

  9. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently, as surfaces of vanishing mean curvature.

  10. Environmental Conditions Constrain the Distribution and Diversity of Archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    Science.gov (United States)

    Wang, Y.; Boyd, E.; Crane, S.; Lu-Irving, P.; Krabbenhoft, D.; King, S.; Dighton, J.; Geesey, G.; Barkay, T.

    2011-01-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggests that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient. © 2011 Springer Science+Business Media, LLC.
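    The kind of generalized linear model referred to can be sketched as a binomial GLM of lineage detection against the four geochemical covariates named in the abstract. The data below are synthetic and the coefficients invented; only the modeling pattern is the point.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for site geochemistry and lineage presence/absence.
rng = np.random.default_rng(1)
n = 40
pH = rng.uniform(2, 9, n)
doc = rng.uniform(0, 5, n)            # dissolved organic carbon
hg = rng.uniform(0, 50, n)            # dissolved total Hg
sulfide = rng.uniform(0, 10, n)

# Invented generating coefficients for the illustration
logit = -4 + 0.6 * pH + 0.4 * doc - 0.02 * hg - 0.1 * sulfide
present = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # merA lineage detected?

X = sm.add_constant(np.column_stack([pH, doc, hg, sulfide]))
fit = sm.GLM(present, X, family=sm.families.Binomial()).fit()
for name, coef in zip(["const", "pH", "DOC", "Hg", "sulfide"], fit.params):
    print(f"{name:8s} {coef:+.3f}")
```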

  11. Environmental conditions constrain the distribution and diversity of archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    Science.gov (United States)

    Wang, Yanping; Boyd, Eric; Crane, Sharron; Lu-Irving, Patricia; Krabbenhoft, David; King, Susan; Dighton, John; Geesey, Gill; Barkay, Tamar

    2011-11-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggests that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient.

  12. Preservation or Restoration of Segmental and Regional Spinal Lordosis Using Minimally Invasive Interbody Fusion Techniques in Degenerative Lumbar Conditions: A Literature Review.

    Science.gov (United States)

    Uribe, Juan S; Myhre, Sue Lynn; Youssef, Jim A

    2016-04-01

    A literature review. The purpose of this study was to review lumbar segmental and regional alignment changes following treatment with a variety of minimally invasive surgery (MIS) interbody fusion procedures for short-segment, degenerative conditions. An increasing number of lumbar fusions are being performed with minimally invasive exposures, despite a perception that minimally invasive lumbar interbody fusion procedures are unable to affect segmental and regional lordosis. Through a MEDLINE and Google Scholar search, a total of 23 articles were identified that reported alignment following minimally invasive lumbar fusion for degenerative (nondeformity) lumbar spinal conditions, to examine aggregate changes in postoperative alignment. Of the 23 studies identified, 28 study cohorts were included in the analysis. Procedural cohorts included MIS ALIF (two), extreme lateral interbody fusion (XLIF) (16), and MIS posterior/transforaminal lumbar interbody fusion (P/TLIF) (11). Across 19 study cohorts and 720 patients, the weighted average of lumbar lordosis preoperatively for all procedures was 43.5° (range 28.4°-52.5°) and increased by a statistically significant 3.4° (9%) (range -2° to 7.4°) postoperatively. Segmental lordosis increased, on average, by 4° from a weighted average of 8.3° preoperatively (range -0.8° to 15.8°) to 11.2° at postoperative time points (range -0.2° to 22.8°), also a statistically significant change. There was a significant correlation between preoperative lumbar lordosis and change in lumbar lordosis (r = 0.413; P = 0.003), wherein lower preoperative lumbar lordosis predicted a greater increase in postoperative lumbar lordosis. Significant gains in both weighted average lumbar lordosis and segmental lordosis were seen following MIS interbody fusion. None of the segmental lordosis cohorts and only two of the 19 lumbar lordosis cohorts showed decreases in lordosis postoperatively. These results suggest that MIS approaches are able to impact regional and local segmental alignment, and that preoperative patient factors can impact the extent of correction gained.

  13. Coherent states in constrained systems

    International Nuclear Information System (INIS)

    Nakamura, M.; Kojima, K.

    2001-01-01

    When quantizing constrained systems, quantum corrections often arise from the non-commutativity in the re-ordering of constraint operators in products of operators. For bosonic second-class constraints, furthermore, the quantum corrections caused by the uncertainty principle should be taken into account. In order to treat these corrections simultaneously, an alternative projection technique for operators is proposed by introducing the available minimal-uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections were investigated.

  14. Constraining the Depth of a Martian Magma Ocean through Metal-Silicate Partitioning Experiments: The Role of Different Datasets and the Range of Pressure and Temperature Conditions

    Science.gov (United States)

    Righter, K.; Chabot, N.L.

    2009-01-01

    Mars' accretion is known to have been fast compared to Earth's. Basaltic samples provide a probe into the interior and allow reconstruction of the siderophile element contents of the mantle. These estimates can be used to infer the conditions of core formation, as for Earth. Although many assume that Mars went through a magma ocean stage, and possibly even complete melting, the siderophile element content of the martian mantle is consistent with relatively low pressure and temperature (PT) conditions, implying only shallow melting, near 7 GPa and 2073 K. This is a pressure range where some have proposed a change in siderophile element partitioning behavior. We will examine the databases used for parameterization and split them into low- and higher-pressure regimes to see if the methods used to reach this conclusion agree for the two sets of data.

  15. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining a lot of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  16. Conditions for minimization of halo particle production during transverse compression of intense ion charge bunches in the Paul Trap Simulator Experiment (PTSX)

    International Nuclear Information System (INIS)

    Gilson, Erik P.; Chung, Moses; Davidson, Ronald C.; Dorf, Mikhail; Efthimion, Philip C.; Grote, David P.; Majeski, Richard; Startsev, Edward A.

    2007-01-01

    The Paul Trap Simulator Experiment (PTSX) is a compact laboratory Paul trap that simulates the propagation of a long, thin charged-particle bunch coasting through a multi-kilometer-long magnetic alternating-gradient (AG) transport system by putting the physicist in the frame of reference of the beam. The transverse dynamics of particles in both systems are described by the same sets of equations, including all nonlinear space-charge effects. The time-dependent quadrupolar voltages applied to the PTSX confinement electrodes correspond to the axially dependent magnetic fields applied in the AG system. This paper presents the results of experiments in which the amplitude of the applied confining voltage is changed over the course of the experiment in order to transversely compress a beam with an initial depressed tune ν/ν0 ~ 0.9. Both instantaneous and smooth changes are considered. Particular emphasis is placed on determining the conditions that minimize the emittance growth and, generally, the number of particles that are found at large radius (so-called halo particles) after the beam compression. The experimental data are also compared with the results of particle-in-cell (PIC) simulations performed with the WARP code.

  17. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another, while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches. The book begins with four concrete...
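    The core CPCA idea can be sketched in a few lines (one common formulation; the book develops far more general metrics and constraints): regress the data matrix onto external information, then run PCA separately on the explained and residual parts.

```python
import numpy as np

# Minimal CPCA sketch: project X onto the column space of external
# variables G (regression step), then PCA each term via SVD.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))          # observations x variables
G = rng.normal(size=(100, 2))          # external information on the rows

P = G @ np.linalg.pinv(G)              # projector onto col-space of G
X_explained = P @ X
X_residual = X - X_explained

for name, M in [("explained", X_explained), ("residual", X_residual)]:
    U, s, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
    var = s**2 / np.sum(s**2)
    print(name, "leading PC variance share:", round(var[0], 2))
```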

  18. Taxonomic minimalism.

    Science.gov (United States)

    Beattie, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey, but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  19. The three youngest Plinian eruptions of Mt Pelée, Martinique (P1, P2 and P3): Constraining the eruptive conditions from field and experimental studies.

    Science.gov (United States)

    Kueppers, Ulrich; Uhlig, Joan; Carazzo, Guillaume; Kaminski, Edouard; Perugini, Diego; Tait, Steve; Clouard, Valérie

    2015-04-01

    Mt Pelée on Martinique, French West Indies, is infamous for the last big Pelean (i.e., dome-forming) eruption, in 1902 AD, which destroyed agricultural land and the city of Saint Pierre by pyroclastic density currents. Besides such mostly valley-confined deposits, the geological record shows thick fall deposits of at least three Plinian eruptions during the past 2000 years. In an attempt to describe and understand systematic eruptive behaviours, as well as the associated variability of eruptive scenarios of Plinian eruptions in Martinique, we have investigated approx. 50 outcrops belonging to the P1 (1315 AD), P2 (345 AD) and P3 (4 AD) eruptions (Traineau et al., JVGR 1989) and collected bulk samples as well as >100 mm pumiceous clasts. All samples are andesitic, contain plagioclase and pyroxene in a glassy matrix, and range in porosity between 55 and 69 vol.%, with individual bubbles rarely larger than 1 mm. Our approach was two-fold: 1) Loose bulk samples were subjected to dry mechanical sieving in order to quantitatively describe the grain-size distribution and the fractal dimension. 2) From large clasts, 60*25 mm cylinders were drilled for fragmentation experiments following the sudden decompression of gas in the sample's pore space. The experimental set-up used allowed for precisely controllable and repeatable conditions (5, 10 and 15 MPa, 25 °C) and the complete sampling of the generated pyroclasts. These experimentally generated clasts were analysed for their grain-size distribution and fractal dimension. For both natural samples and experimental populations, we find that the grain-size distribution follows a power law, with an exponent between 2.5 and 3.7. Deciphering eruption conditions from deposits alone is challenging because of the complex interplay of dynamic volcanic processes and transport-related sorting. We use the quantified values of fractal dimension for a comparison of the power-law exponents among the three eruptions and the experimentally generated pyroclast populations.
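    For the analysis step, the exponent of a power-law fragment-size distribution can be estimated directly from clast or sieve data; the sketch below uses the standard maximum-likelihood estimator for a continuous power law above a lower cutoff, on synthetic sizes (the study's reported exponents fell between 2.5 and 3.7).

```python
import numpy as np

# MLE for a continuous power law p(x) ~ x**(-alpha), x >= x_min
# (Clauset-style estimator), demonstrated on synthetic fragment sizes.
rng = np.random.default_rng(3)
alpha_true, x_min = 3.0, 0.063          # exponent; 63 micron cutoff (mm)
u = rng.uniform(size=5000)
sizes = x_min * (1 - u) ** (-1 / (alpha_true - 1))   # inverse-CDF sampling

alpha_hat = 1 + len(sizes) / np.sum(np.log(sizes / x_min))
print(f"estimated exponent: {alpha_hat:.2f} (true {alpha_true})")
```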

  20. Reactive transport and mass balance modeling of the Stimson sedimentary formation and altered fracture zones constrain diagenetic conditions at Gale crater, Mars

    Science.gov (United States)

    Hausrath, E. M.; Ming, D. W.; Peretyazhko, T. S.; Rampe, E. B.

    2018-06-01

    On a planet as cold and dry as present-day Mars, evidence of multiple aqueous episodes offers an intriguing view into very different past environments. Fluvial, lacustrine, and eolian depositional environments are being investigated by the Mars Science Laboratory Curiosity in Gale crater, Mars. Geochemical and mineralogical observations of these sedimentary rocks suggest diagenetic processes affected the sediments. Here, we analyze diagenesis of the Stimson formation eolian parent material, which caused loss of olivine and formation of magnetite. Additional, later alteration in fracture zones resulted in preferential dissolution of pyroxene and precipitation of secondary amorphous silica and Ca sulfate. The ability to compare the unaltered parent material with the reacted material allows constraints to be placed on the characteristics of the altering solutions. In this work we use a combination of a mass balance approach calculating the fraction of a mobile element lost or gained, τ, with fundamental geochemical kinetics and thermodynamics in the reactive transport code CrunchFlow to examine the characteristics of multiple stages of aqueous alteration at Gale crater, Mars. Our model results indicate that early diagenesis of the Stimson sedimentary formation is consistent with leaching of an eolian deposit by a near-neutral solution, and that formation of the altered fracture zones is consistent with a very acidic, high sulfate solution containing Ca, P and Si. These results indicate a range of past aqueous conditions occurring at Gale crater, Mars, with important implications for past martian climate and environments.
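    The mass-balance quantity τ used here has a compact closed form, the standard open-system mass-transport function stated for a mobile element j against an immobile reference element i; the sketch below evaluates it for illustrative concentrations (w = weathered/altered rock, p = parent; τ = -1 means total loss, 0 immobile, > 0 gain).

```python
# tau_j = (C_j,w / C_i,w) / (C_j,p / C_i,p) - 1
def tau(c_j_w, c_j_p, c_i_w, c_i_p):
    """Fraction of mobile element j gained (+) or lost (-) vs parent rock."""
    return (c_j_w / c_i_w) / (c_j_p / c_i_p) - 1.0

# e.g. FeO and SiO2 relative to TiO2 (assumed immobile); wt% are made up.
print("tau_FeO :", round(tau(c_j_w=6.0, c_j_p=10.0, c_i_w=1.2, c_i_p=1.0), 2))
print("tau_SiO2:", round(tau(c_j_w=48.0, c_j_p=45.0, c_i_w=1.2, c_i_p=1.0), 2))
```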

  1. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do...... these constrains influence the creative process and the outcome? In order to isolate the communication problem from the interface- and technology problem, we examine via a design game the creative communication on an open-ended task in a highly constrained setting, a design game. Via an experiment the relation...... between communicative constrains and participants’ perception of dialogue and creativity is examined. Four batches of students preparing for forming semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except...

  2. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  3. Approximate error conjugate gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
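    As a rough illustration of the idea (the record describes a patented method; the quadratic test problem, sampling scheme, and all names below are invented assumptions): the error and line search at each conjugate gradient step are computed from a sampled subset of rays rather than from all of them.

```python
# Sketch: conjugate-gradient minimization where the error and line search
# use only a random subset of "rays" (rows) per iteration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 50))                    # one row per ray
b = A @ rng.normal(size=50) + 0.01 * rng.normal(size=500)

def subset_grad(x, rows):
    """Gradient of 0.5*||A_S x - b_S||^2 over the sampled rays S."""
    As, bs = A[rows], b[rows]
    return As.T @ (As @ x - bs)

x = np.zeros(50)
d = g_old = None
for it in range(100):
    rows = rng.choice(len(b), size=100, replace=False)   # sampled rays
    g = subset_grad(x, rows)
    if d is None:
        d = -g
    else:
        # Polak-Ribiere conjugate direction from the approximate gradient.
        beta = max(0.0, g @ (g - g_old) / (g_old @ g_old))
        d = -g + beta * d
    # Exact line search along d for the quadratic error on the sampled rays.
    Ad = A[rows] @ d
    alpha = -(g @ d) / (Ad @ Ad + 1e-12)
    x = x + alpha * d
    g_old = g

print("full residual:", np.linalg.norm(A @ x - b))
```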

  4. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given

  5. Minimalism and Speakers’ Intuitions

    Directory of Open Access Journals (Sweden)

    Matías Gariazzo

    2011-08-01

    Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.

  6. Minimal modification to tribimaximal mixing

    International Nuclear Information System (INIS)

    He Xiaogang; Zee, A.

    2011-01-01

    We explore some ways of minimally modifying the neutrino mixing matrix from tribimaximal, characterized by introducing at most one mixing angle and a CP violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group theoretic considerations, is a simple case with the elements V_α2 of the second column in the mixing matrix equal to 1/√3. Modifications keeping one of the columns or one of the rows unchanged from tribimaximal mixing all belong to this class of minimal modification. Some of the cases have interesting experimentally testable consequences. In particular, the T2K and MINOS collaborations have recently reported indications of a nonzero θ_13. For the cases we consider, the new data sharply constrain the CP violating phase angle δ, with δ close to 0 (in some cases) and π disfavored.
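    For reference, the tribimaximal pattern being modified has the standard form below (sign conventions vary); the highlighted case keeps every element of the second column equal to 1/√3:

```latex
U_{\rm TBM} =
\begin{pmatrix}
 \sqrt{2/3}  & 1/\sqrt{3} & 0 \\
 -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
 -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix}
```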

  7. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q...... This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different......

  8. Fate of Escherichia coli O157:H7, Salmonella and Listeria innocua on minimally-processed peaches under different storage conditions.

    Science.gov (United States)

    Alegre, Isabel; Abadias, Maribel; Anguera, Marina; Usall, Josep; Viñas, Inmaculada

    2010-10-01

    Consumption of fresh-cut produce has sharply increased recently, causing an increase in foodborne illnesses associated with these products. As acidic fruits are generally considered 'safe' from a microbiological point of view, the aim of this work was to study the growth and survival of Escherichia coli O157:H7, Salmonella and Listeria innocua on minimally-processed peaches. The populations of all three foodborne pathogens increased by more than 2 log10 units on fresh-cut peach stored at 20 and 25 degrees C for 48 h. At 10 degrees C only L. innocua grew by more than 1 log10 unit, and it was the only pathogen able to grow at 5 degrees C. Growth differed between the peach varieties tested, with higher population increases in the varieties with higher pH ('Royal Glory' 4.73 ± 0.25 and 'Diana' 4.12 ± 0.18). Common strategies for extending the shelf life of fresh-cut produce, namely modified atmosphere packaging and the antioxidant ascorbic acid (2% w/v), did not affect the pathogens' growth at either of the temperatures tested (5 and 25 degrees C). Minimally-processed peaches have thus been shown to be a good substrate for foodborne pathogens' growth regardless of the use of modified atmosphere and ascorbic acid. Therefore, maintaining the cold chain and avoiding contamination is essential.

  9. Minimal mirror twin Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Barbieri, Riccardo [Institute of Theoretical Studies, ETH Zurich,CH-8092 Zurich (Switzerland); Scuola Normale Superiore,Piazza dei Cavalieri 7, 56126 Pisa (Italy); Hall, Lawrence J.; Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States)

    2016-11-29

    In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z{sub 2} parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z{sub 2} breaking, can generate the Z{sub 2} breaking in the Higgs sector necessary for the Twin Higgs mechanism. The theory has constrained and correlated signals in Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments, over a region of parameter space where the fine-tuning for the electroweak scale is 10-50%. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z{sub 2} breaking from the vacuum expectation values of B−L breaking fields are also discussed.

  10. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  11. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization, and discuss in particular its application to airborne electromagnetic data. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....
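    The minimum gradient support stabilizer named above has a compact form; the sketch below illustrates why it favors blocky models (the focusing parameter β, the toy 1D models, and the discrete-difference form are assumptions, not the authors' implementation):

```python
# Minimum-gradient-support (MGS) stabilizer: penalizes the *support* of
# the model gradient rather than its magnitude, so sharp layer boundaries
# are not smeared away.
import numpy as np

def mgs_penalty(m, beta=1e-3):
    """Sum over cells of dm^2 / (dm^2 + beta^2) for a 1D layered model m."""
    dm = np.diff(m)
    return np.sum(dm**2 / (dm**2 + beta**2))

blocky = np.array([10., 10., 10., 100., 100., 100.])   # one sharp boundary
smooth = np.linspace(10., 100., 6)                      # smeared boundary
# MGS assigns ~1 to the blocky model but ~5 to the smooth one,
# so minimization drives the inversion toward sharp boundaries.
print(mgs_penalty(blocky), mgs_penalty(smooth))
```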

  12. Early cosmology constrained

    Energy Technology Data Exchange (ETDEWEB)

    Verde, Licia; Jimenez, Raul [Institute of Cosmos Sciences, University of Barcelona, IEEC-UB, Martí Franquès, 1, E08028 Barcelona (Spain); Bellini, Emilio [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Pigozzo, Cassio [Instituto de Física, Universidade Federal da Bahia, Salvador, BA (Brazil); Heavens, Alan F., E-mail: liciaverde@icc.ub.edu, E-mail: emilio.bellini@physics.ox.ac.uk, E-mail: cpigozzo@ufba.br, E-mail: a.heavens@imperial.ac.uk, E-mail: raul.jimenez@icc.ub.edu [Imperial Centre for Inference and Cosmology (ICIC), Imperial College, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2017-04-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter Ω{sub MR} < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < N {sub eff} < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is r {sub s} = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to r {sub s} = 150 ± 5 Mpc.

  13. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  14. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  15. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  16. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Minimally invasive procedures aim to resolve the disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision based on the literature, exemplified with a case. The case in question describes reconstruction of intestinal transit with the use of this incision: a young, HIV-positive male patient in the late postoperative period of ileotiflectomy, terminal ileostomy and closure of the ascending colon, due to an acute perforated abdomen caused by ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. Access to the cavity was gained through the orifice resulting from the release of the stoma, with a latero-lateral ileocolonic anastomosis using a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction, and hemostasis, in addition to requiring surgeon dexterity, but without the need for investments in technology; moreover, the learning curve is reported as being lower than that for videolaparoscopy. Laparotomy with minimal incision should be considered a valid and viable option in the treatment of surgical conditions.

  17. Note on constrained cohomology

    International Nuclear Information System (INIS)

    Delduc, F.; Maggiore, N.; Piguet, O.; Wolf, S.

    1996-08-01

    The cohomology of the BRS operator corresponding to a group of rigid symmetries is studied in a space of local field functionals subjected to a condition of gauge invariance. We propose a procedure based on a filtration operator counting the degree in the infinitesimal parameters of the rigid symmetry transformations. An application to Witten's topological Yang-Mills theory is given. (author). 18 refs

  18. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...... complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices....

  19. Note on constrained cohomology

    Energy Technology Data Exchange (ETDEWEB)

    Delduc, F.; Maggiore, N.; Piguet, O.; Wolf, S.

    1996-08-01

    The cohomology of the BRS operator corresponding to a group of rigid symmetries is studied in a space of local field functionals subjected to a condition of gauge invariance. We propose a procedure based on a filtration operator counting the degree in the infinitesimal parameters of the rigid symmetry transformations. An application to Witten's topological Yang-Mills theory is given. (author). 18 refs.

  20. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that, based on a set of conditional probabilities, can help in choosing the large number of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the bit-stuffing encoding scheme that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  1. The establishment of an in vitro gene bank in Dianthus spiculifolius Schur and D. glacialis ssp. gelidus (Schott Nym. et Kotschy Tutin: I. The initiation of a tissue collection and the characterization of the cultures in minimal growth conditions

    Directory of Open Access Journals (Sweden)

    Mihaela Holobiuc

    2009-12-01

    In recent decades plants have had to cope with a warming climate; as a consequence, more than half of all plant species could become vulnerable or threatened by 2080. Romania has a high plant diversity, with endemic and endangered plant species, so biodiversity conservation measures are necessary. The integrated approach to biodiversity conservation involves both in situ and ex situ strategies. Among ex situ methods of conservation, besides the traditional ones (field and botanical collections and seed banks), in vitro tissue culture techniques offer a viable alternative. Germplasm collections can efficiently preserve species (of economic, scientific and conservation importance), at the same time being a source of plant material for international exchanges and for reintroduction into native habitats. The term 'in vitro gene banking' refers to in vitro tissue cultures from many accessions of a target species, and involves the collection of plant material from the field or from native habitats and the elaboration of sterilization, micropropagation and maintenance protocols. These collections have to be maintained in optimal conditions and characterized morphologically and genetically. The aim of our work was to characterize the response of the plant material to a minimal in vitro growth protocol for establishing medium-term cultures, as a prerequisite for an active gene bank in two rare Caryophyllaceae taxa: Dianthus spiculifolius and D. glacialis ssp. gelidus. Among the different factors previously tested for medium-term preservation in the Dianthus genus, mannitol proved to be the most efficient for achieving minimal growth cultures. The in vitro cultures were evaluated for growth, regenerability and enzyme activity (POX, SOD, CAT) as a response to the preservation conditions in the initial phase of establishing the in vitro collection. The two species considered in this study showed a...

  2. Constraining the dark side with observations

    International Nuclear Information System (INIS)

    Diez-Tejedor, Alberto

    2007-01-01

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We work within Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of one unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies we show that we are restricted to purely kinetic actions, previously studied in cosmology by Scherrer. Finally we arrive at a simple action which fits both cosmological and astrophysical observations

  3. Constraining the dark side with observations

    Energy Technology Data Exchange (ETDEWEB)

    Diez-Tejedor, Alberto [Dpto. de Fisica Teorica, Universidad del PaIs Vasco, Apdo. 644, 48080, Bilbao (Spain)

    2007-05-15

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We work within Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of one unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies we show that we are restricted to purely kinetic actions, previously studied in cosmology by Scherrer. Finally we arrive at a simple action which fits both cosmological and astrophysical observations.

  4. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    Hamel, D.; Mensah, S.; Boisvert, J.

    1984-03-01

    The concept of pole placement plays an important role in linear multivariable control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical engineering constraints, such as gain limitation and controller structure, to be introduced directly into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function, which is then minimized subject to these constraints. The resulting constrained optimization problem can thus be solved with standard optimization algorithms. The method has been implemented as an interactive algorithmic module in a computer-aided control system design package, MVPACK. The application of the method is illustrated by designing controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure on the overall performance of a control system
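    A small sketch of this formulation, assuming an invented second-order plant, target poles, and gain bounds (real packages such as MVPACK handle structure constraints and complex pole pairing more carefully):

```python
# Pole placement cast as constrained optimization: desired closed-loop
# poles enter through a cost function, gain limits through bounds on K.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0., 1.], [2., -1.]])
B = np.array([[0.], [1.]])
target = np.array([-2.0, -3.0], dtype=complex)   # desired closed-loop poles

def cost(k):
    K = k.reshape(1, 2)
    poles = np.linalg.eigvals(A - B @ K)
    # Compare sorted pole sets (illustrative matching of poles to targets).
    return np.sum(np.abs(np.sort_complex(poles) - np.sort_complex(target))**2)

res = minimize(cost, x0=np.zeros(2), bounds=[(-10, 10), (-10, 10)])
K = res.x.reshape(1, 2)
print("gain:", res.x, "closed-loop poles:", np.linalg.eigvals(A - B @ K))
```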

  5. Bulk diffusion in a kinetically constrained lattice gas

    Science.gov (United States)

    Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone

    2018-03-01

    In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.

  6. A Defense of Semantic Minimalism

    Science.gov (United States)

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  7. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  8. Multivariable controller for discrete stochastic amplitude-constrained systems

    Directory of Open Access Journals (Sweden)

    Hannu T. Toivonen

    1983-04-01

    A sub-optimal multivariable controller for discrete stochastic amplitude-constrained systems is presented. In this approach the regulator structure is restricted to the class of linear saturated feedback laws. The stationary covariances of the controlled system are evaluated by approximating the stationary probability distribution of the state by a Gaussian distribution. An algorithm for minimizing a quadratic loss function is given, and examples are presented to illustrate the performance of the sub-optimal controller.
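    A minimal sketch of the setting, assuming an invented stable plant, gain, and noise level: a linear saturated feedback law is simulated and the stationary covariance is estimated by Monte Carlo (the paper instead approximates it analytically via a Gaussian assumption):

```python
# Discrete linear plant with an amplitude-constrained (saturated) linear
# feedback law; stationary state covariance estimated by simulation.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.98, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.2, 0.8]])
u_max = 0.5                                   # amplitude constraint |u| <= u_max

x = np.zeros((2, 10000))
for k in range(1, x.shape[1]):
    u = np.clip(-K @ x[:, k-1], -u_max, u_max)   # saturated linear feedback
    w = 0.1 * rng.normal(size=2)                 # process noise
    x[:, k] = A @ x[:, k-1] + (B @ u).ravel() + w

print("stationary covariance estimate:\n", np.cov(x[:, 1000:]))
```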

  9. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level-custodial technicolor-and argue that these models...

  10. Minimal thermodynamic conditions in the reservoir to produce steam at the Cerro Prieto geothermal field, BC; Condiciones termodinamicas minimas del yacimiento para producir vapor en el campo geotermico de Cerro Prieto, B.C.

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Rodriguez, Marco Helio [Comision Federal de Electricidad, Gerencia de Proyectos Geotermoelectricos, Residencia General de Cerro Prieto, Mexicali, Baja California (Mexico)]. E-mail: marco.rodriguez01@cfe.gob.mx

    2009-01-15

    Minimal thermodynamic conditions in the Cerro Prieto geothermal reservoir for steam production are defined, taking into account the minimal acceptable steam production at the surface and considering a range of mixture enthalpies and well depths, which allows proper assessment of the impacts of changes in reservoir fluid pressure and enthalpy. Factors able to influence steam production are discussed; they have to be considered when deciding whether or not to drill or repair a well in a particular area of the reservoir. These evaluations become much more relevant in view of the huge thermodynamic changes that have occurred in the Cerro Prieto geothermal reservoir since its development started in 1973, which have led to the abandonment of some steam producing areas of the field.

  11. Constrained Sypersymmetric Flipped SU (5) GUT Phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John; /CERN /King's Coll. London; Mustafayev, Azar; /Minnesota U., Theor. Phys. Inst.; Olive, Keith A.; /Minnesota U., Theor. Phys. Inst. /Minnesota U. /Stanford U., Phys. Dept. /SLAC

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M{sub in}, above the GUT scale, M{sub GUT}. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino {chi} and the lighter stau {tilde {tau}}{sub 1} is sensitive to M{sub in}, as is the relationship between m{sub {chi}} and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m{sub 1/2}, m{sub 0}) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M{sub in}, as we illustrate for several cases with tan {beta} = 10 and 55. However, these features do not necessarily disappear at large M{sub in}, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  12. Constrained supersymmetric flipped SU(5) GUT phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [CERN, TH Division, PH Department, Geneva 23 (Switzerland); King's College London, Theoretical Physics and Cosmology Group, Department of Physics, London (United Kingdom); Mustafayev, Azar [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Olive, Keith A. [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Stanford University, Department of Physics and SLAC, Palo Alto, CA (United States)

    2011-07-15

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M{sub in}, above the GUT scale, M{sub GUT}. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino {chi} and the lighter stau {tau}{sub 1} is sensitive to M{sub in}, as is the relationship between m{sub {chi}} and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m{sub 1/2},m{sub 0}) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M{sub in}, as we illustrate for several cases with tan {beta}=10 and 55. However, these features do not necessarily disappear at large M{sub in}, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  13. Constrained supersymmetric flipped SU(5) GUT phenomenology

    International Nuclear Information System (INIS)

    Ellis, John; Mustafayev, Azar; Olive, Keith A.

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  14. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  15. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  16. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

    Office of the Secretary of Defense report (May 2011) documenting the DoD Hexavalent Chromium Minimization Strategy, including a logistics initiative on non-chrome primer alternatives; hexavalent chromium, Cr(VI), is identified as a cancer hazard.

  17. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical...... between unparticle physics and Minimal Walking Technicolor. We also consider other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass.

  18. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  19. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  20. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

    2011-01-01

    A Laguerre minimal surface is an immersed surface in ℝ³ being an extremal of the functional ∫ (H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed.

  1. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
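    As a concrete reference, here is a minimal sketch of the iteration analyzed: SGD steps with a fixed stepsize plus a heavy ball momentum term, on an invented consistent linear least-squares problem (stepsize and momentum values are assumptions):

```python
# Stochastic heavy ball: x_{k+1} = x_k - step * g_k + momentum * (x_k - x_{k-1}),
# where g_k is a stochastic gradient from one sampled row.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 20))
b = A @ rng.normal(size=20)          # consistent system: zero loss at the solution

x = np.zeros(20)
x_prev = x.copy()
step, momentum = 0.01, 0.9
for it in range(5000):
    i = rng.integers(len(b))                     # sample one row
    g = (A[i] @ x - b[i]) * A[i]                 # stochastic gradient
    x, x_prev = x - step * g + momentum * (x - x_prev), x

print("loss:", 0.5 * np.mean((A @ x - b) ** 2))
```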

  2. Construction schedules slack time minimizing

    Science.gov (United States)

    Krzemiński, Michał

    2017-07-01

    The article presents two original models for minimizing the downtime of work brigades. The models have been developed for construction schedules executed using the uniform work method. Application of flow shop models is possible and useful for the implementation of large objects which can be divided into plots. The article also presents a condition describing which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the value of the newly-developed models.

  3. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
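    A toy sketch of the core move, assuming a Gaussian log-likelihood and simple ballistic dynamics with reflection at the hard constraint boundary (stepsize, trajectory length, threshold, and the use of the interior-point gradient for the reflection normal are all simplifying assumptions):

```python
# Sample within the hard likelihood constraint logL(theta) > L_star by
# straight-line dynamics with momentum reflection at the boundary -- the
# kind of constrained move used inside nested sampling.
import numpy as np

rng = np.random.default_rng(0)
logL = lambda th: -0.5 * th @ th             # Gaussian log-likelihood
grad_logL = lambda th: -th
L_star = -2.0                                # current nested-sampling threshold

def reflect_step(th, n_steps=50, eps=0.1):
    p = rng.normal(size=th.size)             # fresh momentum
    for _ in range(n_steps):
        th_new = th + eps * p
        if logL(th_new) > L_star:            # still inside the constraint
            th = th_new
        else:                                # reflect momentum off the boundary
            n = grad_logL(th)
            n = n / np.linalg.norm(n)
            p = p - 2.0 * (p @ n) * n
    return th

th = np.zeros(2)                             # start inside the constraint
samples = []
for _ in range(500):
    th = reflect_step(th)
    samples.append(th.copy())
samples = np.array(samples)
print("fraction inside constraint:", np.mean([logL(s) > L_star for s in samples]))
```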

  4. Identification of different geologic units using fuzzy constrained resistivity tomography

    Science.gov (United States)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as components of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value from the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with those interpreted from borehole information.
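    The fuzzy c-means step at the heart of this approach has a simple update loop; a sketch on scalar "resistivity" values (cluster count, fuzziness exponent, and the synthetic two-unit data are assumptions):

```python
# Minimal fuzzy c-means (FCM): alternate membership and center updates,
# then hard-assign each cell to its highest-membership cluster.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(10, 1, 50), rng.normal(100, 5, 50)])  # two units
c, m = 2, 2.0                                    # clusters, fuzziness exponent
v = np.array([x.min(), x.max()], dtype=float)    # initial cluster centers

for _ in range(100):
    d = np.abs(x[:, None] - v[None, :]) + 1e-12           # distances to centers
    # u[n, i] = 1 / sum_j (d_ni / d_nj)^(2/(m-1))
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    v = (u**m).T @ x / np.sum(u**m, axis=0)               # center update

labels = np.argmax(u, axis=1)    # hard assignment by highest membership
print("centers:", v)
```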

  5. Minimizing Exposure at Work

    Science.gov (United States)

    Pennsylvania State University Cooperative Extension pesticide health and safety information: safe use practices for minimizing exposure at work, including personal protective equipment.

  6. Minimalism. Clip and Save.

    Science.gov (United States)

    Hubbard, Guy

    2002-01-01

    Provides background information on the art movement called "Minimalism" discussing why it started and its characteristics. Includes learning activities and information on the artist, Donald Judd. Includes a reproduction of one of his art works and discusses its content. (CMK)

  7. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ being an extremal of the functional ∫ (H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil.

  8. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

    In this paper, a sensitive constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain state estimation. In this approach, sensitive buses along with zero injection buses (ZIB) are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are identified from the mean of bus voltages as load is increased consistently by up to 50%, and are then ranked in order to place PMUs. Sensitive constrained optimal PMU allocation under single-line and no-line contingencies is considered in the observability analysis to ensure protection and control of the power system under abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMUs allocated in the network. This paper presents optimal allocation of PMUs at sensitive buses with zero injection modeling, considering cost criteria and redundancy, to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on IEEE 14, 30 and 57 bus systems, and the results obtained are compared with traditional and other state estimation methods available in the literature to demonstrate the effectiveness of the proposed method.
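    The observability core of such a formulation is a standard set-cover ILP; below is a sketch on an invented 7-bus topology, omitting the sensitivity ranking, ZIB modeling, and contingency constraints of the paper:

```python
# Minimize the number of PMUs subject to every bus being observed by a
# PMU at itself or at a neighboring bus: min 1'x s.t. A x >= 1, x in {0,1}.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 5)]  # toy network
n = 7
Aconn = np.eye(n)                      # bus connectivity (self + neighbors)
for i, j in edges:
    Aconn[i, j] = Aconn[j, i] = 1

res = milp(c=np.ones(n),                                            # count PMUs
           constraints=LinearConstraint(Aconn, lb=np.ones(n), ub=np.inf),
           integrality=np.ones(n),
           bounds=Bounds(0, 1))
print("PMU buses:", np.flatnonzero(res.x > 0.5))
```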

  9. Minimal and careful processing

    OpenAIRE

    Nielsen, Thorkild

    2004-01-01

    In several standards, guidelines and publications, organic food processing is strongly associated with "minimal processing" and "careful processing". The term "minimal processing" is nowadays often used in the general food processing industry and described in literature. The term "careful processing" is used more specifically within organic food processing but is not yet clearly defined. The concept of carefulness seems to fit very well with the processing of organic foods, especially if it i...

  10. Invariant set computation for constrained uncertain discrete-time systems

    NARCIS (Netherlands)

    Athanasopoulos, N.; Bitsoris, G.

    2010-01-01

    In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the...

  11. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    CERN Document Server

    Ellis, John; Olive, Keith A

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, $M_{in}$, above the GUT scale, $M_{GUT}$. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino and the lighter stau is sensitive to $M_{in}$, as is the relationship between the neutralino mass and the masses of the heavier Higgs bosons. For these reasons, prominent features in generic $(m_{1/2}, m_0)$ planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to $M_{in}$, as we illustrate for several cases with tan(beta)...

  12. Scheduling Aircraft Landings under Constrained Position Shifting

    Science.gov (United States)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.

  13. Should we still believe in constrained supersymmetry?

    International Nuclear Information System (INIS)

    Balazs, Csaba; Buckley, Andy; Carter, Daniel; Farmer, Benjamin; White, Martin

    2013-01-01

    We calculate partial Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the partial Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with M_0 and M_1/2 below 2 TeV), reducing its posterior odds by approximately two orders of magnitude. This reduction is largely due to substantial Occam factors induced by the LEP and LHC Higgs searches. (orig.)
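    Schematically, the partial Bayes factor used in this kind of sequential analysis is defined as follows (standard notation, not necessarily the paper's):

```latex
% Partial Bayes factor from new data D_new, given earlier data D_old,
% comparing models M_1 and M_2:
B_{12} \;=\; \frac{p(D_{\mathrm{new}} \mid D_{\mathrm{old}}, M_1)}
                  {p(D_{\mathrm{new}} \mid D_{\mathrm{old}}, M_2)},
\qquad
\frac{p(M_1 \mid D_{\mathrm{old}}, D_{\mathrm{new}})}
     {p(M_2 \mid D_{\mathrm{old}}, D_{\mathrm{new}})}
\;=\; B_{12}\,
\frac{p(M_1 \mid D_{\mathrm{old}})}{p(M_2 \mid D_{\mathrm{old}})}
% i.e., each new dataset multiplies the current model odds by B_12.
```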

  14. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches, and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  15. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set...... of customers. In the VRPTW customers must be serviced within a given time period - a so called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization...... of Jørnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time and capacity constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed......

  16. Quantum cosmology of classically constrained gravity

    International Nuclear Information System (INIS)

    Gabadadze, Gregory; Shang Yanwen

    2006-01-01

    In [G. Gabadadze, Y. Shang, hep-th/0506040] we discussed a classically constrained model of gravity. This theory contains known solutions of General Relativity (GR), and admits solutions that are absent in GR. Here we study cosmological implications of some of these new solutions. We show that a spatially-flat de Sitter universe can be created from 'nothing'. This universe has boundaries, and its total energy equals zero. Although the probability to create such a universe is exponentially suppressed, it favors initial conditions suitable for inflation. Then we discuss a finite-energy solution with a nonzero cosmological constant and zero space-time curvature. There is no tunneling suppression to fluctuate into this state. We show that for a positive cosmological constant this state is unstable: it can rapidly transition to a de Sitter universe, providing a new unsuppressed channel for inflation. For a negative cosmological constant the space-time flat solution is stable.

  17. A new approach to the inverse kinematics of a multi-joint robot manipulator using a minimization method

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-01-01

    This paper proposes a new approach to solving the inverse kinematics of a type of six-link manipulator. Directing our attention to the features of the manipulator's joint structures, the original problem is first formulated as a system of equations in four variables and solved by means of a minimization technique. The remaining two variables are determined from the constraint conditions involved. This is the basic idea of the present approach. Computer simulations of the present algorithm showed that the accuracy of the solutions and the convergence speed are much higher and quite satisfactory for practical purposes, compared with the linearization-iteration method based on the conventional inverse Jacobian matrix. (author)
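    A sketch of the general idea, with a planar 2-link arm standing in for the six-link manipulator (link lengths, target, and the unconstrained least-squares form are assumptions):

```python
# Inverse kinematics by minimization: find joint angles whose forward
# kinematics reach the target end-effector position.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8
target = np.array([1.2, 0.9])

def fk(theta):
    """Forward kinematics: end-effector position of the 2-link arm."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

res = minimize(lambda th: np.sum((fk(th) - target) ** 2), x0=np.zeros(2))
print("joint angles:", res.x,
      "position error:", np.linalg.norm(fk(res.x) - target))
```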

  18. Waste minimization assessment procedure

    International Nuclear Information System (INIS)

    Kellythorne, L.L.

    1993-01-01

    Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented, following a format similar to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used, and which were most valuable, in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Waste streams were then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal as the last viable alternative.

  19. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  20. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
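
    The computational point above is that the three systems share one coefficient matrix, so the matrix can be factorized once per iteration and the factors reused for all three right-hand sides. A minimal numerical illustration (toy data, assuming SciPy; this is not the SSLE algorithm itself):

```python
# Factorize once, solve several systems with the same matrix cheaply.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                          # shared coefficient matrix
rhs = [np.array([1.0, 2.0]), np.array([0.0, 1.0]), np.array([2.0, 0.0])]

lu, piv = lu_factor(A)                              # factorize once: O(n^3)
directions = [lu_solve((lu, piv), b) for b in rhs]  # three cheap O(n^2) solves
print(directions)
```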

  1. Minimal quantization and confinement

    International Nuclear Information System (INIS)

    Ilieva, N.P.; Kalinowskij, Yu.L.; Nguyen Suan Han; Pervushin, V.N.

    1987-01-01

    A ''minimal'' version of the Hamiltonian quantization based on the explicit solution of the Gauss equation and on the gauge-invariance principle is considered. By the example of the one-particle Green function we show that the requirement of gauge invariance leads to relativistic covariance of the theory and to a more proper definition of the Faddeev-Popov integral that does not depend on the gauge choice. The ''minimal'' quantization is applied to consider the gauge-ambiguity problem and a new topological mechanism of confinement

  2. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity...

  3. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  4. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
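
    The polynomial-time result (1) for regular-language constraints rests on the standard product construction: run a shortest path search over pairs of (graph vertex, automaton state), advancing the automaton on each edge label. A hedged sketch, with the graph, labels and DFA invented for illustration:

```python
# Dijkstra over the product of a labeled graph and a DFA, so only
# label sequences the automaton accepts are explored. Toy data.
import heapq

def regular_constrained_dijkstra(edges, dfa, accept, src, dst, q0):
    # edges: (u, v, label, weight); dfa: dict (state, label) -> state
    dist = {(src, q0): 0.0}
    heap = [(0.0, src, q0)]
    while heap:
        d, u, q = heapq.heappop(heap)
        if u == dst and q in accept:
            return d
        if d > dist.get((u, q), float("inf")):
            continue                       # stale heap entry
        for (a, b, lab, w) in edges:
            if a != u or (q, lab) not in dfa:
                continue
            nq, nd = dfa[(q, lab)], d + w
            if nd < dist.get((b, nq), float("inf")):
                dist[(b, nq)] = nd
                heapq.heappush(heap, (nd, b, nq))
    return None

# language: any number of "bus" legs, then only "walk" legs
dfa = {(0, "bus"): 0, (0, "walk"): 1, (1, "walk"): 1}
edges = [("s", "m", "bus", 2.0), ("m", "t", "walk", 1.0), ("s", "t", "walk", 5.0)]
print(regular_constrained_dijkstra(edges, dfa, {0, 1}, "s", "t", 0))
```

    The product graph has |V| · |Q| states, which is the source of the polynomial bound for regular languages; the hardness results above concern the stricter requirement that the constrained path also be simple.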

  5. Wronskian type solutions for the vector k-constrained KP hierarchy

    International Nuclear Information System (INIS)

    Zhang Youjin.

    1995-07-01

    Motivated by a relation of the 1-constrained Kadomtsev-Petviashvili (KP) hierarchy with the 2-component KP hierarchy, the tau functions of the vector k-constrained KP hierarchy are constructed by using an analogue of the Baker-Akhiezer (m + 1)-point function. These tau functions are expressed in terms of Wronskian type determinants. (author). 20 refs

  6. Minimal model holography

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R; Gopakumar, Rajesh

    2013-01-01

    We review the duality relating 2D W_N minimal model conformal field theories, in a large-N ’t Hooft like limit, to higher spin gravitational theories on AdS_3. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Higher spin theories and holography’. (review)

  7. Hazardous waste minimization

    International Nuclear Information System (INIS)

    Freeman, H.

    1990-01-01

    This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect

  8. Minimally invasive distal pancreatectomy

    NARCIS (Netherlands)

    Røsok, Bård I.; de Rooij, Thijs; van Hilst, Jony; Diener, Markus K.; Allen, Peter J.; Vollmer, Charles M.; Kooby, David A.; Shrikhande, Shailesh V.; Asbun, Horacio J.; Barkun, Jeffrey; Besselink, Marc G.; Boggi, Ugo; Conlon, Kevin; Han, Ho Seong; Hansen, Paul; Kendrick, Michael L.; Kooby, David; Montagnini, Andre L.; Palanivelu, Chinnasamy; Wakabayashi, Go; Zeh, Herbert J.

    2017-01-01

    The first International conference on Minimally Invasive Pancreas Resection was arranged in conjunction with the annual meeting of the International Hepato-Pancreato-Biliary Association (IHPBA), in Sao Paulo, Brazil on April 19th 2016. The presented evidence and outcomes resulting from the session

  9. Minimal DBM Substraction

    DEFF Research Database (Denmark)

    David, Alexandre; Håkansson, John; G. Larsen, Kim

    In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone, and thus requires splitting. It is of prime importance to reduce...

  10. [Minimally invasive coronary artery surgery].

    Science.gov (United States)

    Zalaquett, R; Howard, M; Irarrázaval, M J; Morán, S; Maturana, G; Becker, P; Medel, J; Sacco, C; Lema, G; Canessa, R; Cruz, F

    1999-01-01

    There is a growing interest to perform a left internal mammary artery (LIMA) graft to the left anterior descending coronary artery (LAD) on a beating heart through a minimally invasive access to the chest cavity. To report the experience with minimally invasive coronary artery surgery. Analysis of 11 patients aged 48 to 79 years with single vessel disease that, between 1996 and 1997, had a LIMA graft to the LAD performed through a minimally invasive left anterior mediastinotomy, without cardiopulmonary bypass. A 6 to 10 cm left parasternal incision was done. The LIMA to the LAD anastomosis was done after pharmacological heart rate and blood pressure control and a period of ischemic preconditioning. Graft patency was confirmed intraoperatively by standard Doppler techniques. Patients were followed for a mean of 11.6 months (7-15 months). All patients were extubated in the operating room and transferred out of the intensive care unit on the next morning. Seven patients were discharged on the third postoperative day. Duplex scanning confirmed graft patency in all patients before discharge; in two patients, it was confirmed additionally by arteriography. There was no hospital mortality, no perioperative myocardial infarction and no bleeding problems. After follow up, ten patients were free of angina, in functional class I and pleased with the surgical and cosmetic results. One patient developed atypical angina on the seventh postoperative month and a selective arteriography confirmed stenosis of the anastomosis. A successful angioplasty of the original LAD lesion was carried out. A minimally invasive left anterior mediastinotomy is a good surgical access to perform a successful LIMA to LAD graft without cardiopulmonary bypass, allowing a shorter hospital stay and earlier postoperative recovery. However, a larger experience and a longer follow up are required to define its role in the treatment of coronary artery disease.

  11. Power Absorption by Closely Spaced Point Absorbers in Constrained Conditions

    DEFF Research Database (Denmark)

    De Backer, G.; Vantorre, M.; Beels, C.

    2010-01-01

    The performance of an array of closely spaced point absorbers is numerically assessed in a frequency domain model. Each point absorber is restricted to the heave mode and is assumed to have its own linear power take-off (PTO) system. Unidirectional irregular incident waves are considered, representing the wave climate at Westhinder on the Belgian Continental Shelf. The impact of slamming, stroke and force restrictions on the power absorption is evaluated and optimal PTO parameters are determined. For multiple bodies optimal control parameters (CP) are not only dependent on the incoming waves...

  12. The cost-constrained traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
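
    The knapsack-style bounding idea can be sketched as follows: relax the sequencing part of the CCTSP and bound the achievable value by a 0/1 knapsack over (value, cost) items, where each item's cost optimistically under-estimates what including that city in a subtour must spend. The numbers below are invented, and this is a sketch of the general idea, not the dissertation's exact bound:

```python
# 0/1 knapsack dynamic program used as an optimistic (upper) bound on
# the best CCTSP subtour value. Toy data, illustrative only.
def knapsack_upper_bound(values, costs, budget):
    # classic O(n * budget) dynamic program; integer costs assumed
    best = [0] * (budget + 1)
    for v, c in zip(values, costs):
        for b in range(budget, c - 1, -1):
            best[b] = max(best[b], best[b - c] + v)
    return best[budget]

# city values, optimistic inclusion costs (e.g. half the sum of the
# two cheapest incident edges), and the tour cost budget
print(knapsack_upper_bound(values=[10, 7, 4], costs=[3, 2, 2], budget=4))
```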

  13. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; A. Ryttov, T.

    2007-01-01

    Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general such that the resulting model can be used to represent a generic walking technicolor...

  14. Wavelet library for constrained devices

    Science.gov (United States)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters, more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDA). These constraints can be a combination of: limited memory, slow floating point operations (compared to integer operations, most often as a result of no hardware support) and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDAs. We will demonstrate that HeatWave is suitable for real-time applications with fine control and range to suit transform demands. We shall present experimental results to substantiate these claims. Finally, since this library is intended to be of real use and applied, we considered several well known and common embedded operating system platform differences, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
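
    The kind of arithmetic that suits such constrained devices can be illustrated with a lifting-scheme integer Haar transform: only integer additions and shifts, no floating point, and exact reconstruction. This is a generic sketch, not code from the HeatWave library:

```python
# One-level integer Haar transform via lifting: integer adds and
# shifts only, with perfect reconstruction. Generic illustration.
def haar_forward(x):
    """One-level integer Haar transform of an even-length list."""
    detail = [x[2*i + 1] - x[2*i] for i in range(len(x) // 2)]        # predict
    approx = [x[2*i] + (detail[i] >> 1) for i in range(len(x) // 2)]  # update
    return approx, detail

def haar_inverse(approx, detail):
    x = []
    for a, d in zip(approx, detail):
        even = a - (d >> 1)          # undo update
        x.extend([even, even + d])   # undo predict
    return x

signal = [8, 10, 9, 5, 7, 7, 3, 1]
a, d = haar_forward(signal)
assert haar_inverse(a, d) == signal  # exact integer reconstruction
print(a, d)
```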

  15. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal liability. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

  16. KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR

    Directory of Open Access Journals (Sweden)

    John A. Mercer

    2005-06-01

    Full Text Available It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played a critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since the velocity ranges were not similar between conditions, resulting in the analysis of six subjects. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/m·s-1; SL2.5: -0.003 ± 0.14 BW/m·s-1; p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/m·s-1; SL2.5: 1.39 ± 1.51 %/m·s-1; p < 0.05). Stride length was an important factor

  17. Minimally allowed neutrinoless double beta decay rates within an anarchical framework

    International Nuclear Information System (INIS)

    Jenkins, James

    2009-01-01

    Neutrinoless double beta decay (ββ0ν) is the only realistic probe of the Majorana nature of the neutrino. In the standard picture, its rate is proportional to m_ee, the e-e element of the Majorana neutrino mass matrix in the flavor basis. I explore minimally allowed m_ee values within the framework of mass matrix anarchy, where neutrino parameters are defined statistically at low energies. Distributions of mixing angles are well defined by the Haar integration measure, but masses are dependent on arbitrary weighting functions and boundary conditions. I survey the integration measure parameter space and find that for sufficiently convergent weightings, m_ee is constrained between (0.01-0.4) eV at 90% confidence. Constraints from neutrino mixing data lower these bounds. Singular integration measures allow for arbitrarily small m_ee values with the remaining elements ill-defined, but this condition constrains the flavor structure of the model's ultraviolet completion. ββ0ν bounds below m_ee ∼ 5x10^-3 eV should indicate symmetry in the lepton sector, new light degrees of freedom, or the Dirac nature of the neutrino.

  18. The ZOOM minimization package

    International Nuclear Information System (INIS)

    Fischler, Mark S.; Sachs, D.

    2004-01-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete

  19. Minimizing the Pacman effect

    International Nuclear Information System (INIS)

    Ritson, D.; Chou, W.

    1997-10-01

    The Pacman bunches will experience two deleterious effects: tune shift and orbit displacement. It is known that the tune shift can be compensated by arranging crossing planes 90° relative to each other at successive interaction points (IPs). This paper gives an analytical estimate of the Pacman orbit displacement for a single as well as for two crossings. For the latter, it can be minimized by using equal phase advances from one IP to another. In the LHC, this displacement is in any event small and can be neglected

  20. Minimally Invasive Parathyroidectomy

    Directory of Open Access Journals (Sweden)

    Lee F. Starker

    2011-01-01

    Full Text Available Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.

  1. Abelian groups with a minimal generating set | Ruzicka ...

    African Journals Online (AJOL)

    We study the existence of minimal generating sets in Abelian groups. We prove that Abelian groups with minimal generating sets are not closed under quotients, nor under subgroups, nor under infinite products. We give necessary and sufficient conditions for the existence of a minimal generating set provided that the Abelian ...

  2. How market environment may constrain global franchising in emerging markets

    OpenAIRE

    Baena Graciá, Verónica

    2011-01-01

    Although emerging markets are some of the fastest growing economies in the world and represent countries that are experiencing a substantial economic transformation, little is known about the factors influencing country selection for expansion in those markets. In an attempt to enhance the knowledge that managers and scholars have on franchising expansion, the present study examines how market conditions may constrain international diffusion of franchising in emerging markets. They are: i) ge...

  3. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with the 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range ... based linear prediction for modeling and coding of speech.
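
    For context, the underlying 1-norm criterion is itself tractable: minimizing the 1-norm of the prediction residual can be cast as a linear program with slack variables. The sketch below (toy signal, assuming NumPy/SciPy) shows that reformulation only; the stability-enforcing constraints the paper proposes are not reproduced here:

```python
# 1-norm linear prediction as an LP: minimize sum(t) subject to
# -t <= y - X a <= t. Illustrative sketch, not the paper's method.
import numpy as np
from scipy.optimize import linprog

def lp_1norm_predictor(x, order):
    N = len(x)
    # regression matrix: row n holds x[n-1], ..., x[n-order]
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    m, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(m)])      # minimize sum of slacks
    A_ub = np.block([[X, -np.eye(m)], [-X, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (p + m))
    return res.x[:p]                                   # predictor coefficients

x = np.sin(0.3 * np.arange(64)) + 0.01 * np.random.randn(64)
print(lp_1norm_predictor(x, order=2))
```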

  4. Deformed statistics Kullback–Leibler divergence minimization within a scaled Bregman framework

    International Nuclear Information System (INIS)

    Venkatesan, R.C.; Plastino, A.

    2011-01-01

    The generalized Kullback–Leibler divergence (K–Ld) in Tsallis statistics [constrained by the additive duality of generalized statistics (dual generalized K–Ld)] is here reconciled with the theory of Bregman divergences for expectations defined by normal averages, within a measure-theoretic framework. Specifically, it is demonstrated that the dual generalized K–Ld is a scaled Bregman divergence. The Pythagorean theorem is derived from the minimum discrimination information principle using the dual generalized K–Ld as the measure of uncertainty, with constraints defined by normal averages. The minimization of the dual generalized K–Ld, with normal averages constraints, is shown to exhibit distinctly unique features. -- Highlights: ► Dual generalized Kullback–Leibler divergence (K–Ld) proven to be scaled Bregman divergence in continuous measure-theoretic framework. ► Minimum dual generalized K–Ld condition established with normal averages constraints. ► Pythagorean theorem derived.
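
    For reference, the Bregman machinery invoked above rests on the standard definitions below (stated in generic notation, which may differ from the paper's; f is strictly convex and differentiable and ν is a positive scaling function):

```latex
% Bregman divergence generated by f, and its scaled form with scaling
% function \nu (standard definitions, assumed notation).
\[
  B_f(x \,\|\, y) \;=\; f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle ,
  \qquad
  B_f^{(\nu)}(p \,\|\, q) \;=\; \int \nu \, B_f\!\Big( \tfrac{p}{\nu} \,\Big\|\, \tfrac{q}{\nu} \Big)\, d\mu .
\]
```

    The paper's claim is then that the dual generalized K-Ld coincides with such a scaled Bregman divergence for a particular generator f and scaling ν.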

  5. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  6. Minimal conformal model

    Energy Technology Data Exchange (ETDEWEB)

    Helmboldt, Alexander; Humbert, Pascal; Lindner, Manfred; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2016-07-01

    The gauge hierarchy problem is one of the crucial drawbacks of the standard model of particle physics (SM) and thus has triggered model building over the last decades. Its most famous solution is the introduction of low-scale supersymmetry. However, without any significant signs of supersymmetric particles at the LHC to date, it makes sense to devise alternative mechanisms to remedy the hierarchy problem. One such mechanism is based on classically scale-invariant extensions of the SM, in which both the electroweak symmetry and the (anomalous) scale symmetry are broken radiatively via the Coleman-Weinberg mechanism. Apart from giving an introduction to classically scale-invariant models, the talk presents our results on obtaining a theoretically consistent minimal extension of the SM, which reproduces the correct low-scale phenomenology.

  7. Minimal Reducts with Grasp

    Directory of Open Access Journals (Sweden)

    Iris Iddaly Mendez Gurrola

    2011-03-01

    Full Text Available The proper detection of a patient's level of dementia is important in order to offer suitable treatment. The diagnosis is based on certain criteria, reflected in the clinical examinations, from which the limitations and the degree of impairment of each patient emerge. In order to reduce the total number of limitations to be evaluated, we used rough set theory; this theory has been applied in areas of artificial intelligence such as decision analysis, expert systems, knowledge discovery, and classification with multiple attributes. In our case the theory is applied to find the minimal set of limitations, or reduct, that generates the same classification as considering all the limitations. To fulfill this purpose we developed a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm.
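
    A hedged sketch of the GRASP pattern described above: greedy randomized construction followed by local search. The attribute-quality function and coverage test below are invented stand-ins, not the paper's rough-set discernibility measures:

```python
# GRASP skeleton for finding a small reduct: randomized greedy
# construction from a restricted candidate list, then local search
# that drops redundant attributes. Toy quality/coverage functions.
import random

def grasp_reduct(attributes, covers_all, quality, iters=50, rcl_size=3):
    best = set(attributes)
    for _ in range(iters):
        chosen, remaining = set(), set(attributes)
        while not covers_all(chosen) and remaining:
            # restricted candidate list: top-k attributes by greedy score
            rcl = sorted(remaining, key=lambda a: -quality(chosen, a))[:rcl_size]
            pick = random.choice(rcl)        # randomized greedy step
            chosen.add(pick)
            remaining.remove(pick)
        for a in sorted(chosen):             # local search: drop redundancy
            if covers_all(chosen - {a}):
                chosen.remove(a)
        if covers_all(chosen) and len(chosen) < len(best):
            best = set(chosen)
    return best

# toy example: any reduct must contain "a" plus at least one of "b", "c"
attrs = ["a", "b", "c", "d"]
covers = lambda s: "a" in s and bool({"b", "c"} & s)
score = lambda s, x: 1.0 if x == "a" else 0.5
print(grasp_reduct(attrs, covers, score))
```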

  8. Minimally extended SILH

    International Nuclear Information System (INIS)

    Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

    2017-03-01

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  9. Minimally extended SILH

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica

    2017-03-15

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  10. Constraining Calcium Production in Novae

    Science.gov (United States)

    Tiwari, Pranjal; C. Fry, C. Wrede Team; A. Chen, J. Liang Collaboration; S. Bishop, T. Faestermann, D. Seiler Collaboration; R. Hertenberger, H. Wirth Collaboration

    2017-09-01

    Calcium is an element that can be produced by thermonuclear reactions in the hottest classical novae. There are discrepancies between the abundance of Calcium observed in novae and expectations based on astrophysical models. Unbound states 1 MeV above the proton threshold affect the production of Calcium in nova models because they act as resonances in the 38K(p,γ)39Ca reaction. This work describes an experiment to measure the energies of the excited states of 39Ca. We will bombard a thin target of 40Ca with a beam of 22 MeV deuterons, resulting in tritons and 39Ca. We will use the Q3D magnetic spectrograph at the MLL in Garching, Germany to momentum-analyze the tritons and observe the excitation energies of the resulting 39Ca states. Simulations have been run to determine the optimal spectrograph settings. We decided to use a chemically stable target composed of CaF2; doing so introduced an extra contaminant, Fluorine, which is dealt with by measuring the background from a LiF target. These simulations have led to settings and targets that will result in the observation of the 39Ca states of interest with minimal interference from contaminants. Preliminary results from this experiment will be presented. National Sciences and Engineering Research Council of Canada and U.S. National Science Foundation.

  11. Conservação de rúcula minimamente processada produzida em campo aberto e cultivo protegido com agrotêxtil Conservation of minimally processed rocket produced under open field conditions and non woven polypropylene

    Directory of Open Access Journals (Sweden)

    Angela F Gonzalez

    2006-09-01

    Full Text Available Leaves of rocket salad produced in the open field and under non-woven polypropylene cover were minimally processed, packed entire or chopped in expanded polystyrene trays, and covered with 14-micron PVC film. A completely randomized design in a 2x2x2 factorial scheme (cultivation environment, preparation form, and refrigeration at 0°C or 10°C) was adopted, with four replicates per treatment, totalling 32 trays. The treatments were stored at 0°C and 10°C for 10 days, after which weight loss (%), pH, soluble solids, titratable acidity, colour and appearance were evaluated. Storage at 0°C reduced the weight loss of the minimally processed rocket. The preparation form (entire or chopped leaves) was significant for soluble solids, with the highest values found for entire leaves. For chopped leaves, significantly higher acidity values were observed for leaves produced in the open field. Independent of the preparation form, rocket produced in the open field presented a lower pH value. The colour and appearance of the rocket were not influenced by the treatments.

  12. The effect of agency budgets on minimizing greenhouse gas emissions from road rehabilitation policies

    International Nuclear Information System (INIS)

    Reger, Darren; Madanat, Samer; Horvath, Arpad

    2015-01-01

    Transportation agencies are being urged to reduce their greenhouse gas (GHG) emissions. One possible solution within their scope is to alter their pavement management system to include environmental impacts. Managing pavement assets is important because poor road conditions lead to increased fuel consumption of vehicles. Rehabilitation activities improve pavement condition, but require materials and construction equipment, which produce GHG emissions as well. The agency’s role is to decide when to rehabilitate the road segments in the network. In previous work, we sought to minimize total societal costs (user and agency costs combined) subject to an emissions constraint for a road network, and demonstrated that there exists a range of potentially optimal solutions (a Pareto frontier) with tradeoffs between costs and GHG emissions. However, we did not account for the case where the available financial budget to the agency is binding. This letter considers an agency whose main goal is to reduce its carbon footprint while operating under a constrained financial budget. A Lagrangian dual solution methodology is applied, which selects the optimal timing and optimal action from a set of alternatives for each segment. This formulation quantifies GHG emission savings per additional dollar of agency budget spent, which can be used in a cap-and-trade system or to make budget decisions. We discuss the importance of communication between agencies and their legislature that sets the financial budgets to implement sustainable policies. We show that for a case study of Californian roads, it is optimal to apply frequent, thin overlays as opposed to the less frequent, thick overlays recommended in the literature if the objective is to minimize GHG emissions. A promising new technology, warm-mix asphalt, will have a negligible effect on reducing GHG emissions for road resurfacing under constrained budgets. (letter)
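
    The Lagrangian dual solution idea lends itself to a compact sketch: dualize the shared budget with a multiplier, let each road segment independently pick the action minimizing emissions plus priced cost, and adjust the multiplier until spending meets the budget. The emissions/cost numbers below are invented; this is a sketch of the technique, not the letter's calibrated model:

```python
# Lagrangian relaxation of a shared budget: per-segment action choice
# given a multiplier lam, with bisection on lam. Toy data only.
def best_actions(segments, lam):
    # segments: per segment, a list of (emissions, cost) alternatives
    picks = [min(alts, key=lambda ec: ec[0] + lam * ec[1]) for alts in segments]
    return picks, sum(cost for _, cost in picks)

def lagrangian_policy(segments, budget, lo=0.0, hi=100.0, iters=60):
    for _ in range(iters):                 # bisection on the multiplier
        lam = 0.5 * (lo + hi)
        _, spend = best_actions(segments, lam)
        if spend > budget:
            lo = lam                       # over budget: price it higher
        else:
            hi = lam
    return best_actions(segments, hi)[0]   # feasible picks at converged lam

segments = [[(9.0, 1.0), (4.0, 3.0)], [(7.0, 1.0), (2.0, 4.0)]]
print(lagrangian_policy(segments, budget=5.0))
```

    The converged multiplier plays the role of the "GHG emission savings per additional dollar of agency budget" quantity the letter refers to.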

  13. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and two of these are constrained and correlated.

  14. Constrained bidirectional propagation and stroke segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Mori, S; Gillespie, W; Suen, C Y

    1983-03-01

    A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.

  15. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  16. Client's Constraining Factors to Construction Project Management

    African Journals Online (AJOL)

    factors as a significant system that constrains project management success of public and ... finance for the project and prompt payment for work executed; clients .... consideration of the loading patterns of these variables, the major factor is ...

  17. On the origin of constrained superfields

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-05-06

    In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.

  18. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ0_1, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces an upper limit on m_χ0_1 ... but the scalar mass m_0 is poorly constrained. In the wino-LSP case, m_3/2 is constrained to about 900 TeV and m_χ0_1 to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m_3/2 has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m_χ0_1 is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2)_μ, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ0_1 contributes only a fraction of the cold DM density, future LHC E_T-based searches for gluinos, squarks and heavier chargino and neutralino states, as well as disappearing track searches in the wino-like LSP region, will be relevant, and interference effects enable BR(B_s,d → μ+μ-) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  19. Balancing computation and communication power in power constrained clusters

    Science.gov (United States)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    2018-05-29

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.

  20. Optimal Power Constrained Distributed Detection over a Noisy Multiaccess Channel

    Directory of Open Access Journals (Sweden)

    Zhiwen Hu

    2015-01-01

    Full Text Available The problem of optimal power constrained distributed detection over a noisy multiaccess channel (MAC) is addressed. Under local power constraints, we define a transformation function for each sensor to realize the mapping from local decision to transmitted waveform. Deflection coefficient maximization (DCM) is used to optimize the performance of the power constrained fusion system. Using optimality conditions, we derive the closed-form solution to the considered problem. Monte Carlo simulations are carried out to evaluate the performance of the proposed method. Simulation results show that the proposed method can significantly improve the detection performance of the fusion system at low signal-to-noise ratio (SNR). We also show that the proposed method has robust detection performance over a broad SNR region.
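
    For reference, the deflection coefficient maximized by DCM is the standard detection-theoretic figure of merit for a fusion statistic T (generic definition, assumed to match the paper's usage):

```latex
\[
  D(T) \;=\; \frac{\big( \mathbb{E}[T \mid H_1] - \mathbb{E}[T \mid H_0] \big)^2}{\operatorname{Var}(T \mid H_0)} ,
\]
```

    so maximizing D over the per-sensor transformation functions, subject to the local power constraints, selects the transmitted waveforms.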

  1. Constraining the Surface Energy Balance of Snow in Complex Terrain

    Science.gov (United States)

    Lapo, Karl E.

    Physically-based snow models form the basis of our understanding of current and future water and energy cycles, especially in mountainous terrain. These models are poorly constrained and widely diverge from each other, demonstrating a poor understanding of the surface energy balance. This research aims to improve our understanding of the surface energy balance in regions of complex terrain by improving our confidence in existing observations and our knowledge of remotely sensed irradiances (Chapter 1), critically analyzing the representation of boundary layer physics within land models (Chapter 2), and utilizing relatively novel observations in the diagnosis of model performance (Chapter 3). This research has improved the understanding of the literal and metaphorical boundary between the atmosphere and land surface. Solar irradiances are difficult to observe in regions of complex terrain, as observations are subject to harsh conditions not found in other environments. Quality control methods were developed to handle these unique conditions. These quality control methods facilitated an analysis of estimated solar irradiances over mountainous environments. Errors in the estimated solar irradiance are caused by misrepresenting the effect of clouds over regions of topography and regularly exceed the range of observational uncertainty (up to 80 W m-2) in all regions examined. Uncertainty in the solar irradiance estimates was especially pronounced when averaging over high-elevation basins, with monthly differences between estimates up to 80 W m-2. These findings can inform the selection of a method for estimating the solar irradiance and suggest several avenues of future research for improving existing methods. Further research probed the relationship between the land surface and atmosphere as it pertains to the stable boundary layers that commonly form over snow-covered surfaces. Stable conditions are difficult to represent, especially for low wind speed

  2. Minimal Marking: A Success Story

    Science.gov (United States)

    McNeilly, Anne

    2014-01-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

  3. Swarm robotics and minimalism

    Science.gov (United States)

    Sharkey, Amanda J. C.

    2007-09-01

    Swarm Robotics (SR) is closely related to Swarm Intelligence, and both were initially inspired by studies of social insects. Their guiding principles are based on their biological inspiration and take the form of an emphasis on decentralized local control and communication. Earlier studies went a step further in emphasizing the use of simple reactive robots that only communicate indirectly through the environment. More recently SR studies have moved beyond these constraints to explore the use of non-reactive robots that communicate directly, and that can learn and represent their environment. There is no clear agreement in the literature about how far such extensions of the original principles could go. Should there be any limitations on the individual abilities of the robots used in SR studies? Should knowledge of the capabilities of social insects lead to constraints on the capabilities of individual robots in SR studies? There is a lack of explicit discussion of such questions, and researchers have adopted a variety of constraints for a variety of reasons. A simple taxonomy of swarm robotics is presented here with the aim of addressing and clarifying these questions. The taxonomy distinguishes subareas of SR based on the emphases and justifications for minimalism and individual simplicity.

  4. Minimal dilaton model

    Directory of Open Access Journals (Sweden)

    Oda Kin-ya

    2013-05-01

    Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with the SM Higgs, itself heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

  5. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    Science.gov (United States)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility is caused by the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used, with tardiness minimization as the objective function. The algorithm has shown a 25.6% reduction of tardiness, equal to 43.5 hours.
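
    A generic sketch of the GA loop described above (selection, elitism, crossover, and mutation iterated toward a steady state). The chromosome decoding into a Gantt chart and the tardiness evaluation are problem-specific and replaced here by a toy fitness:

```python
# Skeleton GA loop with elitism, truncation selection, crossover and
# mutation. Generic illustration, not the paper's encoding.
import random

def genetic_algorithm(init_pop, fitness, crossover, mutate,
                      elite=2, generations=200, p_mut=0.1):
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=fitness)                 # lower "tardiness" is better
        nxt = pop[:elite]                     # elitism
        while len(nxt) < len(pop):
            a, b = random.sample(pop[:len(pop) // 2], 2)  # truncation selection
            child = crossover(a, b)
            if random.random() < p_mut:
                child = mutate(child)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# toy demo: drive bit strings toward all-ones ("tardiness" = count of zeros)
pop0 = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
best = genetic_algorithm(pop0, fitness=lambda ch: ch.count(0),
                         crossover=lambda a, b: a[:5] + b[5:],
                         mutate=lambda ch: [1 - g if random.random() < 0.2
                                            else g for g in ch])
print(best)
```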

  6. Resource Constrained Planning of Multiple Projects with Separable Activities

    Science.gov (United States)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource constrained planning problem of multiple projects with separable activities. This problem provides a plan to process the activities considering resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. Especially as the number of planning projects increases, the average computational time and the number of searched nodes are reduced.

  7. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells.

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-08-09

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs.

  8. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-01-01

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs. PMID:28773795

  9. Granular flows in constrained geometries

    Science.gov (United States)

    Murthy, Tejas; Viswanathan, Koushik

    Confined geometries are widespread in granular processing applications. The deformation and flow fields in such a geometry, with non-trivial boundary conditions, determine the resultant mechanical properties of the material (local porosity, density, residual stresses etc.). We present experimental studies of deformation and plastic flow of a prototypical granular medium in different nontrivial geometries: flat-punch compression, Couette shear flow, and a rigid body sliding past a granular half-space. These geometries represent simplified scaled-down versions of common industrial configurations such as compaction and dredging. The corresponding granular flows show a rich variety of flow features, representing the entire gamut of material types, from elastic solids (beam buckling) to fluids (vortex formation, boundary layers) and even plastically deforming metals (dead material zone, pile-up). The effect of changing particle-level properties (e.g., shape, size, density) on the observed flows is also explicitly demonstrated. Non-smooth contact dynamics particle simulations are shown to reproduce some of the observed flow features quantitatively. These results showcase some central challenges facing continuum-scale constitutive theories for dynamic granular flows.

  10. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  11. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
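
    The two records above describe MSM as a generalization of Gauss-Seidel/SOR splittings. For orientation only, here is a minimal sketch (ours, not the authors' MSM) of a classical projected Gauss-Seidel splitting applied to a bound-constrained quadratic, the simplest member of the problem class the abstract mentions; all names and parameter values are illustrative.

```python
import numpy as np

def projected_gauss_seidel(A, b, lo=0.0, iters=200):
    """Minimize 0.5*x'Ax - b'x subject to x >= lo using the Gauss-Seidel
    splitting of A: sweep the coordinates, solve each one-dimensional
    subproblem exactly, and project the update onto the bound."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            r = b[i] - A[i] @ x + A[i, i] * x[i]   # residual excluding x_i
            x[i] = max(lo, r / A[i, i])            # 1-D minimizer, projected
    return x

# Toy problem with a symmetric positive definite A, which guarantees convergence.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A, b = M @ M.T + 5 * np.eye(5), rng.standard_normal(5)
print(projected_gauss_seidel(A, b))
```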

  12. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
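
    As background for the abstract above, a minimal sketch of the standard LS-SVM it modifies: training reduces to one linear KKT system. The RBF kernel, data, and hyperparameters are illustrative assumptions; the paper's robust objective (penalizing both the mean and the variance of the error) is not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Standard LS-SVM regression: solve the KKT system
    [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel."""
    n = len(y)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]   # bias b and dual weights alpha

# Toy usage on a noisy sinc function.
X = np.linspace(-3, 3, 50)[:, None]
y = np.sinc(X[:, 0]) + 0.05 * np.random.default_rng(1).standard_normal(50)
b, alpha = lssvm_fit(X, y)
```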

  13. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.

  14. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  15. A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2016-08-01

    Full Text Available Increasing attention is being paid to leaf area index (LAI) retrieval in mountainous areas. Mountainous areas present extreme topographic variability, and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS) in support of LAI validation was presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH) sampling strategy. A case study in Hailuogou, Sichuan province, China was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI), land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent, while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, and this makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than the LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
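
    A minimal sketch of the idea (ours, not the paper's implementation): anneal a k-site sample toward Latin-hypercube-style coverage of a single auxiliary variable while penalizing access cost; the weight lam plays the role of the representativeness/cost compromise described in the abstract.

```python
import numpy as np

def css_select(aux, cost, k, lam=0.5, iters=5000, seed=0):
    """Pick k field sites trading representativeness against access cost.
    aux:  auxiliary variable per candidate site (e.g. NDVI);
    cost: per-site implementation cost (e.g. derived from slope/remoteness)."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(aux, np.linspace(0, 1, k + 1))   # k equal-mass strata

    def objective(idx):
        counts, _ = np.histogram(aux[idx], bins=edges)
        coverage = np.abs(counts - 1).sum()   # ideal: one site per stratum
        return coverage + lam * cost[idx].sum()

    idx = rng.choice(len(aux), size=k, replace=False)
    f = objective(idx)
    best, best_f = idx.copy(), f
    for t in range(iters):
        cand = idx.copy()                     # swap one site at random
        cand[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(len(aux)), cand))
        fc = objective(cand)
        temp = max(1e-9, 1.0 - t / iters)     # simple linear cooling
        if fc < f or rng.random() < np.exp(-(fc - f) / temp):
            idx, f = cand, fc
            if f < best_f:
                best, best_f = idx.copy(), f
    return best
```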

  16. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    Science.gov (United States)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

    Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real time. The hydraulic model is LISFLOOD-FP, which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.

  17. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  18. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    In this paper, we investigate the properties of immersed minimal surfaces inside the symmetric space associated to a sublocus of the Hitchin component: the $q_n$ and $q_{n-1}$ cases. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal...... class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

  19. Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization

    International Nuclear Information System (INIS)

    Zhang Xiaomeng; Wang Jing; Xing Lei

    2011-01-01

    Purpose: The streak artifacts caused by metal implants have long been recognized as a problem that limits various applications of CT imaging. In this work, the authors propose an iterative metal artifact reduction algorithm based on constrained optimization. Methods: After the shape and location of metal objects in the image domain is determined automatically by the binary metal identification algorithm and the segmentation of "metal shadows" in the projection domain is done, constrained optimization is used for image reconstruction. It minimizes a predefined function that reflects a priori knowledge of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available metal-shadow-excluded projection data, with image non-negativity enforced. The minimization problem is solved through the alternation of projection-onto-convex-sets and the steepest gradient descent of the objective function. The constrained optimization algorithm is evaluated with a penalized smoothness objective. Results: The study shows that the proposed method is capable of significantly reducing metal artifacts, suppressing noise, and improving soft-tissue visibility. It outperforms FBP-type, ART, and EM methods and yields artifact-free images. Conclusions: Constrained optimization is an effective way to deal with CT reconstruction with embedded metal objects. Although the method is presented in the context of metal artifacts, it is applicable to general "missing data" image reconstruction problems.
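
    A toy 1-D analogue of the method's structure (our simplification, not the authors' implementation): alternate a steepest-descent step on a smoothness objective with projections onto the convex sets defined by the shadow-excluded projection data and by non-negativity.

```python
import numpy as np

def pocs_recon(A, p, keep, beta=0.1, iters=100):
    """Toy 1-D reconstruction in the spirit of the abstract: descend on a
    smoothness objective, then project onto the convex sets given by the
    metal-shadow-excluded data (rows where keep is True) and x >= 0."""
    x = np.zeros(A.shape[1])
    rows = np.flatnonzero(keep)
    for _ in range(iters):
        d = x[1:] - x[:-1]                  # steepest descent on sum(d^2)
        g = np.zeros_like(x)
        g[1:] += d
        g[:-1] -= d
        x -= beta * g
        for i in rows:                      # Kaczmarz/ART data-consistency step
            a = A[i]
            x += a * (p[i] - a @ x) / (a @ a)
        np.maximum(x, 0.0, out=x)           # non-negativity projection
    return x
```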

  20. Studies on the radiation chemistry of biomolecules in aqueous solution with specific objective of minimizing their radiolytic degradation. Coordinated programme for Asia and the Pacific Region on radiation sterilization practices significant to local medical supplies and conditions

    International Nuclear Information System (INIS)

    Narayana Rao, K.

    1979-01-01

    As part of a study of radiolytic degradation of pharmaceuticals during radiosterilization, the basic radiation chemistry of the B-group vitamins, nicotinamide, pyridoxin, riboflavin and thiamine, and the reaction of hydrogen peroxide with these same materials has been investigated. The various aspects studied were: radiolysis under controlled conditions, and the effects of phase, temperature, pH, and the nature and concentration of additives. Some of the conclusions are: 1) with oxygen-saturated aqueous solutions containing glucose, the radiolytic degradation of the vitamins is reduced; 2) results are similar for N2O-saturated aqueous solutions; 3) in glucose-containing solutions, the protective effect is considerably modified at higher temperatures; and 4) irradiation of air-saturated aqueous solutions in the frozen state leads to reduced decomposition. It is concluded that in the presence of oxygen, in frozen matrices at low temperature, it appears possible to reduce the radiolytic breakdown of vitamins to low levels

  1. Dimensionally constrained energy confinement analysis of W7-AS data

    International Nuclear Information System (INIS)

    Dose, V.; Preuss, R.; Linden, W. von der

    1998-01-01

    A recently assembled W7-AS stellarator database has been subject to dimensionally constrained confinement analysis. The analysis employs Bayesian inference. Dimensional information is taken from the Connor-Taylor (CT) similarity transformation theory, which provides six possible physical scenarios with associated dimensional conditions. Bayesian theory allows the calculation of the probability of each model, and it is found that the present W7-AS data are most probably described by the collisionless high-β case. Probabilities for all models and the associated exponents of a power law scaling function are presented. (author)
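
    The record computes genuine Bayesian model probabilities; as a rough stand-in, the sketch below fits a log-linear power-law scaling under a Connor-Taylor-style linear constraint on the exponents (via the KKT system of equality-constrained least squares) and compares models by BIC. The constraint matrices and the BIC surrogate are our illustrative assumptions, not the paper's evidence calculation.

```python
import numpy as np

def constrained_powerlaw_fit(X, y, C, d):
    """Least-squares fit of a log-linear scaling y = X b subject to the
    dimensional constraint C b = d, solved via the KKT system
    [[X'X, C'], [C, 0]] [b; lambda] = [X'y; d].  Models can then be
    compared by BIC as a crude surrogate for Bayesian model probability."""
    n, p = X.shape
    m = C.shape[0]
    K = np.block([[X.T @ X, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([X.T @ y, d])
    beta = np.linalg.solve(K, rhs)[:p]
    rss = np.sum((y - X @ beta) ** 2)
    bic = n * np.log(rss / n) + (p - m) * np.log(n)   # p - m free exponents
    return beta, bic
```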

  2. On gauge fixing and quantization of constrained Hamiltonian systems

    International Nuclear Information System (INIS)

    Dayi, O.F.

    1989-06-01

    In constrained Hamiltonian systems which possess first class constraints some subsidiary conditions should be imposed for detecting physical observables. This issue and quantization of the system are clarified. It is argued that the reduced phase space and Dirac method of quantization, generally, differ only in the definition of the Hilbert space one should use. For the dynamical systems possessing second class constraints the definition of physical Hilbert space in the BFV-BRST operator quantization method is different from the usual definition. (author). 18 refs

  3. Robust stability in constrained predictive control through the Youla parameterisations

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2011-01-01

    In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in face of norm-bounded uncertainties. Under special conditions guarantees are also given...... for hard input constraints. In more detail, we parameterise the MPC predictions in terms of the primary Youla parameter and use this parameter as the on-line optimisation variable. The uncertainty is parameterised in terms of the dual Youla parameter. Stability can then be guaranteed through small gain...

  4. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
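
    For orientation, the problem class in question, solved naively (a sketch, not the authors' algorithm): many nonnegative least-squares problems sharing one design matrix. The combinatorial algorithm gains its speed by grouping right-hand sides that end up with the same active set, so each normal-equation factorization is reused across a whole group; the loop below does no such grouping.

```python
import numpy as np
from scipy.optimize import nnls

def batch_nnls(A, B):
    """Baseline: solve min ||A x - b||_2 with x >= 0 for every column b of B."""
    return np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

rng = np.random.default_rng(0)
A = rng.random((20, 5))
B = A @ rng.random((5, 8)) + 0.01 * rng.random((20, 8))
X = batch_nnls(A, B)   # 5 x 8 nonnegative coefficient matrix
```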

  5. On Tree-Constrained Matchings and Generalizations

    NARCIS (Netherlands)

    S. Canzar (Stefan); K. Elbassioni; G.W. Klau (Gunnar); J. Mestre

    2011-01-01

    We consider the following \textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum weight matching $\mathcal{M}$ between nodes of the two trees, such that

  6. Constrained systems described by Nambu mechanics

    International Nuclear Information System (INIS)

    Lassig, C.C.; Joshi, G.C.

    1996-01-01

    Using the framework of Nambu's generalised mechanics, we obtain a new description of constrained Hamiltonian dynamics, involving the introduction of another degree of freedom in phase space, and the necessity of defining the action integral on a world sheet. We also discuss the problem of quantizing Nambu mechanics. (authors). 5 refs

  7. Client's constraining factors to construction project management ...

    African Journals Online (AJOL)

    This study analyzed client-related factors that constrain project management success of public and private sector construction in Nigeria. Issues that concern clients in any project cannot be ignored, as they are the owners and the initiators of project proposals. It is assumed that success, failure or abandonment of ...

  8. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations

  9. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...

  10. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  11. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
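
    The two records describe a shadow-test style model, in which a full feasible test is first assembled at every item-selection step. As a much simpler point of contrast, a greedy sketch (ours; the 2PL information function and all inputs are illustrative) that maximizes information subject to per-content-area caps:

```python
import numpy as np

def select_item(theta, a, b, content, counts, max_per_content, used):
    """Greedy stand-in for constrained adaptive item selection: pick the
    unused item with maximum Fisher information at the current ability
    estimate theta, subject to per-content-area caps.  a, b: 2PL item
    discrimination/difficulty arrays; content: per-item content labels;
    counts: dict of items administered per content area; used: bool mask."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probability
    info = a**2 * p * (1.0 - p)                  # Fisher information at theta
    blocked = used | np.array([counts.get(c, 0) >= max_per_content for c in content])
    info[blocked] = -np.inf
    return int(np.argmax(info))
```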

  12. Neutron Powder Diffraction and Constrained Refinement

    DEFF Research Database (Denmark)

    Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.

    1977-01-01

    The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement is...

  13. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disk around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams reunite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
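
    For reference, the textbook flat-spacetime value that the paper's expansion corrects, for a source/observer on a circular path of radius $R$ rotating at angular velocity $\Omega$ (the Ricci-, mass- and spin-dependent corrections derived in the paper are not reproduced here):

```latex
\Delta t_{\text{flat}}
  = \frac{4\pi R^{2}\Omega}{c^{2}}
    \left(1-\frac{R^{2}\Omega^{2}}{c^{2}}\right)^{-1}
  \approx \frac{4A\,\Omega}{c^{2}},
\qquad A=\pi R^{2}.
```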

  14. Chance constrained uncertain classification via robust optimization

    NARCIS (Netherlands)

    Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.

    2011-01-01

    This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out
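
    The record is truncated, but the standard reformulation in this line of work replaces each chance constraint by a second-order cone constraint via a multivariate Chebyshev bound, with every uncertain point summarized by its mean $\mu_i$ and covariance $\Sigma_i$:

```latex
\Pr\!\left[\,y_i\,(w^{\top}x_i+b)\ge 1\,\right]\ge 1-\epsilon
\quad\Longleftarrow\quad
y_i\,(w^{\top}\mu_i+b)\;\ge\;1+\kappa\sqrt{w^{\top}\Sigma_i\,w},
\qquad \kappa=\sqrt{\frac{1-\epsilon}{\epsilon}}.
```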

  15. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...

  16. Neuroevolutionary Constrained Optimization for Content Creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution...... and survival tasks and are also visually appealing....

  17. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    J. Astrophys. Astr. (2000) 21, 299-302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam* & V. Krishan†, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. *e-mail: mangalam@iiap.ernet.in; †e-mail: vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...

  18. Guidelines for mixed waste minimization

    International Nuclear Information System (INIS)

    Owens, C.

    1992-02-01

    Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment for commercial mixed waste is limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of these management limitations. This document provides a guide to mixed waste minimization.

  19. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Full Text Available Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have focused merely on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
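
    For concreteness, the distribution-based quantities that the abstract contrasts with anchor-based MICs are commonly computed as follows (standard formulas; $r$ denotes a reliability coefficient such as the ICC, and SD the baseline standard deviation):

```latex
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1-r},
\qquad
\mathrm{MDC}_{95} = 1.96\cdot\sqrt{2}\cdot\mathrm{SEM}.
```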

  20. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi

    2011-09-16

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
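
    As a point of reference for the a posteriori choice rules mentioned, a minimal linear sketch of the discrepancy principle (ours; the paper treats the much harder nonlinear, constrained setting):

```python
import numpy as np

def tikhonov_discrepancy(A, y, delta, tau=1.1):
    """Linear Tikhonov regularization with Morozov's discrepancy principle:
    take the largest alpha whose residual ||Ax - y|| falls below tau*delta,
    where delta is the (assumed known) noise level."""
    n = A.shape[1]
    x = None
    for alpha in np.logspace(2, -8, 60):          # strong -> weak smoothing
        x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
        if np.linalg.norm(A @ x - y) <= tau * delta:
            return x, alpha                       # first (largest) admissible alpha
    return x, alpha                               # fallback: weakest smoothing
```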

  1. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996

  2. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  3. Minimal Webs in Riemannian Manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2008-01-01

    For a given combinatorial graph $G$ a {\it geometrization} $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1$-dimensional manifold with an associated metric $g$. In this paper we are concerned with {\it minimal isometric immersions} of geometrized graphs $(G, g......)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\em minimal webs}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which...... are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...

  4. Fundamental relativistic rotator: Hessian singularity and the issue of the minimal interaction with electromagnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Bratek, Lukasz, E-mail: lukasz.bratek@ifj.edu.pl [Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Radzikowskego 152, PL-31342 Krakow (Poland)

    2011-05-13

    There are two relativistic rotators with Casimir invariants of the Poincare group being fixed parameters. The particular models of spinning particles were studied in the past both at the classical and quantum level. Recently, a minimal interaction with electromagnetic field has been considered. We show that the dynamical systems can be uniquely singled out from among other relativistic rotators by the unphysical requirement that the Hessian referring to the physical degrees of freedom should be singular. Closely related is the fact that the equations of free motion are not independent, making the evolution indeterminate. We show that the Hessian singularity cannot be removed by the minimal interaction with the electromagnetic field. By making use of a nontrivial Hessian null space, we show that a single constraint appears in the external field for consistency of the equations of motion with the Hessian singularity. The constraint imposes unphysical limitation on the initial conditions and admissible motions. We discuss the mechanism of appearance of unique solutions in external fields on an example of motion in the uniform magnetic field. We give a simple model to illustrate that similarly constrained evolution cannot be determinate in arbitrary fields.

  5. Security-Constrained Unit Commitment in AC Microgrids Considering Stochastic Price-Based Demand Response and Renewable Generation

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad

    2018-01-01

    In this paper, a stochastic model for scheduling of AC security-constrained unit commitment associated with demand response (DR) actions is developed in an islanded residential microgrid. The proposed model maximizes the expected profit of the microgrid operator and minimizes the total customers...

  6. Procedures minimally invasive image-guided

    International Nuclear Information System (INIS)

    Mora Guevara, Alejandro

    2011-01-01

    A literature review focused on minimally invasive procedures has been performed at the Department of Radiology at the Hospital Calderon Guardia. A multidisciplinary team has been formed for decision making. The materials, possible complications, and the available imaging techniques (ultrasound, computed tomography, magnetic resonance imaging) have been determined according to the procedure to be performed. The review has supported medical interventions didactically, drawing on the best materials, resources and conditions for the successful implementation of procedures and good results [es]

  7. Theories of minimalism in architecture: When prologue becomes palimpsest

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2014-01-01

    Full Text Available This paper examines the modus and conditions of constituting and establishing architectural discourse on minimalism. Among the key topics in this discourse are the historical line of development and the analysis of theoretical influences, which comprise connections of recent minimalism with the theorizations of various minimal, architectural and artistic, forms and concepts from the past. The paper shall particularly discuss those theoretical relations which, in a unitary way, link minimalism in architecture with its artistic nominal counterpart - minimal art. These are the relations founded on the interpretative models of self-referentiality, phenomenological experience and contextualism, which are, superficially observed, common to both the artistic and the architectural minimalist discourses. It seems that in this constellation certain relations on the historical line of minimalism in architecture are questionable, while some others are overlooked. Precisely, postmodern fundamentalism is the architectural direction: (1) in which these three interpretations also existed; (2) from which architectural theorists retroactively appropriated many architects, proclaiming them minimalists; (3) which established identical relations with modern and postmodern theoretical and socio-historical contexts, as would later be done in minimalism. In spite of this, the theoretical field of postmodern fundamentalism is surprisingly neglected in the discourse of minimalism in architecture. Instead of understanding postmodern fundamentalism as a kind of prologue to minimalism in architecture, it becomes an erased palimpsest over which a different history of minimalism is rewritten, a history in which minimal art occupies a central place.

  8. Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity in this new literary trend known as Minimalism.

  9. Minimal solution of general dual fuzzy linear systems

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Otadi, M.; Mosleh, M.

    2008-01-01

    Fuzzy linear systems of equations play a major role in several applications in various areas such as engineering, physics and economics. In this paper, we investigate the existence of a minimal solution of general dual fuzzy linear equation systems. Two necessary and sufficient conditions for the existence of a minimal solution are given. Also, some examples in engineering and economics are considered

  10. Minimal Flavour Violation and Beyond

    CERN Document Server

    Isidori, Gino

    2012-01-01

    We review the formulation of the Minimal Flavour Violation (MFV) hypothesis in the quark sector, as well as some "variations on a theme" based on smaller flavour symmetry groups and/or less minimal breaking terms. We also review how these hypotheses can be tested in B decays and by means of other flavour-physics observables. The phenomenological consequences of MFV are discussed both in general terms, employing a general effective theory approach, and in the specific context of the Minimal Supersymmetric extension of the SM.

  11. Minimizing waste in environmental restoration

    International Nuclear Information System (INIS)

    Thuot, J.R.; Moos, L.

    1996-01-01

    Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized; however, there are significant areas where waste and cost can be reduced by careful planning and execution. Waste reduction can occur in three ways: beneficial reuse or recycling, segregation of waste types, and reducing generation of secondary waste

  12. Minimizing waste in environmental restoration

    International Nuclear Information System (INIS)

    Moos, L.; Thuot, J.R.

    1996-01-01

    Environmental restoration, decontamination and decommissioning and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized. In fact, however, there are three significant areas where waste and cost can be reduced. Waste reduction can occur in three ways: beneficial reuse or recycling; segregation of waste types; and reducing generation of secondary waste. This paper will discuss several examples of reuse, recycle, segregation, and secondary waste reduction at ANL restoration programs

  13. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
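
    Following the abstract's observation that binary maximum-noise-entropy models with second-order constraints are logistic in linear and quadratic stimulus terms, a minimal sketch on synthetic data (our stand-in, not the retinal/thalamic analysis):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression

# Fit P(spike|s) = sigma(a + b.s + s'Cs) by logistic regression on linear
# and quadratic stimulus features; stimuli and responses are synthetic.
rng = np.random.default_rng(0)
S = rng.standard_normal((5000, 4))                     # stimuli
true_p = 1.0 / (1.0 + np.exp(-(S[:, 0] + S[:, 1]**2 - 1)))
spikes = rng.random(5000) < true_p                     # binary responses

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LogisticRegression(max_iter=1000).fit(quad.fit_transform(S), spikes)
```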

  14. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the alpha-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.
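
    The paper minimizes nonconvex speckle log-likelihoods with graph cuts; as a far simpler continuous analogue of the TV-regularized MRF energy it targets, a gradient-descent sketch (ours, with illustrative parameters):

```python
import numpy as np

def tv_denoise(y, lam=0.15, eps=1e-3, step=0.2, iters=300):
    """Gradient descent on the smoothed total-variation energy
    E(x) = sum (x - y)^2 + lam * sum sqrt(|grad x|^2 + eps)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x, axis=0, append=x[-1:])        # forward differences
        dy = np.diff(x, axis=1, append=x[:, -1:])     # (Neumann boundary)
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        divx = px - np.vstack([np.zeros((1, x.shape[1])), px[:-1]])
        divy = py - np.hstack([np.zeros((x.shape[0], 1)), py[:, :-1]])
        x -= step * (2.0 * (x - y) - lam * (divx + divy))
    return x
```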

  15. Minimally Invasive Surgery (MIS) Approaches to Thoracolumbar Trauma.

    Science.gov (United States)

    Kaye, Ian David; Passias, Peter

    2018-03-01

    Minimally invasive surgical (MIS) techniques offer promising improvements in the management of thoracolumbar trauma. Recent advances in MIS techniques and instrumentation for degenerative conditions have heralded a growing interest in employing these techniques for thoracolumbar trauma. Specifically, surgeons have applied these techniques to help manage flexion- and extension-distraction injuries, neurologically intact burst fractures, and cases of damage control. Minimally invasive surgical techniques offer a means to decrease blood loss, shorten operative time, reduce infection risk, and shorten hospital stays. Herein, we review thoracolumbar minimally invasive surgery with an emphasis on thoracolumbar trauma classification, minimally invasive spinal stabilization, surgical indications, patient outcomes, technical considerations, and potential complications.

  16. Sludge minimization technologies - an overview

    Energy Technology Data Exchange (ETDEWEB)

    Oedegaard, Hallvard

    2003-07-01

    The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. The cost of the sludge treatment amounts to more than the cost of the liquid treatment in many cases. Therefore the focus on and interest in sludge minimization is steadily increasing. In this paper an overview is given of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on stirred ball-mill, high-pressure homogenizer, ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)

  17. Wilson loops in minimal surfaces

    International Nuclear Information System (INIS)

    Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

    1999-01-01

    The AdS/CFT correspondence suggests that the Wilson loop of the large N gauge theory with N = 4 supersymmetry in 4 dimensions is described by a minimal surface in AdS5 x S5. The authors examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which the authors call BPS loops, whose expectation values are free from ultra-violet divergence. They formulate the loop equation for such loops. To the extent that they have checked, the minimal surface in AdS5 x S5 gives a solution of the equation. The authors also discuss the zig-zag symmetry of the loop operator. In the N = 4 gauge theory, they expect the zig-zag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. They will show how this is realized for the minimal surface

  18. Classical strings and minimal surfaces

    International Nuclear Information System (INIS)

    Urbantke, H.

    1986-01-01

    Real Lorentzian forms of some complex or complexified Euclidean minimal surfaces are obtained as an application of H.A. Schwarz's solution to the initial value problem, or from a search for surfaces admitting a group of Poincare transformations. (Author)

  19. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  20. Wilson loops and minimal surfaces

    International Nuclear Information System (INIS)

    Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

    1999-01-01

    The AdS-CFT correspondence suggests that the Wilson loop of the large N gauge theory with N=4 supersymmetry in four dimensions is described by a minimal surface in AdS5 x S5. We examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which we call BPS loops, whose expectation values are free from ultraviolet divergence. We formulate the loop equation for such loops. To the extent that we have checked, the minimal surface in AdS5 x S5 gives a solution of the equation. We also discuss the zigzag symmetry of the loop operator. In the N=4 gauge theory, we expect the zigzag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. We will show how this is realized for the minimal surface. (c) 1999 The American Physical Society

  1. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
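
    A minimal sketch of how such a priori information typically enters the objective, using depth weighting in the style of Li and Oldenburg (our illustration; in the self-constrained scheme the exponent and depth range would themselves be estimated from the field data first):

```python
import numpy as np

def depth_weighted_inversion(G, d, z, z0=1.0, beta=2.0, alpha=1e-2):
    """Minimize ||G m - d||^2 + alpha * ||W m||^2 with depth weighting
    W = diag((z + z0)^(-beta/2)): deep cells are penalized less, countering
    the decay of potential-field kernels with depth so that structure is
    not forced to the surface.  z0 and beta values here are illustrative."""
    W2 = (z + z0) ** (-beta)                 # diagonal of W^T W
    m = np.linalg.solve(G.T @ G + alpha * np.diag(W2), G.T @ d)
    return m
```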

  2. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  3. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  4. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial...... reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals...... are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...

  5. Q-deformed systems and constrained dynamics

    International Nuclear Information System (INIS)

    Shabanov, S.V.

    1993-01-01

    It is shown that quantum theories of the q-deformed harmonic oscillator and one-dimensional free q-particle (a free particle on the 'quantum' line) can be obtained by the canonical quantization of classical Hamiltonian systems with commutative phase-space variables and a non-trivial symplectic structure. In the framework of this approach, the classical dynamics of a particle on the q-line coincides with that of a free particle with friction. It is argued that q-deformed systems can be treated as ordinary mechanical systems with second-class constraints. In particular, second-class constrained systems corresponding to the q-oscillator and q-particle are given. A possibility of formulating q-deformed systems via gauge theories (first-class constrained systems) is briefly discussed. (orig.)

  6. Minimally inconsistent reasoning in Semantic Web.

    Science.gov (United States)

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, thanks to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics, with the mere difference in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as that of (classical) description logic reasoning.

  7. Minimally inconsistent reasoning in Semantic Web.

    Directory of Open Access Journals (Sweden)

    Xiaowang Zhang

    Full Text Available Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, thanks to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics, with the mere difference in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as that of (classical) description logic reasoning.

  8. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    Science.gov (United States)

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, was prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential augmented with the constraint of constant power were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant power RFA with a firm mathematical basis and opens a pathway for achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
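
    The constant-power idea above can be illustrated with a much simpler calculation than the paper's axisymmetric finite element model. The sketch below is a rough 1-D stand-in with made-up tissue parameters: it enforces V·I = P0 at every time step by recomputing the domain resistance from the temperature-dependent conductivity and rescaling the current accordingly. It is not the authors' Lagrange-multiplier FEM formulation.

```python
import numpy as np

# Rough 1-D stand-in for constant-power RF heating (illustrative values,
# not the paper's axisymmetric FEM). Electric conductivity rises with
# temperature, so the voltage needed to deliver a fixed power changes:
# each step recomputes the column resistance and rescales the current so
# that the delivered power V*I = I^2*R stays equal to P0.

nx, L = 100, 0.02                # grid points, domain length [m]
dx = L / (nx - 1)
A = 1e-4                         # cross-sectional area [m^2] (assumed)
P0 = 5.0                         # prescribed total power [W]
rho_c = 3.6e6                    # volumetric heat capacity [J/(m^3 K)]
k_t = 0.5                        # thermal conductivity [W/(m K)]
T = np.full(nx, 37.0)            # tissue temperature [deg C]

def sigma(T):
    # conductivity with ~2%/K temperature coefficient (assumption)
    return 0.2 * (1.0 + 0.02 * (T - 37.0))

dt = 0.4 * dx**2 * rho_c / k_t   # below the explicit stability limit
for _ in range(500):
    s = sigma(T)
    R = np.sum(dx / (s * A))             # series resistance of the column
    I = np.sqrt(P0 / R)                  # current enforcing V*I = P0
    q = I**2 / (s * A**2)                # Joule heating density [W/m^3]
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T += dt * (k_t * lap + q) / rho_c
    T[0] = T[-1] = 37.0                  # boundaries at body temperature

print(f"V = {I*R:.1f} V, I = {I*1e3:.1f} mA, peak T = {T.max():.1f} C")
```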

  9. Minimal string theory is logarithmic

    International Nuclear Information System (INIS)

    Ishimoto, Yukitaka; Yamaguchi, Shun-ichi

    2005-01-01

    We study the simplest examples of minimal string theory whose worldsheet description is the unitary (p,q) minimal model coupled to two-dimensional gravity (Liouville field theory). In the Liouville sector, we show that four-point correlation functions of 'tachyons' exhibit logarithmic singularities, and that the theory turns out to be logarithmic. The relation with Zamolodchikov's logarithmic degenerate fields is also discussed. Our result holds for generic values of (p,q)

  10. Annual Waste Minimization Summary Report

    International Nuclear Information System (INIS)

    Haworth, D.M.

    2011-01-01

    This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year 2010. The NNSA/NSO Pollution Prevention Program establishes a process to reduce the volume and toxicity of waste generated by NNSA/NSO activities and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment.

  11. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Full Text Available Using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured nonlinear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...
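
    As a concrete illustration of the multiple shooting transcription cited above, the following toy sketch (my own construction, not the paper's code) steers a double integrator between two rest states: the horizon is split into N subintervals, the node states become decision variables, and an off-the-shelf NLP solver enforces the shooting continuity constraints.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy direct multiple shooting (in the spirit of Bock and Plitt, 1984):
# steer a double integrator x'' = u from rest at 0 to rest at 1 over T
# seconds, minimizing control effort. States at the interval boundaries
# are decision variables tied together by continuity ("defect") constraints.

N, T = 8, 2.0
dt = T / N

def integrate(xk, u):
    # propagate [position, velocity] over one subinterval, constant control u
    sol = solve_ivp(lambda t, s: [s[1], u], (0.0, dt), xk, rtol=1e-8)
    return sol.y[:, -1]

def unpack(z):
    xs = z[: 2 * (N + 1)].reshape(N + 1, 2)   # node states
    us = z[2 * (N + 1):]                      # one control per interval
    return xs, us

def defects(z):
    xs, us = unpack(z)
    gaps = [integrate(xs[k], us[k]) - xs[k + 1] for k in range(N)]
    bc = [xs[0] - np.array([0.0, 0.0]), xs[-1] - np.array([1.0, 0.0])]
    return np.concatenate(gaps + bc)

cost = lambda z: dt * float(np.sum(unpack(z)[1] ** 2))
z0 = np.zeros(2 * (N + 1) + N)
res = minimize(cost, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 200})
print("converged:", res.success, "; control effort:", round(res.fun, 4))
```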

  12. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for the gluino decays g̃ → q q̄ Χ and g̃ → g Χ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e⁺e⁻ colliders in the search for supersymmetry can be directly compared. (orig.)

  13. Cosmicflows Constrained Local UniversE Simulations

    Science.gov (United States)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h⁻¹ Mpc scale with respect to that found for random simulations. The one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s⁻¹, i.e. at the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h⁻¹ Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h⁻¹ Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.

  14. Statistical mechanics of budget-constrained auctions

    OpenAIRE

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution,...

  15. Constraining neutron star matter with Quantum Chromodynamics

    CERN Document Server

    Kurkela, Aleksi; Schaffner-Bielich, Jurgen; Vuorinen, Aleksi

    2014-01-01

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount, or even the presence, of quark matter inside the stars.

  16. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted onto the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity v_tan of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change the mass ratio of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  17. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    Full Text Available To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data used to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also...

  18. Kinyarwanda locative applicatives and the Minimal Link Condition ...

    African Journals Online (AJOL)

    ... element β of the same type which is closer to K. We show that the theme cannot move in Kinyarwanda locative applicatives because the applied object is closer to the potential landing site. However, in contexts in which the applied object has been moved 'out of the way', the MLC no longer blocks movement of the theme.

  19. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem, since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of non-dense stellar clusters
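
    A minimal sketch of this alternating idea, in its best-known special case, is the blind Richardson-Lucy scheme: multiplicative Kullback-Leibler updates applied in turn to the object and the point spread function, each kept in a simple feasible set (nonnegativity; unit-sum PSF). The 1-D example below uses synthetic data and is only a stand-in for the authors' inexact alternating algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

# 1-D stand-in for the alternating KL minimization: classic blind
# Richardson-Lucy updates, multiplicative in turn for the object f and
# the PSF h, each kept in its feasible set (nonnegative f; nonnegative,
# unit-sum h with fixed support). Synthetic data, illustrative settings.

rng = np.random.default_rng(0)
n = 201                                   # odd length keeps flips centered
x = np.arange(n) - n // 2
f_true = np.zeros(n); f_true[[60, 100, 150]] = [400.0, 700.0, 250.0]
h_true = np.exp(-0.5 * (x / 3.0) ** 2); h_true /= h_true.sum()
data = rng.poisson(fftconvolve(f_true, h_true, mode="same") + 2.0)

f = np.full(n, float(data.mean()))                      # flat object guess
h = np.where(np.abs(x) <= 10, 1.0, 0.0); h /= h.sum()   # flat PSF guess
for _ in range(300):
    ratio = data / (fftconvolve(f, h, mode="same") + 1e-12)
    f = np.clip(f * fftconvolve(ratio, h[::-1], mode="same"), 0.0, None)
    ratio = data / (fftconvolve(f, h, mode="same") + 1e-12)
    h = h * fftconvolve(ratio, f[::-1], mode="same") / (f.sum() + 1e-12)
    h = np.clip(h, 0.0, None); h /= h.sum()             # back to feasible set

print("estimated PSF sigma ~", float(np.sqrt(np.sum(h * x**2))))
```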

  20. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Full Text Available Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system becomes challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. The deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. Results obtained from the SOS algorithm offer better solutions than the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA). Economic analysis is carried out to quantify the economic benefits of energy loss minimization over the life span of RDGs.
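
    For readers unfamiliar with the metaheuristic used above, the sketch below shows the three SOS phases (mutualism, commensalism, parasitism) on a generic continuous test function; the sphere function stands in for the actual energy-loss objective, and all parameter choices are illustrative.

```python
import numpy as np

# Minimal Symbiotic Organisms Search (SOS) sketch on the sphere function,
# standing in for the power-flow energy-loss objective, which is omitted.

rng = np.random.default_rng(1)
dim, pop, iters = 5, 30, 200
lo, hi = -5.0, 5.0
fobj = lambda x: np.sum(x ** 2)

X = rng.uniform(lo, hi, (pop, dim))
F = np.array([fobj(x) for x in X])

for _ in range(iters):
    for i in range(pop):
        best = X[F.argmin()]
        # mutualism: i and a random partner j both move toward the best
        j = rng.choice([k for k in range(pop) if k != i])
        mutual = (X[i] + X[j]) / 2.0
        bf1, bf2 = rng.integers(1, 3, size=2)          # benefit factors 1 or 2
        for idx, bf in ((i, bf1), (j, bf2)):
            cand = np.clip(X[idx] + rng.random(dim) * (best - mutual * bf), lo, hi)
            if fobj(cand) < F[idx]:
                X[idx], F[idx] = cand, fobj(cand)
        # commensalism: i benefits from j, j is unaffected
        j = rng.choice([k for k in range(pop) if k != i])
        cand = np.clip(X[i] + rng.uniform(-1, 1, dim) * (best - X[j]), lo, hi)
        if fobj(cand) < F[i]:
            X[i], F[i] = cand, fobj(cand)
        # parasitism: a mutated copy of i tries to displace a random j
        j = rng.choice([k for k in range(pop) if k != i])
        parasite = X[i].copy()
        mask = rng.random(dim) < 0.5
        parasite[mask] = rng.uniform(lo, hi, mask.sum())
        if fobj(parasite) < F[j]:
            X[j], F[j] = parasite, fobj(parasite)

print("best objective:", F.min())
```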

  1. Minimal but non-minimal inflation and electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia); Racioppi, Antonio [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia)

    2016-10-07

    We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet, that plays the rôle of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10⁻³, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  2. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  3. Operator approach to solutions of the constrained BKP hierarchy

    International Nuclear Information System (INIS)

    Shen, Hsin-Fu; Lee, Niann-Chern; Tu, Ming-Hsien

    2011-01-01

    The operator formalism to the vector k-constrained BKP hierarchy is presented. We solve the Hirota bilinear equations of the vector k-constrained BKP hierarchy via the method of neutral free fermion. In particular, by choosing suitable group element of O(∞), we construct rational and soliton solutions of the vector k-constrained BKP hierarchy.

  4. Antifungal susceptibility testing method for resource constrained laboratories

    Directory of Open Access Journals (Sweden)

    Khan S

    2006-01-01

    Full Text Available Purpose: In resource-constrained laboratories of developing countries, determination of antifungal susceptibility by the NCCLS/CLSI method is not always feasible. We describe herein a simple yet comparable method for antifungal susceptibility testing. Methods: Reference MICs of 72 fungal isolates, including two quality control strains, were determined by NCCLS/CLSI methods against fluconazole, itraconazole, voriconazole, amphotericin B and cancidas. Dermatophytes were also tested against terbinafine. Subsequently, on selection of optimum conditions, the MIC was determined for all the fungal isolates by the semisolid antifungal agar susceptibility method in brain heart infusion broth supplemented with 0.5% agar (BHIA) without oil overlay, and the results were compared with those obtained by the reference NCCLS/CLSI methods. Results: Comparable results were obtained by the NCCLS/CLSI and semisolid agar susceptibility (SAAS) methods against quality control strains. MICs for the 72 isolates did not differ by more than one dilution for all drugs by SAAS. Conclusions: SAAS using BHIA without oil overlay provides a simple and reproducible method for obtaining MICs against yeasts, filamentous fungi and dermatophytes in resource-constrained laboratories.

  5. Topological gravity with minimal matter

    International Nuclear Information System (INIS)

    Li Keke

    1991-01-01

    Topological minimal matter, obtained by twisting the minimal N = 2 superconformal field theory, is coupled to two-dimensional topological gravity. The free field formulation of the coupled system allows explicit representations of the BRST charge, physical operators and their correlation functions. The contact terms of the physical operators may be evaluated by extending the argument used in a recent solution of topological gravity without matter. The consistency of the contact terms in correlation functions implies recursion relations which coincide with the Virasoro constraints derived from the multi-matrix models. Topological gravity with minimal matter thus provides the field theoretic description for the multi-matrix models of two-dimensional quantum gravity. (orig.)

  6. Minimal Marking: A Success Story

    Directory of Open Access Journals (Sweden)

    Anne McNeilly

    2014-11-01

    Full Text Available The minimal-marking project conducted in Ryerson’s School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The “minimal-marking” concept (Haswell, 1983), which requires dramatically more student engagement, resulted in more successful learning outcomes for surface-level knowledge acquisition than the more traditional approach of “teacher-corrects-all.” Results suggest it would be effective, not just for grammar, punctuation, and word usage, the objective here, but for any material that requires rote-memory learning, such as the Associated Press or Canadian Press style rules used by news publications across North America.

  7. Non-minimal inflation revisited

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Shafizadeh, Somayeh

    2010-01-01

    We reconsider an inflationary model in which the inflaton field is non-minimally coupled to gravity. We study the parameter space of the model up to the second (and in some cases third) order of the slow-roll parameters. We calculate the inflation parameters in both the Jordan and Einstein frames, and the results are compared between these two frames and also with observations. Using the recent observational data from combined WMAP5+SDSS+SNIa datasets, we study the constraints imposed on our model parameters, especially the non-minimal coupling ξ.

  8. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and masses...

  9. Harm minimization among teenage drinkers

    DEFF Research Database (Denmark)

    Jørgensen, Morten Hulvej; Curtis, Tine; Christensen, Pia Haudrup

    2007-01-01

    AIM: To examine strategies of harm minimization employed by teenage drinkers. DESIGN, SETTING AND PARTICIPANTS: Two periods of ethnographic fieldwork were conducted in a rural Danish community of approximately 2000 inhabitants. The fieldwork included 50 days of participant observation among 13....... In regulating the social context of drinking they relied on their personal experiences more than on formalized knowledge about alcohol and harm, which they had learned from prevention campaigns and educational programmes. CONCLUSIONS: In this study we found that teenagers may help each other to minimize alcohol...

  10. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low-performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  11. Incomplete Dirac reduction of constrained Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Chandre, C., E-mail: chandre@cpt.univ-mrs.fr

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.
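
    The construction can be made concrete numerically. In the sketch below (an illustration, not the paper's derivation), the matrix C of Poisson brackets among the constraints is formed from the constraint gradients and the canonical symplectic matrix, and its Moore-Penrose pseudoinverse defines a Dirac-type structure matrix; the example is the familiar second-class pair for a particle confined to the unit circle.

```python
import numpy as np

# Numerical illustration: build C_ab = {phi_a, phi_b} from constraint
# gradients and the canonical symplectic matrix, then use the
# Moore-Penrose pseudoinverse of C, which remains defined even when
# first-class constraints make C singular, to form a Dirac-type
# structure matrix. Example: a particle confined to the unit circle.

J = np.block([[np.zeros((2, 2)), np.eye(2)],      # z = (x, y, px, py)
              [-np.eye(2), np.zeros((2, 2))]])

def grad_phi(z):
    x, y, px, py = z
    return np.array([[2 * x, 2 * y, 0.0, 0.0],    # phi_1 = x^2 + y^2 - 1
                     [px, py, x, y]])             # phi_2 = x*px + y*py

z = np.array([0.6, 0.8, 0.8, -0.6])               # point on both constraints
A = grad_phi(z)
C = A @ J @ A.T                                   # {phi_a, phi_b}
JD = J - J @ A.T @ np.linalg.pinv(C) @ A @ J      # Dirac structure matrix

# bracket of observables F, G at z: {F, G}_D = (dF) . JD . (dG)
dF = np.array([1.0, 0.0, 0.0, 0.0])               # F = x
dG = np.array([0.0, 0.0, 1.0, 0.0])               # G = px
print("{x, px}_D =", dF @ JD @ dG)                # expect 1 - x^2 = 0.64
print("constraints commute with everything:", np.abs(A @ JD).max())
```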

  12. Capturing Hotspots For Constrained Indoor Movement

    DEFF Research Database (Denmark)

    Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua

    2013-01-01

    Finding the hotspots in large indoor spaces is very important for identifying overloaded locations, security, crowd management, indoor navigation and guidance. The tracking data coming from indoor tracking are huge in volume and not readily available for finding hotspots. This paper presents a graph-based model for constrained indoor movement that can map the tracking records into mapping records which represent the entry and exit times of an object in a particular location. Then it discusses the hotspot extraction technique from the mapping records.
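
    The mapping step can be sketched in a few lines: group the raw (time, object, location) readings per object, collapse consecutive readings at the same location, and emit one record per stay. Field names and the toy data below are illustrative, not the paper's schema.

```python
from itertools import groupby

# Sketch of turning raw tracking readings (time, object, location) into
# "mapping records" (object, location, entry time, exit time).

readings = [
    (1, "o1", "room A"), (2, "o1", "room A"), (3, "o1", "hall"),
    (4, "o1", "hall"),   (5, "o1", "room B"), (1, "o2", "hall"),
    (2, "o2", "hall"),   (3, "o2", "hall"),
]

mapping_records = []
readings.sort(key=lambda r: (r[1], r[0]))            # per object, by time
for obj, group in groupby(readings, key=lambda r: r[1]):
    for loc, stay in groupby(group, key=lambda r: r[2]):
        times = [t for t, _, _ in stay]
        mapping_records.append((obj, loc, times[0], times[-1]))

for rec in mapping_records:
    print(rec)   # e.g. ('o1', 'room A', 1, 2): entered at 1, left at 2
```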

  13. Quantization of soluble classical constrained systems

    International Nuclear Information System (INIS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-01-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way

  14. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  15. Euclidean wormholes with minimally coupled scalar fields

    International Nuclear Information System (INIS)

    Ruz, Soumendranath; Modak, Bijan; Debnath, Subhra; Sanyal, Abhik Kumar

    2013-01-01

    A detailed study of quantum and semiclassical Euclidean wormholes for Einstein's theory with a minimally coupled scalar field has been performed for a class of potentials. Massless, constant, massive (quadratic in the scalar field) and inverse (linear) potentials admit the Hawking and Page wormhole boundary condition both in the classically forbidden and allowed regions. An inverse quartic potential has been found to exhibit a semiclassical wormhole configuration. Classical wormholes under a suitable back-reaction leading to a finite radius of the throat, where the strong energy condition is satisfied, have been found for the zero, constant, quadratic and exponential potentials. Treating such classical Euclidean wormholes as an initial condition, a late stage of cosmological evolution has been found to remain unaltered from standard Friedmann cosmology, except for the constant potential which under the back-reaction produces a term like a negative cosmological constant. (paper)

  16. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938

  17. Restoration ecology: two-sex dynamics and cost minimization.

    Directory of Open Access Journals (Sweden)

    Ferenc Molnár

    Full Text Available We model spatially detailed two-sex population dynamics to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on the sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.

  18. Restoration ecology: two-sex dynamics and cost minimization.

    Science.gov (United States)

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model spatially detailed two-sex population dynamics to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on the sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.
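
    A stripped-down, single-species version of this setting already shows the role of the strong Allee effect and of the critical cluster size. The sketch below (illustrative parameters, not the two-sex model above) evolves a 1-D reaction-diffusion equation from rectangular initial clusters of different widths.

```python
import numpy as np

# Single-species 1-D sketch: diffusion plus strong Allee growth
# g(n) = r*n*(n/a - 1)*(1 - n/K). A rectangular initial cluster restores
# the habitat only if it is wide enough; all values are illustrative.

r, a, K, D = 1.0, 0.2, 1.0, 1.0
L, nx = 100.0, 400
dx = L / nx
dt = 0.2 * dx**2 / D                      # below explicit stability limit
x = np.linspace(0.0, L, nx)

def run(width, steps=8000):
    n = np.where(np.abs(x - L / 2.0) < width / 2.0, K, 0.0)
    for _ in range(steps):
        lap = (np.roll(n, 1) - 2.0 * n + np.roll(n, -1)) / dx**2
        n += dt * (D * lap + r * n * (n / a - 1.0) * (1.0 - n / K))
        n = np.clip(n, 0.0, K)            # keep density in [0, K]
    return n.mean()

for w in (2.0, 5.0, 20.0):
    print(f"cluster width {w:5.1f} -> final mean density {run(w):.3f}")
```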

  19. Sufficient Descent Conjugate Gradient Methods for Solving Convex Constrained Nonlinear Monotone Equations

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Full Text Available Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
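
    The common skeleton of such methods can be sketched as follows: a CG-type direction with a sufficient-descent safeguard, a derivative-free line search, and the Solodov-Svaiter hyperplane projection followed by projection onto the convex feasible set. The direction below is a simple PRP+ choice, not the paper's specific formulas, and the test problem is a small monotone linear system on the nonnegative orthant.

```python
import numpy as np

# Hedged sketch of a projection-based CG method for convex constrained
# monotone equations F(x) = 0 (direction choice is illustrative).

def solve(F, x0, proj, sigma=1e-4, shrink=0.5, tol=1e-8, iters=500):
    x = x0
    Fx = F(x)
    d = -Fx
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        # derivative-free line search: -F(x + t d)^T d >= sigma * t * ||d||^2
        t = 1.0
        while True:
            z = x + t * d
            Fz = F(z)
            if -Fz @ d >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= shrink
        if Fz @ Fz < tol ** 2:           # the trial point already solves F = 0
            x, Fx = z, Fz
            break
        # hyperplane projection (Solodov-Svaiter), then onto the feasible set
        x_new = proj(x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz)
        Fx_new = F(x_new)
        beta = max((Fx_new @ (Fx_new - Fx)) / (Fx @ Fx), 0.0)   # PRP+
        d = -Fx_new + beta * d
        if Fx_new @ d > -1e-12 * (Fx_new @ Fx_new):   # keep sufficient descent
            d = -Fx_new
        x, Fx = x_new, Fx_new
    return x

# monotone test problem on the nonnegative orthant: F(x) = A x + b, A PSD
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-3.0, -3.0])
x_star = solve(lambda x: A @ x + b, np.zeros(2), lambda v: np.clip(v, 0.0, None))
print("x* =", x_star, " F(x*) =", A @ x_star + b)   # expect x* near (1, 1)
```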

  20. Non-minimal supersymmetric models. LHC phenomenolgy and model discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Krauss, Manuel Ernst

    2015-12-18

    It is generally agreed that the Standard Model of particle physics can only be viewed as an effective theory that needs to be extended, as it leaves some essential questions unanswered. The exact realization of the necessary extension is subject to discussion. Supersymmetry is among the most promising approaches to physics beyond the Standard Model as it can simultaneously solve the hierarchy problem and provide an explanation for the dark matter abundance in the universe. Despite further virtues like gauge coupling unification and radiative electroweak symmetry breaking, minimal supersymmetric models cannot be the ultimate answer to the open questions of the Standard Model, as they still do not incorporate neutrino masses and are besides heavily constrained by LHC data. This does not, however, detract from the beauty of the concept of supersymmetry. It is therefore time to explore non-minimal supersymmetric models which are able to close these gaps, review their consistency, test them against experimental data and provide prospects for future experiments. The goal of this thesis is to contribute to this process by exploring an extraordinarily well motivated class of models which is based upon a left-right symmetric gauge group. While relaxing the tension with LHC data, those models automatically include the ingredients for neutrino masses. We start with a left-right supersymmetric model at the TeV scale in which scalar SU(2){sub R} triplets are responsible for the breaking of left-right symmetry as well as for the generation of neutrino masses. Although a tachyonic doubly-charged scalar is present at tree level in this kind of model, we show by performing the first complete one-loop evaluation that it gains a real mass at the loop level. The constraints on the predicted additional charged gauge bosons are then evaluated using LHC data, and we find that we can explain small excesses in the data of which the current LHC run will reveal if they are actual new

  1. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Full Text Available Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full-time and part-time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department) optimally. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of the hired full-time and part-time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the decision of the appointment scheduling to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. Then we use the chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single-level programming, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  2. Constraining the break of spatial diffeomorphism invariance with Planck data

    Science.gov (United States)

    Graef, L. L.; Benetti, M.; Alcaniz, J. S.

    2017-07-01

    The current most accepted paradigm for early universe cosmology, the inflationary scenario, shows good agreement with the recent Cosmic Microwave Background (CMB) and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a larger range of red tensor tilt values, favoring the blue ones, which are not predicted by the minimal inflationary models. Recently, it has been shown that the assumption of spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences. Among them, the possibility of generating a blue tensor spectrum, which can recover the specific consistency relation of String Gas Cosmology, for a certain choice of parameters. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis assuming as reference an extended ΛCDM+tensor perturbation model, which considers a power-law tensor spectrum parametrized in terms of the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = -8 n_t, we obtain strong evidence in favor of the reference model, whereas if such relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation and String Gas Cosmology.

  3. Constraining the break of spatial diffeomorphism invariance with Planck data

    Energy Technology Data Exchange (ETDEWEB)

    Graef, L.L.; Benetti, M.; Alcaniz, J.S., E-mail: leilagraef@on.br, E-mail: micolbenetti@on.br, E-mail: alcaniz@on.br [Departamento de Astronomia, Observatório Nacional, R. Gen. José Cristino, 77—São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2017-07-01

    The current most accepted paradigm for early universe cosmology, the inflationary scenario, shows good agreement with the recent Cosmic Microwave Background (CMB) and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a larger range of red tensor tilt values, favoring the blue ones, which are not predicted by the minimal inflationary models. Recently, it has been shown that the assumption of spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences. Among them, the possibility of generating a blue tensor spectrum, which can recover the specific consistency relation of String Gas Cosmology, for a certain choice of parameters. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis assuming as reference an extended ΛCDM+tensor perturbation model, which considers a power-law tensor spectrum parametrized in terms of the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = -8 n_t, we obtain strong evidence in favor of the reference model, whereas if such relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation and String Gas Cosmology.

  4. Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging.

    Directory of Open Access Journals (Sweden)

    Xingjian Yu

    Full Text Available In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data taken at intervals ranging in duration from 10 seconds to minutes under some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition and, moreover, cannot provide separate information about the structure and the variation of shape and rate of metabolism, which play a major role in improving the visualization of contrast for some diagnostic requirements in applications. This paper presents a novel low-rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component while variations between successive frames are abstracted into the sparse component. The resulting nuclear norm and l1 norm related minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets.
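
    The core decomposition behind SLCR can be illustrated without the PET system model: split a matrix of frames into a low-rank background plus a sparse change component by alternating proximal steps, singular value thresholding for the nuclear norm and soft thresholding for the l1 norm. The sketch below works on synthetic data and uses plain alternating minimization rather than the linearized alternating direction method.

```python
import numpy as np

# Hedged sketch of the nuclear-norm + l1 split: M (frames as columns) is
# decomposed as a low-rank background L plus a sparse component S by
# exact alternating proximal steps on
#   (1/2)||M - L - S||_F^2 + ||L||_* + lam*||S||_1.

rng = np.random.default_rng(0)
m, T, rank = 64, 30, 2
L_true = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, T))   # background
S_true = np.zeros((m, T))
idx = rng.random((m, T)) < 0.05
S_true[idx] = rng.normal(scale=5.0, size=idx.sum())                # variations
M = L_true + S_true

def svt(X, tau):                       # prox of tau * nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.clip(s - tau, 0.0, None)) @ Vt

soft = lambda X, tau: np.sign(X) * np.clip(np.abs(X) - tau, 0.0, None)

lam = 1.0 / np.sqrt(max(m, T))
L = np.zeros_like(M); S = np.zeros_like(M)
for _ in range(100):
    L = svt(M - S, 1.0)                # low-rank step
    S = soft(M - L, lam)               # sparse step

print("relative residual:", np.linalg.norm(M - L - S) / np.linalg.norm(M))
print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-6))
```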

  5. Isoperimetric inequalities for minimal graphs

    International Nuclear Information System (INIS)

    Pacelli Bessa, G.; Montenegro, J.F.

    2007-09-01

    Based on Markvorsen and Palmer's work on mean exit time and isoperimetric inequalities, we establish slightly better isoperimetric inequalities and mean exit time estimates for minimal graphs in N x R. We also prove isoperimetric inequalities for submanifolds of Hadamard spaces with tamed second fundamental form. (author)

  6. Torsional Rigidity of Minimal Submanifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, Vicente

    2006-01-01

    We prove explicit upper bounds for the torsional rigidity of extrinsic domains of minimal submanifolds $P^m$ in ambient Riemannian manifolds $N^n$ with a pole $p$. The upper bounds are given in terms of the torsional rigidities of corresponding Schwarz symmetrizations of the domains in warped...

  7. The debate on minimal deterrence

    International Nuclear Information System (INIS)

    Arbatov, A.; Karp, R.C.; Toth, T.

    1993-01-01

    Revitalization of the debates on minimal nuclear deterrence at the present time is induced by the end of the Cold War and a number of unilateral and bilateral actions by the great powers to curtail the nuclear arms race and reduce nuclear weapons arsenals

  8. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-01-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). The Waste Minimization Policy field has undergone continuous changes since its formal inception in the 1984 HSWA legislation. The first LLNL WMPP, Revision A, is dated March 1985. A series of informal revisions were made on approximately a semi-annual basis. This Revision 2 is the third formal issuance of the WMPP document. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this new policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. In response to these policies, DOE has revised and issued implementation guidance for DOE Order 5400.1, Waste Minimization Plan and Waste Reduction Reporting of DOE Hazardous, Radioactive, and Radioactive Mixed Wastes, final draft January 1990. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements. 3 figs., 4 tabs

  9. Minimizing TLD-DRD differences

    International Nuclear Information System (INIS)

    Riley, D.L.; McCoy, R.A.; Connell, W.D.

    1987-01-01

    When substantial differences exist in exposures recorded by TLD's and DRD's, it is often necessary to perform an exposure investigation to reconcile the difference. In working with several operating plants, the authors have observed a number of causes for these differences. This paper outlines these observations and discusses procedures that can be used to minimize them

  10. Acquiring minimally invasive surgical skills

    NARCIS (Netherlands)

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room.

  11. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Full Text Available Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science three main approaches to framework-change were detected in the humanist tradition: 1. In both the pre-theoretical and theoretical domains changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton). 2. Changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard). 3. Between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, as well as provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern, neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.

  12. Fringe instability in constrained soft elastic layers.

    Science.gov (United States)

    Lin, Shaoting; Cohen, Tal; Zhang, Teng; Yuk, Hyunwoo; Abeyaratne, Rohan; Zhao, Xuanhe

    2016-11-04

    Soft elastic layers with top and bottom surfaces adhered to rigid bodies are abundant in biological organisms and engineering applications. As the rigid bodies are pulled apart, the stressed layer can exhibit various modes of mechanical instabilities. In cases where the layer's thickness is much smaller than its length and width, the dominant modes that have been studied are the cavitation, interfacial and fingering instabilities. Here we report a new mode of instability which emerges if the thickness of the constrained elastic layer is comparable to or smaller than its width. In this case, the middle portion along the layer's thickness elongates nearly uniformly while the constrained fringe portions of the layer deform nonuniformly. When the applied stretch reaches a critical value, the exposed free surfaces of the fringe portions begin to undulate periodically without debonding from the rigid bodies, giving the fringe instability. We use experiments, theory and numerical simulations to quantitatively explain the fringe instability and derive scaling laws for its critical stress, critical strain and wavelength. We show that in a force controlled setting the elastic fingering instability is associated with a snap-through buckling that does not exist for the fringe instability. The discovery of the fringe instability will not only advance the understanding of mechanical instabilities in soft materials but also have implications for biological and engineered adhesives and joints.

  13. Minimization and segregation of radioactive wastes

    International Nuclear Information System (INIS)

    1992-07-01

    The report will serve as one of a series of technical manuals providing reference material and direct know-how to staff in radioisotope user establishments and research centres in Member States without nuclear power and the associated range of complex waste management operations. Considerations are limited to the minimization and segregation of wastes, these being initial steps on which the efficiency of the whole waste management system depends. The minimization and segregation operations are examined in the context of the restricted quantities and predominantly shorter lived activities of wastes from nuclear research, production and usage of radioisotopes. Liquid and solid wastes only are considered in the report. Gaseous waste minimization and treatment are specialized subjects and are not examined in this document. Gaseous effluent treatment in facilities handling low and intermediate level radioactive materials has been already the subject of a detailed IAEA report. Management of spent sealed sources has specifically been covered in a previous manual. Conditioned sealed sources must be taken into account in segregation arrangements for interim storage and disposal where there are exceptional long lived highly radiotoxic isotopes, particularly radium or americium. These are unlikely ever to be suitable for shallow land burial along with the remaining wastes. 30 refs, 5 figs, 8 tabs

  14. Opportunity Loss Minimization and Newsvendor Behavior

    Directory of Open Access Journals (Sweden)

    Xinsheng Xu

    2017-01-01

    Full Text Available To study the decision bias in newsvendor behavior, this paper introduces an opportunity loss minimization criterion into the newsvendor model with backordering. We apply the Conditional Value-at-Risk (CVaR) measure to hedge against the potential risks from the newsvendor’s order decision. We obtain the optimal order quantities for a newsvendor to minimize the expected opportunity loss and the CVaR of opportunity loss. It is proven that the newsvendor’s optimal order quantity is related to the density function of market demand when the newsvendor exhibits risk-averse preference, which is inconsistent with the results in Schweitzer and Cachon (2000). The numerical example shows that the optimal order quantity that minimizes CVaR of opportunity loss is bigger than the expected profit maximization (EPM) order quantity for high-profit products and smaller than the EPM order quantity for low-profit products, which differs from the experimental results in Schweitzer and Cachon (2000). A sensitivity analysis of changing the operation parameters of the two optimal order quantities is discussed. Our results confirm that high return implies high risk, while low risk comes with low return. Based on the results, some managerial insights are suggested for the risk management of the newsvendor model with backordering.
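
    Numerically, the CVaR-minimizing order quantity can be found with the Rockafellar-Uryasev representation, CVaR_b(L) = min_a { a + E[(L - a)+]/(1 - b) }, whose inner minimum is attained at the b-quantile of the loss. The sketch below uses an illustrative opportunity-loss function and Monte Carlo demand scenarios, not the paper's exact backordering model.

```python
import numpy as np

# Hedged sketch: compare the order quantity minimizing expected
# opportunity loss with the one minimizing CVaR(0.9) of opportunity loss.

rng = np.random.default_rng(0)
D = rng.gamma(shape=4.0, scale=25.0, size=20_000)   # demand scenarios
p, c, s = 10.0, 6.0, 3.0                            # price, cost, salvage

def opp_loss(q):
    under = (p - c) * np.clip(D - q, 0.0, None)     # missed margin
    over = (c - s) * np.clip(q - D, 0.0, None)      # leftover cost
    return under + over

def cvar(losses, beta=0.9):
    alpha = np.quantile(losses, beta)               # the minimizing a is VaR
    return alpha + np.mean(np.clip(losses - alpha, 0.0, None)) / (1.0 - beta)

qs = np.linspace(1.0, 300.0, 200)
losses = [opp_loss(q) for q in qs]
q_exp = qs[np.argmin([l.mean() for l in losses])]
q_cvar = qs[np.argmin([cvar(l) for l in losses])]
print(f"expected-loss minimizer q ~ {q_exp:.0f}; CVaR(0.9) minimizer q ~ {q_cvar:.0f}")
```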

  15. Resource Constrained Project Scheduling Subject to Due Dates: Preemption Permitted with Penalty

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2014-01-01

    Full Text Available Extensive research has been carried out on the resource-constrained project scheduling problem. However, few studies have addressed problems in which a setup cost must be incurred if activities are preempted. In this research, we investigate the resource-constrained project scheduling problem with the objective of minimizing the total project cost, considering earliness-tardiness and preemption penalties. A mixed integer programming formulation is proposed for the problem. The resulting problem is NP-hard, so we seek a satisfactory solution using a simulated annealing (SA) algorithm. The efficiency of the proposed algorithm is tested on 150 randomly generated instances. Statistical comparison in terms of computational times and objective function values indicates that the proposed algorithm is efficient and effective.
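
    The loop below shows the general shape of the metaheuristic the authors tune; the toy objective (a quadratic earliness-tardiness penalty around made-up due dates) merely stands in for their mixed integer cost with preemption penalties.

        import math, random

        random.seed(1)
        due = [4, 7, 9, 12]                        # hypothetical due dates

        def cost(starts):
            # earliness/tardiness penalty around each due date (illustrative only)
            return sum((s - d) ** 2 for s, d in zip(starts, due))

        x = [0.0] * len(due)
        best, best_cost, T = list(x), cost(x), 10.0
        while T > 1e-3:
            cand = [s + random.uniform(-1, 1) for s in x]
            delta = cost(cand) - cost(x)
            if delta < 0 or random.random() < math.exp(-delta / T):
                x = cand                           # accept (possibly uphill) move
            if cost(x) < best_cost:
                best, best_cost = list(x), cost(x)
            T *= 0.995                             # geometric cooling schedule
        print(best, best_cost)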

  16. Null Space Integration Method for Constrained Multibody Systems with No Constraint Violation

    International Nuclear Information System (INIS)

    Terze, Zdravko; Lefeber, Dirk; Muftic, Osman

    2001-01-01

    A method for integrating the equations of motion of constrained multibody systems with no constraint violation is presented. A mathematical model, formulated as a differential-algebraic system of index 1, is transformed into a system of ordinary differential equations using the null-space projection method. The equations of motion are set in a non-minimal form. During integration, violations of the constraints are corrected by solving the constraint equations at the position and velocity levels, utilizing the metric of the system's configuration space and applying a projective criterion to the coordinate partitioning method. The method is applied to the dynamic simulation of a 3D constrained biomechanical system. The simulation results are evaluated by comparing them to the values of characteristic parameters obtained by kinematic analysis of the measured motion data.
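
    The correction step described above can be illustrated generically: after an unconstrained integration step, positions are projected back onto g(q) = 0 by Newton iterations and velocities onto G(q)v = 0. The rigid-rod constraint below is a stand-in example, not the paper's biomechanical model or its null-space formulation.

        import numpy as np

        def g(q):               # position-level constraint: point mass on a unit rod
            return np.array([q @ q - 1.0])

        def G(q):               # constraint Jacobian (1 x 2)
            return 2.0 * q.reshape(1, -1)

        def project(q, v, iters=5):
            for _ in range(iters):              # Newton steps onto g(q) = 0
                J = G(q)
                q = q - J.T @ np.linalg.solve(J @ J.T, g(q))
            J = G(q)
            v = v - J.T @ np.linalg.solve(J @ J.T, J @ v)   # enforce G v = 0
            return q, v

        q, v = project(np.array([1.05, 0.1]), np.array([0.3, 1.0]))
        print(q, g(q), G(q) @ v)   # constraint residuals ~ 0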

  17. Constrained reaction volume approach for studying chemical kinetics behind reflected shock waves

    KAUST Repository

    Hanson, Ronald K.

    2013-09-01

    We report a constrained-reaction-volume strategy for conducting kinetics experiments behind reflected shock waves, achieved in the present work by staged filling in a shock tube. Using hydrogen-oxygen ignition experiments as an example, we demonstrate that this strategy eliminates the possibility of non-localized (remote) ignition in shock tubes. Furthermore, we show that this same strategy can also effectively eliminate or minimize pressure changes due to combustion heat release, thereby enabling quantitative modeling of the kinetics throughout the combustion event using a simple assumption of specified pressure and enthalpy. We measure temperature and OH radical time-histories during ethylene-oxygen combustion behind reflected shock waves in a constrained reaction volume and verify that the results can be accurately modeled using a detailed mechanism and a specified pressure and enthalpy constraint. © 2013 The Combustion Institute.

  18. Hazardous waste minimization tracking system

    International Nuclear Information System (INIS)

    Railan, R.

    1994-01-01

    Under RCRA sections 3002(b) and 3005(h), hazardous waste generators and owners/operators of treatment, storage, and disposal facilities (TSDFs) are required to certify that they have a program in place to reduce the volume or quantity and toxicity of hazardous waste to the degree determined to be economically practicable. In many cases there are environmental as well as economic benefits for agencies that pursue pollution prevention options. Several state governments have already enacted waste minimization legislation (e.g., the Massachusetts Toxic Use Reduction Act of 1989, and the Oregon Toxic Use Reduction Act and Hazardous Waste Reduction Act of July 2, 1989). About twenty-six other states have established legislation that will mandate some type of waste minimization program and/or facility planning. The need to address the HAZMIN (Hazardous Waste Minimization) Program at government agencies and private industries has prompted us to identify the importance of managing the HAZMIN Program and of tracking various aspects of the program, as well as the progress made in this area. "WASTE" is a tracking system which can be used and modified to maintain the information related to a Hazardous Waste Minimization Program in a manageable fashion. The program maintains, modifies, and retrieves information related to hazardous waste minimization and recycling, and provides automated report generating capabilities. It has a built-in menu, which can be printed either in part or in full. There are instructions on preparing the Annual Waste Report and the Annual Recycling Report. The program is very user friendly. It is available on 3.5-inch or 5.25-inch floppy disks, and a computer with 640K of memory is required

  19. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a useful tool in learning a basic representation of image data. However, its performance and applicability in real scenarios are limited because it does not exploit additional information about the image data. In this paper, we propose a constrained matrix decomposition algorithm for image representation which contains parameters associated with the characteristics of the image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived using the Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved using auxiliary functions. In addition, we provide a method for selecting the parameters of our semisupervised matrix decomposition algorithm in the experiments. Compared with state-of-the-art approaches, our method with these parameters achieves the best classification accuracy on three image data sets.
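
    For orientation, the sketch below runs plain multiplicative-update NMF with the Euclidean cost; the paper's algorithm layers the α-divergence and hard label constraints on top of updates of this general shape, which are omitted here.

        import numpy as np

        rng = np.random.default_rng(0)
        V = rng.random((50, 40))              # nonnegative data matrix (random stand-in)
        k = 5
        W, H = rng.random((50, k)), rng.random((k, 40))

        for _ in range(200):
            H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # KKT-derived multiplicative step
            W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

        print(np.linalg.norm(V - W @ H))      # reconstruction error decreases monotonically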

  20. A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions

    International Nuclear Information System (INIS)

    Jian Jinbao; Hu Qingjie; Tang Chunming; Zheng Haiyan

    2007-01-01

    In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints and does not involve second derivatives of the problem functions; such a subproblem can be formulated as a second-order cone program, which can be solved by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit formula. The algorithm is proved to be globally and superlinearly convergent under some mild conditions, without strict complementarity. Finally, some preliminary numerical results are reported

  1. Assessment of oscillatory stability constrained available transfer capability

    International Nuclear Information System (INIS)

    Jain, T.; Singh, S.N.; Srivastava, S.C.

    2009-01-01

    This paper utilizes a bifurcation approach to compute the oscillatory stability constrained available transfer capability (ATC) in an electricity market having bilateral as well as multilateral transactions. Oscillatory instability in non-linear systems can be related to Hopf bifurcation. At the Hopf bifurcation, one pair of critical eigenvalues of the system Jacobian reaches the imaginary axis. A new optimization formulation, including the Hopf bifurcation conditions, has been developed in this paper to obtain the dynamic ATC. An oscillatory stability based contingency screening index, which takes into account the impact of transactions on the severity of a contingency, has been utilized to identify critical contingencies to be considered in determining the ATC. The proposed method has been applied for dynamic ATC determination on a 39-bus New England system and a practical 75-bus Indian system, considering composite static load as well as dynamic load models. (author)

  2. Traversable geometric dark energy wormholes constrained by astrophysical observations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Meng, Xin-he [Nankai University, Department of Physics, Tianjin (China); Institute of Theoretical Physics, CAS, State Key Lab of Theoretical Physics, Beijing (China)

    2016-09-15

    In this paper, we bring astrophysical observations into wormhole research. We investigate the evolution behavior of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist by using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find from Fig. 5 of this work that when the effective equation of state parameter ω_X < -1 (or z < 0.109), i.e., when the null energy condition (NEC) is clearly violated, the wormholes will exist (open). Subsequently, six specific solutions of statically and spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluids is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, reduce theoretically the number of the wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology. (orig.)

  3. Traversable geometric dark energy wormholes constrained by astrophysical observations

    International Nuclear Information System (INIS)

    Wang, Deng; Meng, Xin-he

    2016-01-01

    In this paper, we bring astrophysical observations into wormhole research. We investigate the evolution behavior of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist by using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find from Fig. 5 of this work that when the effective equation of state parameter ω_X < -1 (or z < 0.109), i.e., when the null energy condition (NEC) is clearly violated, the wormholes will exist (open). Subsequently, six specific solutions of statically and spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluids is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, reduce theoretically the number of the wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology. (orig.)

  4. Minimizing the Fluid Used to Induce Fracturing

    Science.gov (United States)

    Boyle, E. J.

    2015-12-01

    Injecting less fluid to induce fracturing means less fluid must be produced back before gas is produced. One method is to inject as fast as possible until the desired fracture length is obtained. Presented here is an alternative injection strategy derived by applying optimal control theory to the macroscopic mass balance. The picture is that the fracture is constant in aperture, fluid is injected at a controlled rate at the near end, and the fracture unzips at the far end until the desired length is obtained. The velocity of the fluid is governed by Darcy's law, with larger permeability for flow along the fracture length. Fracture growth is monitored through micro-seismicity. Since the fluid is assumed to be incompressible, the rate at which fluid is injected is balanced by the rate of fracture growth and the rate of loss to the bounding rock. Minimizing injected fluid lost to the bounding rock is therefore the same as minimizing the total injected fluid. How to change the injection rate so as to minimize the total injected fluid is a problem in optimal control. For a given total length, the variation of the injection rate is determined by variations in the overall time needed to obtain the desired fracture length, the length at any time, and the rate at which the fracture is growing at that time. Optimal control theory leads to a boundary condition and an ordinary differential equation in time whose solution is an injection protocol that minimizes the fluid used under the stated assumptions. That protocol is to monitor the rate at which the square of the fracture length is growing and adjust the injection rate proportionately.
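
    A toy rendition of the stated protocol, with an arbitrary growth law standing in for the micro-seismic monitoring and an assumed proportionality constant k:

        L_prev, dt, k = 0.0, 1.0, 0.8           # k is an assumed proportionality constant

        def fracture_length(t):                 # placeholder for monitored micro-seismic data
            return (t / 10.0) ** 0.5

        for step in range(1, 11):
            t = step * dt
            L = fracture_length(t)
            rate = k * (L**2 - L_prev**2) / dt  # finite-difference estimate of d(L^2)/dt
            L_prev = L
            print(f"t={t:4.1f}  L={L:.3f}  injection rate ~ {rate:.3f}")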

  5. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
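
    Oblique thresholding itself is specialized, but the closely related plain soft-thresholding step inside an ISTA iteration illustrates the proximity-map style of $\ell_1$-minimization the paper accelerates; the random test problem below is invented.

        import numpy as np

        def soft(x, t):                       # proximity map of the l1 norm
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 100))
        x_true = np.zeros(100); x_true[:5] = 3.0
        b = A @ x_true
        lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2

        x = np.zeros(100)
        for _ in range(500):
            x = soft(x - step * A.T @ (A @ x - b), step * lam)   # prox-gradient step
        print(np.nonzero(np.abs(x) > 0.5)[0])   # approximately recovers the true support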

  6. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Directory of Open Access Journals (Sweden)

    Min Sun

    2014-01-01

    Full Text Available A matrix-free method for constrained equations is proposed, which is a combination of the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method and the famous hyperplane projection method. The new method is not only derivative-free but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with the existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is included in the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
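
    A sketch of the PRP-direction plus hyperplane-projection template for monotone constrained equations F(x) = 0, x in C. The test map, the box C and the line-search parameters are illustrative defaults, not necessarily the paper's choices (the relaxation factor γ is omitted).

        import numpy as np

        def F(x):                   # a simple monotone map with root x = 0
            return x + np.sin(x)

        def proj(x):                # projection onto C = [0, 5]^n
            return np.clip(x, 0.0, 5.0)

        x = np.full(5, 2.0)
        d, F_old = -F(x), F(x)
        for _ in range(100):
            alpha = 1.0             # derivative-free backtracking line search
            while alpha > 1e-10:
                z = x + alpha * d
                if F(z) @ d <= -1e-4 * alpha * (d @ d):
                    break
                alpha *= 0.5
            Fz = F(z)
            if np.linalg.norm(Fz) < 1e-8:
                x = z
                break
            lam = Fz @ (x - z) / (Fz @ Fz)      # hyperplane projection step
            x_new = proj(x - lam * Fz)
            F_new = F(x_new)
            beta = F_new @ (F_new - F_old) / (F_old @ F_old)   # PRP parameter
            d = -F_new + beta * d
            x, F_old = x_new, F_new
        print(x, np.linalg.norm(F(x)))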

  7. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
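
    The paper's least-squares objective is specific to the alpha-factor application; the generic stand-in below fits Dirichlet parameters by constrained least squares to target marginal means and variances (the targets are made-up numbers).

        import numpy as np
        from scipy.optimize import minimize

        target_mean = np.array([0.90, 0.07, 0.03])
        target_var  = np.array([0.01, 0.004, 0.002])

        def dirichlet_moments(a):
            a0 = a.sum()
            mean = a / a0
            var = a * (a0 - a) / (a0**2 * (a0 + 1.0))
            return mean, var

        def objective(a):           # squared deviation from target moments
            mean, var = dirichlet_moments(a)
            return ((mean - target_mean) ** 2).sum() + ((var - target_var) ** 2).sum()

        res = minimize(objective, x0=np.ones(3), bounds=[(1e-6, None)] * 3)
        print(res.x, dirichlet_moments(res.x)[0])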

  8. Finding a minimally informative Dirichlet prior distribution using least squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  9. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  10. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is growing interest from both the regulatory authorities and the nuclear industry in stimulating the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter into optimization problems. This paper presents an approach for using PRA models to conduct the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or burden is to be minimized while the risk or performance is constrained to be at a given level, or vice versa. The paper begins with the problem formulation, where the objective function and the constraints that apply in the constrained optimization of TIs, based on risk and cost models at the system level, are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA to allow optimizing TIs under constraints. A case study is then performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms for the constrained optimization of TIs; a similar benefit is expected when using this approach to solve other engineering optimization problems. However, as concluded in this paper, care must be taken when using genetic algorithms in constrained optimization problems
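
    A toy steady-state GA for the cost-versus-risk pattern described above: minimize a testing cost that falls with longer test intervals, subject to a risk cap handled by a penalty. The cost and risk models and all numbers are placeholders, not plant-specific PRA models.

        import random

        random.seed(2)
        RISK_CAP = 0.05

        def cost(ti):               # testing cost decreases with longer intervals
            return sum(100.0 / t for t in ti)

        def risk(ti):               # unavailability grows with longer intervals
            return sum(1e-4 * t for t in ti)

        def fitness(ti):            # penalized objective for constraint handling
            return cost(ti) + max(0.0, risk(ti) - RISK_CAP) * 1e6

        pop = [[random.uniform(10, 200) for _ in range(3)] for _ in range(30)]
        for _ in range(2000):       # steady-state GA: replace the worst individual
            a, b = random.sample(pop, 2)
            cut = random.randrange(3)
            child = a[:cut] + b[cut:]                             # one-point crossover
            child = [t * random.uniform(0.9, 1.1) for t in child] # multiplicative mutation
            worst = max(range(len(pop)), key=lambda i: fitness(pop[i]))
            if fitness(child) < fitness(pop[worst]):
                pop[worst] = child
        best = min(pop, key=fitness)
        print(best, cost(best), risk(best))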

  11. Minimalism and the Pragmatic Frame

    Directory of Open Access Journals (Sweden)

    Ana Falcato

    2016-02-01

    Full Text Available In the debate between literalism and contextualism in semantics, Kent Bach’s project is often taken to stand on the latter side of the divide. In this paper I argue that this is a misleading assumption, and I justify the claim by contrasting Bach’s assessment of the theoretical eliminability of the minimal propositions arguably expressed by well-formed sentences with standard minimalist views, and by further contrasting his account of the division of interpretative processes ascribable to the semantics and pragmatics of a language with a parallel analysis carried out by the most radical opponent of semantic minimalism, i.e., by occasionalism. If my analysis proves right, the sum of its conclusions amounts to a rejection of Bach’s main dichotomies.

  12. Principle of minimal work fluctuations.

    Science.gov (United States)

    Xiao, Gaoyang; Gong, Jiangbin

    2015-08-01

    Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality ⟨e^(-βW)⟩ = e^(-βΔF), a change in the fluctuations of e^(-βW) may impact how rapidly the statistical average of e^(-βW) converges towards the theoretical value e^(-βΔF), where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e^(-βW). In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e^(-βW), where W is the quantum work defined by two energy measurements, at the beginning and at the end of the process. In the classical domain, where the classical work protocol is realizable by an adiabatic process, the classical adiabatic process also yields the minimal fluctuations in e^(-βW). Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].

  13. Optimizing Processes to Minimize Risk

    Science.gov (United States)

    Loyd, David

    2017-01-01

    NASA, like other hazardous industries, has suffered catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you cannot eliminate risk, focus on mitigating the worst consequences and on recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.

  14. Minimal Length, Measurability and Gravity

    Directory of Open Access Journals (Sweden)

    Alexander Shalyt-Margolin

    2016-03-01

    Full Text Available The present work is a continuation of the author's previous papers on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are then used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.

  15. Minimal massive 3D gravity

    International Nuclear Information System (INIS)

    Bergshoeff, Eric; Merbis, Wout; Hohm, Olaf; Routh, Alasdair J; Townsend, Paul K

    2014-01-01

    We present an alternative to topologically massive gravity (TMG) with the same ‘minimal’ bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new ‘minimal massive gravity’ has both a positive energy graviton and positive central charges for the asymptotic AdS-boundary conformal algebra. (paper)

  16. Acquiring minimally invasive surgical skills

    OpenAIRE

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to establish this scientific basis. We have focused on the training and evaluation of minimally invasive surgical skills, both in a training setting and in practice in the operating room. This thesis has provided greater insight into the organization of surgical skills training during the residency training of surgical medical specialists.

  17. Scheduling of resource-constrained projects

    CERN Document Server

    Klein, Robert

    2000-01-01

    Project management has become a widespread instrument enabling organizations to efficiently master the challenges of steadily shortening product life cycles, global markets and decreasing profit margins. With projects increasing in size and complexity, their planning and control represents one of the most crucial management tasks. This is especially true for scheduling, which is concerned with establishing execution dates for the sub-activities to be performed in order to complete the project. The ability to manage projects where resources must be allocated between concurrent projects or even sub-activities of a single project requires the use of commercial project management software packages. However, the results yielded by the solution procedures included in them are often rather unsatisfactory. Scheduling of Resource-Constrained Projects develops more efficient procedures, which can easily be integrated into software packages via their built-in programming languages, and thus should be of great interest for practitioners.

  18. Constrained mathematics evaluation in probabilistic logic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Arlin Cooper, J

    1998-06-01

    A challenging problem in mathematically processing uncertain operands is that constraints inherent in the problem definition can require computations that are difficult to implement. Examples of possible constraints are that the sum of the probabilities of partitioned possible outcomes must be one, and that repeated appearances of the same variable must all take the same value. The latter, called the 'repeated variable problem', is addressed in this paper in order to show how interval-based probabilistic evaluation of Boolean logic expressions, such as those describing the outcomes of fault trees and event trees, can be facilitated in a way that can be readily implemented in software. We illustrate techniques that can be used to transform complex constrained problems into trivial problems in most tree logic expressions, and into tractable problems in most other cases.
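
    The repeated variable problem is easy to demonstrate numerically: naive interval evaluation of p(1-p) over p in [0.2, 0.4] treats the two occurrences of p as independent and overestimates the output range, whereas tracking the single shared variable gives the true range.

        import numpy as np

        lo, hi = 0.2, 0.4

        # naive interval arithmetic: [lo, hi] * [1-hi, 1-lo], occurrences treated independently
        naive = (lo * (1 - hi), hi * (1 - lo))

        # correct range, enforcing that both occurrences of p share one value
        p = np.linspace(lo, hi, 1001)
        exact = ((p * (1 - p)).min(), (p * (1 - p)).max())
        print("naive:", naive, " exact:", exact)   # naive interval is strictly wider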

  19. Constraining dark sectors with monojets and dijets

    International Nuclear Information System (INIS)

    Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai; McCullough, Matthew

    2015-03-01

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  20. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted by cKP_(K+1,M)) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. © 1997 American Institute of Physics

  1. Multiple Clustering Views via Constrained Projections

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Assent, Ira; Bailey, James

    2012-01-01

    Clustering, the grouping of data based on mutual similarity, is often used as one of the principal tools to analyze and understand data. Unfortunately, most conventional techniques aim at finding only a single clustering over the data. For many practical applications, especially those involving high-dimensional data, it is common to see that the data can be grouped in different yet meaningful ways. This gives rise to the recently emerging research area of discovering alternative clusterings. In this preliminary work, we propose a novel framework to generate multiple clustering views. The framework relies on a constrained data projection approach by which we ensure that a novel alternative clustering being found is not only qualitatively strong but also distinctively different from a reference clustering solution. We demonstrate the potential of the proposed framework using both synthetic and real-world data sets.

  2. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  3. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang; Yang, Yijun; Pottmann, Helmut; Mitra, Niloy J.

    2011-01-01

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  4. Constrained vertebrate evolution by pleiotropic genes.

    Science.gov (United States)

    Hu, Haiyang; Uesaka, Masahiro; Guo, Song; Shimai, Kotaro; Lu, Tsai-Ming; Li, Fang; Fujimoto, Satoko; Ishikawa, Masato; Liu, Shiping; Sasagawa, Yohei; Zhang, Guojie; Kuratani, Shigeru; Yu, Jr-Kai; Kusakabe, Takehiro G; Khaitovich, Philipp; Irie, Naoki

    2017-11-01

    Despite morphological diversification of chordates over 550 million years of evolution, their shared basic anatomical pattern (or 'bodyplan') remains conserved by unknown mechanisms. The developmental hourglass model attributes this to phylum-wide conserved, constrained organogenesis stages that pattern the bodyplan (the phylotype hypothesis); however, there has been no quantitative testing of this idea with a phylum-wide comparison of species. Here, based on data from early-to-late embryonic transcriptomes collected from eight chordates, we suggest that the phylotype hypothesis would be better applied to vertebrates than chordates. Furthermore, we found that vertebrates' conserved mid-embryonic developmental programmes are intensively recruited to other developmental processes, and the degree of the recruitment positively correlates with their evolutionary conservation and essentiality for normal development. Thus, we propose that the intensively recruited genetic system during vertebrates' organogenesis period imposed constraints on its diversification through pleiotropic constraints, which ultimately led to the common anatomical pattern observed in vertebrates.

  5. Constraining Lyman continuum escape using Machine Learning

    Science.gov (United States)

    Giri, Sambit K.; Zackrisson, Erik; Binggeli, Christian; Pelckmans, Kristiaan; Cubo, Rubén; Mellema, Garrelt

    2018-05-01

    The James Webb Space Telescope (JWST) will observe the rest-frame ultraviolet/optical spectra of galaxies from the epoch of reionization (EoR) in unprecedented detail. While escaping into the intergalactic medium, hydrogen-ionizing (Lyman continuum; LyC) photons from the galaxies will contribute to the bluer end of the UV slope and make nebular emission lines less prominent. We present a method to constrain the leakage of LyC photons using the spectra of high redshift (z ≳ 6) galaxies. We simulate JWST/NIRSpec observations of galaxies at z = 6-9 by matching the fluxes of galaxies observed in the Frontier Fields observations of the galaxy cluster MACS-J0416. Our method predicts the escape fraction f_esc with a mean absolute error Δf_esc ~ 0.14. The method also predicts the redshifts of the galaxies.

  6. Statistical mechanics of budget-constrained auctions

    International Nuclear Information System (INIS)

    Altarelli, F; Braunstein, A; Realpe-Gomez, J; Zecchina, R

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise

  7. Constraining Dark Sectors with Monojets and Dijets

    CERN Document Server

    Chala, Mikael; McCullough, Matthew; Nardini, Germano; Schmidt-Hoberg, Kai

    2015-01-01

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  8. Statistical mechanics of budget-constrained auctions

    Science.gov (United States)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  9. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in the background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for the rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood estimators, at a fraction of the computational effort

  10. Constraining dark sectors with monojets and dijets

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); McCullough, Matthew [European Organization for Nuclear Research (CERN), Geneva (Switzerland). Theory Div.

    2015-03-15

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  11. Hard exclusive meson production to constrain GPDs

    Energy Technology Data Exchange (ETDEWEB)

    Wolbeek, Johannes ter; Fischer, Horst; Gorzellik, Matthias; Gross, Arne; Joerg, Philipp; Koenigsmann, Kay; Malm, Pasquale; Regali, Christopher; Schmidt, Katharina; Sirtl, Stefan; Szameitat, Tobias [Physikalisches Institut, Albert-Ludwigs-Universitaet Freiburg, Freiburg im Breisgau (Germany); Collaboration: COMPASS Collaboration

    2014-07-01

    The concept of Generalized Parton Distributions (GPDs) combines the two-dimensional spatial information given by form factors with the longitudinal momentum information from the PDFs. Thus, GPDs provide a three-dimensional 'tomography' of the nucleon. Furthermore, according to Ji's sum rule, the GPDs H and E give access to the total angular momenta of quarks, antiquarks and gluons. While H can be approached using electroproduction cross sections, hard exclusive meson production off a transversely polarized target can help to constrain the GPD E. At the COMPASS experiment at CERN, two periods of data taking were performed in 2007 and 2010, using a longitudinally polarized 160 GeV/c muon beam and a transversely polarized NH₃ target. This talk introduces the data analysis of the process μ + p → μ' + p' + V, and recent results are presented.

  12. Minimization of heatwave morbidity and mortality.

    Science.gov (United States)

    Kravchenko, Julia; Abernethy, Amy P; Fawzy, Maria; Lyerly, H Kim

    2013-03-01

    Global climate change is projected to increase the frequency and duration of periods of extremely high temperatures. Both the general populace and public health authorities often underestimate the impact of high temperatures on human health. To highlight the vulnerable populations and illustrate approaches to minimization of health impacts of extreme heat, the authors reviewed the studies of heat-related morbidity and mortality for high-risk populations in the U.S. and Europe from 1958 to 2012. Heat exposure not only can cause heat exhaustion and heat stroke but also can exacerbate a wide range of medical conditions. Vulnerable populations, such as older adults; children; outdoor laborers; some racial and ethnic subgroups (particularly those with low SES); people with chronic diseases; and those who are socially or geographically isolated, have increased morbidity and mortality during extreme heat. In addition to ambient temperature, heat-related health hazards are exacerbated by air pollution, high humidity, and lack of air-conditioning. Consequently, a comprehensive approach to minimize the health effects of extreme heat is required and must address educating the public of the risks and optimizing heatwave response plans, which include improving access to environmentally controlled public havens, adaptation of social services to address the challenges required during extreme heat, and consistent monitoring of morbidity and mortality during periods of extreme temperatures. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  13. The re-emergence of the minimal running shoe.

    Science.gov (United States)

    Davis, Irene S

    2014-10-01

    The running shoe has gone through significant changes since its inception. The purpose of this paper is to review these changes, the majority of which have occurred over the past 50 years. Running footwear began as very minimal, then evolved to become highly cushioned and supportive. However, over the past 5 years, there has been a reversal of this trend, with runners seeking more minimal shoes that allow their feet more natural motion. This abrupt shift toward footwear without cushioning and support has led to reports of injuries associated with minimal footwear. In response to this, the running footwear industry shifted again toward the development of lightweight, partial minimal shoes that offer some support and cushioning. In this paper, studies comparing the mechanics between running in minimal, partial minimal, and traditional shoes are reviewed. The implications for injuries in all 3 conditions are examined. The use of minimal footwear in other populations besides runners is discussed. Finally, areas for future research into minimal footwear are suggested.

  14. Constraining the ensemble Kalman filter for improved streamflow forecasting

    Science.gov (United States)

    Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James

    2018-05-01

    Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also
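
    A schematic of the constrained-EnKF idea in its simplest form: a standard scalar ensemble update followed by a flux clamp (bounding the rate of adjustment) and a mass clamp (non-negativity and capacity). The one-store model and all bounds are placeholders, not the paper's hydrological model.

        import numpy as np

        rng = np.random.default_rng(0)
        CAPACITY, MAX_FLUX = 100.0, 15.0        # assumed physical limits per step

        ens = rng.uniform(20, 80, size=50)      # ensemble of storage states
        prior = ens.copy()
        obs, obs_var = 55.0, 4.0

        # standard scalar EnKF update (observation operator H = identity)
        K = ens.var() / (ens.var() + obs_var)
        ens = ens + K * (obs + rng.normal(0, np.sqrt(obs_var), ens.size) - ens)

        # flux constraint: limit the adjustment rate implied by the update
        ens = prior + np.clip(ens - prior, -MAX_FLUX, MAX_FLUX)
        # mass constraint: non-negativity and capacity
        ens = np.clip(ens, 0.0, CAPACITY)
        print(ens.mean(), ens.std())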

  15. Chance-constrained programming approach to natural-gas curtailment decisions

    Energy Technology Data Exchange (ETDEWEB)

    Guldmann, J M

    1981-10-01

    This paper presents a modeling methodology for the determination of optimal curtailment decisions by a gas-distribution utility during a chronic gas-shortage situation. Based on the end-use priority approach, a linear-programming model is formulated that reallocates the available gas supply among the utility's customers while minimizing fuel switching, unemployment, and utility operating costs. This model is then transformed into a chance-constrained program in order to account for the weather-related variability of the gas requirements. The methodology is applied to the East Ohio Gas Company. 16 references, 2 figures, 3 tables.
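
    The standard reduction behind chance-constrained programs of this kind: under a Gaussian demand assumption, a probabilistic constraint P(demand ≤ x) ≥ α becomes the deterministic linear constraint x ≥ μ + z_α σ. The figures below are invented for illustration.

        from scipy.stats import norm

        mu, sigma, alpha = 120.0, 15.0, 0.95
        z = norm.ppf(alpha)           # standard normal quantile z_alpha
        x_min = mu + z * sigma        # minimum allocation meeting the service level
        print(f"z = {z:.3f}, required allocation >= {x_min:.1f}")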

  16. Precision measurements, dark matter direct detection and LHC Higgs searches in a constrained NMSSM

    International Nuclear Information System (INIS)

    Bélanger, G.; Hugonie, C.; Pukhov, A.

    2009-01-01

    We reexamine the constrained version of the Next-to-Minimal Supersymmetric Standard Model with semi-universal parameters at the GUT scale (CNMSSM). We include constraints from collider searches for Higgs and susy particles, the upper bound on the relic density of dark matter, measurements of the muon anomalous magnetic moment and of B-physics observables, as well as direct searches for dark matter. We then study the prospects for the direct detection of dark matter in large-scale detectors and comment on the prospects for the discovery of heavy Higgs states at the LHC

  17. The properties of retrieval cues constrain the picture superiority effect.

    Science.gov (United States)

    Weldon, M S; Roediger, H L; Challis, B H

    1989-01-01

    In three experiments, we examined why pictures are remembered better than words on explicit memory tests like recall and recognition, whereas words produce more priming than pictures on some implicit tests, such as word-fragment and word-stem completion (e.g., completing -l-ph-nt or ele----- as elephant). One possibility is that pictures are always more accessible than words if subjects are given explicit retrieval instructions. An alternative possibility is that the properties of the retrieval cues themselves constrain the retrieval processes engaged; word fragments might induce data-driven (perceptually based) retrieval, which favors words regardless of the retrieval instructions. Experiment 1 demonstrated that words were remembered better than pictures on both the word-fragment and word-stem completion tasks under both implicit and explicit retrieval conditions. In Experiment 2, pictures were recalled better than words with semantically related extralist cues. In Experiment 3, when semantic cues were combined with word fragments, pictures and words were recalled equally well under explicit retrieval conditions, but words were superior to pictures under implicit instructions. Thus, the inherently data-limited properties of fragmented words limit their use in accessing conceptual codes. Overall, the results indicate that retrieval operations are largely determined by properties of the retrieval cues under both implicit and explicit retrieval conditions.

  18. A field theory description of constrained energy-dissipation processes

    International Nuclear Information System (INIS)

    Mandzhavidze, I.D.; Sisakyan, A.N.

    2002-01-01

    A field theory description of dissipation processes constrained by a high-symmetry group is given. The formalism is presented using the example of multiple-hadron production processes, where the transition to thermodynamic equilibrium results from the kinetic energy of the colliding particles dissipating into hadron masses. The dynamics of these processes is restricted because the constraints responsible for colour charge confinement must be taken into account. We develop a more general S-matrix formulation of the thermodynamics of nonequilibrium dissipative processes and find a necessary and sufficient condition for the validity of this description; this condition is similar to the correlation relaxation condition which, according to Bogolyubov, must apply as a system approaches equilibrium. This situation must physically occur in processes with extremely high multiplicity, at least if the hadron mass is nonzero. We also describe a new strong-coupling perturbation scheme, which is useful for taking symmetry restrictions on the dynamics of dissipation processes into account. We review the literature devoted to this problem

  19. Constraining composite Higgs models using LHC data

    Science.gov (United States)

    Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar

    2018-03-01

    We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5_L-5_R representation, to what we call here the 'extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the 'extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, compared to f ≥ 1 TeV for the minimal 5_L-5_R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong-sector coset SO(6)/SO(5) for fermion embeddings up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.
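
    The χ² fit mentioned above can be illustrated with a toy sketch. Everything below is invented for illustration (mock channels, signal strengths, and uncertainties, not the ATLAS/CMS data used in the paper), and each signal strength is crudely approximated as κ_prod² κ_decay², ignoring total-width effects.

```python
import numpy as np

# Mock signal-strength measurements: (production coupling, decay coupling,
# measured mu, uncertainty). All values are hypothetical.
channels = [
    ("V", "F", 1.1, 0.20),   # e.g. VBF production, fermionic decay
    ("F", "V", 0.9, 0.15),   # e.g. gluon-fusion production, VV decay
    ("F", "F", 1.0, 0.25),
]

def mu_pred(kv, kf, prod, dec):
    """Signal strength ~ kappa_prod^2 * kappa_decay^2 (width effects ignored)."""
    k = {"V": kv, "F": kf}
    return k[prod] ** 2 * k[dec] ** 2

def chi2(kv, kf):
    return sum(((mu_pred(kv, kf, p, d) - mu) / s) ** 2 for p, d, mu, s in channels)

# Scan the (kappa_V, kappa_F) plane and report the best-fit point.
kvs = np.linspace(0.8, 1.2, 81)
kfs = np.linspace(0.6, 1.4, 161)
grid = np.array([[chi2(kv, kf) for kf in kfs] for kv in kvs])
i, j = np.unravel_index(grid.argmin(), grid.shape)
print(f"best fit: kappa_V = {kvs[i]:.3f}, kappa_F = {kfs[j]:.3f}, chi2 = {grid[i, j]:.2f}")
```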

  20. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization, where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The review starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation-optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
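
    As a minimal sketch of this problem class (not a method from the paper), the snippet below picks, by plain sample averaging over simulation replications, the design with the best estimated primary measure among those whose estimated secondary measure satisfies the constraint. The designs, the budget, and the `simulate` function are all hypothetical.

```python
import random

def simulate(design, seed):
    """Hypothetical stochastic simulator returning (primary, secondary)."""
    rng = random.Random(seed)
    primary = design["throughput"] + rng.gauss(0.0, 1.0)
    secondary = design["cost"] + rng.gauss(0.0, 0.5)
    return primary, secondary

designs = [{"name": "A", "throughput": 10.0, "cost": 4.8},
           {"name": "B", "throughput": 12.0, "cost": 5.4},
           {"name": "C", "throughput": 11.0, "cost": 4.2}]

BUDGET = 5.0    # upper bound on the expected secondary measure
N_REPS = 200    # simulation replications per design

best = None
for d in designs:
    outs = [simulate(d, s) for s in range(N_REPS)]
    mean_primary = sum(o[0] for o in outs) / N_REPS
    mean_secondary = sum(o[1] for o in outs) / N_REPS
    # Keep the design only if the estimated constraint is satisfied.
    if mean_secondary <= BUDGET and (best is None or mean_primary > best[1]):
        best = (d["name"], mean_primary)
print("selected design:", best)
```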

  1. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface.

  2. Model-based minimization algorithm of a supercritical helium loop consumption subject to operational constraints

    Science.gov (United States)

    Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.

    2017-12-01

    Supercritical helium loops at 4.2 K are the baseline cooling strategy for the superconducting magnets of tokamaks (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range along their entire length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy the temperature constraints in different ways (by adjusting the bath temperature and the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K requires at least 220 W of electrical power). To find the optimal operating conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical-loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It shows that changes in the magnet temperature operating point (e.g., after a change in the plasma configuration) involve large changes in the optimal operating point of the cryodistribution. Recommendations are made to keep the energy consumption as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Programme 2014-2018 under Grant 633053.
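
    The structure of the optimization can be sketched with a toy surrogate (not the Simcryogenics model; all coefficients are invented): minimize electrical power over the bath temperature and the supercritical mass flow, subject to a cap on the magnet temperature.

```python
import numpy as np
from scipy.optimize import minimize

T_MAX = 5.0   # maximum allowed magnet temperature [K]

def magnet_temp(x):
    """Hypothetical heat-extraction model: more flow -> colder magnet."""
    T_bath, mdot = x
    return T_bath + 12.0 / mdot

def electric_power(x):
    """Toy power model: pumping work grows with flow, a colder bath costs more."""
    T_bath, mdot = x
    circulator = 0.8 * mdot ** 2
    refrigeration = 220.0 * (6.0 - T_bath)   # echoes the ~220 W/W figure above
    return circulator + refrigeration

res = minimize(electric_power, x0=[4.4, 20.0], method="SLSQP",
               bounds=[(4.2, 5.0), (5.0, 50.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: T_MAX - magnet_temp(x)}])
print("optimal (T_bath, mdot):", res.x, " power:", res.fun)
```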

  3. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  4. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path, and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
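
    A minimal simulation sketch of the central object here, a reflected SDE: Euler-Maruyama steps in which a proposed state that leaves the allowed interval is reflected back across the barrier it crossed. The drift, noise level, and interval are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def reflected_em(x0, drift, sigma, lo, hi, dt=0.01, n=1000):
    """Euler-Maruyama for dX = drift(X) dt + sigma dW, reflected into [lo, hi]."""
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        step = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # Reflect the proposal across whichever barrier it crossed.
        while step < lo or step > hi:
            step = 2 * lo - step if step < lo else 2 * hi - step
        x[k + 1] = step
    return x

# Mean-reverting movement confined to a "shoreline" interval [0, 1].
path = reflected_em(0.5, drift=lambda x: -0.5 * (x - 0.5), sigma=0.4, lo=0.0, hi=1.0)
print(path.min(), path.max())   # the whole path stays inside [0, 1]
```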

  5. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  6. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
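
    A toy sketch of the ODE-constrained mixture idea (an invented model, not the paper's NGF-induced Erk1/2 pathway): two subpopulations share the ODE dx/dt = -k·x but differ in the rate constant k, and a Gaussian mixture over the ODE trajectories scores single-cell time courses.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

t_obs = np.linspace(0.0, 5.0, 6)   # observation times

def ode_solution(k):
    """Trajectory of dx/dt = -k*x with x(0) = 1 at the observation times."""
    sol = solve_ivp(lambda t, x: -k * x, (0.0, 5.0), [1.0], t_eval=t_obs)
    return sol.y[0]

def mixture_loglik(params, data):
    k1, k2, w, sd = params
    m1, m2 = ode_solution(k1), ode_solution(k2)
    ll = 0.0
    for traj in data:
        # Each cell comes from subpopulation 1 with weight w, else from 2.
        p1 = np.prod(norm.pdf(traj, m1, sd))
        p2 = np.prod(norm.pdf(traj, m2, sd))
        ll += np.log(w * p1 + (1.0 - w) * p2)
    return ll

# Simulated single-cell data: 30 cells with k = 0.3, 20 cells with k = 1.2.
rng = np.random.default_rng(1)
data = [ode_solution(k) + rng.normal(0.0, 0.05, t_obs.size)
        for k in [0.3] * 30 + [1.2] * 20]
print(mixture_loglik([0.3, 1.2, 0.6, 0.05], data))
```

    Maximizing this log-likelihood over (k1, k2, w, sd) with a standard optimizer would recover the subpopulation rates and weight in this toy setting.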

  7. Resource Management in Constrained Dynamic Situations

    Science.gov (United States)

    Seok, Jinwoo

    Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. The goal, then, is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments.

  8. On relevant boundary perturbations of unitary minimal models

    International Nuclear Information System (INIS)

    Recknagel, A.; Roggenkamp, D.; Schomerus, V.

    2000-01-01

    We consider unitary Virasoro minimal models on the disk with Cardy boundary conditions and discuss deformations by certain relevant boundary operators, analogous to tachyon condensation in string theory. Concentrating on the least relevant boundary field, we can perform a perturbative analysis of renormalization group fixed points. We find that the systems always flow towards stable fixed points which admit no further (non-trivial) relevant perturbations. The new conformal boundary conditions are in general given by superpositions of 'pure' Cardy boundary conditions

  9. Explaining evolution via constrained persistent perfect phylogeny

    Science.gov (United States)

    2014-01-01

    Background The perfect phylogeny is an often used model in phylogenetics, since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as, for example, haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, e.g., back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is, the fact that a character is allowed to return back to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial-time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to

  10. Constraining Lipid Biomarker Paleoclimate Proxies in a Small Arctic Watershed

    Science.gov (United States)

    Dion-Kirschner, H.; McFarlin, J. M.; Axford, Y.; Osburn, M. R.

    2017-12-01

    Arctic amplification of climate change renders high-latitude environments unusually sensitive to changes in climatic conditions (Serreze and Barry, 2011). Lipid biomarkers, and their hydrogen and carbon isotopic compositions, can yield valuable paleoclimatic and paleoecological information. However, many variables affect the production and preservation of lipids and their constituent isotopes, including precipitation, plant growth conditions, biosynthesis mechanisms, and sediment depositional processes (Sachse et al., 2012). These variables are particularly poorly constrained for high-latitude environments, where trees are sparse or not present, and plants grow under continuous summer light and cool temperatures during a short growing season. Here we present a source-to-sink study of a single watershed from the Kangerlussuaq region of southwest Greenland. Our analytes from in and around 'Little Sugarloaf Lake' (LSL) include terrestrial and aquatic plants, plankton, modern lake water, surface sediments, and a sediment core. This diverse sample set allows us to fulfill three goals: 1) We evaluate the production of lipids and isotopic signatures in the modern watershed in comparison to modern climate. Our data exhibit genus-level trends in leaf wax production and isotopic composition, and help clarify the difference between terrestrial and aquatic signals. 2) We evaluate the surface sediment of LSL to determine how lipid biomarkers from the watershed are incorporated into sediments. We constrain the relative contributions of terrestrial plants, aquatic plants, and other aquatic organisms to the sediment in this watershed. 3) We apply this modern source-to-sink calibration to the analysis of a 65 cm sediment core record. Our core is organic-rich, and relatively high deposition rates allow us to reconstruct paleoenvironmental changes with high resolution. Our work will help determine the veracity of these common paleoclimate proxies, specifically for research in

  11. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-05-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). New legislation at the federal level is being introduced; passage will result in new EPA regulations and DOE orders. At the state level, the Hazardous Waste Reduction and Management Review Act of 1989 was signed by the Governor, and DHS is currently promulgating regulations to implement the new law. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements.

  12. Symmetry breaking for drag minimization

    Science.gov (United States)

    Roper, Marcus; Squires, Todd M.; Brenner, Michael P.

    2005-11-01

    For locomotion at high Reynolds numbers, drag minimization favors fore-aft asymmetric slender shapes with blunt noses and sharp trailing edges. On the other hand, in an inertialess fluid the drag experienced by a body is independent of whether it travels forward or backward through the fluid, so there is no advantage to having a single preferred swimming direction. In fact, numerically determined minimum-drag shapes are known to exhibit almost no fore-aft asymmetry even at moderate Re. We show that asymmetry persists, albeit extremely weakly, down to vanishingly small Re, scaling asymptotically as Re^3. The need to minimize drag to maximize speed for a given propulsive capacity gives one possible mechanism for the increasing asymmetry in the body plans seen in nature as organisms increase in size and swimming speed, from bacteria like E. coli up to pursuit-predator fish such as tuna. If it is the dominant mechanism, then this signature scaling will be observed in the shapes of motile micro-organisms.

  13. Grain Yield Observations Constrain Cropland CO2 Fluxes Over Europe

    Science.gov (United States)

    Combe, M.; de Wit, A. J. W.; Vilà-Guerau de Arellano, J.; van der Molen, M. K.; Magliulo, V.; Peters, W.

    2017-12-01

    Carbon exchange over croplands plays an important role in the European carbon cycle over daily to seasonal time scales. A better description of this exchange in terrestrial biosphere models—most of which currently treat crops as unmanaged grasslands—is needed to improve atmospheric CO2 simulations. In the framework we present here, we model gross European cropland CO2 fluxes with a crop growth model constrained by grain-yield observations. Our approach follows a two-step procedure. In the first step, we calculate day-to-day crop carbon fluxes and pools with the WOrld FOod STudies (WOFOST) model. A scaling factor of crop growth is optimized regionally by minimizing the difference between the simulated final grain carbon pool and crop yield observations from the Statistical Office of the European Union. In a second step, we re-run our WOFOST model for the full European 25 × 25 km gridded domain using the optimized scaling factors. We combine our optimized crop CO2 fluxes with a simple soil respiration model to obtain the net cropland CO2 exchange. We assess our model's ability to represent cropland CO2 exchange using 40 years of observations at seven European FluxNet sites and compare it with carbon fluxes produced by a typical terrestrial biosphere model. We conclude that our new model framework provides a more realistic and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal and serve as a new cropland component in the CarbonTracker Europe inverse model.
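
    The first, calibration step can be sketched in one dimension: choose the regional scaling factor so that the simulated final grain-carbon pool matches the reported yield. The response curve and numbers below are invented stand-ins for a WOFOST run.

```python
from scipy.optimize import minimize_scalar

def simulated_grain_carbon(scale):
    """Hypothetical stand-in for a WOFOST run (final grain carbon, gC/m2)."""
    return 320.0 * scale ** 0.9

observed_grain_carbon = 280.0   # toy value from regional yield statistics

# Fit the scaling factor by least squares against the observation.
res = minimize_scalar(
    lambda s: (simulated_grain_carbon(s) - observed_grain_carbon) ** 2,
    bounds=(0.2, 2.0), method="bounded")
print("optimized scaling factor:", round(res.x, 3))
```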

  14. Depletion mapping and constrained optimization to support managing groundwater extraction

    Science.gov (United States)

    Fienen, Michael N.; Bradbury, Kenneth R.; Kniffin, Maribeth; Barlow, Paul M.

    2018-01-01

    Groundwater models often serve as management tools to evaluate competing water uses including ecosystems, irrigated agriculture, industry, municipal supply, and others. Depletion-potential mapping—showing the model-calculated potential impacts that wells have on stream baseflow—can form the basis for multiple potential management approaches in an oversubscribed basin. Specific management approaches can include scenarios proposed by stakeholders, systematic changes in well pumping based on depletion potential, and formal constrained optimization, which can be used to quantify the tradeoff between water use and stream baseflow. We consider variables such as the maximum reduction allowed at each well, and various groupings of wells obtained, for example, by K-means clustering on spatial proximity and depletion potential. These approaches provide a potential starting point and guidance for resource managers and stakeholders to make decisions about groundwater management in a basin, spreading responsibility in different ways. We illustrate these approaches in the Little Plover River basin in central Wisconsin, United States—home to a rich agricultural tradition, with farmland and urban areas both in close proximity to a groundwater-dependent trout stream. Groundwater withdrawals have reduced baseflow supplying the Little Plover River below a legally established minimum. The techniques in this work were developed in response to engaged stakeholders with various interests and goals for the basin. They sought to develop a collaborative management plan at a watershed scale that restores the flow rate in the river in a manner that incorporates principles of shared governance and results in effective and minimally disruptive changes in groundwater extraction practices.
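
    The constrained-optimization variant can be sketched as a linear program over a depletion-response matrix: maximize total pumping subject to a cap on total stream depletion, with each well limited to its current rate. All coefficients below are hypothetical, not Little Plover River values.

```python
import numpy as np
from scipy.optimize import linprog

# Fraction of each well's pumping that appears as streamflow depletion
# (one stream reach, three wells; hypothetical response coefficients).
depletion_potential = np.array([0.60, 0.35, 0.10])
current_rates = np.array([1000.0, 1500.0, 800.0])   # m3/day per well
max_total_depletion = 700.0                          # m3/day allowed

# Maximize total pumping (linprog minimizes, hence the negated objective),
# subject to the depletion cap and 0 <= q_j <= current rate.
res = linprog(c=-np.ones(3),
              A_ub=[depletion_potential], b_ub=[max_total_depletion],
              bounds=list(zip(np.zeros(3), current_rates)))
q = res.x
print("allowed pumping:", q, " total depletion:", depletion_potential @ q)
```

    Unsurprisingly, the solution cuts back hardest on the well with the largest depletion potential, one concrete way of spreading responsibility among wells.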

  15. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least-squares problem. The well-known ill-posedness, both in recovering the blurring operator and the true image, makes the problem difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
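
    A hedged sketch of the separable (variable projection) structure on a toy 1-D problem: the blur width is the nonlinear variable, and for each trial width the image is eliminated by solving a Tikhonov-regularized linear subproblem. The model, sizes, and regularization weight are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

n = 64
rng = np.random.default_rng(2)
x_true = np.zeros(n)
x_true[[15, 30, 45]] = [1.0, 2.0, 1.5]          # a sparse "image"

def blur_matrix(sigma):
    """Row-normalized Gaussian blur of width sigma."""
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)

y = blur_matrix(3.0) @ x_true + 0.01 * rng.standard_normal(n)
lam = 1e-4   # regularization weight; as the abstract stresses, it must be well chosen

def projected_objective(sigma):
    # Variable projection: eliminate the linear variable x for this sigma.
    A = blur_matrix(sigma)
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return np.sum((A @ x_hat - y) ** 2) + lam * np.sum(x_hat ** 2)

res = minimize_scalar(projected_objective, bounds=(1.0, 6.0), method="bounded")
print("estimated blur width:", res.x)   # ideally near the true width 3.0
```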

  16. Bound constrained quadratic programming via piecewise quadratic functions

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun; Pinar, M. C.

    1999-01-01

    We consider the strictly convex quadratic programming problem with bounded variables. A dual problem is derived using Lagrange duality. The dual problem is the minimization of an unconstrained, piecewise quadratic function. It involves a lower bound of λ_1, the smallest eigenvalue of a symmetric, positive definite matrix, and is solved by Newton iteration with line search. The paper describes the algorithm and its implementation, including the estimation of λ_1, how to get a good starting point for the iteration, and up- and downdating of the Cholesky factorization. Results of extensive ...
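
    The primal problem itself is easy to state and, for illustration, can be solved by a simple projected-gradient iteration (a hedged sketch; the paper's method is the dual piecewise-quadratic Newton iteration described above, not this one). The matrix, vector, and bounds are invented.

```python
import numpy as np

def box_qp_projected_gradient(Q, b, lo, hi, steps=500):
    """Minimize 0.5 x^T Q x + b^T x subject to lo <= x <= hi (Q SPD)."""
    L = np.linalg.eigvalsh(Q).max()          # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(b), lo, hi)
    for _ in range(steps):
        x = np.clip(x - (Q @ x + b) / L, lo, hi)   # gradient step, then project
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])       # symmetric positive definite
b = np.array([-8.0, -6.0])
print(box_qp_projected_gradient(Q, b, lo=np.zeros(2), hi=np.ones(2)))  # -> [1, 1]
```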

  17. Likelihood analysis of the next-to-minimal supergravity motivated model

    International Nuclear Information System (INIS)

    Balazs, Csaba; Carter, Daniel

    2009-01-01

    In anticipation of data from the Large Hadron Collider (LHC) and the potential discovery of supersymmetry, we calculate the odds of the next-to-minimal version of the popular supergravity motivated model (NmSuGra) being discovered at the LHC to be 4:3 (57%). We also demonstrate that viable regions of the NmSuGra parameter space outside the LHC reach can be covered by upgraded versions of dark matter direct detection experiments, such as super-CDMS, at 99% confidence level. Due to the similarities of the models, we expect very similar results for the constrained minimal supersymmetric standard model (CMSSM).

  18. Joint Chance-Constrained Dynamic Programming

    Science.gov (United States)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
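
    The dual update described above can be sketched as a one-dimensional root search. In the snippet, `risk_of` is a hypothetical placeholder for the failure probability of the policy obtained by solving the penalized (unconstrained) DP at a given dual value; in the real algorithm that inner solve is the expensive step.

```python
DELTA = 0.05    # allowed probability of leaving the feasible region

def risk_of(lam):
    """Placeholder: failure probability of the optimal policy for the DP
    with one-stage costs penalized by lam * 1{failure}; it decreases in lam."""
    return 0.3 / (1.0 + lam)

lo, hi = 0.0, 1.0e6
for _ in range(100):                 # bisection: exponentially fast convergence
    mid = 0.5 * (lo + hi)
    if risk_of(mid) > DELTA:
        lo = mid                     # policy still too risky: raise the penalty
    else:
        hi = mid
print("dual variable:", hi, " risk:", risk_of(hi))
```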

  19. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu

    2010-05-07

    This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip-spectrum corner wave numbers (kc) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of kc by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency-domain inversion method. These tests reveal that, due to smoothing constraints used to stabilize the inversion process, kc tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between kc and the peak ground acceleration (PGA), using a k^-2 kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that kc follows a lognormal distribution, with similar standard deviations for both methods.

  20. Technologies for a greenhouse-constrained society

    International Nuclear Information System (INIS)

    Kuliasha, M.A.; Zucker, A.; Ballew, K.J.

    1992-01-01

    This conference explored how three technologies might help society adjust to life in a greenhouse-constrained environment. Technology experts and policy makers from around the world met June 11--13, 1991, in Oak Ridge, Tennessee, to address questions about how energy efficiency, biomass, and nuclear technologies can mitigate the greenhouse effect and to explore energy production and use in countries in various stages of development. The conference was organized by Oak Ridge National Laboratory and sponsored by the US Department of Energy. Energy efficiency, biomass, and nuclear energy are potential substitutes for fossil fuels that might help slow or even reverse the global warming changes that may result from mankind's thirst for energy. Many other conferences have questioned whether the greenhouse effect is real and what reductions in greenhouse gas emissions might be necessary to avoid serious ecological consequences; this conference studied how these reductions might actually be achieved. For these conference proceedings, individual papers are processed separately for the Energy Data Base

  1. Electricity in a Climate-Constrained World

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    After experiencing a historic drop in 2009, electricity generation reached a record high in 2010, confirming the close linkage between economic growth and electricity usage. Unfortunately, CO2 emissions from electricity have also resumed their growth: Electricity remains the single-largest source of CO2 emissions from energy, with 11.7 billion tonnes of CO2 released in 2010. The imperative to 'decarbonise' electricity and improve end-use efficiency remains essential to the global fight against climate change. The IEA’s Electricity in a Climate-Constrained World provides an authoritative resource on progress to date in this area, including statistics related to CO2 and the electricity sector across ten regions of the world (supply, end-use and capacity additions). It also presents topical analyses on the challenge of rapidly curbing CO2 emissions from electricity. Looking at policy instruments, it focuses on emissions trading in China, using energy efficiency to manage electricity supply crises and combining policy instruments for effective CO2 reductions. On regulatory issues, it asks whether deregulation can deliver decarbonisation and assesses the role of state-owned enterprises in emerging economies. And from technology perspectives, it explores the rise of new end-uses, the role of electricity storage, biomass use in Brazil, and the potential of carbon capture and storage for ‘negative emissions’ electricity supply.

  2. When ethics constrains clinical research: trial design of control arms in "greater than minimal risk" pediatric trials.

    Science.gov (United States)

    de Melo-Martín, Inmaculada; Sondhi, Dolan; Crystal, Ronald G

    2011-09-01

    For more than three decades clinical research in the United States has been explicitly guided by the idea that ethical considerations must be central to research design and practice. In spite of the centrality of this idea, attempting to balance the sometimes conflicting values of advancing scientific knowledge and protecting human subjects continues to pose challenges. Possible conflicts between the standards of scientific research and those of ethics are particularly salient in relation to trial design. Specifically, the choice of a control arm is an aspect of trial design in which ethical and scientific issues are deeply entwined. Although ethical quandaries related to the choice of control arms may arise when conducting any type of clinical trials, they are conspicuous in early phase gene transfer trials that involve highly novel approaches and surgical procedures and have children as the research subjects. Because of children's and their parents' vulnerabilities, in trials that investigate therapies for fatal, rare diseases affecting minors, the scientific and ethical concerns related to choosing appropriate controls are particularly significant. In this paper we use direct gene transfer to the central nervous system to treat late infantile neuronal ceroid lipofuscinosis to illustrate some of these ethical issues and explore possible solutions to real and apparent conflicts between scientific and ethical considerations.

  3. Minimal Coleman-Weinberg theory explains the diphoton excess

    DEFF Research Database (Denmark)

    Antipin, Oleg; Mojaza, Matin; Sannino, Francesco

    2016-01-01

    It is possible to delay the hierarchy problem by replacing the standard Higgs sector with the Coleman-Weinberg mechanism, and at the same time ensure perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, minimal models of this type require the introduction of ...

  4. Supply curve bidding of electricity in constrained power networks

    International Nuclear Information System (INIS)

    Al-Agtash, Salem Y.

    2010-01-01

    This paper presents a Supply Curve Bidding (SCB) approach that complies with the notion of the Standard Market Design (SMD) in electricity markets. The approach considers the demand-side option and Locational Marginal Pricing (LMP) clearing. It iteratively alters Supply Function Equilibria (SFE) model solutions, then choosing the best bid based on market-clearing LMP and network conditions. It has been argued that SCB better benefits suppliers compared to fixed quantity-price bids. It provides more flexibility and better opportunity to achieving profitable outcomes over a range of demands. In addition, SCB fits two important criteria: simplifies evaluating electricity derivatives and captures smooth marginal cost characteristics that reflect actual production costs. The simultaneous inclusion of physical unit constraints and transmission security constraints will assure a feasible solution. An IEEE 24-bus system is used to illustrate perturbations of SCB in constrained power networks within the framework of SDM. By searching in the neighborhood of SFE model solutions, suppliers can obtain their best bid offers based on market-clearing LMP and network conditions. In this case, electricity producers can derive their best offering strategy both in the power exchange and the long-term contractual markets within a profitable, yet secure, electricity market. (author)

  5. Supply curve bidding of electricity in constrained power networks

    Energy Technology Data Exchange (ETDEWEB)

    Al-Agtash, Salem Y. [Hijjawi Faculty of Engineering, Yarmouk University, Irbid 21163 (Jordan)]

    2010-07-15

    This paper presents a Supply Curve Bidding (SCB) approach that complies with the notion of the Standard Market Design (SMD) in electricity markets. The approach considers the demand-side option and Locational Marginal Pricing (LMP) clearing. It iteratively alters Supply Function Equilibria (SFE) model solutions, then choosing the best bid based on market-clearing LMP and network conditions. It has been argued that SCB better benefits suppliers compared to fixed quantity-price bids. It provides more flexibility and better opportunity to achieving profitable outcomes over a range of demands. In addition, SCB fits two important criteria: simplifies evaluating electricity derivatives and captures smooth marginal cost characteristics that reflect actual production costs. The simultaneous inclusion of physical unit constraints and transmission security constraints will assure a feasible solution. An IEEE 24-bus system is used to illustrate perturbations of SCB in constrained power networks within the framework of SDM. By searching in the neighborhood of SFE model solutions, suppliers can obtain their best bid offers based on market-clearing LMP and network conditions. In this case, electricity producers can derive their best offering strategy both in the power exchange and the long-term contractual markets within a profitable, yet secure, electricity market. (author)

  6. Laparoscopic colonic resection in inflammatory bowel disease: minimal surgery, minimal access and minimal hospital stay.

    LENUS (Irish Health Repository)

    Boyle, E

    2008-11-01

    Laparoscopic surgery for inflammatory bowel disease (IBD) is technically demanding but can offer improved short-term outcomes. The introduction of minimally invasive surgery (MIS) as the default operative approach for IBD, however, may have inherent learning curve-associated disadvantages. We hypothesise that the establishment of MIS as the standard operative approach does not increase patient morbidity as assessed in the initial period of its introduction into a specialised unit, and that it confers earlier postoperative gastrointestinal recovery and reduced hospitalisation compared with conventional open resection.

  7. Higgs decays to dark matter: Beyond the minimal model

    International Nuclear Information System (INIS)

    Pospelov, Maxim; Ritz, Adam

    2011-01-01

    We examine the interplay between Higgs mediation of dark-matter annihilation and scattering on one hand and the invisible Higgs decay width on the other, in a generic class of models utilizing the Higgs portal. We find that, while the invisible width of the Higgs to dark matter is now constrained for a minimal singlet scalar dark matter particle by experiments such as XENON100, this conclusion is not robust within more generic examples of Higgs mediation. We present a survey of simple dark matter scenarios with m_DM < m_h/2 and Higgs portal mediation, where direct-detection signatures are suppressed, while the Higgs width is still dominated by decays to dark matter.

  8. Minimal size of a barchan dune

    Science.gov (United States)

    Parteli, E. J. R.; Durán, O.; Herrmann, H. J.

    2007-01-01

    Barchans are dunes of high mobility which have a crescent shape and propagate under conditions of unidirectional wind. However, sand dunes only appear above a critical size, which scales with the saturation distance of the sand flux [P. Hersen, S. Douady, and B. Andreotti, Phys. Rev. Lett. 89, 264301 (2002); B. Andreotti, P. Claudin, and S. Douady, Eur. Phys. J. B 28, 321 (2002); G. Sauermann, K. Kroy, and H. J. Herrmann, Phys. Rev. E 64, 31305 (2001)]. It has been suggested [P. Hersen, S. Douady, and B. Andreotti, Phys. Rev. Lett. 89, 264301 (2002)] that this flux fetch distance is itself constant. This, however, cannot explain the protosize of barchan dunes, which often occur in coastal areas of high littoral drift, nor the scale of dunes on Mars. In the present work, we show from three-dimensional calculations of sand transport that the size and the shape of the minimal barchan dune depend on the wind friction speed and the sand flux in the area between dunes in a field. Our results explain the common appearance of barchans a few tens of centimeters high which are observed along coasts. Furthermore, we find that the rate at which grains enter saltation on Mars is one order of magnitude higher than on Earth, which is relevant to correctly obtaining the minimal dune size on Mars.

  9. Effective theory of flavor for Minimal Mirror Twin Higgs

    Science.gov (United States)

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-01

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ε^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ε'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ε'/ε, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ε'/ε, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  10. Static elliptic minimal surfaces in AdS_4

    Energy Technology Data Exchange (ETDEWEB)

    Pastras, Georgios [NCSR 'Demokritos', Institute of Nuclear and Particle Physics, Attiki (Greece)]

    2017-11-15

    The Ryu-Takayanagi conjecture connects the entanglement entropy in the boundary CFT to the area of open co-dimension two minimal surfaces in the bulk. Especially in AdS_4, the latter are two-dimensional surfaces, and, thus, solutions of a Euclidean non-linear sigma model on a symmetric target space that can be reduced to an integrable system via Pohlmeyer reduction. In this work, we construct static minimal surfaces in AdS_4 that correspond to elliptic solutions of the reduced system, namely the cosh-Gordon equation, via the inversion of Pohlmeyer reduction. The constructed minimal surfaces comprise a two-parameter family of surfaces that include helicoids and catenoids in H^3 as special limits. Minimal surfaces that correspond to identical boundary conditions are discovered within the constructed family of surfaces and the relevant geometric phase transitions are studied. (orig.)

  11. Minimally invasive aortic valve replacement

    DEFF Research Database (Denmark)

    Foghsgaard, Signe; Schmidt, Thomas Andersen; Kjaergard, Henrik K

    2009-01-01

    In this descriptive prospective study, we evaluate the outcomes of surgery in 98 patients who were scheduled to undergo minimally invasive aortic valve replacement. These patients were compared with a group of 50 patients who underwent scheduled aortic valve replacement through a full sternotomy. ... operations were completed as mini-sternotomies; 4 patients died later of noncardiac causes. The aortic cross-clamp and perfusion times were significantly different across all groups. ... Minimally invasive aortic valve replacement is an excellent operation in selected patients, but its true advantages over conventional aortic valve replacement (other than a smaller scar) await evaluation by means of a randomized clinical trial. The "extended mini-aortic valve replacement" operation, on the other hand, is a risky procedure that should ...

  12. Minimization over randomly selected lines

    Directory of Open Access Journals (Sweden)

    Ismet Sahin

    2013-07-01

    This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost-function domain, constructs quadratic functions interpolating the cost function at three different points on each line, and uses the extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usually done in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
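
    The mutation operator is concrete enough to sketch. The snippet below implements it with a greedy survival rule in place of the paper's crossover and degenerate-quadratics stopping criterion (both omitted), so it is an illustration of the idea rather than the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def line_quadratic_mutation(f, x, h=0.5):
    """Sample f at three points on a randomly oriented line through x,
    interpolate a quadratic, and return its extremum (the vertex)."""
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)
    t = np.array([-h, 0.0, h])
    a, b, _ = np.polyfit(t, [f(x + ti * d) for ti in t], 2)
    if a <= 1e-12:                   # degenerate or concave quadratic: keep x
        return x
    return x + (-b / (2.0 * a)) * d

def minimize(f, dim=5, pop=20, gens=300):
    xs = [rng.uniform(-5.0, 5.0, dim) for _ in range(pop)]
    for _ in range(gens):
        cand = [line_quadratic_mutation(f, x) for x in xs]
        xs = [c if f(c) < f(x) else x for c, x in zip(cand, xs)]  # greedy survival
    return min(xs, key=f)

sphere = lambda x: float(np.dot(x, x))
print(sphere(minimize(sphere)))      # should be close to 0
```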

  13. Strategies to Minimize Antibiotic Resistance

    Directory of Open Access Journals (Sweden)

    Sang Hee Lee

    2013-09-01

    Antibiotic resistance can be reduced by using antibiotics prudently, based on guidelines of antimicrobial stewardship programs (ASPs) and various data such as pharmacokinetic (PK) and pharmacodynamic (PD) properties of antibiotics, diagnostic testing, antimicrobial susceptibility testing (AST), clinical response, and effects on the microbiota, as well as by new antibiotic developments. The controlled use of antibiotics in food animals is another cornerstone among efforts to reduce antibiotic resistance. All major resistance-control strategies recommend education for patients, children (e.g., through schools and day care), the public, and relevant healthcare professionals (e.g., primary-care physicians, pharmacists, and medical students) regarding unique features of bacterial infections and antibiotics, prudent antibiotic prescribing as a positive construct, and personal hygiene (e.g., handwashing). The problem of antibiotic resistance can be minimized only by concerted efforts of all members of society for ensuring the continued efficiency of antibiotics.

  14. A minimally invasive smile enhancement.

    Science.gov (United States)

    Peck, Fred H

    2014-01-01

    Minimally invasive dentistry refers to a wide variety of dental treatments. On the restorative aspect of dental procedures, direct resin bonding can be a very conservative treatment option for the patient. When tooth structure does not need to be removed, the patient benefits. Proper treatment planning is essential to determine how conservative the restorative treatment will be. This article describes the diagnosis, treatment options, and procedural techniques in the restoration of 4 maxillary anterior teeth with direct composite resin. The procedural steps are reviewed with regard to placing the composite and the variety of colors needed to ensure a natural result. Finishing and polishing of the composite are critical to ending with a natural looking dentition that the patient will be pleased with for many years.

  15. A discretized algorithm for the solution of a constrained, continuous ...

    African Journals Online (AJOL)

    A discretized algorithm for the solution of a constrained, continuous quadratic control problem. ... The results obtained show that the Discretized Constrained Algorithm (DCA) is much more accurate and more efficient than some of these techniques, particularly the FSA. Journal of the Nigerian Association of Mathematical ...

  16. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_{M/B}(N/B)) I/Os for triangulating N points in the plane, where M is the size of main memory and B is the disk block size.

  17. Waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-01-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper, the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and the volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.

  18. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2014-01-01

    We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of the output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the estimates of the target scattering coefficients. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
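
    The PAPR constraint itself is straightforward to evaluate. The sketch below computes the PAPR of the time-domain OFDM waveform for one vector of subcarrier coefficients; the random QPSK weights are a hypothetical stand-in for the adaptively designed spectral parameters, and candidate designs exceeding a PAPR cap would be rejected.

```python
import numpy as np

rng = np.random.default_rng(4)

def papr_db(freq_coeffs, oversample=4):
    """PAPR (in dB) of the time-domain waveform for one OFDM symbol."""
    n = freq_coeffs.size
    padded = np.zeros(oversample * n, dtype=complex)
    padded[:n] = freq_coeffs         # crude zero-padding to oversample the IFFT
    x = np.fft.ifft(padded)
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# Random QPSK weights on 64 subcarriers (hypothetical design vector).
coeffs = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2.0)
print(f"PAPR = {papr_db(coeffs):.2f} dB")
```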

  19. Perceptions of Sexual Orientation From Minimal Cues.

    Science.gov (United States)

    Rule, Nicholas O

    2017-01-01

    People derive considerable amounts of information about each other from minimal nonverbal cues. Apart from characteristics typically regarded as obvious when encountering another person (e.g., age, race, and sex), perceivers can identify many other qualities about a person that are typically rather subtle. One such feature is sexual orientation. Here, I review the literature documenting the accurate perception of sexual orientation from nonverbal cues related to one's adornment, acoustics, actions, and appearance. In addition to chronicling studies that have demonstrated how people express and extract sexual orientation in each of these domains, I discuss some of the basic cognitive and perceptual processes that support these judgments, including how cues to sexual orientation manifest in behavioral (e.g., clothing choices) and structural (e.g., facial morphology) signals. Finally, I attend to boundary conditions in the accurate perception of sexual orientation, such as the states, traits, and group memberships that moderate individuals' ability to reliably decipher others' sexual orientation.

  20. Minimally invasive hysterectomy in Coatis (Nasua nasua)

    Directory of Open Access Journals (Sweden)

    Bruno W. Minto

    Some wildlife species, such as coatis, have a high degree of adaptability to adverse conditions, such as fragmented urban forests, which are increasingly common worldwide. The increase in the number of these mesopredators causes drastic changes in the communities of smaller predators, interferes with the reproductive success of trees, and creates an interface between domestic and wild areas, favoring the transmission of zoonoses and increasing the occurrence of attacks on animals or people. This report describes the use of minimally invasive hysterectomy in two individuals of the species Nasua nasua, accomplished with the hook technique commonly used to spay dogs and cats. The small incision and the healing speed of the incised tissues are fundamental in wildlife management, since postoperative care is limited by the behavior of these animals. This technique proved to be effective and can greatly reduce the morbidity of this procedure in coatis.

  1. The minimal work cost of information processing

    Science.gov (United States)

    Faist, Philippe; Dupuis, Frédéric; Oppenheim, Jonathan; Renner, Renato

    2015-07-01

    Irreversible information processing cannot be carried out without some inevitable thermodynamic work cost. This fundamental restriction, known as Landauer's principle, is increasingly relevant today, as the energy dissipation of computing devices impedes further improvement of their performance. Here we determine the minimal work required to carry out any logical process, for instance a computation. It is given by the entropy of the discarded information conditioned on the output of the computation. Our formula takes precisely into account the statistically fluctuating work requirement of the logical process. It enables the explicit calculation of practical scenarios, such as computational circuits or quantum measurements. On the conceptual level, our result gives a precise and operational connection between thermodynamic and information entropy, and explains the emergence of the entropy state function in macroscopic thermodynamics.
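
    In standard notation (an illustrative gloss, not a formula quoted from the abstract), the statement is that the minimal work is set by a conditional entropy:

```latex
% Minimal work to run a logical process that maps input X to output Y,
% discarding the information in X that is not retained in Y
% (schematic form, macroscopic/i.i.d. limit):
W_{\mathrm{min}} \;\approx\; k_B T \ln 2 \,\cdot\, H(X \mid Y)
```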

  2. Venus Surface Composition Constrained by Observation and Experiment

    Science.gov (United States)

    Gilmore, Martha; Treiman, Allan; Helbert, Jörn; Smrekar, Suzanne

    2017-11-01

    New observations from the Venus Express spacecraft as well as theoretical and experimental investigation of Venus analogue materials have advanced our understanding of the petrology of Venus melts and the mineralogy of rocks on the surface. The VIRTIS instrument aboard Venus Express provided a map of the southern hemisphere of Venus at ˜1 μm allowing, for the first time, the definition of surface units in terms of their 1 μm emissivity and derived mineralogy. Tessera terrain has lower emissivity than the presumably basaltic plains, consistent with a more silica-rich or felsic mineralogy. Thermodynamic modeling and experimental production of melts with Venera and Vega starting compositions predict derivative melts that range from mafic to felsic. Large volumes of felsic melts require water and may link the formation of tesserae to the presence of a Venus ocean. Low emissivity rocks may also be produced by atmosphere-surface weathering reactions unlike those seen presently. High 1 μm emissivity values correlate to stratigraphically recent flows and have been used with theoretical and experimental predictions of basalt weathering to identify regions of recent volcanism. The timescale of this volcanism is currently constrained by the weathering of magnetite (higher emissivity) in fresh basalts to hematite (lower emissivity) in Venus' oxidizing environment. Recent volcanism is corroborated by transient thermal anomalies identified by the VMC instrument aboard Venus Express. The interpretation of all emissivity data depends critically on understanding the composition of surface materials, kinetics of rock weathering and their measurement under Venus conditions. Extended theoretical studies, continued analysis of earlier spacecraft results, new atmospheric data, and measurements of mineral stability under Venus conditions have improved our understanding atmosphere-surface interactions. The calcite-wollastonite CO2 buffer has been discounted due, among other things, to

  3. Regional Responses to Constrained Water Availability

    Science.gov (United States)

    Cui, Y.; Calvin, K. V.; Hejazi, M. I.; Clarke, L.; Kim, S. H.; Patel, P.

    2017-12-01

    There have been many concerns about water as a constraint to agricultural production, electricity generation, and many other human activities in the coming decades. Nevertheless, how different countries/economies would respond to such constraints has not been explored. Here, we examine the mechanisms by which regions respond to binding water availability constraints at the water basin level and across a wide range of socioeconomic, climate and energy technology scenarios. Specifically, we look at the change in water withdrawals between energy, land-use and other sectors within an integrated framework, by using the Global Change Assessment Model (GCAM), which also endogenizes water use and allocation decisions based on costs. We find that, when water is taken into account as part of the production decision-making, countries/basins in general fall into three different categories, depending on the change of water withdrawals and water re-allocation between sectors. First, water is not a constraining factor for most of the basins. Second, advancements in water-saving technologies for electricity generation cooling systems are sufficient to reduce water withdrawals to meet binding water availability constraints, such as in China and the EU-15. Third, water-saving in the electricity sector alone is not sufficient and cannot compensate for the reduced water availability in the binding case; for example, many basins in Pakistan, the Middle East and India have to largely reduce irrigated water withdrawals by either switching to rain-fed agriculture or reducing production. The dominant response strategy for individual countries/basins is quite robust across the range of alternative scenarios that we test. The relative size of water withdrawals between the energy and agriculture sectors is one of the most important factors that affect the dominant mechanism.

  4. Constraining Cosmic Evolution of Type Ia Supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which correspond to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ~0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ~3% in the optical, growing toward the ultraviolet. The difference between the maximum-light low- and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  5. Laterally constrained inversion for CSAMT data interpretation

    Science.gov (United States)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and in a coal mine area in Northern China and compare our results with those from previous inversions. The comparison in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global searching algorithm, simulated annealing (SA), in the watershed shows that though both methods deliver very similar good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
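
    As a minimal sketch of the Jacobian-balancing idea described above (the paper's actual weighting matrix and the lateral constraints tying neighboring stations together are not reproduced; w_model and lam are illustrative):

```python
import numpy as np

def weighted_gauss_newton_step(J, r, w_model, lam=1e-2):
    """One damped Gauss-Newton step with a column-weighted Jacobian.

    J       : (n_data, n_param) Jacobian of the forward response
    r       : (n_data,) residual between observed and predicted data
    w_model : (n_param,) positive weights balancing parameter sensitivities
    lam     : damping parameter (illustrative value)
    """
    # Precondition: scale each Jacobian column so that sensitivities with
    # respect to different model parameters become comparable in magnitude.
    Jw = J / w_model[None, :]
    A = Jw.T @ Jw + lam * np.eye(J.shape[1])
    dp_scaled = np.linalg.solve(A, Jw.T @ r)
    # Undo the scaling to recover the update in the original parameters.
    return dp_scaled / w_model
```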

  6. Initial conditions for cosmological perturbations

    Science.gov (United States)

    Ashtekar, Abhay; Gupt, Brajesh

    2017-02-01

    Penrose proposed that the big bang singularity should be constrained by requiring that the Weyl curvature vanishes there. The idea behind this past hypothesis is attractive because it constrains the initial conditions for the universe in geometric terms and is not confined to a specific early universe paradigm. However, the precise statement of Penrose’s hypothesis is tied to classical space-times and furthermore restricts only the gravitational degrees of freedom. These are encapsulated only in the tensor modes of the commonly used cosmological perturbation theory. Drawing inspiration from the underlying idea, we propose a quantum generalization of Penrose’s hypothesis using the Planck regime in place of the big bang, and simultaneously incorporating tensor as well as scalar modes. Initial conditions selected by this generalization constrain the universe to be as homogeneous and isotropic in the Planck regime as permitted by the Heisenberg uncertainty relations.

  7. Initial conditions for cosmological perturbations

    International Nuclear Information System (INIS)

    Ashtekar, Abhay; Gupt, Brajesh

    2017-01-01

    Penrose proposed that the big bang singularity should be constrained by requiring that the Weyl curvature vanishes there. The idea behind this past hypothesis is attractive because it constrains the initial conditions for the universe in geometric terms and is not confined to a specific early universe paradigm. However, the precise statement of Penrose’s hypothesis is tied to classical space-times and furthermore restricts only the gravitational degrees of freedom. These are encapsulated only in the tensor modes of the commonly used cosmological perturbation theory. Drawing inspiration from the underlying idea, we propose a quantum generalization of Penrose’s hypothesis using the Planck regime in place of the big bang, and simultaneously incorporating tensor as well as scalar modes. Initial conditions selected by this generalization constrain the universe to be as homogeneous and isotropic in the Planck regime as permitted by the Heisenberg uncertainty relations. (paper)

  8. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    KAUST Repository

    Cotter, Simon L.

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems. © 2011 American Institute of Physics.
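
    The core estimation step can be sketched as follows (hypothetical interface; the CMA additionally holds the slow variable fixed during the conditional simulation, which is not shown):

```python
import numpy as np

def effective_drift_diffusion(increments, dt):
    """Estimate effective Langevin coefficients for a slow variable.

    increments : (n,) increments of the slow variable over time dt, sampled
                 while the fast variables follow their conditional
                 distribution at a fixed value of the slow variable
    dt         : sampling interval
    """
    drift = np.mean(increments) / dt      # a(s) in dS = a dt + sqrt(D) dW
    diffusion = np.var(increments) / dt   # D(s)
    return drift, diffusion
```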

  9. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
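
    The paper samples constrained fields with Monte Carlo path integrals on a lattice; for linear constraints on a Gaussian vector, the standard conditioning identity gives a compact alternative, sketched below (assuming the prior covariance C is available in dense form, which only scales to small lattices):

```python
import numpy as np

def constrained_sample(x_uncond, C, A, d):
    """Convert an unconditional Gaussian sample into a constrained one.

    x_uncond : (n,) zero-mean sample with covariance C
    C        : (n, n) prior covariance of the field
    A        : (m, n) linear constraint operator (e.g. field values at peaks)
    d        : (m,) constraint values to impose, so that A x = d exactly
    """
    K = A @ C @ A.T                      # covariance of the constrained data
    correction = C @ A.T @ np.linalg.solve(K, d - A @ x_uncond)
    return x_uncond + correction         # exact for linear Gaussian constraints
```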

  10. Data-Driven Security-Constrained OPF

    DEFF Research Database (Denmark)

    Thams, Florian; Halilbasic, Lejla; Pinson, Pierre

    2017-01-01

    In this paper we unify electricity market operations with power system security considerations. Using data-driven techniques, we address both small signal stability and steady-state security, derive tractable decision rules in the form of line flow limits, and incorporate the resulting constraints in market clearing algorithms. Our goal is to minimize redispatching actions, and instead allow the market to determine the most cost-efficient dispatch while considering all security constraints. To maintain tractability of our approach we perform our security assessment offline, examining large datasets ... while being less conservative than current approaches. Our approach can be scalable for large systems, accounts explicitly for power system security, and enables the electricity market to identify a cost-efficient dispatch avoiding redispatching actions. We demonstrate the performance of our ...

  11. Sparseness- and continuity-constrained seismic imaging

    Science.gov (United States)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR ...) by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grant 22R81254.]

  12. Auction dynamics: A volume constrained MBO scheme

    Science.gov (United States)

    Jacobs, Matt; Merkurjev, Ekaterina; Esedoǧlu, Selim

    2018-02-01

    We show how auction algorithms, originally developed for the assignment problem, can be utilized in Merriman, Bence, and Osher's threshold dynamics scheme to simulate multi-phase motion by mean curvature in the presence of equality and inequality volume constraints on the individual phases. The resulting algorithms are highly efficient and robust, and can be used in simulations ranging from minimal partition problems in Euclidean space to semi-supervised machine learning via clustering on graphs. In the case of the latter application, numerous experimental results on benchmark machine learning datasets show that our approach exceeds the performance of current state-of-the-art methods, while requiring a fraction of the computation time.
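
    A single-phase, equality-constrained step of threshold dynamics can be sketched as below (the paper's auction-based assignment handles many phases and inequality constraints; this fragment shows only the idea of replacing the usual 1/2-threshold by a volume-preserving one):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mbo_volume_step(phase, volume, sigma=2.0):
    """One threshold-dynamics step that preserves the phase volume.

    phase  : (nx, ny) binary {0, 1} indicator of the phase
    volume : number of pixels the phase must occupy
    sigma  : width of the Gaussian diffusion kernel (illustrative value)
    """
    u = gaussian_filter(phase.astype(float), sigma)   # diffusion step
    # Threshold at the level that keeps exactly `volume` pixels inside
    # the phase (ties may perturb the count by a few pixels).
    level = np.partition(u.ravel(), -volume)[-volume]
    return (u >= level).astype(int)
```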

  13. Gravitational waves in Fully Constrained Formulation in a dynamical spacetime with matter content

    Energy Technology Data Exchange (ETDEWEB)

    Cordero-Carrion, Isabel; Cerda-Duran, Pablo [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, D-85741, Garching (Germany); Ibanez, Jose Maria, E-mail: chabela@mpa-garching.mpg.de, E-mail: cerda@mpa-garching.mpg.de, E-mail: jose.m.ibanez@uv.es [Departamento de Astronomia y Astrofisica, Universidad de Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Valencia (Spain)

    2011-09-22

    We analyze numerically the behaviour of the hyperbolic sector of the Fully Constrained Formulation (FCF) (Bonazzola et al. 2004). The numerical experiments allow us to be confident in the performance of the upgraded version of the CoCoNuT code (Dimmelmeier et al. 2005), in which the Conformally Flat Condition (CFC), an approximation of the Einstein equations, is replaced by FCF. The first gravitational waves in FCF in a dynamical spacetime with matter content will be shown.

  14. Cyclone Simulation via Action Minimization

    Science.gov (United States)

    Plotkin, D. A.; Weare, J.; Abbot, D. S.

    2016-12-01

    A postulated impact of climate change is an increase in the intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a temperature nearly identical to that of the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus results in more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for

  15. Singlet fermionic dark matter with Veltman conditions

    Science.gov (United States)

    Kim, Yeong Gyun; Lee, Kang Young; Nam, Soo-hyeon

    2018-07-01

    We reexamine a renormalizable model of fermionic dark matter with a gauge singlet Dirac fermion and a real singlet scalar, which can ameliorate the scalar mass hierarchy problem of the Standard Model (SM). Our model setup is the minimal extension of the SM in which a realistic dark matter (DM) candidate is provided and the cancellation of the one-loop quadratic divergence to the scalar masses can be achieved by the Veltman condition (VC) simultaneously. This model extension, although renormalizable, can be considered as an effective low-energy theory valid up to cut-off energies of about 10 TeV. We calculate the one-loop quadratic divergence contributions of the new scalar and fermionic DM singlets, and constrain the model parameters using the VC and the perturbative unitarity conditions. Taking into account the invisible Higgs decay measurement, we show the allowed region of new physics parameters satisfying the recent measurement of the relic abundance. With the obtained parameter set, we predict the elastic scattering cross section of the new singlet fermion off target nuclei for direct detection of the dark matter. We also perform the full analysis with an arbitrary set of parameters without the VC as a comparison, and discuss the implications of the constraints imposed by the VC in detail.

  16. A Heuristic Algorithm for Constrain Single-Source Problem with Constrained Customers

    Directory of Open Access Journals (Sweden)

    S. A. Raisi Dehkordi

    2012-09-01

    Full Text Available The Fermat-Weber location problem is to find a point in R^n that minimizes the sum of the weighted Euclidean distances from m given points in R^n. In this paper we consider the Fermat-Weber problem of one new facility with respect to n customers, in order to minimize the sum of transportation costs between this facility and the customers. We assume that each customer is located in a nonempty convex closed bounded subset of R^n.
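
    For the classical point-customer case, the Fermat-Weber point is computed by Weiszfeld's fixed-point iteration, sketched here for reference (the paper's setting, with customers constrained to convex sets, requires an additional projection step not shown):

```python
import numpy as np

def weiszfeld(points, weights, tol=1e-8, max_iter=1000):
    """Weighted Fermat-Weber point via Weiszfeld's iteration."""
    x = np.average(points, axis=0, weights=weights)   # weighted centroid start
    for _ in range(max_iter):
        dist = np.linalg.norm(points - x, axis=1)
        if np.any(dist < 1e-12):                      # iterate hit a data point
            return x
        w = weights / dist                            # inverse-distance weights
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```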

  17. Minimal theory of quantum electrodynamics

    International Nuclear Information System (INIS)

    Berrondo, M.; Jauregui, R.

    1986-01-01

    Within the general framework of the Lehmann-Symanzik-Zimmermann axiomatic field theory, we obtain a simple and coherent formulation of quantum electrodynamics. The definitions of the current densities fulfill the one-particle stability condition, and the commutation relations for the interacting fields are obtained rather than being postulated a priori, thus avoiding the inconsistencies which appear in the canonical formalism. This is possible due to the fact that we use the integral form of the equations of motion in order to compute the propagators and the S matrix. The resulting spectral representations automatically fulfill the correct boundary conditions thus fixing the ubiquitous quasilocal operators in a unique fashion

  18. Minimalism through intraoperative functional mapping.

    Science.gov (United States)

    Berger, M S

    1996-01-01

    Intraoperative stimulation mapping may be used to avoid unnecessary risk to functional regions subserving language and sensori-motor pathways. Based on the data presented here, language localization is variable across the population, with certainty existing only for the inferior frontal region responsible for motor speech. Anatomical landmarks such as the anterior temporal tip for temporal lobe language sites and the posterior aspect of the lateral sphenoid wing for the frontal lobe language zones are unreliable in avoiding postoperative aphasias. Thus, individual mapping to identify essential language sites has the greatest likelihood of avoiding permanent deficits in naming, reading, and motor speech. In a similar approach, motor and sensory pathways from the cortex and underlying white matter may be reliably stimulated and mapped in both awake and asleep patients. Although these techniques require additional operative time and nominally priced equipment, the result is often gratifying, as postoperative morbidity has been greatly reduced in the process of incorporating these surgical strategies. The patient's quality of life is improved in terms of seizure control, with or without antiepileptic drugs. This avoids having to perform a second costly operative procedure, which is routinely done when extraoperative stimulation and recording is performed via subdural grids. In addition, an aggressive tumor resection at the initial operation lengthens the time to tumor recurrence and often obviates the need for a subsequent reoperation. Thus, intraoperative functional mapping may best be alluded to as a surgical technique that results in "minimalism in the long term".

  19. Against explanatory minimalism in psychiatry

    Directory of Open Access Journals (Sweden)

    Tim Thornton

    2015-12-01

    Full Text Available The idea that psychiatry contains, in principle, a series of levels of explanation has been criticised not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell’s criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation respectively and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein’s Zettel. But attention to the context of Wittgenstein’s remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

  20. Against Explanatory Minimalism in Psychiatry.

    Science.gov (United States)

    Thornton, Tim

    2015-01-01

    The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation, respectively, and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

  1. Robotic assisted minimally invasive surgery

    Directory of Open Access Journals (Sweden)

    Palep Jaydeep

    2009-01-01

    Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced when the first AESOP (voice-controlled camera holder) prototype robot was used clinically in 1993 and then marketed as the first surgical robot ever in 1994 by the US FDA. Since then many robot prototypes like the Endoassist (Armstrong Healthcare Ltd., High Wycombe, Buck, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany) have been developed to add to the functions of the robot and try to increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and EndoWrist®. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric and ENT surgery. It provides several advantages over conventional laparoscopy such as 3D vision, motion scaling, intuitive movements, visual immersion and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

  2. Minimizing the Pervasiveness of Women's Personal Experiences of Gender Discrimination

    Science.gov (United States)

    Foster, Mindi D.; Jackson, Lydia C.; Hartmann, Ryan; Woulfe, Shannon

    2004-01-01

    Given the Rejection-Identification Model (Branscombe, Schmitt, & Harvey, 1999), which shows that perceiving discrimination to be pervasive is a negative experience, it was suggested that there would be conditions under which women would instead minimize the pervasiveness of discrimination. Study 1 (N= 91) showed that when women envisioned…

  3. Robust media processing on programmable power-constrained systems

    Science.gov (United States)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
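
    The buffer-fullness heuristic described above can be sketched as a simple policy (all thresholds and frequency levels here are illustrative, not taken from the paper):

```python
def pick_frequency(buffer_fill, levels=(600, 800, 1000, 1200)):
    """Map decoder output-buffer fullness to a CPU frequency in MHz.

    buffer_fill : fraction in [0, 1]; a fuller buffer means the decoder is
                  running ahead of its display deadline and can slow down.
    levels      : available frequency steps (hypothetical values).
    """
    if buffer_fill > 0.75:
        return levels[0]      # far ahead: lowest frequency, least power
    if buffer_fill > 0.50:
        return levels[1]
    if buffer_fill > 0.25:
        return levels[2]
    return levels[-1]         # nearly empty: full speed to avoid underrun
```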

  4. Constrained Unfolding of a Helical Peptide: Implicit versus Explicit Solvents.

    Directory of Open Access Journals (Sweden)

    Hailey R Bureau

    Full Text Available Steered Molecular Dynamics (SMD) has been seen to provide the potential of mean force (PMF) along a peptide unfolding pathway effectively but at significant computational cost, particularly in all-atom solvents. Adaptive steered molecular dynamics (ASMD) has been seen to provide a significant computational advantage by limiting the spread of the trajectories in a staged approach. The contraction of the trajectories at the end of each stage can be performed by taking a structure whose nonequilibrium work is closest to the Jarzynski average (in naive ASMD) or by relaxing the trajectories under a no-work condition (in full-relaxation ASMD--namely, FR-ASMD). Both approaches have been used to determine the energetics and hydrogen-bonding structure along the pathway for unfolding of a benchmark peptide initially constrained as an α-helix in a water environment. The energetics are quite different to those in vacuum, but are found to be similar between implicit and explicit solvents. Surprisingly, the hydrogen-bonding pathways are also similar in the implicit and explicit solvents despite the fact that the solvent contact plays an important role in opening the helix.
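
    The Jarzynski average referred to above combines nonequilibrium work samples into a free-energy estimate; a minimal sketch (our notation, with kT as the thermal energy):

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Free-energy difference from nonequilibrium work samples.

    work : (n,) work values from independent steered trajectories
    kT   : thermal energy
    """
    # Delta F = -kT * ln <exp(-W/kT)>, evaluated in log-sum-exp form
    # for numerical stability.
    w = -np.asarray(work) / kT
    return -kT * (np.logaddexp.reduce(w) - np.log(len(work)))
```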

  5. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Up scaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.

  6. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    Science.gov (United States)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    The portfolio rebalancing problem deals with resetting the proportions of different assets in a portfolio in response to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class and proportional transaction cost constraints. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportions of investment in the portfolio of assets and also to rebalance them after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The results obtained using WEN are compared with the only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over the HEN strategy.

  7. Inversion of Love wave phase velocity using smoothness-constrained least-squares technique; Heikatsuka seiyakutsuki saisho jijoho ni yoru love ha iso sokudo no inversion

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, S [Nippon Geophysical Prospecting Co. Ltd., Tokyo (Japan)

    1996-10-01

    The smoothness-constrained least-squares technique with ABIC minimization was applied to the inversion of surface-wave phase velocities in geophysical exploration, to confirm its usefulness. Since this study aimed mainly at the applicability of the technique, Love waves were used, which are easier to treat theoretically than Rayleigh waves. Stable successive-approximation solutions could be obtained by repeated improvement of the S-wave velocity model, and an objective model with high reliability could be determined. By contrast, for inversion with simple minimization of the residual sum of squares, stable solutions could be obtained by the repeated improvement, but the judgment of convergence was very difficult in the absence of the smoothness constraint, which might leave the obtained model over-fitted. In this study, Love waves were used to examine the applicability of the smoothness-constrained least-squares technique with ABIC minimization. Its applicability to Rayleigh waves will be investigated. 8 refs.
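
    In our notation (a sketch; the paper's exact parameterization is not reproduced), the inversion solves a damped least-squares problem whose trade-off parameter is selected by ABIC:

```latex
% d: observed phase velocities, G: forward operator, D: roughening
% (smoothness) operator, alpha: hyperparameter chosen to minimize ABIC.
m_\alpha = \arg\min_m \Big\{ \lVert d - G m \rVert^2 + \alpha^2 \lVert D m \rVert^2 \Big\},
\qquad
\mathrm{ABIC}(\alpha) = -2 \ln \int L(d \mid m)\, \pi(m \mid \alpha)\, \mathrm{d}m + 2 N_{\mathrm{hyper}}
```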

  8. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss and the voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss, reduction in the voltage sag problem, etc., and economical factors, like installation and maintenance cost of the DGs. The new formulation is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are simultaneously obtained. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, radial and networked (the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies where the effectiveness of the proposed algorithm is aptly demonstrated.

  9. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that non-minimally coupled inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as in the minimal case in the slow-roll limit. Armed with this result, we study some concrete non-minimal inflationary models, including chaotic inflation and natural inflation, in which the inflaton is non-minimally coupled to gravity. We find that non-minimal coupling inflation can be eternal in some regions of parameter space.
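
    The fluctuation estimate invoked above is the standard one (our notation, a hedged sketch rather than the paper's statement); eternal inflation sets in when the quantum jump per Hubble time competes with the classical roll:

```latex
\delta\phi_{\mathrm{qu}} \simeq \frac{H}{2\pi}
\qquad\text{(per Hubble time)},
\qquad
\text{eternal regime:}\quad \delta\phi_{\mathrm{qu}} \gtrsim \frac{|\dot\phi|}{H}.
```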

  10. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since in many cases they have a nearly constant stroke width. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitudes of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.

  11. Chance-Constrained Guidance With Non-Convex Constraints

    Science.gov (United States)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of
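
    The bounding step described above can be written compactly (a sketch in our notation): by Boole's inequality, allocating individual risks δ_t that sum to the mission-level risk bound Δ yields a sufficient condition for the joint chance constraint,

```latex
\Pr\Big(\bigcup_{t=1}^{T}\{x_t \notin \mathcal{F}\}\Big)
\;\le\; \sum_{t=1}^{T} \Pr\big(x_t \notin \mathcal{F}\big)
\;\le\; \sum_{t=1}^{T} \delta_t \;=\; \Delta .
```

    Each term can then be evaluated separately, which is what enables the branch-and-bound search over the non-convex (collision-avoidance) disjunctions.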

  12. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L · ‖ on R^n under the constraint of a bounded I-divergence D(b, H · ) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximal problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
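
    The constrained problem analyzed in the paper has the following form (notation as in the abstract; the I-divergence is the generalized Kullback–Leibler divergence):

```latex
\min_{u}\; \lVert L u \rVert
\quad\text{subject to}\quad
D(b, H u) \le \tau,
\qquad
D(b, v) \;=\; \sum_i \Big( b_i \ln \frac{b_i}{v_i} - b_i + v_i \Big).
```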

  13. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error εr = 0.022 fm and a standard deviation σr = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to-leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to-leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  14. Network constrained wind integration on Vancouver Island

    International Nuclear Information System (INIS)

    Maddaloni, Jesse D.; Rowe, Andrew M.; Kooten, G. Cornelis van

    2008-01-01

    The aim of this study is to determine the costs and carbon emissions associated with operating a hydro-dominated electricity generation system (Vancouver Island, Canada) with varying degrees of wind penetration. The focus is to match the wind resource, system demand and abilities of extant generating facilities on a temporal basis, resulting in an operating schedule that minimizes system cost over a given period. This is performed by taking the perspective of a social planner who desires to find the lowest-cost mix of new and existing generation facilities. Unlike other studies, this analysis considers variable efficiency for thermal and hydro-generators, resulting in a fuel cost that varies with respect to generator part load. Since this study and others have shown that wind power may induce a large variance on existing dispatchable generators, forcing more frequent operation at reduced part load, inclusion of increased fuel cost at part load is important when investigating wind integration as it can significantly reduce the economic benefits of utilizing low-cost wind. Results indicate that the introduction of wind power may reduce system operating costs, but this depends heavily on whether the capital cost of the wind farm is considered. For the Vancouver Island mix with its large hydro-component, operating cost was reduced by a maximum of 15% at a wind penetration of 50%, with a negligible reduction in operating cost when the wind farm capital cost was included.

  15. An Equivalent Emission Minimization Strategy for Causal Optimal Control of Diesel Engines

    Directory of Open Access Journals (Sweden)

    Stephan Zentner

    2014-02-01

    Full Text Available One of the main challenges during the development of operating strategies for modern diesel engines is the reduction of the CO2 emissions, while complying with ever more stringent limits for the pollutant emissions. The inherent trade-off between the emissions of CO2 and pollutants renders a simultaneous reduction difficult. Therefore, an optimal operating strategy is sought that yields minimal CO2 emissions, while holding the cumulative pollutant emissions at the allowed level. Such an operating strategy can be obtained offline by solving a constrained optimal control problem. However, the final-value constraint on the cumulated pollutant emissions prevents this approach from being adopted for causal control. This paper proposes a framework for causal optimal control of diesel engines. The optimization problem can be solved online when the constrained minimization of the CO2 emissions is reformulated as an unconstrained minimization of the CO2 emissions and the weighted pollutant emissions (i.e., equivalent emissions). However, the weighting factors are not known a priori. A method for the online calculation of these weighting factors is proposed. It is based on the Hamilton–Jacobi–Bellman (HJB) equation and a physically motivated approximation of the optimal cost-to-go. A case study shows that the causal control strategy defined by the online calculation of the equivalence factor and the minimization of the equivalent emissions is only slightly inferior to the non-causal offline optimization, while being applicable to online control.
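
    In sketch form (our notation, not the paper's), the causal strategy replaces the final-value constraint by a pointwise minimization of equivalent emissions,

```latex
u^*(t) \;=\; \arg\min_{u}\;
\Big\{ \dot m_{\mathrm{CO_2}}(u, t) \;+\; s(t)\, \dot m_{\mathrm{pol}}(u, t) \Big\},
```

    where the equivalence factor s(t) plays the role of the unknown constraint multiplier and is updated online from the HJB-based approximation of the cost-to-go.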

  16. Westinghouse Hanford Company waste minimization actions

    International Nuclear Information System (INIS)

    Greenhalgh, W.O.

    1988-09-01

    Companies that generate hazardous waste materials are now required by national regulations to establish a waste minimization program. Accordingly, in FY88 the Westinghouse Hanford Company formed a waste minimization team organization. The purpose of the team is to assist the company in its efforts to minimize the generation of waste, train personnel on waste minimization techniques, document successful waste minimization effects, track dollar savings realized, and to publicize and administer an employee incentive program. A number of significant actions have been successful, resulting in the savings of materials and dollars. The team itself has been successful in establishing some worthwhile minimization projects. This document briefly describes the waste minimization actions that have been successful to date. 2 refs., 26 figs., 3 tabs

  17. Recent Theoretical Approaches to Minimal Artificial Cells

    Directory of Open Access Journals (Sweden)

    Fabio Mavelli

    2014-05-01

    Full Text Available Minimal artificial cells (MACs) are self-assembled chemical systems able to mimic the behavior of living cells at a minimal level, i.e. to exhibit self-maintenance, self-reproduction and the capability of evolution. The bottom-up approach to the construction of MACs is mainly based on the encapsulation of chemical reacting systems inside lipid vesicles, i.e. chemical systems enclosed (compartmentalized) by a double-layered lipid membrane. Several researchers are currently interested in synthesizing such simple cellular models for biotechnological purposes or for investigating origin of life scenarios. Within this context, the properties of lipid vesicles (e.g., their stability, permeability, growth dynamics, potential to host reactions or undergo division processes…) play a central role, in combination with the dynamics of the encapsulated chemical or biochemical networks. Thus, from a theoretical standpoint, it is very important to develop kinetic equations in order to explore first—and specify later—the conditions that allow the robust implementation of these complex chemically reacting systems, as well as their controlled reproduction. Due to being compartmentalized in small volumes, the population of reacting molecules can be very low in terms of the number of molecules and therefore their behavior becomes highly affected by stochastic effects both in the time course of reactions and in occupancy distribution among the vesicle population. In this short review we report our mathematical approaches to model artificial cell systems in this complex scenario by giving a summary of three recent simulation studies on the topic of primitive cell (protocell) systems.

  18. Development of a generalized model for force and torque feedback in robotic minimally invasive cardiothoracic surgery: identification of conditions and restrictions

    Directory of Open Access Journals (Sweden)

    Vera Pérez

    2011-07-01

    : requirements of the force sensors and the necessary relation between the number of sensors and actuators for force feedback in robotic MICS. These considerations were then implemented in a simulator and their fulfillment was verified. CONCLUSIONS: the conditions related to the incorporation of a force sensor and the surgeon's perception of touch and applied force prove to be important in robotic MICS procedures, and they require the inclusion of a control system that allows the optimization of procedures by telepresence. INTRODUCTION: the procedures in minimally invasive cardiothoracic surgery (MICS) aim to reduce the complications of major dissections. However, in the absence of direct contact of the surgeon with the tissue, the surgeon receives a partial sense of touch and force, which can lead to procedural errors, inadequate force applied to the tissue and fatigue during surgery. The inclusion of robotic devices with the MICS technique has enhanced the technical skills of the surgeon to manipulate tissue, and although the devices on the market still do not have tactile feedback, research is being done on robotic prototypes that incorporate force and torque feedback. OBJECTIVE: to propose the conditions and restrictions related to the integration of force and torque feedback in robotic MICS applicable to different configurations of manipulators, and to analyze the implementation of those conditions in a surgical simulator. MATERIAL AND METHODS: from the analysis of needs during cardiothoracic procedures and the conditions of minimally invasive surgery, we identified the requirements to ensure force reflection and performed a mathematical analysis of such considerations. Finally, the mathematical analyses were verified by modeling and simulation techniques using the Matlab® computing platform. RESULTS: three types of considerations were argued: (a) Kinematic: the existence of a fixed point; the way to guarantee it for

  19. Carbon-constrained scenarios. Final report

    International Nuclear Information System (INIS)

    2009-05-01

    This report provides the results of the study entitled 'Carbon-Constrained Scenarios', which was funded by FONDDRI from 2004 to 2008. The study was achieved in four steps: (i) Investigating the stakes of a strong carbon constraint for the industries participating in the study, not only looking at the internal decarbonization potential of each industry but also exploring the potential shifts of the demand for industrial products. (ii) Developing a hybrid modelling platform based on a tight dialog between the sectoral energy model POLES and the macro-economic model IMACLIM-R, in order to achieve a consistent assessment of the consequences of an economy-wide carbon constraint on energy-intensive industrial sectors, while taking into account technical constraints, barriers to the deployment of new technologies and general economic equilibrium effects. (iii) Producing several scenarios up to 2050 with different sets of hypotheses concerning the driving factors for emissions - in particular the development styles. (iv) Establishing an iterative dialog between researchers and industry representatives on the results of the scenarios so as to improve them, but also to facilitate the understanding and the appropriate use of these results by the industrial partners. This report provides the results of the different scenarios computed in the course of the project. It is a partial synthesis of the work that has been accomplished and of the numerous exchanges that this study has induced between modellers and stakeholders. The first part was written in April 2007 and describes the first reference scenario and the first mitigation scenario designed to achieve stabilization at 450 ppm CO2 at the end of the 21st century. This scenario has been called 'mimetic' because it was built on the assumption that the ambitious climate policy would coexist with a progressive convergence of development paths toward the current paradigm of industrialized countries: urban sprawl, general

  20. Loss Minimization Sliding Mode Control of IPM Synchronous Motor Drives

    Directory of Open Access Journals (Sweden)

    Mehran Zamanifar

    2010-01-01

    Full Text Available In this paper, a nonlinear loss minimization control strategy for an interior permanent magnet synchronous motor (IPMSM) based on a newly developed sliding mode approach is presented. This control method enforces speed control of the IPMSM drive and simultaneously ensures the minimization of the losses, despite the uncertainties existing in the system, such as parameter variations, which have undesirable effects on the controller performance except near nominal conditions. Simulation results are presented to show the effectiveness of the proposed controller.
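
    A minimal fragment of the switching idea (illustrative gains; the paper's controller additionally shapes the d-axis current for loss minimization, which is not shown):

```python
import numpy as np

def sliding_mode_speed_control(omega_ref, omega, k=5.0, phi=0.5):
    """Sliding-mode speed controller with a boundary layer.

    omega_ref, omega : reference and measured rotor speed
    k                : switching gain (must dominate the model uncertainty)
    phi              : boundary-layer width; replaces sign() by a saturation
                       to attenuate chattering
    """
    s = omega_ref - omega                  # sliding surface
    return k * np.clip(s / phi, -1.0, 1.0) # torque-producing command
```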

  1. Hoelder continuity of energy minimizer maps between Riemannian polyhedra

    International Nuclear Information System (INIS)

    Bouziane, Taoufik

    2004-10-01

    The goal of the present paper is to establish some kind of regularity for energy minimizing maps between Riemannian polyhedra. More precisely, we show the Hoelder continuity of local energy minimizers between Riemannian polyhedra with target spaces without focal points. With this new result, we also complete our existence theorem obtained elsewhere, and consequently we generalize completely, to the case of target polyhedra without focal points (which is a weaker geometric condition than the nonpositivity of the curvature), the Eells-Fuglede existence and regularity theorem, which is the new version of the famous Eells-Sampson theorem. (author)

  2. FXR agonist activity of conformationally constrained analogs of GW 4064.

    Science.gov (United States)

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y; Caldwell, Richard D; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; McFadyen, Robert B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Bruce Wisely, G

    2009-08-15

    Two series of conformationally constrained analogs of the FXR agonist GW 4064 1 were prepared. Replacement of the metabolically labile stilbene with either benzothiophene or naphthalene rings led to the identification of potent full agonists 2a and 2g.

  3. Automated Precision Maneuvering and Landing in Extreme and Constrained Environments

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous, precise maneuvering and landing in extreme and constrained environments is a key enabler for future NASA missions. Missions to map the interior of a...

  4. Security constrained optimal power flow by modern optimization tools

    African Journals Online (AJOL)

    Security constrained optimal power flow by modern optimization tools. ... International Journal of Engineering, Science and Technology ...

  5. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-07-01

    An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a Hermitian symmetric space and the constrained KP Lax formulation, and we show that these approaches are equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy and is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  6. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general class of models with hierarchically constrained dynamics is shown to exhibit anomalous logarithmic relaxation, similar to that observed in a variety of complex, strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  7. Synthesis of conformationally constrained peptidomimetics using multicomponent reactions

    NARCIS (Netherlands)

    Scheffelaar, R.; Klein Nijenhuis, R.A.; Paravidino, M.; Lutz, M.; Spek, A.L.; Ehlers, A.W.; de Kanter, F.J.J.; Groen, M.B.; Orru, R.V.A.; Ruijter, E.

    2009-01-01

    A novel modular synthetic approach toward constrained peptidomimetics is reported. The approach involves a highly efficient three-step sequence including two multicomponent reactions, thus allowing unprecedented diversification of both the peptide moieties and the turn-inducing scaffold. The

  8. Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

    National Research Council Canada - National Science Library

    Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E

    2004-01-01

    .... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...

  9. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...

  10. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael

    2015-01-01

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  11. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm, consisting of a constrained k-means algorithm enhanced by the boosting principle, with performance competitive with state-of-the-art methods at lower computation time. Constrained k-means clustering, which uses constraints as background knowledge, is easy to implement and quick but performs worse than metric learning-based methods: since it simply adds a check for constraint violations to the data assignment step of the k-means algorithm, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In this framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrate that our method has performance competitive with state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. The experimental evaluation demonstrates the effectiveness of controlling the constraint priorities by using the boosting principle and shows that our constrained k-means algorithm functions correctly as a weak learner of boosting.
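
    The constraint-violation check described above is the classic COP-KMEANS assignment step; a minimal sketch (hypothetical helper names, must-link/cannot-link constraints given as index pairs) is:

        import numpy as np

        def violates(i, c, assign, must_link, cannot_link):
            # Assigning point i to cluster c is invalid if it separates a
            # must-link partner already placed elsewhere, or joins a
            # cannot-link partner already placed in c.
            for a, b in must_link:
                j = b if a == i else (a if b == i else None)
                if j is not None and assign[j] not in (-1, c):
                    return True
            for a, b in cannot_link:
                j = b if a == i else (a if b == i else None)
                if j is not None and assign[j] == c:
                    return True
            return False

        def assign_points(X, centers, must_link, cannot_link):
            assign = np.full(len(X), -1)
            for i, x in enumerate(X):
                # Try centres from nearest to farthest; take the first feasible one.
                for c in np.argsort(((centers - x) ** 2).sum(axis=1)):
                    if not violates(i, c, assign, must_link, cannot_link):
                        assign[i] = c
                        break
                # If no centre is feasible, plain COP-KMEANS fails; per the abstract,
                # the boosted variant instead weighs constraints by priority.
            return assign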

  12. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  13. Cosmic inflation constrains scalar dark matter

    Directory of Open Access Journals (Sweden)

    Tommi Tenkanen

    2015-12-01

    Full Text Available In a theory containing scalar fields, a generic consequence is the formation of scalar condensates during cosmic inflation. The displacement of scalar fields away from their vacuum values sets specific initial conditions for post-inflationary dynamics and may lead to significant observational ramifications. In this work, we investigate how these initial conditions affect the generation of dark matter in the class of portal scenarios where the standard model fields feel new physics only through Higgs-mediated couplings. As a representative example, we consider a $Z_2$-symmetric scalar singlet $s$ coupled to the Higgs via $\lambda \Phi^\dagger \Phi s^2$. This simple extension has interesting consequences, as the singlet constitutes a dark matter candidate originating from non-thermal production of singlet particles from a singlet condensate, leading to a novel interplay between inflationary dynamics and dark matter properties.

  14. Free and constrained symplectic integrators for numerical general relativity

    International Nuclear Information System (INIS)

    Richter, Ronny; Lubich, Christian

    2008-01-01

    We consider symplectic time integrators in numerical general relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Störmer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamiltonian constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Störmer-Verlet method. The behavior of the methods is illustrated on two effectively (1+1)-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild spacetime. We compare symplectic and non-symplectic integrators for free evolution, showing very different numerical behavior for nearly-conserved quantities in the perturbed Minkowski problem. Further, we compare free and constrained evolution, demonstrating in our examples that enforcing the momentum constraints can turn an unstable free evolution into a stable constrained evolution. This is demonstrated in the stabilization of a perturbed Minkowski problem with Dirac gauge, and in the suppression of the propagation of boundary instabilities into the interior of the domain in Schwarzschild spacetime.
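
    For readers unfamiliar with the scheme, the Störmer-Verlet method for a separable Hamiltonian H(q, p) = T(p) + V(q) is a kick-drift-kick update; a minimal generic sketch (an arbitrary force function, not the ADM equations themselves):

        import numpy as np

        def stormer_verlet(q, p, force, h, n_steps, mass=1.0):
            # Kick-drift-kick leapfrog: second order, symplectic, time-reversible.
            q = np.asarray(q, dtype=float).copy()
            p = np.asarray(p, dtype=float).copy()
            for _ in range(n_steps):
                p += 0.5 * h * force(q)   # half kick with F = -dV/dq
                q += h * p / mass         # full drift
                p += 0.5 * h * force(q)   # half kick
            return q, p

        # Example: harmonic oscillator V(q) = q**2 / 2, so force(q) = -q.
        qf, pf = stormer_verlet([1.0], [0.0], lambda q: -q, h=0.01, n_steps=1000)

    Symplecticity is what produces the long-time near-conservation of energy-like quantities that the abstract contrasts against non-symplectic integrators.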

  15. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature over the entire operating range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes, such as the continuous stirred tank reactor (CSTR), is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The size of the reaction heat produced and its influence on the output are estimated by the Kalman filter. The observation and estimation results are propagated by a multistep prediction approach. The actual control moves are computed by solving the constrained finite-horizon optimal control problem in a receding-horizon fashion. A simulation example of the CSTR shows the effectiveness and feasibility of the proposed algorithm.
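
    The state-observation building block is the textbook Kalman recursion applied to the linear subsystem; a minimal sketch in which the generic matrices A, B, C and noise covariances Q, R stand in for the paper's CSTR model:

        import numpy as np

        def kalman_step(x, P, u, y, A, B, C, Q, R):
            # Predict with the linear subsystem model.
            x_pred = A @ x + B @ u
            P_pred = A @ P @ A.T + Q
            # Correct with the new measurement y.
            S = C @ P_pred @ C.T + R
            K = P_pred @ C.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (y - C @ x_pred)
            P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
            return x_new, P_new

    Inside the MPC loop, the filtered state seeds the multistep prediction, and only the first move of the optimized input sequence is applied before the horizon recedes.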

  16. Sufficient Descent Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Box-Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2014-01-01

    descent method for the first finite number of steps and by the conjugate gradient method subsequently. Under appropriate conditions, we show that the algorithm converges globally. Numerical experiments and comparisons using box-constrained problems from the CUTEr library are reported. The comparisons illustrate that the proposed method is promising and competitive with the well-known L-BFGS-B method.
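
    The Polak-Ribière-Polyak update referenced in the title computes the search direction from successive gradients; an unconstrained sketch with the common PRP+ safeguard (the paper's box projection and sufficient-descent modification are omitted):

        import numpy as np

        def prp_direction(g_new, g_old, d_old):
            # beta_PRP = g_new^T (g_new - g_old) / ||g_old||^2
            beta = g_new @ (g_new - g_old) / (g_old @ g_old)
            beta = max(beta, 0.0)   # PRP+ truncation helps preserve descent
            return -g_new + beta * d_old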

  17. [Minimally invasive approach for cervical spondylotic radiculopathy].

    Science.gov (United States)

    Ding, Liang; Sun, Taicun; Huang, Yonghui

    2010-01-01

    To summarize recent minimally invasive approaches for cervical spondylotic radiculopathy (CSR), the recent literature at home and abroad concerning minimally invasive approaches for CSR was reviewed and summarized. There are at present two classes of minimally invasive techniques for CSR: percutaneous puncture techniques and endoscopic techniques. The degenerated intervertebral disc is resected or dissolved (nucleolysis) by a percutaneous puncture technique when CSR is caused by mild or moderate intervertebral disc herniation. Cervical microendoscopic discectomy and foraminotomy is an effective minimally invasive approach that provides a clear view. Endoscopic techniques are suitable for treating CSR caused by foraminal osteophytes, lateral disc herniations, local ligamentum flavum thickening and spondylotic foraminal stenosis. The minimally invasive procedures have the advantages of simple handling, minimal invasiveness and a low incidence of complications, but the scope of indications remains relatively narrow at present.

  18. Performance potential of mechanical ventilation systems with minimized pressure loss

    DEFF Research Database (Denmark)

    Terkildsen, Søren; Svendsen, Svend

    2013-01-01

    In many locations mechanical ventilation has been the most widely used principle of ventilation over the last 50 years, but the conventional system design must be revised to comply with future energy requirements. This paper examines the options and describes a concept for the design of mechanical ventilation systems with minimal pressure loss and minimal energy use. This can provide comfort ventilation and avoid overheating through increased ventilation and night cooling. Based on this concept, a test system was designed for a fictive office building and its performance was documented using building simulations that quantify fan power consumption, heating demand and indoor environmental conditions. The system was designed with minimal pressure loss in the duct system and heat exchanger. Also, it uses state-of-the-art components such as electrostatic precipitators, diffuse ceiling inlets and demand...

  19. Minimalism context-aware displays.

    Science.gov (United States)

    Cai, Yang

    2004-12-01

    Despite the rapid development of cyber technologies, we still have very limited attention and communication bandwidth for processing the increasing information flow. The goal of this study is to develop a context-aware filter that matches the information load to particular needs and capacities. The functions include a bandwidth-resolution trade-off and user context modeling. Empirical lab studies found that the resolution of images can be reduced by an order of magnitude if the viewer knows that he or she is looking for particular features. The adaptive display queue is optimized with real-time operational conditions and the user's inquiry history. Instead of measuring the operator's behavior directly, ubiquitous computing models are developed to anticipate the user's behavior from operational environment data. A case study of video stream monitoring for transit security is discussed in the paper. In addition, the author addresses the future direction of coherent human-machine vision systems.

  20. Corporate tax minimization and stock price reactions

    OpenAIRE

    Blaufus, Kay; Möhlmann, Axel; Schwäbe, Alexander

    2016-01-01

    Tax minimization strategies may lead to significant tax savings, which could, in turn, increase firm value. However, such strategies are also associated with significant costs, such as expected penalties and planning, agency, and reputation costs. The overall impact of firms' tax minimization strategies on firm value is, therefore, unclear. To investigate whether corporate tax minimization increases firm value, we analyze the stock price reaction to news concerning corporate tax avoidance or ...

  1. Constrained choices? Linking employees' and spouses' work time to health behaviors.

    Science.gov (United States)

    Fan, Wen; Lam, Jack; Moen, Phyllis; Kelly, Erin; King, Rosalind; McHale, Susan

    2015-02-01

    There are extensive literatures on work conditions and health and on family contexts and health, but less research asking how a spouse's or partner's work conditions may affect health behaviors. Drawing on the constrained choices framework, we theorized health behaviors as a product of one's own work time, a spouse's work time, and gender expectations. We examined fast food consumption and exercise behaviors using survey data from 429 employees in an Information Technology (IT) division of a U.S. Fortune 500 firm and from their spouses. We found fast food consumption is affected by men's work hours - both male employees' own work hours and the hours worked by husbands of women respondents - in a nonlinear way. The groups most likely to eat fast food are men working 50 h/week and women whose husbands work 45-50 h/week. Second, exercise is better explained if work time is conceptualized at the couple, rather than individual, level. In particular, neo-traditional arrangements (where husbands work longer than their wives) constrain women's ability to engage in exercise but increase the odds of men exercising. Women in couples where both partners work long hours have the highest odds of exercise. In addition, women working long hours with high schedule control are more apt to exercise, as are men working long hours whose wives have high schedule flexibility. Our findings suggest different health behaviors may have distinct antecedents, but gendered work-family expectations shape time allocations in ways that promote men's and constrain women's health behaviors. They also suggest the need to expand the constrained choices framework to recognize that long hours may encourage exercise if both partners are looking to sustain long work hours, and that the work resources, specifically schedule control, of one partner may expand the choices of the other. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. The cost of proactive interference is constant across presentation conditions.

    Science.gov (United States)

    Endress, Ansgar D; Siddique, Aneela

    2016-10-01

    Proactive interference (PI) severely constrains how many items people can remember. For example, Endress and Potter (2014a) presented participants with sequences of everyday objects at 250ms/picture, followed by a yes/no recognition test. They manipulated PI by either using new images on every trial in the unique condition (thus minimizing PI among items), or by re-using images from a limited pool for all trials in the repeated condition (thus maximizing PI among items). In the low-PI unique condition, the probability of remembering an item was essentially independent of the number of memory items, showing no clear memory limitations; more traditional working memory-like memory limitations appeared only in the high-PI repeated condition. Here, we ask whether the effects of PI are modulated by the availability of long-term memory (LTM) and verbal resources. Participants viewed sequences of 21 images, followed by a yes/no recognition test. Items were presented either quickly (250ms/image) or sufficiently slowly (1500ms/image) to produce LTM representations, either with or without verbal suppression. Across conditions, participants performed better in the unique than in the repeated condition, and better for slow than for fast presentations. In contrast, verbal suppression impaired performance only with slow presentations. The relative cost of PI was remarkably constant across conditions: relative to the unique condition, performance in the repeated condition was about 15% lower in all conditions. The cost of PI thus seems to be a function of the relative strength or recency of target items and interfering items, but relatively insensitive to other experimental manipulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Time constrained liner shipping network design

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Brouer, Berit Dangaard; Desaulniers, Guy

    2017-01-01

    We present a mathematical model and a solution method for the liner shipping network design problem. The model takes into account coordination between vessels and transit time restrictions on the cargo flow. The solution method is an improvement heuristic, in which an integer program is solved iteratively to perform moves in a large neighborhood search. Our improvement heuristic is applicable as a real-time decision support tool for a liner shipping company. It can be used to find improvements to the network when evaluating changes in operating conditions or testing different scenarios...

  4. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    Science.gov (United States)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations available. This study therefore explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" that extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions combined with extremely heterogeneous land-use, which results in a wide range of hydrological responses. We calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, hydrological modelling also benefited greatly from constraining parameters using such distributed, citizen-science-generated streamflow data. This study demonstrates regional hydrological modelling and prediction that integrate citizen science with a land surface model; in this framework, the usual limitation of data scarcity can indeed be overcome. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.

  5. Safety control and minimization of radioactive wastes

    International Nuclear Information System (INIS)

    Wang Jinming; Rong Feng; Li Jinyan; Wang Xin

    2010-01-01

    Compared with developed countries, the safety control and minimization of radwastes in China are under-developed. Research on measures for the safety control and minimization of radwastes is very important for managing radwastes safely and for reducing treatment and disposal costs and environmental radiation hazards. This paper systematically discusses the safety control and minimization of the radwastes produced in the nuclear fuel cycle, in nuclear technology applications and in the decommissioning of nuclear facilities, and provides measures and methods for their safety control and minimization. (authors)

  6. Constraining East Antarctic mass trends using a Bayesian inference approach

    Science.gov (United States)

    Martin-Español, Alba; Bamber, Jonathan L.

    2016-04-01

    East Antarctica is an order of magnitude larger than its western neighbour and the Greenland ice sheet. It has the greatest potential to contribute to sea level rise of any source, including non-glacial contributors. It is, however, the most challenging ice mass to constrain because of a range of factors, including the relative paucity of in-situ observations and the poor signal-to-noise ratio of Earth Observation data such as satellite altimetry and gravimetry. A recent study using satellite radar and laser altimetry (Zwally et al. 2015) concluded that the East Antarctic Ice Sheet (EAIS) had been accumulating mass at a rate of 136±28 Gt/yr for the period 2003-08. Here, we use a Bayesian hierarchical model, which has been tested on, and applied to, the whole of Antarctica, to investigate the impact of different assumptions regarding the origin of elevation changes of the EAIS. We combined GRACE, satellite laser and radar altimeter data and GPS measurements to solve simultaneously for surface processes (primarily surface mass balance, SMB), ice dynamics and glacio-isostatic adjustment over the period 2003-13. The hierarchical model partitions mass trends between SMB and ice dynamics based on physical principles and measures of statistical likelihood. Without imposing the division between these processes, the model apportions about a third of the mass trend to ice dynamics, +18 Gt/yr, and two thirds, +39 Gt/yr, to SMB. The total mass trend for that period for the EAIS was 57±20 Gt/yr. Over the period 2003-08, we obtain an ice dynamic trend of 12 Gt/yr and a SMB trend of 15 Gt/yr, with a total mass trend of 27 Gt/yr. We then imposed the condition that the surface mass balance is tightly constrained by the regional climate model RACMO2.3 and allowed height changes due to ice dynamics to occur only in areas of low surface velocities, seeking the solution that satisfies all the input data given these constraints. By imposing these conditions, over the period 2003-13 we obtained a mass...

  7. Minimal N=4 topologically massive supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Kuzenko, Sergei M. [School of Physics M013, The University of Western Australia,35 Stirling Highway, Crawley W.A. 6009 (Australia); Novak, Joseph [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut,Am Mühlenberg 1, D-14476 Golm (Germany); Sachs, Ivo [Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität,Theresienstraße 37, D-80333 München (Germany)

    2017-03-21

    Using the superconformal framework, we construct a new off-shell model for N=4 topologically massive supergravity which is minimal in the sense that it makes use of a single compensating vector multiplet and involves no free parameter. As such, it provides a counterexample to the common lore that two compensating multiplets are required within the conformal approach to supergravity with eight supercharges in diverse dimensions. This theory is an off-shell N=4 supersymmetric extension of chiral gravity. All of its solutions correspond to non-conformally flat superspaces. Its maximally supersymmetric solutions include the so-called critical (4,0) anti-de Sitter superspace introduced in https://www.doi.org/10.1007/JHEP08(2012)024, as well as warped critical (4,0) anti-de Sitter superspaces. We also propose a dual formulation for the theory in which the vector multiplet is replaced with an off-shell hypermultiplet. Upon elimination of the auxiliary fields belonging to the hypermultiplet and imposing certain gauge conditions, the dual action reduces to the one introduced in https://www.doi.org/10.1103/PhysRevD.94.065028.

  8. Operational cost minimization in cooling water systems

    Directory of Open Access Journals (Sweden)

    Castro M.M.

    2000-01-01

    Full Text Available In this work, an optimization model that considers thermal and hydraulic interactions is developed for a cooling water system. It is a closed loop consisting of a cooling tower unit, a circulation pump, a blower and a heat exchanger-pipe network. Aside from process disturbances, climatic fluctuations are considered. Model constraints include relations concerning tower performance, air flowrate requirement, make-up flowrate, circulating pump performance, heat load in each cooler, pressure drop constraints and climatic conditions. The objective function is operating cost minimization. Optimization variables are the air flowrate, forced water withdrawal upstream of the tower, and the valve adjustment in each branch. It is found that the most significant operating cost is related to electricity. However, for cooled water temperatures lower than a specific target, there must be a forced withdrawal of circulating water and further make-up to enhance the cooling tower capacity. Additionally, the system is optimized over the months of the year. The results corroborate the fact that the most important variable for cooling tower performance is not the air temperature itself, but its humidity.
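
    To make the shape of such a problem concrete, the toy sketch below minimizes an electricity cost over air and water flow rates subject to a heat-rejection requirement; every model equation and constant here is an invented stand-in, not the authors' formulation:

        import numpy as np
        from scipy.optimize import minimize

        PRICE = 0.12        # $/kWh, assumed electricity price
        Q_REQUIRED = 200.0  # kW of heat to reject, assumed duty

        def power_kw(x):
            air, water = x                            # flow rates (kg/s)
            return 0.08 * air**3 + 0.05 * water**3    # toy fan/pump power laws

        def heat_rejected(x):
            air, water = x
            return 9.0 * air**0.6 * water**0.4        # toy tower performance law

        res = minimize(
            lambda x: PRICE * power_kw(x),            # operating cost objective
            x0=[20.0, 20.0],
            bounds=[(1.0, 50.0), (1.0, 50.0)],
            constraints=[{"type": "ineq",
                          "fun": lambda x: heat_rejected(x) - Q_REQUIRED}],
        )
        print(res.x, res.fun)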

  9. Minimally Invasive Management of Ectopic Pancreas.

    Science.gov (United States)

    Vitiello, Gerardo A; Cavnar, Michael J; Hajdu, Cristina; Khaykis, Inessa; Newman, Elliot; Melis, Marcovalerio; Pachter, H Leon; Cohen, Steven M

    2017-03-01

    The management of ectopic pancreas is not well defined. This study aims to determine the prevalence of symptomatic ectopic pancreas and to identify those who may benefit from treatment, with a particular focus on robotically assisted surgical management. Our institutional pathology database was queried to identify a cohort of ectopic pancreas specimens. Additional data regarding clinical symptomatology, diagnostic studies and treatment were obtained through chart review. Nineteen of 29 cases (65.5%) of ectopic pancreas were found incidentally, either during surgery for another condition or in a pathologic specimen. Eleven patients (37.9%) reported prior symptoms, notably abdominal pain and/or gastrointestinal bleeding. The most common locations for ectopic pancreas were the duodenum and small bowel (31% and 27.6%, respectively). Three of the 29 cases (10.3%) had no symptoms but showed preneoplastic changes on pathology, while one harbored pancreatic cancer. Over the years, the treatment of ectopic pancreas has shifted from open to laparoscopic and, more recently, to robotic surgery. Our experience is in line with existing evidence supporting surgical treatment of symptomatic or complicated ectopic pancreas. In the current era, minimally invasive and robotic surgery can be used safely and successfully for the treatment of ectopic pancreas.

  10. Constraining the SIF - GPP relationship via estimation of NPQ

    Science.gov (United States)

    Silva, C. E.; Yang, X.; Tang, J.; Lee, J. E.; Cushman, K.; Toh Yuan Kun, L.; Kellner, J. R.

    2016-12-01

    Airborne and satellite measurements of solar-induced fluorescence (SIF) have the potential to improve estimates of gross primary production (GPP). Plants dissipate absorbed photosynthetically active radiation (APAR) among three de-excitation pathways: SIF, photochemical quenching (PQ), which results in electron transport and the production of ATP and NADPH consumed during carbon fixation (i.e., GPP), and heat dissipation via conversion of xanthophyll pigments (non-photochemical quenching: NPQ). As a result, the relationship between SIF and GPP is a function of NPQ and may vary temporally and spatially with environmental conditions (e.g., light and water availability) and plant traits (e.g., leaf N content). Accurate estimates of any one of the de-excitation pathways require measurement of the other two. Here we combine half-hourly measurements of canopy APAR and SIF with eddy covariance estimates of GPP at Harvard Forest to close the canopy radiation budget and infer canopy NPQ throughout the 2013 growing season. We use molecular-level photosynthesis equations to compute PQ (umol photons m-2 s-1) from GPP (umol CO2 m-2 s-1) and invert an integrated canopy radiative transfer and leaf-level photosynthesis/fluorescence model (SCOPE) to quantify hemispherically and spectrally integrated SIF emission (umol photons m-2 s-1) from single-band (760 nm) top-of-canopy SIF measurements. We estimate half-hourly NPQ as the residual required to close the radiation budget (NPQ = APAR - SIF - PQ). Our future work will test estimated NPQ against simultaneously acquired measurements of the photochemical reflectance index (PRI), a spectral index sensitive to xanthophyll pigments. By constraining two of the three de-excitation pathways, simultaneous SIF and PRI measurements are likely to improve GPP estimates, which are crucial to the study of climate - carbon cycle interactions.
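
    The residual closure itself is one line of arithmetic once all three terms share units of umol photons m-2 s-1; a sketch with invented half-hourly numbers (including a crude photons-per-CO2 conversion, whereas the authors derive PQ from molecular-level photosynthesis equations):

        # Close the canopy radiation budget APAR = PQ + SIF + NPQ and take
        # NPQ as the residual; all values below are illustrative only.
        apar = 1200.0            # canopy APAR, umol photons m-2 s-1
        gpp = 18.0               # eddy-covariance GPP, umol CO2 m-2 s-1
        photons_per_co2 = 10.0   # assumed photon cost of fixing one CO2
        pq = photons_per_co2 * gpp
        sif = 1.5                # integrated SIF emission, umol photons m-2 s-1
        npq = apar - pq - sif
        print(f"NPQ = {npq:.1f} umol photons m-2 s-1")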

  11. Intraplate Vertical Land Movements Constrained by Absolute Gravity Measurements

    Science.gov (United States)

    van Camp, M.; Williams, S. D.; Hinzen, K. G.; Camelbeeck, T.

    2007-12-01

    We have conducted repeated absolute gravity (AG) measurements across the tectonically active intraplate regions of Northwest Europe: the Ardenne and the Roer Graben. At most of the stations, measurements have been undertaken since 2000 and repeated twice a year. Our analysis of these measurements, performed in Belgium and Germany, shows that at all stations except Jülich there is no detectable gravity variation larger than 10 nm s-2 at the 95% confidence level, equivalent to vertical movements of 5 mm/yr. Although not yet significant, the observed rates do not contradict the subsidence predicted by glacial isostatic adjustment models and provide an upper limit on the possible uplift of the Ardennes. In Jülich, a gravity rate of change of 36.7 nm s-2/yr, equivalent to 18.4 mm/yr, is due to anthropogenic subsidence. The amplitudes of the seasonal variations range from 18±0.8 nm s-2 to 43±29 nm s-2, depending on the location. These variations should have a negligible effect on the long-term trend, but at the Membach reference station, where a longer time series is available, differences between the rates observed since 1996 and since 1999 indicate that long-term environmental effects may influence the inferred trend. The observed seasonal effects also demonstrate the repeatability of AG measurements. In Ostend, the AG time series agrees with tide gauge data, global mean sea level and altimeter measurements but disagrees with the CGPS. This study indicates that, even in difficult conditions, AG measurements repeated once a year can resolve vertical land movements at the level of a few mm/yr after 5 years. This also confirms the need to measure for decades, using accurate and stable geodetic techniques like AG, in order to constrain slow deformation processes in an intraplate context.

  12. Vertical Land Movements Constrained by Absolute Gravity Measurements

    Science.gov (United States)

    van Camp, M.; Williams, S. D.; Hinzen, K.; Camelbeeck, T.

    2009-05-01

    Repeated absolute gravity (AG) measurements have been performed across the tectonically active intraplate regions of Northwest Europe: the Ardenne and the Roer Graben. At most of the stations, measurements were begun in 2000 and repeated twice a year. Analysis of these measurements, performed in Belgium and Germany, shows that at all stations except Jülich there is no detectable gravity variation larger than 10 nm s-2 at the 95% confidence level, equivalent to vertical movements of 5 mm/yr. Although not yet significant, the observed rates do not contradict the subsidence predicted by glacial isostatic adjustment models and provide an upper limit on the possible uplift of the Ardennes. In Jülich, a gravity rate of change of 36 nm s-2/yr, equivalent to 18 mm/yr, is at least in part due to anthropogenic subsidence. The amplitudes of the seasonal variations range from 18±0.8 nm s-2 to 43±29 nm s-2, depending on the location. These variations should have a negligible effect on the long-term trend, but at the Membach reference station, where a longer time series is available, differences between the rates observed since 1996 and since 1999 indicate that long-term environmental effects may influence the inferred trend. The observed seasonal effects also demonstrate the repeatability of AG measurements. This study indicates that, even in difficult conditions, AG measurements repeated once a year can resolve vertical land movements at the level of a few mm/yr after 5 years. This also confirms the need to measure for decades, using accurate and stable geodetic techniques like AG, in order to constrain slow deformation processes.

  13. A HARDCORE model for constraining an exoplanet's core size

    Science.gov (United States)

    Suissa, Gabrielle; Chen, Jingjing; Kipping, David

    2018-05-01

    The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences of the interior structure are impeded by a fundamental degeneracy between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fractions, CRFmin and CRFmax. These hold true even for planets with light volatile envelopes, but require that the planet be fully differentiated and that layers denser than iron be forbidden. With both bounds in hand, a marginal CRF can also be inferred by sampling in between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints reach around one per cent, providing a clear target for observers.

  14. Constrained Gauge Fields from Spontaneous Lorentz Violation

    CERN Document Server

    Chkareuli, J L; Jejelava, J G; Nielsen, H B

    2008-01-01

    Spontaneous Lorentz violation realized through a nonlinear vector field constraint of the type $A_\mu^2 = M^2$ ($M$ is the proposed scale for Lorentz violation) is shown to generate massless vector Goldstone bosons, gauging the starting global internal symmetries in arbitrary relativistically invariant theories. The gauge invariance appears in essence as a necessary condition for these bosons not to be superfluously restricted in degrees of freedom, apart from the constraint due to which the true vacuum in a theory is chosen by the Lorentz violation. In the Abelian symmetry case the only possible theory proves to be QED with a massless vector Goldstone boson naturally associated with the photon, while the non-Abelian symmetry case results in a conventional Yang-Mills theory. These theories, both Abelian and non-Abelian, look essentially nonlinear and contain particular Lorentz (and $CPT$) violating couplings when expressed in terms of the pure Goldstone vector modes. However, they do not lead to physical ...

  15. Constrained gauge fields from spontaneous Lorentz violation

    DEFF Research Database (Denmark)

    Chkareuli, J. L.; Froggatt, C. D.; Jejelava, J. G.

    2008-01-01

    Spontaneous Lorentz violation realized through a nonlinear vector field constraint of the type $A_\mu A^\mu = M^2$ ($M$ is the proposed scale for Lorentz violation) is shown to generate massless vector Goldstone bosons, gauging the starting global internal symmetries in arbitrary relativistically invariant theories. The gauge invariance appears in essence as a necessary condition for these bosons not to be superfluously restricted in degrees of freedom, apart from the constraint due to which the true vacuum in a theory is chosen by the Lorentz violation. In the Abelian symmetry case the only possible theory proves to be QED with a massless vector Goldstone boson naturally associated with the photon, while the non-Abelian symmetry case results in a conventional Yang-Mills theory. These theories, both Abelian and non-Abelian, look essentially nonlinear and contain particular Lorentz (and CPT) violating couplings when expressed in terms of the pure Goldstone vector modes. However, they do not lead to physical Lorentz violation due to the simultaneously generated gauge invariance. Publication date: June 11

  16. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
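
    For orientation, the problem class addressed here is regularized linear least squares; the sketch below shows only the generic Tikhonov baseline that such methods improve upon (the paper's perturbation-based regularizer and its random-matrix-theory tuning are not reproduced):

        import numpy as np

        def ridge_solution(A, y, gamma):
            # x_hat = argmin ||A x - y||^2 + gamma ||x||^2
            #       = (A^T A + gamma I)^{-1} A^T y
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 20))          # random model matrix
        x_true = rng.standard_normal(20)
        y = A @ x_true + 0.1 * rng.standard_normal(50)
        print(np.linalg.norm(ridge_solution(A, y, 0.1) - x_true))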

  17. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with a limited workforce while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed to study its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
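
    A standard way to make a stock-out chance constraint tractable (the abstract does not spell out the reformulation, so this is an assumed textbook version) is to replace P(stock + order - demand >= 0) >= 1 - eps with its deterministic quantile equivalent for Gaussian demand:

        from scipy.stats import norm

        def min_order(stock, mu_demand, sigma_demand, eps):
            # P(stock + order >= demand) >= 1 - eps  becomes, for Gaussian demand,
            # stock + order >= mu + z_{1-eps} * sigma.
            z = norm.ppf(1.0 - eps)
            return max(0.0, mu_demand + z * sigma_demand - stock)

        # Keep the stock-out probability below 5% for demand ~ N(120, 30^2):
        print(min_order(stock=80.0, mu_demand=120.0, sigma_demand=30.0, eps=0.05))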

  18. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    Science.gov (United States)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is only sensitive to shear wave speed but not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computation resources, and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (˜100 m) crustal structures.

  19. Constrained Quadratic Programming and Neurodynamics-Based Solver for Energy Optimization of Biped Walking Robots

    Directory of Open Access Journals (Sweden)

    Liyang Wang

    2017-01-01

    Full Text Available The application of biped robots is always hampered by their high energy consumption. This paper makes a contribution by optimizing the joint torques to decrease the energy consumption without changing the biped gaits. In this work, a constrained quadratic programming (QP) problem for energy optimization is formulated, and a neurodynamics-based solver is presented to solve it. Differing from the existing literature, the proposed neurodynamics-based energy optimization (NEO) strategy minimizes the energy consumption while simultaneously guaranteeing the following three important constraints: (i) the force-moment equilibrium equation of biped robots, (ii) the frictions applied by each leg on the ground to hold the biped robot without slippage or tipping over, and (iii) the physical limits of the motors. Simulations demonstrate that the proposed strategy is effective for energy-efficient biped walking.
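
    Neurodynamic QP solvers of this family typically evolve a state along a projected gradient flow whose equilibria are the KKT points of the QP; a minimal Euler-discretized sketch for min (1/2) x^T Q x + c^T x under box limits (illustrative gains and data, not the paper's network):

        import numpy as np

        def neurodynamic_qp(Q, c, lo, hi, x0, alpha=0.01, n_steps=20000):
            # Projected gradient flow x <- P_box(x - alpha * (Q x + c)).
            # For positive definite Q and small alpha, the iteration converges
            # to the unique box-constrained minimizer.
            x = np.asarray(x0, dtype=float).copy()
            for _ in range(n_steps):
                x = np.clip(x - alpha * (Q @ x + c), lo, hi)
            return x

        Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite cost matrix
        c = np.array([-1.0, -1.0])
        print(neurodynamic_qp(Q, c, lo=0.0, hi=0.6, x0=[0.0, 0.0]))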

  20. How CMB and large-scale structure constrain chameleon interacting dark energy

    International Nuclear Information System (INIS)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-01-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.