WorldWideScience

Sample records for conditional constrained minimization

  1. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall'Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  2. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  3. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  4. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
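The penalty-function special case that SUMMA subsumes can be made concrete in a few lines. The objective, constraint, and penalty schedule below are invented for illustration and are not from the paper: we minimize f(x) = (x − 2)² over C = {x ≤ 1} by solving a sequence of unconstrained subproblems G_k(x) = f(x) + μ_k·max(0, x − 1)².

```python
# Illustrative quadratic-penalty instance of sequential unconstrained
# minimization: minimize f(x) = (x - 2)^2 subject to x <= 1.
# Subproblem at step k: G_k(x) = f(x) + mu_k * max(0, x - 1)^2.

def argmin_G(mu):
    # The unconstrained minimizer x = 2 violates x <= 1, so on x >= 1
    # setting d/dx [(x - 2)^2 + mu * (x - 1)^2] = 0 gives the closed form:
    return (2 + mu) / (1 + mu)

x = 2.0
for k in range(30):
    mu = 10.0 ** k              # increasing penalty weights
    x = argmin_G(mu)

print(x)   # approaches the constrained minimizer x = 1 as mu_k grows
```

Each subproblem minimizer is infeasible but drifts toward the constrained solution as the penalty weight grows, which is exactly the limit behaviour the abstract describes.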

  5. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration

  6. A Comparative Study for Orthogonal Subspace Projection and Constrained Energy Minimization

    National Research Council Canada - National Science Library

    Du, Qian; Ren, Hsuan; Chang, Chein-I

    2003-01-01

    ...: orthogonal subspace projection (OSP) and constrained energy minimization (CEM). It is shown that they are closely related and essentially equivalent provided that the noise is white with large SNR...
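For context, the CEM filter mentioned above has a standard closed form: it minimizes the filter output energy wᵀRw subject to the unit-response constraint wᵀd = 1 on the target signature d, giving w = R⁻¹d/(dᵀR⁻¹d). A numerical sketch with synthetic data (the spectra and signature below are made up, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # synthetic pixel spectra (one per row)
R = X.T @ X / X.shape[0]               # sample correlation matrix
d = np.array([1.0, 0.5, 0.2, 0.1])     # assumed target signature

# CEM filter: minimize w^T R w subject to w^T d = 1.
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d @ Rinv_d)

print(w @ d)   # unit response on the target signature (~1.0)
```

The constraint guarantees the target passes through the filter undistorted while the background energy is suppressed, which is the property the comparison with OSP builds on.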

  7. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  8. Exploring the Metabolic and Perceptual Correlates of Self-Selected Walking Speed under Constrained and Un-Constrained Conditions

    Directory of Open Access Journals (Sweden)

    David T Godsiff, Shelly Coe, Charlotte Elsworth-Edelsten, Johnny Collett, Ken Howells, Martyn Morris, Helen Dawes

    2018-03-01

    Mechanisms underpinning self-selected walking speed (SSWS) are poorly understood. The present study investigated the extent to which SSWS is related to metabolism, energy cost, and/or perceptual parameters during both normal and artificially constrained walking. Fourteen participants with no pathology affecting gait were tested under standard conditions. Subjects walked on a motorized treadmill at speeds derived from their SSWS as a continuous protocol. RPE scores (CR10) and expired air to calculate energy cost (J.kg-1.m-1) and carbohydrate (CHO) oxidation rate (J.kg-1.min-1) were collected during minutes 3-4 at each speed. Eight individuals were re-tested under the same conditions within one week with a hip and knee brace to immobilize their right leg. Deflections in RPE scores (CR10) and CHO oxidation rate (J.kg-1.min-1) were not related to SSWS (five and three people had deflections in the defined range of SSWS in constrained and unconstrained conditions, respectively) (p > 0.05). Constrained walking elicited a higher energy cost (J.kg-1.m-1) and slower SSWS (p < 0.05). SSWS did not occur at a minimum energy cost (J.kg-1.m-1) in either condition; however, the size of the minimum energy cost to SSWS disparity was the same (Froude number Fr = 0.09) in both conditions (p = 0.36). Perceptions of exertion can modify walking patterns, and therefore SSWS and metabolism/energy cost are not directly related. Strategies which minimize perceived exertion may enable faster walking in people with altered gait, as our findings indicate they should self-optimize to the same extent under different conditions.

  9. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  10. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem

    2014-04-18

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges on the concentration-compactness principle. In the second part, we will treat orthogonal constrained problems for another class of integrands using the density matrix method. © 2014 Springer Basel.

  11. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p',p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p'/p$ and the level $l$.

  12. A Simply Constrained Optimization Reformulation of KKT Systems Arising from Variational Inequalities

    International Nuclear Information System (INIS)

    Facchinei, F.; Fischer, A.; Kanzow, C.; Peng, J.-M.

    1999-01-01

    The Karush-Kuhn-Tucker (KKT) conditions can be regarded as optimality conditions for both variational inequalities and constrained optimization problems. In order to overcome some drawbacks of recently proposed reformulations of KKT systems, we propose casting KKT systems as a minimization problem with nonnegativity constraints on some of the variables. We prove that, under fairly mild assumptions, every stationary point of this constrained minimization problem is a solution of the KKT conditions. Based on this reformulation, a new algorithm for the solution of the KKT conditions is suggested and shown to have some strong global and local convergence properties
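As a toy illustration of the general idea of recasting a KKT system as a minimization problem (the example and the natural-residual merit function below are generic textbook devices, not the paper's specific reformulation): for min x² subject to x − 1 ≥ 0, the KKT system is 2x − λ = 0 together with complementarity between λ and x − 1, and its unique solution (x, λ) = (1, 2) is the global minimizer, with value zero, of a nonnegative residual.

```python
# KKT system for: minimize x^2 subject to x - 1 >= 0.
# Stationarity: 2x - lambda = 0.
# Complementarity: min(lambda, x - 1) = 0 encodes lambda >= 0,
# x - 1 >= 0, and lambda * (x - 1) = 0 simultaneously.

def kkt_residual(x, lam):
    return (2 * x - lam) ** 2 + min(lam, x - 1) ** 2

# Crude grid search for the residual's minimizer (illustrative only).
best = min(
    ((kkt_residual(x / 100, lam / 100), x / 100, lam / 100)
     for x in range(0, 301) for lam in range(0, 501)),
    key=lambda t: t[0],
)
print(best)   # residual 0 at (x, lambda) = (1.0, 2.0)
```

Solving the KKT conditions thus becomes finding a (stationary) point where this merit function vanishes, which is the flavour of reformulation the abstract studies under much weaker assumptions.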

  13. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.

  14. Constrained minimization in C++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of possible ways was used: the widely known program FUMILI was reimplemented in C++. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany)
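The constraint-handling recipe in the abstract — rewriting φ(θ) ≥ a as φ(θ) = t with the simple bound t ≥ a, and enforcing the equality by a quadratic penalty — can be sketched as follows. The toy objective, constraint function, penalty weight and step size are invented for illustration; this is not the FUMILI code.

```python
# Toy fit with an inequality constraint handled as the abstract describes:
# phi(theta) >= a  ->  phi(theta) = t with t >= a, the equality being
# enforced by a quadratic penalty term.

def chi2(th1, th2):
    return (th1 - 0.2) ** 2 + (th2 - 0.3) ** 2   # toy fit objective

def phi(th1, th2):
    return th1 + th2                              # constraint function

a, weight, lr = 1.0, 1e6, 1e-7                    # bound, penalty, step size

th1 = th2 = 0.0
t = a
for _ in range(2000):
    miss = phi(th1, th2) - t                      # equality residual
    # gradient steps on theta for chi2 + weight * miss^2
    th1 -= lr * (2 * (th1 - 0.2) + 2 * weight * miss)
    th2 -= lr * (2 * (th2 - 0.3) + 2 * weight * miss)
    t = max(a, phi(th1, th2))                     # keep the simple bound t >= a

print(phi(th1, th2))   # the constraint is active: th1 + th2 ~ 1
```

The unconstrained optimum (0.2, 0.3) violates φ ≥ 1, so the bound becomes active and the penalty drives the fit onto the constraint surface.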

  15. Design of a minimally constraining, passively supported gait training exoskeleton: ALEX II.

    Science.gov (United States)

    Winfree, Kyle N; Stegall, Paul; Agrawal, Sunil K

    2011-01-01

    This paper discusses the design of a new, minimally constraining, passively supported gait training exoskeleton known as ALEX II. This device builds on the success and extends the features of the ALEX I device developed at the University of Delaware. Both ALEX (Active Leg EXoskeleton) devices have been designed to supply a controllable torque to a subject's hip and knee joint. The current control strategy makes use of an assist-as-needed algorithm. Following a brief review of previous work motivating this redesign, we discuss the key mechanical features of the new ALEX device. A short investigation was conducted to evaluate the effectiveness of the control strategy and the impact of the exoskeleton on the gait of six healthy subjects. This paper concludes with a comparison of the subjects' gait in and out of the exoskeleton. © 2011 IEEE

  16. Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem

    Directory of Open Access Journals (Sweden)

    Xiaomei Zhang

    2012-01-01

    We present some sufficient global optimality conditions for a special cubic minimization problem with box constraints or binary constraints by extending the global subdifferential approach proposed by V. Jeyakumar et al. (2006). The present conditions generalize the results developed in the work of V. Jeyakumar et al., where a quadratic minimization problem with box constraints or binary constraints was considered. In addition, a special diagonal matrix is constructed, which is used to provide a convenient method for justifying the proposed sufficient conditions. Then the reformulation of the sufficient conditions follows. It is worth noting that this reformulation is also applicable to the quadratic minimization problem with box or binary constraints considered in the works of V. Jeyakumar et al. (2006) and Y. Wang et al. (2010). Finally, some examples demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain nonconvex cubic minimization problems.

  17. Nonlinear Chance Constrained Problems: Optimality Conditions, Regularization and Solvers

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Branda, Martin

    2016-01-01

    Vol. 170, No. 2 (2016), pp. 419-436 ISSN 0022-3239 R&D Projects: GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords: Chance constrained programming * Optimality conditions * Regularization * Algorithms * Free MATLAB codes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.289, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/adam-0460909.pdf

  18. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.

  19. Uniqueness conditions for constrained three-way factor decompositions with linearly dependent loadings

    NARCIS (Netherlands)

    Stegeman, Alwin; De Almeida, Andre L. F.

    2009-01-01

    In this paper, we derive uniqueness conditions for a constrained version of the parallel factor (Parafac) decomposition, also known as canonical decomposition (Candecomp). Candecomp/Parafac (CP) decomposes a three-way array into a prespecified number of outer product arrays. The constraint is that

  20. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  1. Minimization under entropy conditions, with applications in lower bound problems

    International Nuclear Information System (INIS)

    Toft, Joachim

    2004-01-01

    We minimize the functional f ↦ ∫ af dμ under the entropy condition E(f) = −∫ f log f dμ ≥ E, ∫ f dμ = 1 and f ≥ 0, where E ∈ R is fixed. We prove that the minimum is attained for f = e^{−sa}/∫ e^{−sa} dμ, where s ∈ R is chosen such that E(f) = E. We apply the result to minimization problems in pseudodifferential calculus, where we minimize the harmonic oscillator.
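On a discrete measure the stated minimizer f = e^{−sa}/∫ e^{−sa} dμ is a Gibbs density, and the multiplier s can be found by bisection because the entropy of the Gibbs family is decreasing in s for s ≥ 0. A small numerical check (μ is taken as the counting measure on four points; the vector a and the entropy level E are illustrative choices, not from the paper):

```python
import math

a = [0.0, 1.0, 2.0, 3.0]       # values of the function a (illustrative)

def gibbs(s):
    # Claimed minimizer f = e^{-s a} / integral(e^{-s a} dmu),
    # with mu the counting measure on four points.
    w = [math.exp(-s * ai) for ai in a]
    Z = sum(w)
    return [wi / Z for wi in w]

def entropy(f):
    return -sum(fi * math.log(fi) for fi in f if fi > 0)

E_target = 1.0                 # must lie below log(4), the maximum entropy

# Entropy is decreasing in s on [0, inf), so choose s by bisection
# until the entropy condition E(f) = E holds with equality.
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if entropy(gibbs(mid)) > E_target else (lo, mid)

f = gibbs(lo)
print(round(sum(f), 6), round(entropy(f), 6))   # ~1.0 and ~E_target
```

The normalization ∫ f dμ = 1 holds by construction, and bisection enforces the entropy constraint with equality, matching the form of the stated solution.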

  2. Topology Optimization for Minimizing the Resonant Response of Plates with Constrained Layer Damping Treatment

    Directory of Open Access Journals (Sweden)

    Zhanpeng Fang

    2015-01-01

    A topology optimization method is proposed to minimize the resonant response of plates with constrained layer damping (CLD) treatment under specified broadband harmonic excitations. The topology optimization problem is formulated, and the square of the displacement resonant response in the frequency domain at a specified point is taken as the objective function. Two sensitivity analysis methods are investigated and discussed. The derivative of the modal damping ratio is not considered in the conventional sensitivity analysis method; an improved sensitivity analysis method considering the derivative of the modal damping ratio is developed to improve the computational accuracy of the sensitivity. The evolutionary structural optimization (ESO) method is used to search for the optimal layout of CLD material on plates. Numerical examples and experimental results show that the optimal layout of CLD treatment obtained from the proposed topology optimization, using either the conventional or the improved sensitivity analysis, can reduce the displacement resonant response. However, the optimization using the improved sensitivity analysis produces a higher modal damping ratio and a smaller displacement resonant response than that using the conventional sensitivity analysis.

  3. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  4. Varietal improvement of irrigated rice under minimal water conditions

    International Nuclear Information System (INIS)

    Abdul Rahim Harun; Marziah Mahmood; Sobri Hussein

    2010-01-01

    Varietal improvement of irrigated rice under minimal water conditions is a research project under the Program Research of Sustainable Production of High Yielding Irrigated Rice under Minimal Water Input (IRPA-01-01-03-0000/PR0068/0504). Several agencies were involved in this project, such as the Malaysian Nuclear Agency (MNA), the Malaysian Agricultural Research and Development Institute (MARDI), Universiti Putra Malaysia (UPM) and the Ministry of Agriculture (MOA). The project started in early 2004 with an approved IRPA fund of RM 275,000.00 for 3 years. The main objective of the project is to generate superior genotypes for minimal water requirements through induced mutation techniques. The cultivated rice Oryza sativa cv. MR219, treated with gamma radiation at 300 and 400 Gray, was used in the experiment. Two hundred grams of M2 seeds from each dose were screened under minimal water stress in a greenhouse at MARDI Seberang Perai. Five hundred panicles with well-filled grains were selected for paddy field screening under a simulated precise water-stress regime. Thirty-eight potential lines with the required adaptive traits were selected in M3. After several series of selection, 12 promising mutant lines were observed to be tolerant of minimal water stress, and two promising mutant lines, designated MR219-4 and MR219-9, were selected for further testing under several stress environments. (author)

  5. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    International Nuclear Information System (INIS)

    Choudhury, Sayantan

    2014-01-01

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales, which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for local type of non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence by using a multi-parameter scan I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for the speed of sound 0.02 ≤ c_s ≤ 1, and tensor to scalar ratio 10^−22 ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints, I fix the lower as well as the upper bound of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09

  6. Constraining N=1 supergravity inflation with non-minimal Kähler operators using δN formalism

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sayantan [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 108 (India)

    2014-04-15

    In this paper I provide a general framework based on the δN formalism to study the features of unavoidable higher dimensional non-renormalizable Kähler operators for N=1 supergravity (SUGRA) during primordial inflation from the combined constraint on non-Gaussianity, sound speed and CMB dipolar asymmetry as obtained from the recent Planck data. In particular I study the nonlinear evolution of cosmological perturbations on large scales, which enables us to compute the curvature perturbation, ζ, without solving the exact perturbed field equations. Further I compute the non-Gaussian parameters f_NL, τ_NL and g_NL for local type of non-Gaussianities and the CMB dipolar asymmetry parameter, A_CMB, using the δN formalism for a generic class of sub-Planckian models induced by the Hubble-induced corrections for a minimal supersymmetric D-flat direction where inflation occurs at the point of inflection within the visible sector. Hence by using a multi-parameter scan I constrain the non-minimal couplings appearing in non-renormalizable Kähler operators within O(1), for the speed of sound 0.02 ≤ c_s ≤ 1, and tensor to scalar ratio 10^−22 ≤ r_⋆ ≤ 0.12. Finally, applying all of these constraints, I fix the lower as well as the upper bound of the non-Gaussian parameters within O(1−5) ≤ f_NL ≤ 8.5, O(75−150) ≤ τ_NL ≤ 2800 and O(17.4−34.7) ≤ g_NL ≤ 648.2, and the CMB dipolar asymmetry parameter within the range 0.05 ≤ A_CMB ≤ 0.09.

  7. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  8. Constraining non-minimally coupled tachyon fields by the Noether symmetry

    International Nuclear Information System (INIS)

    De Souza, Rudinei C; Kremer, Gilberto M

    2009-01-01

    A model for a homogeneous and isotropic Universe whose gravitational sources are a pressureless matter field and a tachyon field non-minimally coupled to the gravitational field is analyzed. The Noether symmetry is used to find expressions for the potential density and for the coupling function, and it is shown that both must be exponential functions of the tachyon field. Two cosmological solutions are investigated: (i) for the early Universe whose only source of gravitational field is a non-minimally coupled tachyon field which behaves as an inflaton and leads to an exponential accelerated expansion and (ii) for the late Universe whose gravitational sources are a pressureless matter field and a non-minimally coupled tachyon field which plays the role of dark energy and is responsible for the decelerated-accelerated transition period.

  9. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  10. Remaining useful life prediction based on noisy condition monitoring signals using constrained Kalman filter

    International Nuclear Information System (INIS)

    Son, Junbo; Zhou, Shiyu; Sankavaram, Chaitanya; Du, Xinyu; Zhang, Yilu

    2016-01-01

    In this paper, a statistical prognostic method to predict the remaining useful life (RUL) of individual units based on noisy condition monitoring signals is proposed. The prediction accuracy of existing data-driven prognostic methods depends on the capability of accurately modeling the evolution of condition monitoring (CM) signals. Therefore, it is inevitable that the RUL prediction accuracy depends on the amount of random noise in the CM signals. When signals are contaminated by a large amount of random noise, RUL prediction even becomes infeasible in some cases. To mitigate this issue, a robust RUL prediction method based on a constrained Kalman filter is proposed. The proposed method models the CM signals subject to a set of inequality constraints so that satisfactory prediction accuracy can be achieved regardless of the noise level of the signal evolution. The advantageous features of the proposed RUL prediction method are demonstrated by both a numerical study and a case study with real-world data from automotive lead-acid batteries. - Highlights: • A computationally efficient constrained Kalman filter is proposed. • The proposed filter is integrated into an online failure prognosis framework. • A set of proper constraints significantly improves the failure prediction accuracy. • Promising results are reported in the application of battery failure prognosis.
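One common way to impose inequality constraints in Kalman filtering is to project the updated estimate onto the constraint set. The sketch below uses that generic device with an invented scalar degradation model and a monotonicity constraint (degradation never decreases); it is not the authors' specific formulation.

```python
import random

random.seed(1)

# Invented scalar degradation model: the true level drifts upward by 0.1
# per step and is observed through heavy measurement noise.
q, r = 0.01, 4.0                # process / measurement noise variances
x_true, x_est, p = 0.0, 0.0, 1.0
prev_est = 0.0
ests = []

for k in range(200):
    x_true += 0.1
    z = x_true + random.gauss(0.0, r ** 0.5)    # noisy CM signal

    # Standard scalar Kalman predict/update.
    p += q
    K = p / (p + r)
    x_est += K * (z - x_est)
    p *= 1 - K

    # Constraint step: degradation cannot decrease, so project the
    # updated estimate onto {x >= previous estimate}.
    x_est = max(x_est, prev_est)
    prev_est = x_est
    ests.append(x_est)

print(ests[-1])   # a monotone, smoothed estimate of the degradation level
```

Because the random-walk model omits the drift, the estimate lags the true level somewhat; the point here is only the projection step, which keeps every estimate feasible at negligible extra cost.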

  11. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    Energy Technology Data Exchange (ETDEWEB)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
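
    The core linear-algebra step, finding the null space of a generator, can be illustrated on a toy continuous-time Markov chain; the 3-state rate matrix below is an arbitrary example invented for this sketch, not taken from the paper.

```python
import numpy as np

# Generator (rate matrix) of an irreducible 3-state CTMC: rows sum to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  2.0, -4.0]])

# The stationary distribution pi satisfies pi Q = 0, i.e. pi^T spans the
# null space of Q^T; recover it from the smallest right-singular vector.
_, _, Vt = np.linalg.svd(Q.T)
pi = Vt[-1]
pi = pi / pi.sum()   # normalise to a probability vector (also fixes the sign)
```

    In the multiscale setting, the same computation is applied to the much smaller constrained-system generator, which is what makes the iterative strategy tractable.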

  12. Right-Left Approach and Reaching Arm Movements of 4-Month Infants in Free and Constrained Conditions

    Science.gov (United States)

    Morange-Majoux, Francoise; Dellatolas, Georges

    2010-01-01

    Recent theories on the evolution of language (e.g. Corballis, 2009) emphasize the significance of early manifestations of manual laterality and manual specialization in human infants. In the present study, left- and right-hand movements towards a midline object were observed in 24 infants aged 4 months in a constrained condition, in which the hands…

  13. Storage of RF photons in minimal conditions

    Science.gov (United States)

    Cromières, J.-P.; Chanelière, T.

    2018-02-01

    We investigate the minimal conditions needed to coherently store an RF pulse in a material medium. We choose a commercial quartz crystal as the memory support because it is a widely available component with a high Q-factor. Pulse storage is obtained by dynamically varying the light-matter coupling with an analog switch. This parametric driving of the quartz dynamics can alternatively be interpreted as a stopped-light experiment. We obtain an efficiency of 26%, a storage time of 209 μs and a time-to-bandwidth product of 98 by optimizing the pulse temporal shape. The coherent character of the storage is demonstrated. Our goal is to connect different types of memories in the RF and optical domains for quantum information processing. Our motivation is essentially fundamental.

  14. Minimizing the ill-conditioning in the analysis by gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Halisson Alberdan C.; Melo, Silvio de Barros; Dantas, Carlos; Lima, Emerson Alexandre; Silva, Ricardo Martins; Moreira, Icaro Valgueiro M., E-mail: hacc@cin.ufpe.br, E-mail: sbm@cin.ufpe.br, E-mail: rmas@cin.ufpe.br, E-mail: ivmm@cin.ufpe.br, E-mail: ccd@ufpe.br, E-mail: eal@cin.ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Meric, Ilker, E-mail: lker.Meric@ift.uib.no [University Of Bergen (Norway)

    2015-07-01

    A non-invasive method which can be employed for elemental analysis is Prompt-Gamma Neutron Activation Analysis. The aim is to estimate the mass fractions of the different constituent elements present in the unknown sample, basing the estimates on the energies of all the photopeaks in their spectra. Two difficulties arise in this approach: the constituents are unknown, and the composed spectrum of the unknown sample is a nonlinear combination of the spectra of its constituents (which are called libraries). An iterative method that has become popular is Monte Carlo Library Least Squares. One limitation of this method is that the amount of noise present in the spectra is not negligible, and the magnitude differences in the photon counting produce bad conditioning in the covariance matrix employed by the least-squares method, affecting the numerical stability of the method. A method for minimizing the numerical instability provoked by noisy spectra is proposed. Corresponding parts of different spectra are selected so as to minimize the condition number of the resulting covariance matrix. This idea is supported by the assumption that the unknown spectrum is a linear combination of its constituents' spectra, and by the fact that the number of constituents is small (typically five of them). The selection of spectrum parts is done through Greedy Randomized Adaptive Search Procedures, where the cost function is the condition number that derives from the covariance matrix produced out of the selected parts. A QR factorization is also applied to the final covariance matrix to reduce its condition number further, transferring part of its bad conditioning to the basis conversion matrix. (author)
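
    A randomized stand-in for the selection step can be sketched as follows: repeatedly draw candidate channel subsets and keep the one whose covariance matrix has the smallest condition number. The function name, the plain random sampling (instead of a full greedy-randomized construction with local search) and the matrix sizes are all assumptions for illustration, not the authors' code.

```python
import numpy as np

def select_channels(spectra, k, n_trials=200, seed=0):
    """Randomized search for k spectrum channels whose covariance matrix
    spectra[idx].T @ spectra[idx] has the smallest condition number.
    `spectra` is an (n_channels, n_constituents) library matrix; this is
    a simplified stand-in for the GRASP procedure in the abstract."""
    rng = np.random.default_rng(seed)
    n_channels = spectra.shape[0]
    best_idx, best_cond = None, np.inf
    for _ in range(n_trials):
        idx = np.sort(rng.choice(n_channels, size=k, replace=False))
        cond = np.linalg.cond(spectra[idx].T @ spectra[idx])
        if cond < best_cond:
            best_idx, best_cond = idx, cond
    return best_idx, best_cond
```

    The QR factorization mentioned in the abstract would then be applied to the covariance matrix built from the selected channels.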

  15. Minimizing the ill-conditioning in the analysis by gamma radiation

    International Nuclear Information System (INIS)

    Cardoso, Halisson Alberdan C.; Melo, Silvio de Barros; Dantas, Carlos; Lima, Emerson Alexandre; Silva, Ricardo Martins; Moreira, Icaro Valgueiro M.; Meric, Ilker

    2015-01-01

    A non-invasive method which can be employed for elemental analysis is Prompt-Gamma Neutron Activation Analysis. The aim is to estimate the mass fractions of the different constituent elements present in the unknown sample, basing the estimates on the energies of all the photopeaks in their spectra. Two difficulties arise in this approach: the constituents are unknown, and the composed spectrum of the unknown sample is a nonlinear combination of the spectra of its constituents (which are called libraries). An iterative method that has become popular is Monte Carlo Library Least Squares. One limitation of this method is that the amount of noise present in the spectra is not negligible, and the magnitude differences in the photon counting produce bad conditioning in the covariance matrix employed by the least-squares method, affecting the numerical stability of the method. A method for minimizing the numerical instability provoked by noisy spectra is proposed. Corresponding parts of different spectra are selected so as to minimize the condition number of the resulting covariance matrix. This idea is supported by the assumption that the unknown spectrum is a linear combination of its constituents' spectra, and by the fact that the number of constituents is small (typically five of them). The selection of spectrum parts is done through Greedy Randomized Adaptive Search Procedures, where the cost function is the condition number that derives from the covariance matrix produced out of the selected parts. A QR factorization is also applied to the final covariance matrix to reduce its condition number further, transferring part of its bad conditioning to the basis conversion matrix. (author)

  16. Strain development in a filled epoxy resin curing under constrained and unconstrained conditions as assessed by Fibre Bragg Grating sensors

    Directory of Open Access Journals (Sweden)

    2007-04-01

    Full Text Available The influence of adhesion to the mould wall on the released strain of a highly filled anhydride-cured epoxy resin (EP), which was hardened in an aluminium mould under constrained and unconstrained conditions, was investigated. The shrinkage-induced strain was measured by a fibre-optic sensing technique. Fibre Bragg Grating (FBG) sensors were embedded into the curing EP placed in a cylindrical mould cavity. The cure-induced strain signals were detected in both vertical and horizontal directions during isothermal curing at 75 °C for 1000 minutes. A large difference in the strain signal between the two directions could be detected for the different adhesion conditions. Under the non-adhering condition, the horizontal and vertical strain-time traces were practically identical, resulting in a final compressive strain of about 3200 ppm, which demonstrates free, isotropic shrinkage. Under the constrained condition, however, the horizontal shrinkage in the EP was prevented by its adhesion to the mould wall, so the curing material shrank preferentially in the vertical direction. This resulted in much higher released compressive strain signals in the vertical (10430 ppm) than in the horizontal (2230 ppm) direction. The constrained cured EP resins are under internal stresses. Qualitative information on the residual stress state in the moulding was deduced by exploiting the birefringence of the EP.

  17. Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities

    International Nuclear Information System (INIS)

    Flemming, Jens; Hofmann, Bernd

    2011-01-01

    In this paper, we elucidate the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions already known in Hilbert spaces to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances

  18. Analyses of an air conditioning system with entropy generation minimization and entransy theory

    International Nuclear Information System (INIS)

    Wu Yan-Qiu; Cai Li; Wu Hong-Juan

    2016-01-01

    In this paper, based on the generalized heat transfer law, an air conditioning system is analyzed with entropy generation minimization and the entransy theory. Taking the coefficient of performance (denoted as COP) and the heat flow rate Q_out released into the room as the optimization objectives, we discuss the applicability of entropy generation minimization and entransy theory to these optimizations. Five numerical cases are presented. Combining the numerical results and theoretical analyses, we conclude that the applicability of the two theories to optimization is conditional. If Q_out is the optimization objective, a larger entransy increase rate always leads to a larger Q_out, while a smaller entropy generation rate does not. If we take the COP as the optimization objective, neither entropy generation minimization nor the concept of entransy increase is always applicable. Furthermore, we find that the concept of entransy dissipation is not applicable for the discussed cases. (paper)
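
    The two competing objective quantities can be computed from textbook definitions for a steady heat flow Q passing from a hot reservoir at T_h to a cold one at T_c; these generic formulas are illustrative only and are not the paper's air-conditioning model.

```python
def entropy_generation(Q, T_h, T_c):
    """Entropy generation rate Q*(1/T_c - 1/T_h) for steady heat
    transfer, in W/K (textbook definition)."""
    return Q * (1.0 / T_c - 1.0 / T_h)

def entransy_dissipation(Q, T_h, T_c):
    """Entransy dissipation rate Q*(T_h - T_c), in W*K
    (textbook definition)."""
    return Q * (T_h - T_c)
```

    Both quantities are positive whenever heat flows down the temperature gradient, but they rank design alternatives differently, which is why the abstract finds their applicability to be conditional.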

  19. Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison

    Science.gov (United States)

    Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.

    1996-11-01

    Constrained energy minimization (CEM) has been applied to the mapping of the quantitative areal distribution of the mineral alunite in an approximately 1.8 km² area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping which requires only the spectrum of the mineral to be mapped; a priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single-scattering albedo (SSA) spectra. The radiance data were acquired by the 210-channel, 0.4 μm to 2.5 μm airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid, as surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures, and thus spectral mixing analyses must take account of nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The mineral maps produced from the SSA and AR spectra show similar, though not identical, occurrences of alunite. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
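
    The CEM filter itself is compact: for a target spectrum t and sample correlation matrix R, the filter w = R⁻¹t / (tᵀR⁻¹t) minimizes the average output energy subject to the unit-gain constraint wᵀt = 1. A minimal sketch on synthetic data (no calibration, atmospheric correction or AR/SSA conversion):

```python
import numpy as np

def cem_filter(X, t):
    """Constrained energy minimization.  X: (n_pixels, n_bands) data,
    t: (n_bands,) target spectrum.  The filter w = R^{-1} t / (t' R^{-1} t),
    with R the sample correlation matrix, minimizes the average output
    energy subject to the unit-gain constraint w' t = 1."""
    R = X.T @ X / X.shape[0]          # band correlation matrix
    Rinv_t = np.linalg.solve(R, t)
    w = Rinv_t / (t @ Rinv_t)
    return X @ w                      # per-pixel abundance-like scores
```

    By construction, a pixel whose spectrum equals the target scores exactly 1, while uncorrelated background is suppressed; this is why no background library is needed.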

  20. Using finite element modelling to examine the flow process and temperature evolution in HPT under different constraining conditions

    International Nuclear Information System (INIS)

    Pereira, P H R; Langdon, T G; Figueiredo, R B; Cetlin, P R

    2014-01-01

    High-pressure torsion (HPT) is a metal-working technique used to impose severe plastic deformation on disc-shaped samples under high hydrostatic pressures. Different HPT facilities have been developed, and they may be divided into three distinct categories depending upon the configuration of the anvils and the restriction imposed on the lateral flow of the samples. In the present paper, finite element simulations were performed to compare the flow process and the temperature, strain and hydrostatic stress distributions under unconstrained, quasi-constrained and constrained conditions. It is shown that there are distinct strain distributions in the samples depending on the facility configuration, and a similar trend in the temperature rise of the HPT workpieces

  1. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby extracting spatial correlation information. Spectral information and spatial correlation information are considered simultaneously when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of each pixel's posterior classification probability can be obtained using loopy belief propagation. The experiments show that the proposed algorithm can effectively maintain the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.
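
    For reference, the unmodified baseline, plain fuzzy c-means, looks as follows. The CRF constraint from adjacent pixels is deliberately omitted, and the simple deterministic initialization is an assumption of this sketch.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=50):
    """Plain fuzzy c-means -- the unconstrained baseline the abstract
    modifies.  Standard alternating updates: memberships from distances,
    centroids from membership-weighted means.  The evenly spaced
    initialization below is a simplifying assumption."""
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    U = np.ones((X.shape[0], c)) / c
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))         # u_ik ~ d_ik^(-2/(m-1))
        U = U / U.sum(axis=1, keepdims=True)
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers
```

    The modification in the abstract replaces the purely spectral distance term with an energy that also scores the labels of neighbouring pixels, so that spatially coherent regions are favoured.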

  2. Minimization for conditional simulation: Relationship to optimal transport

    Science.gov (United States)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
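
    In the linear-Gaussian case discussed above, the minimizer of the randomized-maximum-likelihood cost has a closed form, so the mapping from prior draws to posterior samples can be written directly. The model sizes and parameter values below are illustrative assumptions.

```python
import numpy as np

def rml_samples(mu, C, H, R, y, n, rng):
    """Randomized maximum likelihood for a linear-Gaussian model.  Each
    sample is the minimizer over x of
        ||x - x_pr||^2_{C^-1} + ||y_pert - H x||^2_{R^-1},
    with x_pr a prior draw and y_pert a perturbed observation.  In this
    linear case the minimizer is the Kalman-type update below, and the
    samples are exact posterior draws."""
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
    samples = []
    for _ in range(n):
        x_pr = rng.multivariate_normal(mu, C)     # prior draw
        y_pert = rng.multivariate_normal(y, R)    # perturbed observation
        samples.append(x_pr + K @ (y_pert - H @ x_pr))
    return np.array(samples)
```

    With a nonlinear observation operator this closed form disappears, each sample requires a numerical minimization, and, as the abstract notes, the resulting mapping is only approximately a posterior sampler.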

  3. Minimally allowed neutrinoless double beta decay rates within an anarchical framework

    International Nuclear Information System (INIS)

    Jenkins, James

    2009-01-01

    Neutrinoless double beta decay (ββ0ν) is the only realistic probe of the Majorana nature of the neutrino. In the standard picture, its rate is proportional to m_ee, the e-e element of the Majorana neutrino mass matrix in the flavor basis. I explore minimally allowed m_ee values within the framework of mass matrix anarchy, where neutrino parameters are defined statistically at low energies. Distributions of mixing angles are well defined by the Haar integration measure, but masses are dependent on arbitrary weighting functions and boundary conditions. I survey the integration measure parameter space and find that for sufficiently convergent weightings, m_ee is constrained between (0.01-0.4) eV at 90% confidence. Constraints from neutrino mixing data lower these bounds. Singular integration measures allow for arbitrarily small m_ee values with the remaining elements ill-defined, but this condition constrains the flavor structure of the model's ultraviolet completion. ββ0ν bounds below m_ee ∼ 5×10⁻³ eV should indicate symmetry in the lepton sector, new light degrees of freedom, or the Dirac nature of the neutrino.
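
    The quantity being bounded can be computed directly from the standard parameterization of the first row of the mixing matrix; the helper below is a generic textbook formula for the effective Majorana mass, not the paper's statistical machinery, and the numerical inputs in the usage check are arbitrary.

```python
import numpy as np

def m_ee(masses, theta12, theta13, alpha, beta):
    """Effective Majorana mass |sum_i U_ei^2 m_i| entering the ββ0ν rate.
    Standard parameterization of the first row of the mixing matrix:
    U_e1 = c12*c13, U_e2 = s12*c13, U_e3 = s13, with Majorana phases
    alpha, beta attached to m_2 and m_3."""
    c12, s12 = np.cos(theta12), np.sin(theta12)
    c13, s13 = np.cos(theta13), np.sin(theta13)
    total = ((c12 * c13) ** 2 * masses[0]
             + (s12 * c13) ** 2 * masses[1] * np.exp(1j * alpha)
             + s13 ** 2 * masses[2] * np.exp(1j * beta))
    return float(abs(total))
```

    Because the three |U_ei|² weights sum to one, degenerate masses with vanishing phases give m_ee equal to the common mass, while nontrivial Majorana phases drive the cancellations that make very small m_ee possible.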

  4. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem; Markowich, Peter A.; Trabelsi, Saber

    2014-01-01

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges

  5. Coherent states in constrained systems

    International Nuclear Information System (INIS)

    Nakamura, M.; Kojima, K.

    2001-01-01

    When quantizing constrained systems, quantum corrections often arise due to the non-commutativity encountered when re-ordering constraint operators in products of operators. For bosonic second-class constraints, furthermore, the quantum corrections caused by the uncertainty principle should be taken into account. In order to treat these corrections simultaneously, an alternative projection technique for operators is proposed by introducing the available minimal uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections are investigated

  6. Minimal modification to tribimaximal mixing

    International Nuclear Information System (INIS)

    He Xiaogang; Zee, A.

    2011-01-01

    We explore some ways of minimally modifying the neutrino mixing matrix from tribimaximal, characterized by introducing at most one mixing angle and a CP-violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group-theoretic considerations, is a simple case with the elements V_α2 of the second column in the mixing matrix equal to 1/√3. Modifications keeping one of the columns or one of the rows unchanged from tribimaximal mixing all belong to this class of minimal modification. Some of the cases have interesting, experimentally testable consequences. In particular, the T2K and MINOS collaborations have recently reported indications of a nonzero θ_13. For the cases we consider, the new data sharply constrain the CP-violating phase angle δ, with δ close to 0 (in some cases) and π disfavored.

  7. Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization

    International Nuclear Information System (INIS)

    Zhang Xiaomeng; Wang Jing; Xing Lei

    2011-01-01

    Purpose: The streak artifacts caused by metal implants have long been recognized as a problem that limits various applications of CT imaging. In this work, the authors propose an iterative metal artifact reduction algorithm based on constrained optimization. Methods: After the shape and location of metal objects in the image domain are determined automatically by the binary metal identification algorithm and the "metal shadows" are segmented in the projection domain, constrained optimization is used for image reconstruction. It minimizes a predefined function that reflects a priori knowledge of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available metal-shadow-excluded projection data, with image non-negativity enforced. The minimization problem is solved through the alternation of projection-onto-convex-sets and steepest gradient descent of the objective function. The constrained optimization algorithm is evaluated with a penalized smoothness objective. Results: The study shows that the proposed method is capable of significantly reducing metal artifacts, suppressing noise, and improving soft-tissue visibility. It outperforms FBP-type methods as well as ART and EM methods, and yields artifact-free images. Conclusions: Constrained optimization is an effective way to deal with CT reconstruction with embedded metal objects. Although the method is presented in the context of metal artifacts, it is applicable to general "missing data" image reconstruction problems.
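
    The alternation of a descent step on the smoothness objective with projections onto the constraint sets can be sketched on a 1-D toy problem. The operators, sizes and step size below are assumptions of this sketch; a real CT implementation would use the full projection geometry and a tolerance band rather than exact data consistency.

```python
import numpy as np

def pocs_recon(A, b, n, n_iter=500, step=0.1):
    """Toy analogue of the constrained-optimization reconstruction:
    steepest descent on a smoothness penalty ||D x||^2 alternated with
    projection onto non-negativity and onto the affine data set
    {x : A x = b} (standing in for the metal-shadow-excluded data)."""
    D = np.diff(np.eye(n), axis=0)       # 1-D finite-difference operator
    A_pinv = np.linalg.pinv(A)
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x - step * (D.T @ (D @ x))   # descend the smoothness objective
        x = np.maximum(x, 0.0)           # enforce non-negativity
        x = x + A_pinv @ (b - A @ x)     # project onto the data constraint
    return x
```

    Among all images consistent with the retained (non-shadowed) data, the iteration drifts toward the smoothest non-negative one, which is the mechanism that fills in the missing metal-shadow region without streaks.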

  8. Minimal conditions for the existence of a Hawking-like flux

    International Nuclear Information System (INIS)

    Barcelo, Carlos; Liberati, Stefano; Sonego, Sebastiano; Visser, Matt

    2011-01-01

    We investigate the minimal conditions that an asymptotically flat general relativistic spacetime must satisfy in order for a Hawking-like Planckian flux of particles to arrive at future null infinity. We demonstrate that there is no requirement that any sort of horizon form anywhere in the spacetime. We find that the irreducible core requirement is encoded in an approximately exponential 'peeling' relationship between affine coordinates on past and future null infinity. As long as a suitable adiabaticity condition holds, then a Planck-distributed Hawking-like flux will arrive at future null infinity with temperature determined by the e-folding properties of the outgoing null geodesics. The temperature of the Hawking-like flux can slowly evolve as a function of time. We also show that the notion of peeling of null geodesics is distinct from the usual notion of 'inaffinity' used in Hawking's definition of surface gravity.

  9. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.

  10. HIFU scattering by the ribs: constrained optimisation with a complex surface impedance boundary condition

    Science.gov (United States)

    Gélat, P.; ter Haar, G.; Saffari, N.

    2014-04-01

    High intensity focused ultrasound (HIFU) enables highly localised, non-invasive tissue ablation, and its efficacy has been demonstrated in the treatment of a range of cancers, including those of the kidney, prostate and breast. HIFU offers the ability to treat deep-seated tumours locally, and potentially bears fewer side effects than more established treatment modalities such as resection, chemotherapy and ionising radiation. There remain, however, a number of significant challenges which currently hinder its widespread clinical application. One of these challenges is the need to transmit sufficient energy through the ribcage to ablate tissue at the required foci whilst minimising the formation of side lobes and sparing healthy tissue. Ribs both absorb and reflect ultrasound strongly. This sometimes results in overheating of bone and overlying tissue during treatment, leading to skin burns. Successful treatment of a patient with tumours in the upper abdomen therefore requires a thorough understanding of the way acoustic and thermal energy is deposited. Previously, a boundary element (BE) approach based on a Generalised Minimal Residual (GMRES) implementation of the Burton-Miller formulation was developed to predict the field of a multi-element HIFU array scattered by human ribs, the topology of which was obtained from CT scan data [1]. Dissipative mechanisms inside the propagating medium have since been implemented, together with a complex surface impedance condition at the surface of the ribs. The boundary element equations were reformulated as a constrained optimisation problem to determine the complex surface velocities of a multi-element HIFU array which generated the acoustic pressure field that best fitted a required acoustic pressure distribution in a least-squares sense, whilst ensuring that an acoustic dose rate parameter at the surface of the ribs was kept below a specified threshold. The methodology was tested at an

  11. Wormholes minimally violating the null energy condition

    Energy Technology Data Exchange (ETDEWEB)

    Bouhmadi-López, Mariam [Departamento de Física, Universidade da Beira Interior, 6200 Covilhã (Portugal); Lobo, Francisco S N; Martín-Moruno, Prado, E-mail: mariam.bouhmadi@ehu.es, E-mail: fslobo@fc.ul.pt, E-mail: pmmoruno@fc.ul.pt [Centro de Astronomia e Astrofísica da Universidade de Lisboa, Campo Grande, Edifício C8, 1749-016 Lisboa (Portugal)

    2014-11-01

    We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted "the little sibling of the big rip", where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyse the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalization of the equation of state considered above and analyse the respective stability regions. In particular, we obtain a specific wormhole solution with an asymptotic behaviour corresponding to a global monopole.

  12. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate… saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with the 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range… based linear prediction for modeling and coding of speech.
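
    The underlying 1-norm criterion can be minimized exactly as a linear program. The sketch below is a generic least-absolute-deviation solver for the prediction coefficients; the stability-enforcing constraints that are the paper's contribution are not included.

```python
import numpy as np
from scipy.optimize import linprog

def lad_lpc(x, p):
    """Order-p linear-prediction coefficients under the 1-norm error
    criterion, posed as a linear program: minimize sum(u) over [a, u]
    subject to |b - A a| <= u elementwise."""
    n = len(x) - p
    # A[i, k] = x[(p + i) - (k + 1)], so row i predicts sample x[p + i]
    A = np.column_stack([x[p - k - 1 : p - k - 1 + n] for k in range(p)])
    b = x[p:]
    c = np.concatenate([np.zeros(p), np.ones(n)])        # minimize sum(u)
    A_ub = np.block([[-A, -np.eye(n)], [A, -np.eye(n)]])  # +/- residual <= u
    b_ub = np.concatenate([-b, b])
    bounds = [(None, None)] * p + [(0, None)] * n         # a free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

    Stability could then be imposed on top of this, e.g. by adding constraints that keep the predictor's roots inside the unit circle, which is the direction the paper's first method takes.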

  13. Learning a constrained conditional random field for enhanced segmentation of fallen trees in ALS point clouds

    Science.gov (United States)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2018-06-01

    In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.

  14. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
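For finite state and action spaces, the linear programming formulation the authors study reduces to the standard occupation-measure LP, which can be sketched on a toy two-state MDP (all transition probabilities, costs, and the budget below are made up; the paper's setting is Borel spaces, where this finite LP is only an analogue).

```python
import numpy as np
from scipy.optimize import linprog

# Toy constrained discounted MDP: 2 states, 2 actions (stay=0, switch=1).
gamma = 0.9
mu = np.array([0.5, 0.5])                 # initial distribution
P = np.zeros((2, 2, 2))                   # P[s, a, s'] transition kernel
P[0, 0, 0] = P[1, 0, 1] = 1.0             # action 0: stay
P[0, 1, 1] = P[1, 1, 0] = 1.0             # action 1: switch
c = np.array([[1.0, 0.5], [2.0, 0.5]])    # c[s, a]: cost to minimize
d = np.array([[0.0, 1.0], [0.0, 1.0]])    # d[s, a]: constrained cost
budget = 3.0                              # bound on discounted constrained cost

# LP over occupation measures y[s, a] >= 0:
#   minimize  sum c[s,a] y[s,a]
#   s.t.      sum_a y[s',a] - gamma * sum_{s,a} P[s,a,s'] y[s,a] = mu[s']
#             sum d[s,a] y[s,a] <= budget
idx = [(s, a) for s in range(2) for a in range(2)]
A_eq = np.zeros((2, 4))
for j, (s, a) in enumerate(idx):
    for sp in range(2):
        A_eq[sp, j] = (sp == s) - gamma * P[s, a, sp]
res = linprog(c=[c[s, a] for s, a in idx],
              A_ub=[[d[s, a] for s, a in idx]], b_ub=[budget],
              A_eq=A_eq, b_eq=mu, bounds=[(0, None)] * 4)
y = res.x
```

Summing the flow-balance equalities shows that any feasible occupation measure has total mass 1/(1-gamma), a useful sanity check; a constrained optimal stationary policy is recovered by normalizing y[s, :] per state.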

  15. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

In this paper, we analyze the minimization of seminorms ‖L·‖ on R^n under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)

  16. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
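The update the abstract analyzes is plain SGD with a fixed stepsize plus a momentum term. A minimal sketch on a made-up scalar least-squares problem (data, stepsize, and momentum coefficient are all illustrative, not from the paper):

```python
import random

# Toy expected loss f(x) = E_i[(a_i * x - b_i)^2 / 2]; every data pair is
# consistent with x* = 2, i.e. the interpolation regime where the stochastic
# gradient vanishes at the optimum.
random.seed(0)
data = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]

def stochastic_heavy_ball(steps=2000, lr=0.1, beta=0.5):
    x, x_prev = 0.0, 0.0
    for _ in range(steps):
        a, b = random.choice(data)            # sample a data point
        grad = a * (a * x - b)                # stochastic gradient
        x_new = x - lr * grad + beta * (x - x_prev)  # heavy ball update
        x_prev, x = x, x_new
    return x

x_hat = stochastic_heavy_ball()
```

With a fixed stepsize and momentum the iterates contract geometrically toward the minimizer on this quadratic, which is the kind of linear convergence the paper establishes.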

  17. Constraining reconnection region conditions using imaging and spectroscopic analysis of a coronal jet

    Science.gov (United States)

    Brannon, Sean; Kankelborg, Charles

    2017-08-01

Coronal jets typically appear as thin, collimated structures in EUV and X-ray wavelengths, and are understood to be initiated by magnetic reconnection in the lower corona or upper chromosphere. Plasma that is heated and accelerated upward into coronal jets may therefore carry indirect information on conditions in the reconnection region and current sheet located at the jet base. On 2017 October 14, the Interface Region Imaging Spectrograph (IRIS) and Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO/AIA) observed a series of jet eruptions originating from NOAA AR 12599. The jet structure has a length-to-width ratio that exceeds 50, and remains remarkably straight throughout its evolution. Several times during the observation, bright blobs of plasma are seen to erupt upward, ascending and subsequently descending along the structure. These blobs are cotemporal with footpoint and arcade brightenings, which we believe indicates multiple episodes of reconnection at the base of the structure. Through imaging and spectroscopic analysis of jet and footpoint plasma we determine a number of properties, including the line-of-sight inclination, the temperature and density structure, and the lift-off velocities and accelerations of jet eruptions. We use these properties to constrain the geometry of the jet structure and the conditions in the reconnection region.

  18. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    Hamel, D.; Mensah, S.; Boisvert, J.

    1984-03-01

The concept of pole placement plays an important role in linear multivariable control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical engineering constraints, such as gain limitation and controller structure, to be introduced directly into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function, which must be minimized for the closed-loop system. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an interactive algorithmic module in a computer-aided control system design package, MVPACK. The application of the method is illustrated by designing controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure on the overall performance of a control system.
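The idea of trading exact pole placement for a constrained minimization can be sketched on a tiny output-feedback example (the system matrices, desired poles, and gain bounds below are invented, and a simple grid search stands in for whatever optimization algorithm the package uses): the scalar gain is confined to an admissible interval and a cost measuring the distance between closed-loop and desired poles is minimized over that region.

```python
import numpy as np

# Hypothetical second-order plant with scalar output feedback u = -k y.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
desired = np.array([-1.5, -1.5])          # target closed-loop poles
k_min, k_max = 0.0, 10.0                  # explicit gain-limit constraint

def pole_cost(k):
    """Distance between closed-loop poles of A - k*B*C and the desired set."""
    poles = np.sort_complex(np.linalg.eigvals(A - k * B @ C))
    return float(np.sum(np.abs(poles - np.sort_complex(desired))))

grid = np.linspace(k_min, k_max, 1001)    # admissible region, discretized
k_best = min(grid, key=pole_cost)
```

Here the closed-loop characteristic polynomial is s^2 + 3s + (2 + k), so the double pole at -1.5 is attainable exactly at k = 0.25, well inside the gain limits; when the target is unreachable, the same cost still yields the best admissible compromise.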

  19. Optimizing cutting conditions on sustainable machining of aluminum alloy to minimize power consumption

    Science.gov (United States)

    Nur, Rusdi; Suyuti, Muhammad Arsyad; Susanto, Tri Agus

    2017-06-01

Aluminum is widely utilized in the industrial sector. It offers several advantages: good flexibility and formability, high corrosion resistance, and high electrical and thermal conductivity. Despite these characteristics, however, pure aluminum is rarely used because it lacks strength. Thus, most of the aluminum used in the industrial sectors is in alloy form. Sustainable machining can be considered to link the transformation of input materials and energy/power demand into finished goods. Machining processes are responsible for environmental effects owing to their power consumption. The cutting conditions have been optimized to minimize the cutting power, which is the power consumed for cutting. This paper presents an experimental study of sustainable machining of an Al-11%Si base alloy that was operated without any cooling system to assess the capacity to reduce power consumption. The cutting force was measured and the cutting power was calculated. Both cutting force and cutting power were analyzed and modeled using a central composite design (CCD). The results of this study indicate that the cutting speed has an effect on machining performance and that optimum cutting conditions have to be determined, while sustainable machining can be pursued in terms of minimizing power consumption and cutting force. The model developed in this study can be used for process evaluation and optimization to determine optimal cutting conditions for the performance of the whole process.

  20. Multivariable controller for discrete stochastic amplitude-constrained systems

    Directory of Open Access Journals (Sweden)

    Hannu T. Toivonen

    1983-04-01

Full Text Available A sub-optimal multivariable controller for discrete stochastic amplitude-constrained systems is presented. In the approach the regulator structure is restricted to the class of linear saturated feedback laws. The stationary covariances of the controlled system are evaluated by approximating the stationary probability distribution of the state by a Gaussian distribution. An algorithm for minimizing a quadratic loss function is given, and examples are presented to illustrate the performance of the sub-optimal controller.
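A scalar caricature of the setting (all numbers invented): a linear gain saturated at an amplitude limit, with the stationary variance of the controlled state estimated by simulation rather than by the Gaussian approximation used in the paper.

```python
import random

random.seed(1)
a, b, sigma = 0.9, 1.0, 1.0     # x_{t+1} = a x_t + b u_t + w_t,  w_t ~ N(0, sigma^2)
k, u_max = 0.8, 1.5             # saturated linear feedback u = sat(-k x)

def sat(u, limit):
    """Amplitude constraint on the control signal."""
    return max(-limit, min(limit, u))

def stationary_variance(feedback, steps=200_000, burn_in=1000):
    """Monte Carlo estimate of the stationary state variance."""
    x, acc, n = 0.0, 0.0, 0
    for t in range(steps):
        x = a * x + b * feedback(x) + random.gauss(0.0, sigma)
        if t >= burn_in:
            acc += x * x
            n += 1
    return acc / n

var_open = stationary_variance(lambda x: 0.0)            # no control
var_closed = stationary_variance(lambda x: sat(-k * x, u_max))
```

For the uncontrolled system the stationary variance is sigma^2 / (1 - a^2), which the simulation should reproduce; the saturated feedback reduces it substantially, mirroring the covariance evaluation the abstract describes.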

  1. Bulk diffusion in a kinetically constrained lattice gas

    Science.gov (United States)

    Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone

    2018-03-01

    In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.

  2. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

Full Text Available In this paper, a sensitive constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain state estimation. In this approach, sensitive buses along with zero injection buses (ZIB) are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are identified from the mean of bus voltages as load is increased consistently by up to 50%. Sensitive buses are ranked in order to place PMUs. Sensitive constrained optimal PMU allocation for the cases of single-line contingency and no line contingency is considered in the observability analysis to ensure protection and control of the power system against abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMU network allocations. This paper presents optimal allocation of PMUs at sensitive buses with zero injection modeling, considering cost criteria and redundancy to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on IEEE 14, 30 and 57 bus systems and the results obtained are compared with traditional and other state estimation methods available in the literature, to demonstrate the effectiveness of the proposed method.
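The core covering structure of PMU placement can be shown on a made-up 7-bus toy network (the paper's formulation additionally ranks sensitive buses, models ZIBs, and handles contingencies): a bus is observable if it hosts a PMU or neighbors one, and we seek the smallest PMU set making every bus observable, here by exhaustive search instead of integer linear programming.

```python
from itertools import combinations

# Hypothetical 7-bus topology: a path 0-1-2-3-4-5-6 with one extra tie line.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 4)]
n_bus = 7
neigh = {b: {b} for b in range(n_bus)}
for u, v in edges:
    neigh[u].add(v)
    neigh[v].add(u)

def observable(pmus):
    """Full observability: every bus is a PMU bus or adjacent to one."""
    covered = set()
    for p in pmus:
        covered |= neigh[p]
    return len(covered) == n_bus

def min_pmu_set():
    """Smallest observable PMU set by brute force (fine for tiny networks)."""
    for size in range(1, n_bus + 1):
        for combo in combinations(range(n_bus), size):
            if observable(combo):
                return combo
    return tuple(range(n_bus))

best = min_pmu_set()
```

On this topology no two PMUs suffice, and three (e.g. buses 0, 2 and 5) cover the whole network; real formulations solve the same covering problem as an ILP with additional constraints.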

  3. Influence of boundary conditions on the existence and stability of minimal surfaces of revolution made of soap films

    Science.gov (United States)

    Salkin, Louis; Schmit, Alexandre; Panizza, Pascal; Courbin, Laurent

    2014-09-01

Because of surface tension, soap films seek the shape that minimizes their surface energy and thus their surface area. This mathematical postulate allows one to predict the existence and stability of simple minimal surfaces. After briefly recalling classical results obtained in the case of symmetric catenoids that span two circular rings with the same radius, we discuss the role of boundary conditions on such shapes, working with two rings having different radii. We then investigate the conditions of existence and stability of other shapes that include two portions of catenoids connected by a planar soap film and half-symmetric catenoids for which we introduce a method of observation. We report a variety of experimental results including metastability—a hysteretic evolution of the shape taken by a soap film—explained using simple physical arguments. Working by analogy with the theory of phase transitions, we conclude by discussing universal behaviors of the studied minimal surfaces in the vicinity of their existence thresholds.
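The classical existence threshold for the symmetric catenoid can be checked numerically. Writing the profile as r(z) = c·cosh(z/c) for two equal rings of radius R separated by h, the boundary condition R = c·cosh(h/(2c)) with u = h/(2c) gives h/R = 2u/cosh(u), which is maximal where u·tanh(u) = 1; beyond that separation no catenoid exists and the film collapses to two disks.

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection root finder for a sign-changing function on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Critical point of h/R = 2u/cosh(u): solve u*tanh(u) = 1.
u_star = bisect(lambda u: u * math.tanh(u) - 1.0, 0.5, 2.0)
ratio_max = 2.0 * u_star / math.cosh(u_star)   # maximal separation h/R
```

This reproduces the well-known critical aspect ratio h/R ≈ 1.3255 for equal rings; the paper's asymmetric and half-symmetric configurations shift this threshold, which is what the experiments probe.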

  4. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2014-01-01

We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
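The quantity being constrained is easy to compute: the PAPR of the time-domain OFDM signal obtained by an inverse FFT of the subcarrier coefficients (the subcarrier values below are arbitrary examples, not the paper's designed waveforms).

```python
import numpy as np

def papr(subcarriers):
    """Peak-to-average power ratio of the time-domain OFDM symbol."""
    x = np.fft.ifft(subcarriers)
    power = np.abs(x) ** 2
    return float(power.max() / power.mean())

n = 64
single_tone = np.zeros(n, dtype=complex)
single_tone[3] = 1.0                       # one active subcarrier
rng = np.random.default_rng(0)
qpsk = (2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)

papr_tone = papr(single_tone)              # constant envelope: PAPR = 1
papr_qpsk = papr(qpsk)                     # random QPSK loading: well above 1
```

A single active subcarrier gives a constant-envelope exponential (PAPR of 1, i.e. 0 dB), while fully loaded random QPSK symbols produce much higher peaks; the PAPR constraint in the design problem keeps the optimized spectra away from such peaky solutions.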

  5. Wronskian type solutions for the vector k-constrained KP hierarchy

    International Nuclear Information System (INIS)

    Zhang Youjin.

    1995-07-01

Motivated by a relation of the 1-constrained Kadomtsev-Petviashvili (KP) hierarchy with the 2-component KP hierarchy, the tau functions of the vector k-constrained KP hierarchy are constructed by using an analogue of the Baker-Akhiezer (m + 1)-point function. These tau functions are expressed in terms of Wronskian-type determinants. (author). 20 refs

  6. The effect of agency budgets on minimizing greenhouse gas emissions from road rehabilitation policies

    International Nuclear Information System (INIS)

    Reger, Darren; Madanat, Samer; Horvath, Arpad

    2015-01-01

    Transportation agencies are being urged to reduce their greenhouse gas (GHG) emissions. One possible solution within their scope is to alter their pavement management system to include environmental impacts. Managing pavement assets is important because poor road conditions lead to increased fuel consumption of vehicles. Rehabilitation activities improve pavement condition, but require materials and construction equipment, which produce GHG emissions as well. The agency’s role is to decide when to rehabilitate the road segments in the network. In previous work, we sought to minimize total societal costs (user and agency costs combined) subject to an emissions constraint for a road network, and demonstrated that there exists a range of potentially optimal solutions (a Pareto frontier) with tradeoffs between costs and GHG emissions. However, we did not account for the case where the available financial budget to the agency is binding. This letter considers an agency whose main goal is to reduce its carbon footprint while operating under a constrained financial budget. A Lagrangian dual solution methodology is applied, which selects the optimal timing and optimal action from a set of alternatives for each segment. This formulation quantifies GHG emission savings per additional dollar of agency budget spent, which can be used in a cap-and-trade system or to make budget decisions. We discuss the importance of communication between agencies and their legislature that sets the financial budgets to implement sustainable policies. We show that for a case study of Californian roads, it is optimal to apply frequent, thin overlays as opposed to the less frequent, thick overlays recommended in the literature if the objective is to minimize GHG emissions. A promising new technology, warm-mix asphalt, will have a negligible effect on reducing GHG emissions for road resurfacing under constrained budgets. (letter)

  7. Off-wall boundary conditions for turbulent flows obtained from buffer-layer minimal flow units

    Science.gov (United States)

    Garcia-Mayoral, Ricardo; Pierce, Brian; Wallace, James

    2012-11-01

There is strong evidence that the transport processes in the buffer region of wall-bounded turbulence are common across various flow configurations, even in the embryonic turbulence in transition (Park et al., Phys. Fluids 24). We use this premise to develop off-wall boundary conditions for turbulent simulations. Boundary conditions are constructed from DNS databases using periodic minimal flow units and reduced order modeling. The DNS data were taken from a channel at Reτ = 400 and a zero-pressure gradient transitional boundary layer (Sayadi et al., submitted to J. Fluid Mech.). Both types of boundary conditions were first tested on a DNS of the core of the channel flow with the aim of extending their application to LES and to spatially evolving flows. 2012 CTR Summer Program.

  8. Identification of different geologic units using fuzzy constrained resistivity tomography

    Science.gov (United States)

    Singh, Anand; Sharma, S. P.

    2018-01-01

Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure to improve the 2D resistivity image and geologic separation within the iterative minimization through inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning the highest probability value of each model cell to a cluster using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field data set. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from the borehole information.
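The clustering step can be sketched with a minimal fuzzy c-means implementation on synthetic 1-D data (the data, initialization, and fuzzifier m = 2 are invented; the paper applies the same membership/center updates to resistivity model cells inside the inversion loop): each point receives a membership probability for every cluster, and cluster centers are membership-weighted means.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic "units", standing in for resistivity values.
data = np.concatenate([rng.normal(0.0, 0.3, 50), rng.normal(10.0, 0.3, 50)])

def fuzzy_c_means(x, centers, m=2.0, iters=50):
    """Alternate the standard FCM membership and center updates."""
    c = np.asarray(centers, dtype=float)
    for _ in range(iters):
        d = np.abs(x[None, :] - c[:, None]) + 1e-12        # distances (k, n)
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)                  # memberships in [0, 1]
        um = u ** m
        c = (um * x[None, :]).sum(axis=1) / um.sum(axis=1) # weighted centers
    return c, u

centers, u = fuzzy_c_means(data, centers=[2.0, 8.0])
```

The memberships sum to one per point, so assigning each model cell to its highest-membership cluster, as the abstract describes, is well defined.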

  9. Stringent tests of constrained Minimal Flavor Violation through ΔF=2 transitions

    International Nuclear Information System (INIS)

    Buras, Andrzej J.; Girrbach, Jennifer

    2013-01-01

New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box diagram function that in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP-asymmetry S_{ψK_S} and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF=2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, together with ΔM_s/ΔM_d and S_{ψK_S}, then allows one to determine the range of the angle γ in the unitarity triangle independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows one to determine the favorite CMFV values of |V_cb|, |V_ub|, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}) as functions of S(v) and γ. The |V_cb|^4 dependence of ε_K allows one to determine |V_cb| for a given S(v) and γ with a higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations between F_{B_s}√(B_{B_s}), F_{B_d}√(B_{B_d}), |V_cb|, |V_ub| and γ that should be tested in the coming years. Typically F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}) have to be lower than their present lattice values, while |V_cb| has to

  10. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  11. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
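The classical special case that MSM generalizes is easy to demonstrate: Gauss-Seidel splits A = L + D + U and sweeps coordinate-wise, which for a symmetric positive-definite A is exact coordinate minimization of the quadratic 0.5·x'Ax - b'x (the matrix below is an arbitrary small SPD example, not from the paper).

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 2.0, 3.0])

def gauss_seidel(A, b, iters=100):
    """Sweep each coordinate using the latest values of the others."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

x = gauss_seidel(A, b)
```

Each inner update minimizes the quadratic over one coordinate, so the sweeps converge to the solution of Ax = b; MSM extends this splitting viewpoint to composite objectives with bound or cardinality terms.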

  12. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  13. Likelihood analysis of the next-to-minimal supergravity motivated model

    International Nuclear Information System (INIS)

    Balazs, Csaba; Carter, Daniel

    2009-01-01

    In anticipation of data from the Large Hadron Collider (LHC) and the potential discovery of supersymmetry, we calculate the odds of the next-to-minimal version of the popular supergravity motivated model (NmSuGra) being discovered at the LHC to be 4:3 (57%). We also demonstrate that viable regions of the NmSuGra parameter space outside the LHC reach can be covered by upgraded versions of dark matter direct detection experiments, such as super-CDMS, at 99% confidence level. Due to the similarities of the models, we expect very similar results for the constrained minimal supersymmetric standard model (CMSSM).

  14. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization

    International Nuclear Information System (INIS)

    Sidky, Emil Y; Pan Xiaochuan

    2008-01-01

    An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories
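The constrained TV minimization can be illustrated on a drastically simplified 1-D denoising analogue (the phantom, step size, iteration count, and tolerance are all invented, and projecting onto a ball around the data replaces the cone-beam projection constraint): a subgradient step on total variation, followed by POCS projections onto the data-fidelity set and the non-negativity set.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20), np.zeros(20)])
b = np.clip(clean + rng.normal(0.0, 0.1, clean.size), 0, None)  # noisy data
eps = 0.1 * np.sqrt(clean.size)        # data-fidelity tolerance

def tv(x):
    """Discrete 1-D total variation."""
    return float(np.abs(np.diff(x)).sum())

def tv_subgrad(x):
    """A subgradient of the total variation."""
    s = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

x = b.copy()
for _ in range(500):
    x = x - 0.01 * tv_subgrad(x)       # steepest-descent step on TV
    r = x - b
    nr = np.linalg.norm(r)
    if nr > eps:                       # POCS: project onto {x: ||x - b|| <= eps}
        x = b + r * (eps / nr)
    x = np.clip(x, 0, None)            # POCS: project onto the non-negative set
```

The result stays within the data tolerance and non-negative while its TV drops well below that of the noisy input, which is the qualitative behavior of ASD-POCS (minus its adaptive step-size control).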

  15. Security-Constrained Unit Commitment in AC Microgrids Considering Stochastic Price-Based Demand Response and Renewable Generation

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad

    2018-01-01

    In this paper, a stochastic model for scheduling of AC security‐constrained unit commitment associated with demand response (DR) actions is developed in an islanded residential microgrid. The proposed model maximizes the expected profit of microgrid operator and minimizes the total customers...

  16. Deformed statistics Kullback–Leibler divergence minimization within a scaled Bregman framework

    International Nuclear Information System (INIS)

    Venkatesan, R.C.; Plastino, A.

    2011-01-01

    The generalized Kullback–Leibler divergence (K–Ld) in Tsallis statistics [constrained by the additive duality of generalized statistics (dual generalized K–Ld)] is here reconciled with the theory of Bregman divergences for expectations defined by normal averages, within a measure-theoretic framework. Specifically, it is demonstrated that the dual generalized K–Ld is a scaled Bregman divergence. The Pythagorean theorem is derived from the minimum discrimination information principle using the dual generalized K–Ld as the measure of uncertainty, with constraints defined by normal averages. The minimization of the dual generalized K–Ld, with normal averages constraints, is shown to exhibit distinctly unique features. -- Highlights: ► Dual generalized Kullback–Leibler divergence (K–Ld) proven to be scaled Bregman divergence in continuous measure-theoretic framework. ► Minimum dual generalized K–Ld condition established with normal averages constraints. ► Pythagorean theorem derived.
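In the classical (undeformed) limit, the structural claim is easy to verify numerically: the Kullback-Leibler divergence coincides with the Bregman divergence generated by the negative entropy F(p) = Σ p_i log p_i (the probability vectors below are arbitrary examples; the paper's result concerns the q-deformed, scaled version of this identity).

```python
import math

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

def kl(p, q):
    """Kullback-Leibler divergence for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def bregman_neg_entropy(p, q):
    """Bregman divergence B_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    F = lambda v: sum(vi * math.log(vi) for vi in v)
    grad = [math.log(qi) + 1.0 for qi in q]       # gradient of F at q
    return F(p) - F(q) - sum(g * (pi - qi) for g, pi, qi in zip(grad, p, q))

d1, d2 = kl(p, q), bregman_neg_entropy(p, q)
```

Expanding the Bregman expression, the linear terms cancel because both vectors sum to one, leaving exactly Σ p_i log(p_i/q_i).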

  17. Resource Constrained Project Scheduling Subject to Due Dates: Preemption Permitted with Penalty

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2014-01-01

Full Text Available Extensive research has been carried out on the resource-constrained project scheduling problem. However, few studies have considered problems in which a setup cost must be incurred if activities are preempted. In this research, we investigate the resource-constrained project scheduling problem to minimize the total project cost, considering earliness-tardiness and preemption penalties. A mixed integer programming formulation is proposed for the problem. The resulting problem is NP-hard, so we try to obtain a satisfying solution using a simulated annealing (SA) algorithm. The efficiency of the proposed algorithm is tested on 150 randomly generated examples. Statistical comparison in terms of computational times and objective function values indicates that the proposed algorithm is efficient and effective.
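The metaheuristic can be sketched on a much smaller, made-up relative of the problem: sequencing jobs on one machine to minimize earliness-tardiness penalties, with swap moves and a geometric cooling schedule (job data, temperatures, and move set are all invented; the paper's neighborhood additionally handles resources and preemption with setup costs).

```python
import math
import random

random.seed(42)
jobs = [(3, 9), (2, 4), (4, 16), (1, 3), (5, 14)]   # (duration, due date)

def cost(order):
    """Total earliness + tardiness penalty of a job sequence."""
    t, total = 0, 0.0
    for j in order:
        dur, due = jobs[j]
        t += dur
        total += abs(t - due)
    return total

def anneal(iters=5000, temp=10.0, cooling=0.999):
    cur = list(range(len(jobs)))
    cur_c = cost(cur)
    best, best_c = cur[:], cur_c
    for _ in range(iters):
        i, j = random.sample(range(len(jobs)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]         # move: swap two jobs
        c = cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cur_c or random.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        temp *= cooling                              # geometric cooling
    return best, best_c

order, c = anneal()
```

Tracking the best-so-far solution guarantees the returned schedule is never worse than the initial one, and on this tiny instance the annealer reliably finds a near-due-date ordering.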

  18. Null Space Integration Method for Constrained Multibody Systems with No Constraint Violation

    International Nuclear Information System (INIS)

    Terze, Zdravko; Lefeber, Dirk; Muftic, Osman

    2001-01-01

A method for integrating equations of motion of constrained multibody systems with no constraint violation is presented. A mathematical model, shaped as a differential-algebraic system of index 1, is transformed into a system of ordinary differential equations using the null-space projection method. Equations of motion are set in a non-minimal form. During integration, violations of constraints are corrected by solving constraint equations at the position and velocity levels, utilizing the metric of the system's configuration space and a projective criterion for the coordinate partitioning method. The method is applied to dynamic simulation of a 3D constrained biomechanical system. The simulation results are evaluated by comparing them to the values of characteristic parameters obtained by kinematic analysis of the analyzed motion based on measured kinematics data.
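The projection at the heart of the method can be shown in a few lines (the constraint Jacobian values are invented): for a Jacobian J, the null-space projector P = I - J⁺J maps any velocity onto one satisfying J v = 0, which is how constraint violations are kept out of the integrated motion.

```python
import numpy as np

J = np.array([[1.0, 0.0, -1.0],
              [0.0, 2.0, 1.0]])        # 2 velocity-level constraints, 3 coordinates
P = np.eye(3) - np.linalg.pinv(J) @ J  # projector onto the null space of J

v = np.array([0.3, -1.2, 0.7])         # arbitrary, constraint-violating velocity
v_admissible = P @ v                   # projected velocity, satisfies J v = 0
```

P is idempotent, so repeated correction steps do no further harm; the paper's scheme additionally uses the configuration-space metric to make the projection dynamically consistent rather than the plain Euclidean one shown here.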

  19. Minimally Disruptive Medicine: A Pragmatically Comprehensive Model for Delivering Care to Patients with Multiple Chronic Conditions

    Directory of Open Access Journals (Sweden)

    Aaron L. Leppin

    2015-01-01

    Full Text Available An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients’ lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice.

  20. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    …a first-order model to model higher-order constraints by the use of an alphabet extension. We present an iterative method that, based on a set of conditional probabilities, can help in choosing the large number of parameters of the model in order to obtain a stationary model. Explicit results are given for the No Isolated Bits constraint. Finally, we present a variation of the bit-stuffing encoding scheme that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower bounds are very tight for the Run-Length Limited fields. Explicit bounds are given for the diamond constrained field as well…

  1. Stringent tests of constrained Minimal Flavor Violation through {Delta}F=2 transitions

    Energy Technology Data Exchange (ETDEWEB)

    Buras, Andrzej J. [TUM-IAS, Garching (Germany); Girrbach, Jennifer [TUM, Physik Department, Garching (Germany)

    2013-09-15

    New Physics contributions to ΔF=2 transitions in the simplest extensions of the Standard Model (SM), the models with constrained Minimal Flavor Violation (CMFV), are parametrized by a single variable S(v), the value of the real box-diagram function, which in CMFV is bounded from below by its SM value S_0(x_t). With already very precise experimental values of ε_K, ΔM_d, ΔM_s and precise values of the CP asymmetry S_{ψK_S} and of B_K entering the evaluation of ε_K, the future of CMFV in the ΔF=2 sector depends crucially on the values of |V_cb|, |V_ub|, γ, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}). The ratio ξ of the latter two non-perturbative parameters, already rather precisely determined from lattice calculations, together with ΔM_s/ΔM_d and S_{ψK_S}, allows the range of the angle γ in the unitarity triangle to be determined independently of the value of S(v). Imposing in addition the constraints from |ε_K| and ΔM_d allows the favorite CMFV values of |V_cb|, |V_ub|, F_{B_s}√(B_{B_s}) and F_{B_d}√(B_{B_d}) to be determined as functions of S(v) and γ. The |V_cb|^4 dependence of ε_K allows |V_cb| to be determined for a given S(v) and γ with higher precision than is presently possible using tree-level decays. The same applies to |V_ub|, |V_td| and |V_ts|, which are automatically determined as functions of S(v) and γ. We derive correlations…

  2. A new approach to the inverse kinematics of a multi-joint robot manipulator using a minimization method

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-01-01

    This paper proposes a new approach to solving the inverse kinematics of a type of six-link manipulator. Directing our attention to features of the joint structure of the manipulator, the original problem is first formulated as a system of equations in four variables and solved by means of a minimization technique. The remaining two variables are determined from the constraint conditions involved. This is the basic idea of the present approach. The results of computer simulation of the present algorithm showed that the accuracy of the solutions and the convergence speed are much higher and quite satisfactory for practical purposes, as compared with the linearization-iteration method based on the conventional inverse Jacobian matrix. (author)
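The idea of solving inverse kinematics by minimization can be illustrated on a much simpler system than the six-link manipulator of the paper: a two-link planar arm whose joint angles are found by descending the squared end-effector position error. The link lengths, step size, and target below are arbitrary choices for the sketch, not values from the paper:

```python
import math

L1, L2 = 1.0, 1.0  # link lengths (arbitrary)

def forward(t1, t2):
    """Forward kinematics of a two-link planar arm."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def ik_minimize(target, t1=0.2, t2=0.2, lr=0.1, iters=20000, h=1e-6):
    """Minimize the squared position error by numerical-gradient descent."""
    def err(a, b):
        x, y = forward(a, b)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2
    for _ in range(iters):
        e = err(t1, t2)
        g1 = (err(t1 + h, t2) - e) / h  # forward-difference gradient
        g2 = (err(t1, t2 + h) - e) / h
        t1 -= lr * g1
        t2 -= lr * g2
    return t1, t2

t1, t2 = ik_minimize((1.2, 0.8))  # a reachable target inside the workspace
```

A production solver would use an analytic Jacobian and a line search or Gauss-Newton step rather than plain gradient descent, but the structure (formulate residuals, minimize) is the same.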

  3. Chance-constrained programming approach to natural-gas curtailment decisions

    Energy Technology Data Exchange (ETDEWEB)

    Guldmann, J M

    1981-10-01

    This paper presents a modeling methodology for the determination of optimal curtailment decisions by a gas-distribution utility during a chronic gas-shortage situation. Based on the end-use priority approach, a linear-programming model is formulated that reallocates the available gas supply among the utility's customers while minimizing fuel switching, unemployment, and utility operating costs. This model is then transformed into a chance-constrained program in order to account for the weather-related variability of the gas requirements. The methodology is applied to the East Ohio Gas Company. 16 references, 2 figures, 3 tables.
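The chance-constrained transformation at the heart of this approach has a standard closed form when the uncertain requirement is modelled as Gaussian: the probabilistic constraint P(demand ≤ supply) ≥ β becomes the deterministic constraint supply ≥ μ + z_β·σ. The numbers below are illustrative, not from the East Ohio Gas study:

```python
from statistics import NormalDist

def deterministic_equivalent(mu, sigma, beta):
    """Smallest supply s with P(demand <= s) >= beta for demand ~ N(mu, sigma)."""
    z = NormalDist().inv_cdf(beta)  # beta-quantile of the standard normal
    return mu + z * sigma

# Illustrative: weather-driven demand with mean 100 and sd 15 units,
# to be met with 95 % reliability.
supply = deterministic_equivalent(100.0, 15.0, 0.95)

# Sanity check: the resulting supply indeed meets the reliability target.
reliability = NormalDist(100.0, 15.0).cdf(supply)
```

Replacing each stochastic constraint by its deterministic equivalent turns the chance-constrained program back into an ordinary linear program with tightened right-hand sides.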

  4. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well-known capacity-constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set of customers. In the VRPTW customers must be serviced within a given time period - a so-called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization… …of Jörnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time- and capacity-constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed…

  5. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is a growing interest from both the regulatory authorities and the nuclear industry in stimulating the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource-versus-risk-control effectiveness principles thus enter formally into optimization problems. This paper presents an approach for using PRA models in the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or burden is to be minimized while the risk or performance is constrained to be at a given level, or vice versa. The paper begins with the problem formulation, where the objective function and constraints that apply in the constrained optimization of TIs based on risk and cost models at the system level are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA to optimize TIs under constraints. A case study is also performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms, in particular for the constrained optimization of TIs; a great benefit is also expected from using this approach to solve other engineering optimization problems. However, care must be taken in using genetic algorithms in constrained optimization problems, as is concluded in this paper.
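A steady-state GA with a feasibility-first constraint handler can be sketched as follows. The surrogate risk and cost functions, the bounds, and the GA settings are invented for illustration and stand in for the PRA-based system models used in the paper:

```python
import random

RISK_LIMIT = 0.5
LO, HI = 10.0, 400.0  # allowed test-interval range (hours, illustrative)

def risk(ti):   # toy surrogate: unavailability grows with the test interval
    return sum(0.001 * t for t in ti)

def cost(ti):   # toy surrogate: testing burden falls with the test interval
    return sum(100.0 / t for t in ti)

def key(ti):
    """Feasibility-first ranking: any feasible solution beats any infeasible one;
    feasible solutions compare by cost, infeasible ones by constraint violation."""
    r = risk(ti)
    return (1, r) if r > RISK_LIMIT else (0, cost(ti))

rng = random.Random(1)
pop = [[rng.uniform(LO, HI) for _ in range(4)] for _ in range(19)]
pop.append([50.0, 50.0, 50.0, 50.0])  # one known-feasible seed

for _ in range(3000):  # steady-state loop: one child per step replaces the worst
    p1, p2 = rng.sample(pop, 2)
    child = [min(HI, max(LO, (a + b) / 2 * rng.uniform(0.9, 1.1)))
             for a, b in zip(p1, p2)]
    worst = max(range(len(pop)), key=lambda i: key(pop[i]))
    if key(child) < key(pop[worst]):
        pop[worst] = child

best = min(pop, key=key)
```

Unlike a generational GA, only one individual is replaced per step, which is what makes the algorithm "steady-state"; the tuple-valued ranking is one common way to handle the risk constraint without a penalty weight.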

  6. An Equivalent Emission Minimization Strategy for Causal Optimal Control of Diesel Engines

    Directory of Open Access Journals (Sweden)

    Stephan Zentner

    2014-02-01

    Full Text Available One of the main challenges during the development of operating strategies for modern diesel engines is the reduction of CO2 emissions, while complying with ever more stringent limits for pollutant emissions. The inherent trade-off between the emissions of CO2 and pollutants renders a simultaneous reduction difficult. Therefore, an optimal operating strategy is sought that yields minimal CO2 emissions, while holding the cumulative pollutant emissions at the allowed level. Such an operating strategy can be obtained offline by solving a constrained optimal control problem. However, the final-value constraint on the cumulated pollutant emissions prevents this approach from being adopted for causal control. This paper proposes a framework for causal optimal control of diesel engines. The optimization problem can be solved online when the constrained minimization of the CO2 emissions is reformulated as an unconstrained minimization of the CO2 emissions and the weighted pollutant emissions (i.e., equivalent emissions). However, the weighting factors are not known a priori. A method for the online calculation of these weighting factors is proposed. It is based on the Hamilton–Jacobi–Bellman (HJB) equation and a physically motivated approximation of the optimal cost-to-go. A case study shows that the causal control strategy defined by the online calculation of the equivalence factor and the minimization of the equivalent emissions is only slightly inferior to the non-causal offline optimization, while being applicable to online control.
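The reformulation can be illustrated with a deliberately simple discrete example: each candidate actuator setting trades a CO2 rate against a pollutant rate, the causal policy minimizes equivalent emissions CO2 + w·NOx at every step, and the equivalence factor w is found here offline by bisection on the cumulative constraint. (The paper instead estimates w online from an HJB-based cost-to-go approximation; all numbers below are invented.)

```python
# Hypothetical operating map: each candidate setting -> (co2_rate, nox_rate).
SETTINGS = [(10.0, 5.0), (12.0, 3.0), (15.0, 1.5), (20.0, 0.5)]

def run_cycle(w, steps=100):
    """Causal policy: at every step pick the setting minimising equivalent
    emissions co2 + w * nox; return the cumulative (co2, nox)."""
    co2 = nox = 0.0
    for _ in range(steps):
        c, n = min(SETTINGS, key=lambda s: s[0] + w * s[1])
        co2 += c
        nox += n
    return co2, nox

def calibrate_weight(nox_limit, lo=0.0, hi=100.0, tol=1e-6):
    """Bisect on the equivalence factor w until the NOx constraint is met."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        _, nox = run_cycle(mid)
        if nox > nox_limit:
            lo = mid   # pollutants too high -> weight them more heavily
        else:
            hi = mid
    return hi

w = calibrate_weight(200.0)
co2, nox = run_cycle(w)
```

The point of the example is that once w is fixed, the per-step decision is an unconstrained minimization, which is exactly what makes the strategy causal.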

  7. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    Science.gov (United States)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed to process a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from machines that offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is categorized as an NP-hard problem that needs long computational times. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical applicability in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are applied to search for the best fitness value until a steady-state condition is reached. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.

  8. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way of promoting sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
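The generalized p-shrinkage mapping mentioned above has a simple closed form. The variant below is Chartrand's p-shrinkage for nonconvex p-quasinorm regularization; whether the paper uses exactly this form is an assumption, but it has the key property that p = 1 recovers ordinary soft-thresholding:

```python
import math

def p_shrink(x, lam, p):
    """Generalized p-shrinkage of a scalar (Chartrand's form, assumed here):
    sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
    For p = 1 this reduces to classical soft-thresholding max(|x| - lam, 0)."""
    if x == 0.0:
        return 0.0  # avoid 0**(p-1) for p < 1
    mag = max(abs(x) - lam ** (2.0 - p) * abs(x) ** (p - 1.0), 0.0)
    return math.copysign(mag, x)
```

Inside the alternating-minimization loop the operator is applied element-wise to the split variable, which is why each subproblem admits an explicit solution.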

  9. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  10. A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags

    Directory of Open Access Journals (Sweden)

    Angela Hsiang-Ling Chen

    2016-09-01

    Full Text Available Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improve outcomes. Based on assumptions and simplification, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic or oversimplified scenarios or propose solution procedures that are not easily applicable or even feasible in real-life situations. In this study, we solve a more true-to-life and complex model, the Multi-Mode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model is presented, and the practicality of the proposed approach is justified by relying only on information that is available for every project regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the lower bound obtained was equal to the best-known optimum.

  11. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
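The key behavioural claim, that the robust fit gives small weights to large-error samples and large weights to small-error ones, can be illustrated with a tiny iteratively reweighted least-squares sketch. This is a generic IRLS analogy with an invented weight function, not the paper's LS-SVM formulation:

```python
def slope(xs, ys, w):
    """Weighted least-squares slope for the no-intercept model y = a*x."""
    num = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    den = sum(wi * x * x for wi, x in zip(w, xs))
    return num / den

def robust_slope(xs, ys, iters=20):
    """IRLS: on each pass, down-weight samples with large residuals."""
    w = [1.0] * len(xs)
    a = slope(xs, ys, w)
    for _ in range(iters):
        w = [1.0 / (1.0 + (y - a * x) ** 2) for x, y in zip(xs, ys)]
        a = slope(xs, ys, w)
    return a

# Data on the line y = 2x, with one gross outlier at x = 5.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 50.0]

ols = slope(xs, ys, [1.0] * len(xs))  # pulled far from 2 by the outlier
rob = robust_slope(xs, ys)            # stays near the true slope 2
```

The outlier's weight collapses toward zero after a couple of passes, which is the same qualitative mechanism the abstract attributes to the robust LS-SVM.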

  12. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  13. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of the gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches, and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  14. Constraining the dark side with observations

    International Nuclear Information System (INIS)

    Diez-Tejedor, Alberto

    2007-01-01

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We will work within the Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of one unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we will use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies we will see that we are restricted to purely kinetic actions, previously studied in cosmology by Scherrer. Finally we arrive at a simple action which fits both cosmological and astrophysical observations.

  15. Constraining the dark side with observations

    Energy Technology Data Exchange (ETDEWEB)

    Diez-Tejedor, Alberto [Dpto. de Fisica Teorica, Universidad del PaIs Vasco, Apdo. 644, 48080, Bilbao (Spain)

    2007-05-15

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We will work within the Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of one unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we will use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies we will see that we are restricted to purely kinetic actions, previously studied in cosmology by Scherrer. Finally we arrive at a simple action which fits both cosmological and astrophysical observations.

  16. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful…

  17. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for the successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to the joint regularization of the amplitude and interferometric phase in urban area SAR images.

  18. Constrained reaction volume approach for studying chemical kinetics behind reflected shock waves

    KAUST Repository

    Hanson, Ronald K.

    2013-09-01

    We report a constrained-reaction-volume strategy for conducting kinetics experiments behind reflected shock waves, achieved in the present work by staged filling in a shock tube. Using hydrogen-oxygen ignition experiments as an example, we demonstrate that this strategy eliminates the possibility of non-localized (remote) ignition in shock tubes. Furthermore, we show that this same strategy can also effectively eliminate or minimize pressure changes due to combustion heat release, thereby enabling quantitative modeling of the kinetics throughout the combustion event using a simple assumption of specified pressure and enthalpy. We measure temperature and OH radical time-histories during ethylene-oxygen combustion behind reflected shock waves in a constrained reaction volume and verify that the results can be accurately modeled using a detailed mechanism and a specified pressure and enthalpy constraint. © 2013 The Combustion Institute.

  19. Conditional long-term survival following minimally invasive robotic mitral valve repair: a health services perspective.

    Science.gov (United States)

    Efird, Jimmy T; Griffin, William F; Gudimella, Preeti; O'Neal, Wesley T; Davies, Stephen W; Crane, Patricia B; Anderson, Ethan J; Kindell, Linda C; Landrine, Hope; O'Neal, Jason B; Alwair, Hazaim; Kypson, Alan P; Nifong, Wiley L; Chitwood, W Randolph

    2015-09-01

    Conditional survival is defined as the probability of surviving an additional number of years beyond that already survived. The aim of this study was to compute conditional survival in patients who received a robotically assisted, minimally invasive mitral valve repair procedure (RMVP). Patients who received RMVP with annuloplasty band from May 2000 through April 2011 were included. A 5- and 10-year conditional survival model was computed using a multivariable product-limit method. Non-smoking men (≤65 years) who presented in sinus rhythm had a 96% probability of surviving at least 10 years if they survived their first year following surgery. In contrast, recent female smokers (>65 years) with preoperative atrial fibrillation only had an 11% probability of surviving beyond 10 years if alive after one year post-surgery. In the context of an increasingly managed healthcare environment, conditional survival provides useful information for patients needing to make important treatment decisions, physicians seeking to select patients most likely to benefit long-term following RMVP, and hospital administrators needing to comparatively assess the life-course economic value of high-tech surgical procedures.
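The conditional-survival quantity itself is straightforward to compute from any survival curve: CS(x|t) = S(t+x)/S(t). Below is a minimal sketch with a univariate Kaplan-Meier (product-limit) estimator on an invented cohort; the paper's model is multivariable, so this illustrates only the underlying idea:

```python
def kaplan_meier(times, events):
    """Product-limit estimate; returns a right-continuous step function S(t).
    events[i] is 1 for a death, 0 for a censored observation."""
    pairs = sorted(zip(times, events))
    steps, s, at_risk = [], 1.0, len(pairs)
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in pairs if tt == t and e)
        if deaths:
            s *= 1.0 - deaths / at_risk
            steps.append((t, s))
        at_risk -= sum(1 for tt, _ in pairs if tt == t)
    def S(t):
        out = 1.0
        for tt, ss in steps:
            if tt <= t:
                out = ss
        return out
    return S

def conditional_survival(S, t, x):
    """P(surviving to t + x | alive at t) = S(t + x) / S(t)."""
    return S(t + x) / S(t)

# Invented cohort: one death per year for eight years, no censoring.
S = kaplan_meier(list(range(1, 9)), [1] * 8)
cs = conditional_survival(S, 2, 3)  # survive to year 5 given alive at year 2
```

With no censoring the curve is simply the fraction still alive, so S(5)/S(2) = (3/8)/(6/8) = 0.5 in this toy cohort.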

  20. An embodied biologically constrained model of foraging: from classical and operant conditioning to adaptive real-world behavior in DAC-X.

    Science.gov (United States)

    Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J

    2015-12-01

    Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures.

  1. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
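The combinatorial reorganization can be illustrated by its central bookkeeping trick: observation vectors whose active/passive variable sets coincide can share a single constrained least-squares solve. The sketch below shows only this grouping step; the actual algorithm of Van Benthem and Keenan wraps it inside an active-set NNLS iteration:

```python
from collections import defaultdict

def group_by_passive_set(passive_sets):
    """Map each distinct passive set to the observation columns that share it,
    so one constrained LS factorization serves the whole group."""
    groups = defaultdict(list)
    for col, ps in enumerate(passive_sets):
        groups[tuple(ps)].append(col)
    return dict(groups)

# Five observation columns, but only two distinct passive-set patterns
# (1 = variable currently unconstrained, 0 = variable pinned at zero):
patterns = [(1, 0, 1), (1, 1, 0), (1, 0, 1), (1, 1, 0), (1, 0, 1)]
groups = group_by_passive_set(patterns)  # 2 solves instead of 5
```

Because large problems tend to have far fewer distinct passive sets than observation vectors, this grouping is where most of the reported speed-up comes from.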

  2. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
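    The conjugacy that makes a Dirichlet prior "responsive to updates with sparse data" is simple enough to sketch: with a Dirichlet(alpha) prior and multinomial counts n, the posterior is Dirichlet(alpha + n). The least-squares construction of the minimally informative alphas described in the record is not reproduced here, and all numbers below are hypothetical.

```python
def dirichlet_posterior(alpha_prior, counts):
    """Conjugate update: posterior alpha_i = prior alpha_i + count_i."""
    return [a + n for a, n in zip(alpha_prior, counts)]

def dirichlet_mean(alpha):
    s = sum(alpha)
    return [a / s for a in alpha]

# Hypothetical alpha-factor-style example: 3 common-cause failure classes,
# a diffuse prior with small total weight, and sparse observed data.
prior = [0.5, 0.3, 0.2]     # small alphas -> the prior carries little weight
counts = [8, 1, 0]          # sparse counts, as common-cause failures are rare
post = dirichlet_posterior(prior, counts)
mean = dirichlet_mean(post)  # posterior mean dominated by the data
```

    Because the prior alphas sum to only 1, even nine observations dominate the posterior mean, which is the behavior a minimally informative prior is designed to have.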

  3. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and atmospheric pressure. Along with the C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938

  4. Invariant set computation for constrained uncertain discrete-time systems

    NARCIS (Netherlands)

    Athanasopoulos, N.; Bitsoris, G.

    2010-01-01

    In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the

  5. Experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO2 lasers under conditions of minimal roughness

    International Nuclear Information System (INIS)

    Golyshev, A A; Malikov, A G; Orishich, A M; Shulyatyev, V B

    2014-01-01

    The results of an experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO2 lasers are generalised. The dependence of roughness of the cut surface on the cutting parameters is investigated, and the conditions under which the surface roughness is minimal are formulated. It is shown that for both types of lasers these conditions can be expressed in the same way in terms of the dimensionless variables – the Péclet number Pe and the output power Q of laser radiation per unit thickness of the cut sheet – and take the form of the similarity laws: Pe = const, Q = const. The optimal values of Pe and Q are found. We have derived empirical expressions that relate the laser power and cutting speed with the thickness of the cut sheet under the condition of minimal roughness in the case of cutting by means of radiation from fibre and CO2 lasers. (laser technologies)
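    The similarity laws Pe = const, Q = const have a direct scaling consequence that can be sketched in a few lines. The exact definitions used by the authors are not reproduced here; we assume the generic forms Pe = V·h/κ (cutting speed × sheet thickness / thermal diffusivity) and Q = P/h (laser power per unit sheet thickness), which immediately give V ∝ 1/h and P ∝ h at fixed Pe and Q. All numerical values are hypothetical.

```python
KAPPA = 1.2e-5    # m^2/s, assumed thermal diffusivity of low-carbon steel
PE_OPT = 0.5      # dimensionless, hypothetical optimal Peclet number
Q_OPT = 3.0e6     # W/m, hypothetical optimal power per unit thickness

def cutting_speed(h):
    """Cutting speed (m/s) keeping Pe = V*h/KAPPA at PE_OPT for thickness h (m)."""
    return PE_OPT * KAPPA / h

def laser_power(h):
    """Laser power (W) keeping Q = P/h at Q_OPT for thickness h (m)."""
    return Q_OPT * h
```

    Under these assumed definitions, doubling the sheet thickness halves the optimal cutting speed and doubles the required power, which is the practical content of the two similarity laws.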

  6. Inversion of Love wave phase velocity using smoothness-constrained least-squares technique; Heikatsuka seiyakutsuki saisho jijoho ni yoru love ha iso sokudo no inversion

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, S [Nippon Geophysical Prospecting Co. Ltd., Tokyo (Japan)

    1996-10-01

    Smoothness-constrained least-squares technique with ABIC minimization was applied to the inversion of the phase velocity of surface waves during geophysical exploration, to confirm its usefulness. Since this study aimed mainly at the applicability of the technique, the Love wave was used, which is easier to treat theoretically than the Rayleigh wave. Stable successive approximation solutions could be obtained by repeated improvement of the S-wave velocity model, and an objective model with high reliability could be determined. By contrast, for inversion by simple minimization of the residual sum of squares, stable solutions could also be obtained by repeated improvement, but judging convergence was very hard without the smoothness constraint, which might leave the obtained model over-fitted. In this study, the Love wave was used to examine the applicability of the smoothness-constrained least-squares technique with ABIC minimization; its applicability to the Rayleigh wave will be investigated next. 8 refs.
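    A minimal sketch of the smoothness-constrained least-squares step: we assume the simplest possible forward operator (the identity, i.e. denoising a velocity vector) and a fixed trade-off parameter lam, minimizing ||d − m||² + lam²||Lm||² with L the second-difference operator. The ABIC machinery that the record uses to select the trade-off parameter is not reproduced.

```python
def solve(Amat, bvec):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(Amat)
    M = [row[:] + [bvec[i]] for i, row in enumerate(Amat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def smooth_lsq(d, lam):
    """Minimize ||d - m||^2 + lam^2 ||L m||^2 with L the second-difference
    operator, via the normal equations (I + lam^2 L^T L) m = d."""
    n = len(d)
    L = [[0.0] * n for _ in range(n - 2)]
    for k in range(n - 2):
        L[k][k], L[k][k + 1], L[k][k + 2] = 1.0, -2.0, 1.0
    A = [[float(i == j) + lam * lam * sum(L[k][i] * L[k][j]
          for k in range(n - 2)) for j in range(n)] for i in range(n)]
    return solve(A, list(d))

# An oscillatory "phase-velocity" vector is pulled toward a smooth model.
d = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
m = smooth_lsq(d, lam=10.0)
```

    With a large lam the result approaches the best-fitting linear trend (which the second-difference operator leaves unpenalized); ABIC would instead choose lam objectively by trading data misfit against roughness.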

  7. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    International Nuclear Information System (INIS)

    Qi Pei-Han; Zheng Shi-Lian; Yang Xiao-Niu; Zhao Zhi-Jin

    2016-01-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. (paper)
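    A minimal BBO loop for a stand-in version of the problem: minimize total transmit power over two channels subject to a sum-rate QoS constraint folded into the HSI as a penalty. The channel model, penalty weight, and all parameters are assumptions for illustration, not those of the paper.

```python
import math
import random

random.seed(1)

POP, DIM, GENS = 20, 2, 200
GAIN, RATE_MIN = 4.0, 2.0      # hypothetical channel gain and QoS rate target

def qos_rate(p):
    """Total achievable rate (bits/s/Hz) for per-channel powers p."""
    return sum(math.log2(1.0 + GAIN * pi) for pi in p)

def hsi(p):
    """Habitat suitability index, lower = better: total power consumption
    plus a heavy penalty when the QoS rate constraint is violated."""
    return sum(p) + 100.0 * max(0.0, RATE_MIN - qos_rate(p))

pop = [[random.uniform(0.0, 1.0) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=hsi)                                # rank habitats by HSI
    mu = [(POP - k) / POP for k in range(POP)]       # emigration: best share most
    lam = [1.0 - m for m in mu]                      # immigration: worst receive most
    newpop = [pop[0][:]]                             # elitism keeps the best habitat
    for k in range(1, POP):
        h = pop[k][:]
        for d in range(DIM):
            if random.random() < lam[k]:             # migrate this feature (SIV)
                r = random.uniform(0.0, sum(mu))     # source chosen with prob ~ mu
                acc = 0.0
                for j in range(POP):
                    acc += mu[j]
                    if r <= acc:
                        h[d] = pop[j][d]
                        break
            if random.random() < 0.05:               # mutation keeps diversity
                h[d] = random.uniform(0.0, 1.0)
        newpop.append(h)
    pop = newpop

best = min(pop, key=hsi)
```

    Migration shares features of high-HSI habitats with poor ones, which is the mechanism the record credits for BBO's fast early-stage progress; the penalty weight enforces the QoS constraints while power is minimized.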

  8. Fundamental relativistic rotator: Hessian singularity and the issue of the minimal interaction with electromagnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Bratek, Lukasz, E-mail: lukasz.bratek@ifj.edu.pl [Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Radzikowskego 152, PL-31342 Krakow (Poland)

    2011-05-13

    There are two relativistic rotators with Casimir invariants of the Poincare group being fixed parameters. The particular models of spinning particles were studied in the past both at the classical and quantum level. Recently, a minimal interaction with electromagnetic field has been considered. We show that the dynamical systems can be uniquely singled out from among other relativistic rotators by the unphysical requirement that the Hessian referring to the physical degrees of freedom should be singular. Closely related is the fact that the equations of free motion are not independent, making the evolution indeterminate. We show that the Hessian singularity cannot be removed by the minimal interaction with the electromagnetic field. By making use of a nontrivial Hessian null space, we show that a single constraint appears in the external field for consistency of the equations of motion with the Hessian singularity. The constraint imposes unphysical limitation on the initial conditions and admissible motions. We discuss the mechanism of appearance of unique solutions in external fields on an example of motion in the uniform magnetic field. We give a simple model to illustrate that similarly constrained evolution cannot be determinate in arbitrary fields.

  9. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John; /CERN /King's Coll. London; Mustafayev, Azar; /Minnesota U., Theor. Phys. Inst.; Olive, Keith A.; /Minnesota U., Theor. Phys. Inst. /Minnesota U. /Stanford U., Phys. Dept. /SLAC

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M{sub in}, above the GUT scale, M{sub GUT}. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino {chi} and the lighter stau {tilde {tau}}{sub 1} is sensitive to M{sub in}, as is the relationship between m{sub {chi}} and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m{sub 1/2}, m{sub 0}) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M{sub in}, as we illustrate for several cases with tan {beta} = 10 and 55. However, these features do not necessarily disappear at large M{sub in}, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  10. Constrained supersymmetric flipped SU(5) GUT phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [CERN, TH Division, PH Department, Geneva 23 (Switzerland); King's College London, Theoretical Physics and Cosmology Group, Department of Physics, London (United Kingdom); Mustafayev, Azar [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Olive, Keith A. [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States); Stanford University, Department of Physics and SLAC, Palo Alto, CA (United States)

    2011-07-15

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M{sub in}, above the GUT scale, M{sub GUT}. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino {chi} and the lighter stau {tau}{sub 1} is sensitive to M{sub in}, as is the relationship between m{sub {chi}} and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m{sub 1/2},m{sub 0}) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M{sub in}, as we illustrate for several cases with tan {beta}=10 and 55. However, these features do not necessarily disappear at large M{sub in}, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  11. Constrained supersymmetric flipped SU(5) GUT phenomenology

    International Nuclear Information System (INIS)

    Ellis, John; Mustafayev, Azar; Olive, Keith A.

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β=10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses. (orig.)

  12. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem, since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of non-dense stellar clusters.
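    The alternating strategy can be sketched in one dimension with Richardson-Lucy multiplicative updates, the classical descent steps for the Kullback-Leibler divergence under Poisson noise: update the object with the blur fixed, then the blur with the object fixed. This is an illustrative toy (circular convolution, noiseless data, no regularization or feasible-set projections beyond nonnegativity), not the authors' adaptive-optics method.

```python
import math

def cconv(a, b):
    """Circular convolution of two equal-length sequences."""
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

def kl_div(y, m):
    """Generalized Kullback-Leibler divergence D(y || m) for Poisson data."""
    s = 0.0
    for yi, mi in zip(y, m):
        s += mi - yi
        if yi > 0:
            s += yi * math.log(yi / mi)
    return s

def rl_update(x, h, y):
    """One Richardson-Lucy multiplicative step on x with h held fixed;
    each half-step does not increase the KL divergence."""
    n = len(x)
    m = cconv(x, h)
    ratio = [y[i] / m[i] for i in range(n)]
    hs = sum(h)
    return [x[j] * sum(h[(i - j) % n] * ratio[i] for i in range(n)) / hs
            for j in range(n)]

# Noiseless test scene: a sparse "star field" x blurred by a small kernel h.
x_true = [5.0, 0.0, 0.0, 8.0, 0.0, 0.0, 0.0, 0.0]
h_true = [0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
y = cconv(x_true, h_true)

x = [1.0] * 8                        # flat positive initial object
h = [1.0 / 8.0] * 8                  # flat initial blur
kl_start = kl_div(y, cconv(x, h))
for _ in range(30):                  # inexact alternating minimization
    x = rl_update(x, h, y)
    h = rl_update(h, x, y)           # same update with the roles swapped
kl_end = kl_div(y, cconv(x, h))
```

    The multiplicative form keeps both unknowns nonnegative automatically, and the KL divergence decreases monotonically across the alternating half-steps.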

  13. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system becomes challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. The deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. Results obtained from the SOS algorithm offer better solutions than Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Firefly Algorithm (FFA). Economic analysis is carried out to quantify the economic benefits of energy loss minimization over the life span of RDGs.

  14. Balancing computation and communication power in power constrained clusters

    Science.gov (United States)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    2018-05-29

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
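    A toy sketch of the predict-and-gate policy described in the record: a node that finishes early predicts its wait time and power-gates itself when the prediction exceeds a threshold, and the saved power is returned to a cluster budget that active nodes can draw on. The state powers and threshold are hypothetical; a real implementation would predict waits from runtime telemetry.

```python
ACTIVE_W, GATED_W = 100.0, 5.0   # per-node power in each state (watts), assumed
GATE_THRESHOLD_S = 2.0           # minimum predicted wait worth gating for

def node_power(predicted_wait_s):
    """Power drawn by a finished node while it waits for the others:
    gate only if the predicted wait exceeds the threshold."""
    return GATED_W if predicted_wait_s > GATE_THRESHOLD_S else ACTIVE_W

def reassignable_power(predicted_waits):
    """Budget freed by gated nodes, which a cluster agent could reassign
    to the still-active nodes to expedite workload processing."""
    return sum(ACTIVE_W - node_power(w) for w in predicted_waits)
```

    With three finished nodes predicting waits of 5 s, 1 s and 10 s, two of them gate and free 190 W for reassignment under these assumed figures.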

  15. Proposed minimal diagnostic criteria for myelodysplastic syndromes (MDS) and potential pre-MDS conditions.

    Science.gov (United States)

    Valent, Peter; Orazi, Attilio; Steensma, David P; Ebert, Benjamin L; Haase, Detlef; Malcovati, Luca; van de Loosdrecht, Arjan A; Haferlach, Torsten; Westers, Theresia M; Wells, Denise A; Giagounidis, Aristoteles; Loken, Michael; Orfao, Alberto; Lübbert, Michael; Ganser, Arnold; Hofmann, Wolf-Karsten; Ogata, Kiyoyuki; Schanz, Julie; Béné, Marie C; Hoermann, Gregor; Sperr, Wolfgang R; Sotlar, Karl; Bettelheim, Peter; Stauder, Reinhard; Pfeilstöcker, Michael; Horny, Hans-Peter; Germing, Ulrich; Greenberg, Peter; Bennett, John M

    2017-09-26

    Myelodysplastic syndromes (MDS) comprise a heterogeneous group of myeloid neoplasms characterized by peripheral cytopenia, dysplasia, and a variable clinical course with a risk of about 30% of transformation to secondary acute myeloid leukemia (AML). In the past 15 years, diagnostic evaluations, prognostication, and treatment of MDS have improved substantially. However, with the discovery of molecular markers and the advent of novel targeted therapies, new challenges have emerged in the complex field of MDS. For example, MDS-related molecular lesions may be detectable in healthy individuals and increase in prevalence with age. Other patients exhibit persistent cytopenia of unknown etiology without dysplasia. Although these conditions are potential pre-phases of MDS, they may also transform into other bone marrow neoplasms. Recently identified molecular, cytogenetic, and flow-based parameters may aid in the delineation and prognostication of these conditions. However, no generally accepted integrated classification and no related criteria are as yet available. In an attempt to address this challenge, an international consensus group discussed these issues in a working conference in July 2016. The outcomes of this conference are summarized in the present article, which includes criteria and a proposal for the classification of pre-MDS conditions as well as updated minimal diagnostic criteria of MDS. Moreover, we propose diagnostic standards to delineate between 'normal', pre-MDS, and MDS. These standards and criteria should facilitate diagnostic and prognostic evaluations in clinical studies as well as in clinical practice.

  16. Finding a minimally informative Dirichlet prior distribution using least squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  17. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  18. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    International Nuclear Information System (INIS)

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates the effect of incorrectly modeling the failure process of minimally repaired systems that operate under random environmental conditions on the costs of a periodic replacement maintenance policy. The motivation for this paper is a recently published paper in which a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the first failure time. This produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions for the expected cost and the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences caused by the incorrect modeling of the failure process can be.
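    The distinction matters because, under minimal repair, the expected number of repairs in (0, T] is the integrated intensity Λ(T), not a transform of the first-failure hazard alone. A sketch of the resulting cost optimization with an assumed power-law (Weibull-type) intensity and hypothetical costs, not the note's real application:

```python
ETA, BETA = 100.0, 2.5      # intensity lambda(t) = (BETA/ETA)*(t/ETA)**(BETA-1)
C_REPLACE, C_REPAIR = 50.0, 10.0   # hypothetical replacement and repair costs

def expected_failures(t):
    """Lambda(t): expected number of minimal repairs in (0, t]."""
    return (t / ETA) ** BETA

def cost_rate(t):
    """Expected cost per unit time for replacement period t."""
    return (C_REPLACE + C_REPAIR * expected_failures(t)) / t

# Setting d/dt cost_rate = 0 for this intensity gives the closed form
# T* = ETA * (C_REPLACE / (C_REPAIR * (BETA - 1)))**(1 / BETA).
t_opt = ETA * (C_REPLACE / (C_REPAIR * (BETA - 1))) ** (1.0 / BETA)
```

    Plugging a first-failure hazard in place of Λ'(t) here would change both the cost curve and its minimizer, which is the error the note corrects.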

  19. Experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO{sub 2} lasers under conditions of minimal roughness

    Energy Technology Data Exchange (ETDEWEB)

    Golyshev, A A; Malikov, A G; Orishich, A M; Shulyatyev, V B [S.A. Khristianovich Institute of Theoretical and Applied Mechanics, Siberian Branch, Russian Academy of Sciences, Novosibirsk (Russian Federation)

    2014-10-31

    The results of an experimental study of laser-oxygen cutting of low-carbon steel using fibre and CO{sub 2} lasers are generalised. The dependence of roughness of the cut surface on the cutting parameters is investigated, and the conditions under which the surface roughness is minimal are formulated. It is shown that for both types of lasers these conditions can be expressed in the same way in terms of the dimensionless variables – the Péclet number Pe and the output power Q of laser radiation per unit thickness of the cut sheet – and take the form of the similarity laws: Pe = const, Q = const. The optimal values of Pe and Q are found. We have derived empirical expressions that relate the laser power and cutting speed with the thickness of the cut sheet under the condition of minimal roughness in the case of cutting by means of radiation from fibre and CO{sub 2} lasers. (laser technologies)

  20. Restoration ecology: two-sex dynamics and cost minimization.

    Directory of Open Access Journals (Sweden)

    Ferenc Molnár

    We model spatially detailed two-sex population dynamics to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on the sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.

  1. Restoration ecology: two-sex dynamics and cost minimization.

    Science.gov (United States)

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model spatially detailed two-sex population dynamics to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on the sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.
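    The Allee-effect threshold behavior underlying the record above can be sketched with a single-species toy (sexes not distinguished, hypothetical parameters): a 1D reaction-diffusion system with a cubic bistable reaction term, stepped forward with explicit Euler on a ring. An introduced cluster far below the critical size collapses; a wide one persists and spreads.

```python
N, D, DT, STEPS = 100, 1.0, 0.2, 400   # ring size, diffusivity, time step, steps
A_THRESH = 0.3                         # Allee threshold: density below it declines

def reaction(u):
    """Cubic bistable term producing a strong Allee effect."""
    return u * (u - A_THRESH) * (1.0 - u)

def simulate(cluster_width):
    """Introduce a block of density 1 of the given width, then evolve."""
    u = [1.0 if abs(i - N // 2) <= cluster_width // 2 else 0.0 for i in range(N)]
    for _ in range(STEPS):
        u = [ui + DT * (D * (u[(i - 1) % N] - 2.0 * ui + u[(i + 1) % N])
                        + reaction(ui))
             for i, ui in enumerate(u)]
    return u

small = simulate(1)    # a single occupied site, far below critical size: dies out
large = simulate(30)   # well above critical size: interior persists near 1
```

    The DT = 0.2 step respects the explicit-scheme stability limit DT ≤ 1/(2D) for unit grid spacing; the critical cluster size itself, which the paper's cost minimization revolves around, lies between these two extremes.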

  2. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
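    One standard way to draw such constrained samples on a small lattice is a Hoffman-Ribak-style linear-constraint correction: draw an unconstrained realization, then add the covariance-weighted residual of the constraint. The sketch below conditions a stationary Gaussian field on a ring to pass through a prescribed "peak" value; the covariance kernel and all numbers are assumptions for illustration, and this is not the Monte Carlo algorithm of the record.

```python
import math
import random

random.seed(7)

N = 8
def cov(i, j):
    """Assumed stationary exponential covariance on a ring of N sites."""
    d = min(abs(i - j), N - abs(i - j))
    return math.exp(-d / 2.0)

C = [[cov(i, j) for j in range(N)] for i in range(N)]

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

CHOL = cholesky(C)

def sample_unconstrained():
    """Draw one zero-mean Gaussian field with covariance C."""
    z = [random.gauss(0.0, 1.0) for _ in range(N)]
    return [sum(CHOL[i][k] * z[k] for k in range(i + 1)) for i in range(N)]

def constrain(y, site, value):
    """Impose the single linear constraint f[site] = value on realization y
    by adding the covariance-weighted constraint residual."""
    resid = value - y[site]
    return [yi + C[i][site] / C[site][site] * resid for i, yi in enumerate(y)]

# A realization constrained to carry a rare high peak at site 3, as one
# would build initial conditions containing a protocluster.
peak_field = constrain(sample_unconstrained(), site=3, value=4.0)
```

    The correction shifts the field by the conditional mean of the constraint residual, so neighboring sites are raised in proportion to their covariance with the constrained site, exactly the "density field around peaks" structure the record studies.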

  3. Resource Constrained Planning of Multiple Projects with Separable Activities

    Science.gov (United States)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource-constrained planning problem for multiple projects with separable activities, namely planning the processing of the activities subject to resource availability with time windows. We propose a solution algorithm based on the branch-and-bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples; especially as the number of projects to be planned increases, the average computational time and the number of searched nodes are reduced.
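    A toy branch-and-bound in the spirit of the record above: sequence jobs on a single resource, each with a release time and duration, minimizing the completion time of all jobs. The bound adds all remaining work to the earliest time it could start, and branches whose bound cannot beat the incumbent are pruned. The data are hypothetical and the model is far simpler than the authors' multi-project formulation.

```python
import itertools

JOBS = [(0, 3), (2, 2), (5, 4), (1, 1)]   # (release time, duration)

def lower_bound(t, remaining):
    """Optimistic completion time: remaining work runs back-to-back
    from the earliest moment any remaining job could start."""
    start = max(t, min(JOBS[j][0] for j in remaining))
    return start + sum(JOBS[j][1] for j in remaining)

def branch_and_bound():
    best = [float("inf")]
    def rec(t, remaining):
        if not remaining:
            best[0] = min(best[0], t)
            return
        if lower_bound(t, remaining) >= best[0]:
            return                           # bounding operation: prune
        # Earliest-release ordering tends to find a good incumbent early,
        # analogous to seeding the search with a dispatching-rule solution.
        for j in sorted(remaining, key=lambda j: JOBS[j][0]):
            r, d = JOBS[j]
            rec(max(t, r) + d, remaining - {j})
    rec(0, frozenset(range(len(JOBS))))
    return best[0]

def brute_force():
    """Exhaustive check over all job orders, for comparison."""
    best = float("inf")
    for perm in itertools.permutations(range(len(JOBS))):
        t = 0
        for j in perm:
            r, d = JOBS[j]
            t = max(t, r) + d
        best = min(best, t)
    return best
```

    A tighter lower bound and an equivalence relation over partial schedules, as in the record, would shrink the search tree further without changing the optimum.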

  4. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  5. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
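
    A minimal sketch of the ODE-constrained-mixture idea (not the authors' code or pathway model): each mixture component's mean is the solution of a simple decay ODE, and the rates, mixture weight and noise level are fit jointly by maximum likelihood. The two-subpopulation data, rates and noise model are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical single-species decay ODE  dy/dt = -k*y, y(0) = 1, whose
# analytic solution y(t) = exp(-k*t) serves as the mechanistic component mean.
def ode_sol(t, k):
    return np.exp(-k * t)

# Simulated heterogeneous population: 70% "fast" cells (k = 1.0) and
# 30% "slow" cells (k = 0.2), measured with multiplicative noise.
t = np.repeat([0.5, 1.0, 2.0, 4.0], 200)
fast = rng.random(t.size) < 0.7
k_true = np.where(fast, 1.0, 0.2)
y = ode_sol(t, k_true) * np.exp(0.1 * rng.standard_normal(t.size))

# Negative log-likelihood of a two-component ODE-constrained mixture:
# each component mean is an ODE solution; shared log-normal noise.
def nll(theta):
    k1, k2, a, ls = theta
    w, sig = 1 / (1 + np.exp(-a)), np.exp(ls)   # keep w in (0,1), sig > 0
    def comp(k):
        return np.exp(-0.5 * ((np.log(y) + k * t) / sig) ** 2) / sig
    return -np.sum(np.log(w * comp(k1) + (1 - w) * comp(k2) + 1e-300))

fit = minimize(nll, x0=[0.5, 0.1, 0.0, np.log(0.2)], method="Nelder-Mead",
               options={"maxiter": 4000, "fatol": 1e-8, "xatol": 1e-8})
print(sorted(fit.x[:2]))   # estimated subpopulation rates, close to [0.2, 1.0]
```

    The recovered rates and mixture weight correspond to the "subpopulation characteristics" of the abstract; adding a second experimental condition would simply extend the likelihood with a second data set sharing the kinetic parameters.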

  6. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    A new parallel variable distribution algorithm based on an interior-point SSLE algorithm is proposed for solving inequality constrained optimization problems whose constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of the algorithm needs to solve only three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
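
    The computational point, three linear systems sharing one coefficient matrix, can be illustrated with a factor-once, solve-thrice pattern (a generic toy matrix, not the paper's SSLE matrices):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor once, then reuse the factorization for several right-hand sides,
# as in "three systems of linear equations with the same coefficient matrix".
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lu, piv = lu_factor(A)                         # one O(n^3) factorization
rhs = np.eye(3)                                # three right-hand sides
sols = [lu_solve((lu, piv), b) for b in rhs]   # three cheap O(n^2) solves
# Here the three solutions assemble the inverse of A.
print(np.allclose(A @ np.column_stack(sols), np.eye(3)))   # -> True
```

    In the SSLE setting the right-hand sides differ between the three systems while the coefficient matrix is fixed per iteration, which is exactly what makes this reuse worthwhile.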

  7. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi

    2011-09-16

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
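
    For the classical linear special case, the discrepancy principle mentioned in the abstract can be sketched as follows: solve the Tikhonov normal equations and bisect on the regularization parameter until the residual matches the noise level. The Hilbert-matrix forward operator, noise level, and τ value are illustrative assumptions; the paper itself treats the harder nonlinear constrained case.

```python
import numpy as np

# Ill-conditioned linear forward operator (a Hilbert matrix) as a stand-in
# for a linearized parameter identification problem.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(2)
delta = 1e-4                              # noise level per component
y = A @ x_true + delta * rng.standard_normal(n)

def tikhonov(alpha):
    # x_alpha = argmin ||A x - y||^2 + alpha ||x||^2
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def residual(alpha):
    return np.linalg.norm(A @ tikhonov(alpha) - y)

# Discrepancy principle: pick alpha with ||A x_alpha - y|| ~ tau * delta.
tau = 2.0
noise = tau * delta * np.sqrt(n)
lo, hi = 1e-14, 1.0
for _ in range(200):          # bisect on log(alpha); residual grows with alpha
    mid = np.sqrt(lo * hi)
    if residual(mid) > noise:
        hi = mid
    else:
        lo = mid
alpha = np.sqrt(lo * hi)
x_rec = tikhonov(alpha)
print(alpha, np.linalg.norm(x_rec - x_true))
```

    The monotonicity of the residual in α is what makes this one-dimensional search well posed; a posteriori rules such as the balancing principle replace the stopping criterion but keep the same structure.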

  8. Abelian groups with a minimal generating set | Ruzicka ...

    African Journals Online (AJOL)

    We study the existence of minimal generating sets in Abelian groups. We prove that Abelian groups with minimal generating sets are not closed under quotients, nor under subgroups, nor under infinite products. We give necessary and sufficient conditions for existence of a minimal generating set providing that the Abelian ...

  9. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Nunez, Dario; Zavala, Jesus; Nellen, Lukas; Sussman, Roberto A [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (ICN-UNAM), AP 70-543, Mexico 04510 DF (Mexico); Cabral-Rosetti, Luis G [Departamento de Posgrado, Centro Interdisciplinario de Investigacion y Docencia en Educacion Tecnica (CIIDET), Avenida Universidad 282 Pte., Col. Centro, Apartado Postal 752, C. P. 76000, Santiago de Queretaro, Qro. (Mexico); Mondragon, Myriam, E-mail: nunez@nucleares.unam.mx, E-mail: jzavala@nucleares.unam.mx, E-mail: jzavala@shao.ac.cn, E-mail: lukas@nucleares.unam.mx, E-mail: sussman@nucleares.unam.mx, E-mail: lgcabral@ciidet.edu.mx, E-mail: myriam@fisica.unam.mx [Instituto de Fisica, Universidad Nacional Autonoma de Mexico (IF-UNAM), Apartado Postal 20-364, 01000 Mexico DF (Mexico); Collaboration: For the Instituto Avanzado de Cosmologia, IAC

    2008-05-15

    We derive an expression for the entropy of a dark matter halo described using a Navarro-Frenk-White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tanβ, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tanβ are not favored; only for tanβ ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.

  10. Constraining the mSUGRA (minimal supergravity) parameter space using the entropy of dark matter halos

    International Nuclear Information System (INIS)

    Núñez, Darío; Zavala, Jesús; Nellen, Lukas; Sussman, Roberto A; Cabral-Rosetti, Luis G; Mondragón, Myriam

    2008-01-01

    We derive an expression for the entropy of a dark matter halo described using a Navarro–Frenk–White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tanβ, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tanβ are not favored; only for tanβ ≃ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV

  11. Minimalism and Speakers’ Intuitions

    Directory of Open Access Journals (Sweden)

    Matías Gariazzo

    2011-08-01

    Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.

  12. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problem, we examine, via a design game, the creative communication on an open-ended task in a highly constrained setting. Via an experiment, the relation between communicative constraints and participants’ perception of dialogue and creativity is examined. Four batches of the game, with students preparing to form semester project groups, were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except...

  13. Operant Conditioning: A Minimal Components Requirement in Artificial Spiking Neurons Designed for Bio-Inspired Robot’s Controller

    Directory of Open Access Journals (Sweden)

    André eCyr

    2014-07-01

    We demonstrate the operant conditioning (OC) learning process within a basic bio-inspired robot controller paradigm, using an artificial spiking neural network (ASNN) with minimal component count as an artificial brain. In biological agents, OC results in behavioral changes that are learned from the consequences of previous actions, using progressive prediction adjustment triggered by reinforcers. In a robotics context, virtual and physical robots may benefit from a similar learning skill when facing unknown environments with no supervision. In this work, we demonstrate that a simple ASNN can efficiently realise many OC scenarios. The elementary learning kernel that we describe relies on a few critical neurons, synaptic links and the integration of habituation and spike-timing dependent plasticity (STDP) as learning rules. Using four tasks of incremental complexity, our experimental results show that such a minimal neural component set may be sufficient to implement many OC procedures. Hence, with the described bio-inspired module, OC can be implemented in a wide range of robot controllers, including those with limited computational resources.
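
    The STDP learning rule named in the abstract can be sketched with a standard pair-based weight update (a generic textbook form, not the paper's controller; habituation and the reward gating are omitted, and all constants are made up):

```python
import numpy as np

# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic one (dt >= 0), depress otherwise, with exponential windows.
A_plus, A_minus = 0.05, 0.055
tau_plus = tau_minus = 20.0   # ms

def stdp_dw(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

# A toy "operant" pairing: the action neuron fires and the reinforced
# outcome neuron fires 5 ms later, so their synapse is strengthened.
w = 0.5
for _ in range(10):                       # ten reinforced trials
    w = float(np.clip(w + stdp_dw(5.0), 0.0, 1.0))
print(w)   # > 0.5: the reinforced pathway is potentiated
```

    In the paper's setting such a potentiated synapse is what biases the robot toward repeating actions that were followed by a reinforcer.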

  14. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    CERN Document Server

    Ellis, John; Olive, Keith A

    2011-01-01

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, $M_{in}$, above the GUT scale, $M_{GUT}$. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino and the lighter stau is sensitive to $M_{in}$, as is the relationship between the neutralino mass and the masses of the heavier Higgs bosons. For these reasons, prominent features in generic $(m_{1/2}, m_0)$ planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to $M_{in}$, as we illustrate for several cases with tan(beta)...

  15. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells.

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-08-09

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs.

  16. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    KAUST Repository

    Cotter, Simon L.

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems. © 2011 American Institute of Physics.
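
    The SSA that underlies the CSSA/CMA can be sketched with Gillespie's direct method on a toy fast/slow network (reactions and rates are made up; the constrained step of the paper would resample only the fast pair while holding the slow variable fixed):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy fast/slow network: fast interconversion A <-> B (rates k1, k2)
# and a slow decay B -> 0 (rate k3).
k1, k2, k3 = 100.0, 100.0, 0.1

def ssa(a, b, t_end):
    """Gillespie direct method: exponential waiting times, propensity-weighted picks."""
    t = 0.0
    while True:
        props = np.array([k1 * a, k2 * b, k3 * b])
        total = props.sum()
        if total == 0.0:
            return a, b
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return a, b
        r = rng.choice(3, p=props / total)
        if r == 0:
            a, b = a - 1, b + 1    # A -> B   (fast)
        elif r == 1:
            a, b = a + 1, b - 1    # B -> A   (fast)
        else:
            b = b - 1              # B -> 0   (slow)

a, b = ssa(50, 50, 1.0)
print(a, b)   # fast reactions equilibrate A and B; the slow decay removes a few
```

    The multiscale gain comes from replacing the many fast events with a conditional equilibrium sample, so that only the slow reaction is simulated explicitly.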

  17. Minimal mirror twin Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Barbieri, Riccardo [Institute of Theoretical Studies, ETH Zurich,CH-8092 Zurich (Switzerland); Scuola Normale Superiore,Piazza dei Cavalieri 7, 56126 Pisa (Italy); Hall, Lawrence J.; Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States)

    2016-11-29

    In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z₂ parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z₂ breaking, can generate the Z₂ breaking in the Higgs sector necessary for the Twin Higgs mechanism. The theory has constrained and correlated signals in Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments, over a region of parameter space where the fine-tuning for the electroweak scale is 10-50%. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z₂ breaking from the vacuum expectation values of B−L breaking fields are also discussed.

  18. Optimal Power Constrained Distributed Detection over a Noisy Multiaccess Channel

    Directory of Open Access Journals (Sweden)

    Zhiwen Hu

    2015-01-01

    The problem of optimal power constrained distributed detection over a noisy multiaccess channel (MAC) is addressed. Under local power constraints, we define the transformation function for each sensor to realize the mapping from its local decision to the transmitted waveform. Deflection coefficient maximization (DCM) is used to optimize the performance of the power constrained fusion system. Using optimality conditions, we derive the closed-form solution to the considered problem. Monte Carlo simulations are carried out to evaluate the performance of the proposed method. Simulation results show that the proposed method significantly improves the detection performance of fusion systems with low signal-to-noise ratio (SNR). We also show that the proposed method has robust detection performance over a broad SNR region.
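
    The deflection coefficient that DCM maximizes can be sketched for a hypothetical fusion statistic Y = Σ g_k u_k + channel noise, with ±1 local decisions; the gains, local detection/false-alarm probabilities, and noise variance below are made up:

```python
import numpy as np

# Deflection coefficient D = (E[Y|H1] - E[Y|H0])^2 / Var(Y|H0);
# larger D indicates better detectability of the fused statistic.
def deflection(g, pd, pf, noise_var):
    # For u_k in {-1,+1}: E[u_k|H1] = 2*pd_k - 1, E[u_k|H0] = 2*pf_k - 1.
    mean1 = np.sum(g * (2 * pd - 1))
    mean0 = np.sum(g * (2 * pf - 1))
    var0 = np.sum(g**2 * (1 - (2 * pf - 1) ** 2)) + noise_var
    return (mean1 - mean0) ** 2 / var0

g = np.array([1.0, 1.0, 1.0])       # per-sensor amplitude gains
pd = np.array([0.9, 0.8, 0.7])      # local detection probabilities
pf = np.array([0.1, 0.1, 0.1])      # local false-alarm probabilities
print(deflection(g, pd, pf, noise_var=1.0))
```

    DCM then chooses the gains g, subject to the power constraint Σ g_k² ≤ P, to maximize this ratio.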

  19. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ⁰₁, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m_{χ⁰₁} 0) but the scalar mass m₀ is poorly constrained. In the wino-LSP case, m_{3/2} is constrained to about 900 TeV and m_{χ⁰₁} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m_{3/2} has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m_{χ⁰₁} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g−2)_μ, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ⁰₁ contributes only a fraction of the cold DM density, future LHC E_T-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B_{s,d} → μ⁺μ⁻) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  20. Precision measurements, dark matter direct detection and LHC Higgs searches in a constrained NMSSM

    International Nuclear Information System (INIS)

    Bélanger, G.; Hugonie, C.; Pukhov, A.

    2009-01-01

    We reexamine the constrained version of the Next-to-Minimal Supersymmetric Standard Model with semi-universal parameters at the GUT scale (CNMSSM). We include constraints from collider searches for Higgs and SUSY particles, the upper bound on the relic density of dark matter, measurements of the muon anomalous magnetic moment and of B-physics observables, as well as direct searches for dark matter. We then study the prospects for direct detection of dark matter in large scale detectors and comment on the prospects for discovery of heavy Higgs states at the LHC

  1. Effective theory of flavor for Minimal Mirror Twin Higgs

    Science.gov (United States)

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-01

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ε^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ε′^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ε′/ε, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ε′/ε, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  2. Constrained choices? Linking employees' and spouses' work time to health behaviors.

    Science.gov (United States)

    Fan, Wen; Lam, Jack; Moen, Phyllis; Kelly, Erin; King, Rosalind; McHale, Susan

    2015-02-01

    There are extensive literatures on work conditions and health and on family contexts and health, but less research asking how a spouse's or partner's work conditions may affect health behaviors. Drawing on the constrained choices framework, we theorized health behaviors as a product of one's own and one's spouse's work time as well as gender expectations. We examined fast food consumption and exercise behaviors using survey data from 429 employees in an Information Technology (IT) division of a U.S. Fortune 500 firm and from their spouses. We found fast food consumption is affected by men's work hours, both male employees' own work hours and the hours worked by husbands of women respondents, in a nonlinear way. The groups most likely to eat fast food are men working 50 h/week and women whose husbands work 45-50 h/week. Second, exercise is better explained if work time is conceptualized at the couple, rather than individual, level. In particular, neo-traditional arrangements (where husbands work longer than their wives) constrain women's ability to engage in exercise but increase the odds of men exercising. Women in couples where both partners work long hours have the highest odds of exercise. In addition, women working long hours with high schedule control are more apt to exercise, as are men working long hours whose wives have high schedule flexibility. Our findings suggest different health behaviors may have distinct antecedents, but gendered work-family expectations shape time allocations in ways that promote men's and constrain women's health behaviors. They also suggest the need to expand the constrained choices framework to recognize that long hours may encourage exercise if both partners are looking to sustain long work hours, and that work resources, specifically schedule control, of one partner may expand the choices of the other. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  4. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey’s updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
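
    Jeffrey's updating principle, and the sense in which PME agrees with it in Wagner's special case, can be illustrated on a toy joint distribution (the numbers are made up):

```python
import numpy as np

# Joint over (A, E) with partition E in {e1, e2}; rows: A true/false.
P = np.array([[0.30, 0.10],    # A = true
              [0.20, 0.40]])   # A = false

# Jeffrey's updating principle: new evidence fixes the E-marginal at
# q = (0.7, 0.3) while the conditionals P(A|e_i) are kept fixed.
q = np.array([0.7, 0.3])
cond = P / P.sum(axis=0, keepdims=True)   # P(A|e_i), column-normalized
P_jeffrey = cond * q                      # updated joint

# PME view: among all joints with E-marginal q, the one closest to P in
# KL-divergence is exactly this Jeffrey update (Wagner's special case).
print(P_jeffrey.sum(axis=0))   # E-marginal is [0.7, 0.3] as required
print(P_jeffrey[0].sum())      # updated P'(A) -> 0.48
```

    The generalization in the abstract concerns cases where the constraints are conditional rather than marginal probabilities; this sketch shows only the marginal case the two principles agree on.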

  5. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Directory of Open Access Journals (Sweden)

    Min Sun

    2014-01-01

    A matrix-free method for constrained equations is proposed, which combines the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method with the famous hyperplane projection method. The new method is not only derivative-free, but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with the existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached to the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
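
    The hyperplane-projection machinery can be sketched on a small monotone system F(x) = 0 with x ≥ 0 (a Solodov-Svaiter-style scheme; for simplicity the search direction here is plain -F(x) rather than the paper's PRP-based direction, and the relaxation factor is omitted):

```python
import numpy as np

def F(x):
    return x + np.sin(x)          # monotone map; unique root at x = 0

def proj(x):                      # projection onto the feasible set x >= 0
    return np.maximum(x, 0.0)

x = np.full(5, 3.0)
sigma, beta, rho = 1e-4, 1.0, 0.5
for _ in range(500):
    d = -F(x)                     # derivative- and matrix-free direction
    if np.linalg.norm(d) < 1e-10:
        break
    t = beta                      # derivative-free Armijo-type line search
    while -F(x + t * d) @ d < sigma * t * np.linalg.norm(d) ** 2:
        t *= rho
    z = x + t * d
    Fz = F(z)
    # Project x onto the hyperplane {y : Fz.(y - z) = 0}, which separates
    # x from the solution set, then back onto the feasible set x >= 0.
    x = proj(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
print(x, np.linalg.norm(F(x)))    # -> near zero
```

    Monotonicity of F guarantees the line search terminates and that each hyperplane projection moves the iterate closer to the solution set, which is what drives the global convergence result.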

  6. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  7. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method, utilizing Second Order Cone Programming (SOCP) with a maximum correction weight constraint, a maximum residual response constraint, and a weight splitting constraint, has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.
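
    The least squares versus min-max contrast can be sketched with the influence-coefficient model v = v0 + A w, using real-valued data for simplicity (measured rotor vibrations are complex phasors, which is why the paper needs SOCP rather than this plain LP; the matrix and readings are made up, and the paper's extra constraints are omitted):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_s, m = 8, 3                         # sensors x balancing planes
A = rng.standard_normal((n_s, m))     # influence coefficients (hypothetical)
v0 = 5.0 * rng.standard_normal(n_s)   # initial vibration readings

# Least squares: minimize the sum of squared residual vibrations.
w_ls, *_ = np.linalg.lstsq(A, -v0, rcond=None)
r_ls = v0 + A @ w_ls

# Min-max: minimize the worst residual, cast as an LP in (w, t)
# with constraints |v0 + A w|_i <= t for every sensor i.
c = np.r_[np.zeros(m), 1.0]                     # objective: t
ones = np.ones((n_s, 1))
A_ub = np.block([[A, -ones], [-A, -ones]])
b_ub = np.r_[-v0, v0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)])
w_mm = res.x[:m]
r_mm = v0 + A @ w_mm

print(np.abs(r_ls).max(), np.abs(r_mm).max())  # min-max peak never exceeds LS peak
```

    This is exactly the guarantee the abstract highlights: the min-max solution bounds the worst residual vibration, which least squares cannot promise.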

  8. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  9. Stochastic risk-averse coordinated scheduling of grid integrated energy storage units in transmission constrained wind-thermal systems within a conditional value-at-risk framework

    International Nuclear Information System (INIS)

    Hemmati, Reza; Saboori, Hedayat; Saboori, Saeid

    2016-01-01

    In recent decades, wind power resources have been increasingly integrated into power systems. Despite confirmed benefits, utilization of a large share of this volatile source in the power generation portfolio has faced system operators with new challenges in terms of uncertainty management. It is proved that energy storage systems are capable of handling the projected uncertainty concerns. Risk-neutral methods have been proposed in the previous literature to schedule storage units considering wind resource uncertainty. Ignoring the risk of cost distributions with non-desirable properties may result in experiencing high costs in some unfavorable scenarios with high probability. In order to control the risk of the operator's decisions, this paper proposes a new risk-constrained two-stage stochastic programming model to make optimal decisions on energy storage and thermal units in a transmission-constrained hybrid wind-thermal power system. The risk-aversion procedure is explicitly formulated using the conditional value-at-risk measure, because it possesses distinguished features compared to other risk measures. The proposed model is a mixed integer linear programming model considering transmission network, thermal unit dynamics, and storage device constraints. The simulation results demonstrate that taking the risk of the problem into account will affect scheduling decisions considerably, depending on the level of risk-aversion. - Highlights: • Risk of the operation decisions is handled by using risk-averse programming. • Conditional value-at-risk is used as risk measure. • Optimal risk level is obtained based on the cost/benefit analysis. • The proposed model is a two-stage stochastic mixed integer linear programming. • The unit commitment is integrated with ESSs and wind power penetration.
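The conditional value-at-risk measure used in the record has a simple discrete-scenario form: VaR is the alpha-quantile of cost, and CVaR is the expected cost conditional on landing in the worst (1 - alpha) tail. A minimal sketch with made-up scenario costs (not from the paper):

```python
import numpy as np

# Discrete-scenario CVaR. For confidence level alpha, CVaR is the expected
# cost in the worst (1 - alpha) probability tail, so CVaR >= VaR always.
def cvar(costs, probs, alpha):
    c = np.asarray(costs, dtype=float)
    p = np.asarray(probs, dtype=float)
    order = np.argsort(c)
    c, p = c[order], p[order]
    var = c[np.searchsorted(np.cumsum(p), alpha)]       # value-at-risk
    return var + (np.maximum(c - var, 0.0) @ p) / (1.0 - alpha)

# Ten equiprobable cost scenarios (e.g. from a wind-output scenario tree)
costs = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
probs = [0.1] * 10
print(cvar(costs, probs, alpha=0.9))   # 100.0: mean of the worst 10%
print(cvar(costs, probs, alpha=0.5))   # 80.0:  mean of the worst 50%
```

In the paper's risk-averse program this quantity enters the objective, so that decisions trade expected cost against the cost of the unfavorable tail scenarios.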

  10. Higgs decays to dark matter: Beyond the minimal model

    International Nuclear Information System (INIS)

    Pospelov, Maxim; Ritz, Adam

    2011-01-01

    We examine the interplay between Higgs mediation of dark-matter annihilation and scattering on one hand and the invisible Higgs decay width on the other, in a generic class of models utilizing the Higgs portal. We find that, while the invisible width of the Higgs to dark matter is now constrained for a minimal singlet scalar dark matter particle by experiments such as XENON100, this conclusion is not robust within more generic examples of Higgs mediation. We present a survey of simple dark matter scenarios with m_DM < m_h/2 and Higgs portal mediation, where direct-detection signatures are suppressed, while the Higgs width is still dominated by decays to dark matter.

  11. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-01-01

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs. PMID:28773795

  12. Initial conditions for cosmological perturbations

    Science.gov (United States)

    Ashtekar, Abhay; Gupt, Brajesh

    2017-02-01

    Penrose proposed that the big bang singularity should be constrained by requiring that the Weyl curvature vanishes there. The idea behind this past hypothesis is attractive because it constrains the initial conditions for the universe in geometric terms and is not confined to a specific early universe paradigm. However, the precise statement of Penrose’s hypothesis is tied to classical space-times and furthermore restricts only the gravitational degrees of freedom. These are encapsulated only in the tensor modes of the commonly used cosmological perturbation theory. Drawing inspiration from the underlying idea, we propose a quantum generalization of Penrose’s hypothesis using the Planck regime in place of the big bang, and simultaneously incorporating tensor as well as scalar modes. Initial conditions selected by this generalization constrain the universe to be as homogeneous and isotropic in the Planck regime as permitted by the Heisenberg uncertainty relations.

  13. Initial conditions for cosmological perturbations

    International Nuclear Information System (INIS)

    Ashtekar, Abhay; Gupt, Brajesh

    2017-01-01

    Penrose proposed that the big bang singularity should be constrained by requiring that the Weyl curvature vanishes there. The idea behind this past hypothesis is attractive because it constrains the initial conditions for the universe in geometric terms and is not confined to a specific early universe paradigm. However, the precise statement of Penrose’s hypothesis is tied to classical space-times and furthermore restricts only the gravitational degrees of freedom. These are encapsulated only in the tensor modes of the commonly used cosmological perturbation theory. Drawing inspiration from the underlying idea, we propose a quantum generalization of Penrose’s hypothesis using the Planck regime in place of the big bang, and simultaneously incorporating tensor as well as scalar modes. Initial conditions selected by this generalization constrain the universe to be as homogeneous and isotropic in the Planck regime as permitted by the Heisenberg uncertainty relations. (paper)

  14. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
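The step the record addresses, replacing the worst live point with a draw from the prior constrained to higher likelihood, can be seen in a minimal nested-sampling sketch. Here plain rejection sampling stands in for constrained Hamiltonian Monte Carlo (which performs the same step efficiently in high dimensions), and the one-dimensional problem and all tuning numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def loglike(theta):  # narrow Gaussian likelihood on a uniform [0, 1] prior
    return -0.5 * ((theta - 0.5) / 0.05) ** 2 - np.log(0.05 * np.sqrt(2 * np.pi))

n_live, n_iter = 200, 1000
live = rng.uniform(size=n_live)
live_ll = loglike(live)
log_z, log_x = -np.inf, 0.0        # running evidence and prior-volume trackers
for i in range(n_iter):
    worst = int(np.argmin(live_ll))
    log_x_new = -(i + 1) / n_live                       # E[ln X_i] = -i / N
    log_w = np.log(np.exp(log_x) - np.exp(log_x_new))   # shell width
    log_z = np.logaddexp(log_z, log_w + live_ll[worst])
    log_x = log_x_new
    # replace the worst point: sample the prior *constrained* to L > L_worst
    # (this is the heavily constrained distribution the record refers to)
    while True:
        cand = rng.uniform()
        if loglike(cand) > live_ll[worst]:
            live[worst], live_ll[worst] = cand, loglike(cand)
            break

# add the remaining live points' contribution to the evidence
for ll in live_ll:
    log_z = np.logaddexp(log_z, log_x - np.log(n_live) + ll)
evidence = np.exp(log_z)   # true value is ~1 here (the Gaussian mass lies in [0, 1])
```

As the constrained region shrinks, naive rejection wastes ever more draws, which is precisely the inefficiency constrained HMC is meant to remove.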

  15. Mental skills training effectively minimizes operative performance deterioration under stressful conditions: Results of a randomized controlled study.

    Science.gov (United States)

    Anton, N E; Beane, J; Yurco, A M; Howley, L D; Bean, E; Myers, E M; Stefanidis, D

    2018-02-01

    Stress can negatively impact surgical performance, but mental skills may help. We hypothesized that a comprehensive mental skills curriculum (MSC) would minimize resident performance deterioration under stress. Twenty-four residents were stratified then randomized to receive mental skills and FLS training (MSC group), or only FLS training (control group). Laparoscopic suturing skill was assessed on a live porcine model with and without external stressors. Outcomes were compared with t-tests. Twenty-three residents completed the study. The groups were similar at baseline. There were no differences in suturing at posttest or transfer test under normal conditions. Both groups experienced significantly decreased performance when stress was applied, but the MSC group significantly outperformed controls under stress. This MSC enabled residents to perform significantly better than controls in the simulated OR under unexpected stressful conditions. These findings support the use of psychological skills as an integral part of surgical resident training. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with competitive performance and less computation time compared with state-of-the-art methods, consisting of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering using constraints as background knowledge, although easy to implement and quick, has insufficient performance compared with metric learning-based methods. Since it simply adds a function into the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive to those of state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. Experimental evaluation demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner of boosting.
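The constraint-violation check that the record describes as "a function added into the data assignment process" can be sketched in a COP-style constrained k-means (a generic sketch of the baseline technique, not the authors' boosted method; the data, constraints, and fallback rule are invented for illustration).

```python
import numpy as np

# Would assigning point `idx` to `cluster` violate any must-link or
# cannot-link constraint with points already assigned this pass?
def violates(idx, cluster, assign, must, cannot):
    for a, b in must:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign.get(other, cluster) != cluster:
            return True
    for a, b in cannot:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign.get(other) == cluster:
            return True
    return False

def constrained_kmeans(X, k, must, cannot, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        assign = {}
        for i in range(len(X)):
            d = np.linalg.norm(X[i] - centers, axis=1)
            for c in np.argsort(d):          # nearest feasible cluster
                if not violates(i, c, assign, must, cannot):
                    assign[i] = c
                    break
            else:
                assign[i] = int(np.argmin(d))  # relaxed fallback
        labels = np.array([assign[i] for i in range(len(X))])
        for c in range(k):                   # standard centroid update
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Two tight pairs of 2-D points; the cannot-link forces them apart.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
labels = constrained_kmeans(X, 2, must=[(0, 1), (2, 3)], cannot=[(0, 2)])
```

This also shows why the plain method "often exploits only a small number of constraints": each constraint only acts locally during assignment, unlike the metric-learning view the article develops.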

  17. The Use of Trust Regions in Kohn-Sham Total Energy Minimization

    International Nuclear Information System (INIS)

    Yang, Chao; Meza, Juan C.; Wang, Lin-wang

    2006-01-01

    The Self Consistent Field (SCF) iteration, widely used for computing the ground state energy and the corresponding single particle wave functions associated with a many-electron atomistic system, is viewed in this paper as an optimization procedure that minimizes the Kohn-Sham total energy indirectly by minimizing a sequence of quadratic surrogate functions. We point out the similarity and difference between the total energy and the surrogate, and show how the SCF iteration can fail when the minimizer of the surrogate produces an increase in the KS total energy. A trust region technique is introduced as a way to restrict the update of the wave functions within a small neighborhood of an approximate solution at which the gradient of the total energy agrees with that of the surrogate. The use of trust regions in SCF is not new. However, it has been observed that directly applying a trust region based SCF (TRSCF) to the Kohn-Sham total energy often leads to slow convergence. We propose to use TRSCF within a direct constrained minimization (DCM) algorithm we developed in dcm. The key ingredients of the DCM algorithm involve projecting the total energy function into a sequence of subspaces of small dimensions and seeking the minimizer of the total energy function within each subspace. The minimizer of a subspace energy function, which is computed by TRSCF, not only provides a search direction along which the KS total energy function decreases but also gives an optimal 'step-length' that yields a sufficient decrease in total energy. A numerical example is provided to demonstrate that the combination of TRSCF and DCM is more efficient than SCF.
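The surrogate-plus-trust-region interplay described above can be illustrated with a generic trust-region loop on a toy quadratic objective (a sketch of the general technique only, not the Kohn-Sham implementation; the objective, Hessian, step rule, and thresholds are all invented for illustration).

```python
import numpy as np

def f(x):  # toy objective standing in for the KS total energy
    return (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 2.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 10.0 * (x[1] + 2.0)])

H = np.diag([2.0, 10.0])          # Hessian of the quadratic surrogate

x, delta = np.array([5.0, 5.0]), 1.0
for _ in range(50):
    g = grad(x)
    # Cauchy-point step: minimize the surrogate along -g, clipped to radius
    t = min(g @ g / (g @ H @ g), delta / np.linalg.norm(g))
    step = -t * g
    pred = -(g @ step + 0.5 * step @ H @ step)   # surrogate's predicted drop
    actual = f(x) - f(x + step)                  # true objective's drop
    rho = actual / pred if pred > 0 else 0.0
    if rho > 0.25:                # surrogate trustworthy: accept the step
        x = x + step
        if rho > 0.75:
            delta *= 2.0          # very trustworthy: enlarge the region
    else:
        delta *= 0.5              # surrogate misled us: shrink the region

print(x)   # converges to the minimizer (1, -2)
```

The failure mode the paper fixes corresponds to rho going negative, i.e. the surrogate's minimizer increasing the true energy; the trust region then forces a smaller, safer update.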

  18. Antifungal susceptibility testing method for resource constrained laboratories

    Directory of Open Access Journals (Sweden)

    Khan S

    2006-01-01

    Full Text Available Purpose: In resource-constrained laboratories of developing countries determination of antifungal susceptibility testing by NCCLS/CLSI method is not always feasible. We describe herein a simple yet comparable method for antifungal susceptibility testing. Methods: Reference MICs of 72 fungal isolates including two quality control strains were determined by NCCLS/CLSI methods against fluconazole, itraconazole, voriconazole, amphotericin B and cancidas. Dermatophytes were also tested against terbinafine. Subsequently, on selection of optimum conditions, MIC was determined for all the fungal isolates by semisolid antifungal agar susceptibility method in Brain heart infusion broth supplemented with 0.5% agar (BHIA) without oil overlay and results were compared with those obtained by reference NCCLS/CLSI methods. Results: Comparable results were obtained by NCCLS/CLSI and semisolid agar susceptibility (SAAS) methods against quality control strains. MICs for 72 isolates did not differ by more than one dilution for all drugs by SAAS. Conclusions: SAAS using BHIA without oil overlay provides a simple and reproducible method for obtaining MICs against yeast, filamentous fungi and dermatophytes in resource-constrained laboratories.

  19. The re-emergence of the minimal running shoe.

    Science.gov (United States)

    Davis, Irene S

    2014-10-01

    The running shoe has gone through significant changes since its inception. The purpose of this paper is to review these changes, the majority of which have occurred over the past 50 years. Running footwear began as very minimal, then evolved to become highly cushioned and supportive. However, over the past 5 years, there has been a reversal of this trend, with runners seeking more minimal shoes that allow their feet more natural motion. This abrupt shift toward footwear without cushioning and support has led to reports of injuries associated with minimal footwear. In response to this, the running footwear industry shifted again toward the development of lightweight, partial minimal shoes that offer some support and cushioning. In this paper, studies comparing the mechanics between running in minimal, partial minimal, and traditional shoes are reviewed. The implications for injuries in all 3 conditions are examined. The use of minimal footwear in other populations besides runners is discussed. Finally, areas for future research into minimal footwear are suggested.

  20. Optimal experiment design for quantum state tomography: Fair, precise, and minimal tomography

    International Nuclear Information System (INIS)

    Nunn, J.; Smith, B. J.; Puentes, G.; Walmsley, I. A.; Lundeen, J. S.

    2010-01-01

    Given an experimental setup and a fixed number of measurements, how should one take data to optimally reconstruct the state of a quantum system? The problem of optimal experiment design (OED) for quantum state tomography was first broached by Kosut et al.[R. Kosut, I. Walmsley, and H. Rabitz, e-print arXiv:quant-ph/0411093 (2004)]. Here we provide efficient numerical algorithms for finding the optimal design, and analytic results for the case of 'minimal tomography'. We also introduce the average OED, which is independent of the state to be reconstructed, and the optimal design for tomography (ODT), which minimizes tomographic bias. Monte Carlo simulations confirm the utility of our results for qubits. Finally, we adapt our approach to deal with constrained techniques such as maximum-likelihood estimation. We find that these are less amenable to optimization than cruder reconstruction methods, such as linear inversion.
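The "cruder" linear-inversion reconstruction the record contrasts with constrained maximum-likelihood estimation is easy to sketch for a single qubit: the density matrix follows directly from Pauli expectation values, and noisy data can produce a nonphysical (negative-eigenvalue) estimate, which is exactly why constrained techniques exist. The numerical expectation values below are invented for illustration.

```python
import numpy as np

# Single-qubit linear-inversion tomography: rho = (I + r . sigma) / 2,
# where r is the Bloch vector of measured Pauli expectation values.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_pauli(ex, ey, ez):
    return 0.5 * (I2 + ex * sx + ey * sy + ez * sz)

# Ideal measurements of the |+> state: <X> = 1, <Y> = <Z> = 0
rho = rho_from_pauli(1.0, 0.0, 0.0)

# Noisy expectation values with |r| > 1 give a nonphysical estimate:
# the trace is still 1, but one eigenvalue is negative.
bad = rho_from_pauli(0.9, 0.5, 0.4)
eigs = np.linalg.eigvalsh(bad)
print(eigs)
```

Linear inversion is thus trivially optimizable (it is linear in the data), while the physicality constraint rho >= 0 is what makes maximum-likelihood estimation "less amenable to optimization", as the record notes.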

  1. Minimal Coleman-Weinberg theory explains the diphoton excess

    DEFF Research Database (Denmark)

    Antipin, Oleg; Mojaza, Matin; Sannino, Francesco

    2016-01-01

    It is possible to delay the hierarchy problem, by replacing the standard Higgs-sector by the Coleman-Weinberg mechanism, and at the same time ensure perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, minimal models of this type require the introdu…

  2. Solar system tests for realistic f(T) models with non-minimal torsion-matter coupling

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Rui-Hui; Zhai, Xiang-Hua; Li, Xin-Zhou [Shanghai Normal University, Shanghai United Center for Astrophysics (SUCA), Shanghai (China)

    2017-08-15

    In a previous paper, we constructed two f(T) models with non-minimal torsion-matter coupling extension, which are successful in describing the evolution history of the Universe, including the radiation-dominated era, the matter-dominated era, and the present accelerating expansion. Meanwhile, a significant advantage of these models is that they can avoid the cosmological constant problem of ΛCDM. However, the non-minimal coupling between matter and torsion will affect the Solar system tests. In this paper, we study Solar system effects in these models, including the gravitational redshift, the geodetic effect and perihelion precession. We find that Model I can pass all three of the Solar system tests. For Model II, the parameter is constrained by the uncertainties of the planets' estimated perihelion precessions. (orig.)

  3. Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization

    DEFF Research Database (Denmark)

    Rose, Sean; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    Purpose: The authors develop and investigate iterative image reconstruction algorithms based on data-discrepancy minimization with a total-variation (TV) constraint. The various algorithms are derived with different data-discrepancy measures reflecting the maximum likelihood (ML) principle. Simulations demonstrate the iterative algorithms and the resulting image statistical properties for low-dose CT data acquired with sparse projection view angle sampling. Of particular interest is to quantify improvement of image statistical properties by use of the ML data fidelity term. Methods: An incremental algorithm framework is developed for this purpose. The instances of the incremental algorithms are derived for solving optimization problems including a data fidelity objective function combined with a constraint on the image TV. For the data fidelity term the authors compare application…
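The record's setting is 2-D CT with TV as a hard constraint; the flavor of TV-regularized data-discrepancy minimization can still be shown in one dimension with a least-squares fidelity term and a smoothed TV penalty solved by plain gradient descent (a simplified stand-in for the authors' incremental constrained algorithms; the signal, noise level, and tuning parameters are invented).

```python
import numpy as np

# Piecewise-constant ground truth plus Gaussian noise
rng = np.random.default_rng(0)
true = np.concatenate([np.zeros(30), np.ones(40), np.zeros(30)])
noisy = true + 0.2 * rng.normal(size=true.size)

def tv_smoothed(x, eps=0.01):
    d = np.diff(x)
    return np.sum(np.sqrt(d ** 2 + eps ** 2))   # smooth surrogate for TV

def grad(x, y, lam, eps=0.01):
    # gradient of  0.5 * ||x - y||^2 + lam * TV_eps(x)
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps ** 2)
    g_tv = np.zeros_like(x)
    g_tv[:-1] -= w
    g_tv[1:] += w
    return (x - y) + lam * g_tv

x = noisy.copy()
for _ in range(3000):                 # small step for the stiff TV term
    x -= 0.004 * grad(x, noisy, lam=1.0)
```

The reconstruction keeps the two jumps while flattening the noise, which is why TV constraints suit the sparse-view, low-dose regime the paper studies.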

  4. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  5. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.

  6. Minimally Invasive Surgery (MIS) Approaches to Thoracolumbar Trauma.

    Science.gov (United States)

    Kaye, Ian David; Passias, Peter

    2018-03-01

    Minimally invasive surgical (MIS) techniques offer promising improvements in the management of thoracolumbar trauma. Recent advances in MIS techniques and instrumentation for degenerative conditions have heralded a growing interest in employing these techniques for thoracolumbar trauma. Specifically, surgeons have applied these techniques to help manage flexion- and extension-distraction injuries, neurologically intact burst fractures, and cases of damage control. Minimally invasive surgical techniques offer a means to decrease blood loss, shorten operative time, reduce infection risk, and shorten hospital stays. Herein, we review thoracolumbar minimally invasive surgery with an emphasis on thoracolumbar trauma classification, minimally invasive spinal stabilization, surgical indications, patient outcomes, technical considerations, and potential complications.

  7. Enhancing the efficiency of constrained dual-hop variable-gain AF relaying under nakagami-m fading

    KAUST Repository

    Zafar, Ammar

    2014-07-01

    This paper studies power allocation for performance constrained dual-hop variable-gain amplify-and-forward (AF) relay networks in Nakagami-m fading. In this context, the performance constraint is formulated as a constraint on the end-to-end signal-to-noise-ratio (SNR), and the overall power consumed is minimized while maintaining this constraint. This problem is considered under two different assumptions about the available channel state information (CSI) at the relays, namely full CSI at the relays and partial CSI at the relays. In addition to the power minimization problem, we also consider the end-to-end SNR maximization problem under a total power constraint for the partial CSI case. We provide closed-form solutions for all the problems, which are easy to implement, except in two cases, namely selective relaying with partial CSI for power minimization and SNR maximization, where we give the solution in the form of a one-variable equation which can be solved efficiently. Numerical results are then provided to characterize the performance of the proposed power allocation algorithms considering the effects of channel parameters and CSI availability. © 2014 IEEE.
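A toy version of the full-CSI power-minimization problem can be set up with the standard variable-gain AF end-to-end SNR expression, gamma = gamma1*gamma2/(gamma1 + gamma2 + 1). The channel gains and SNR target below are made-up numbers, and a generic numerical solver stands in for the paper's closed-form solution.

```python
import numpy as np
from scipy.optimize import minimize

g1, g2, target = 2.0, 3.0, 5.0          # per-hop channel gains, SNR target

def e2e_snr(p):
    s1, s2 = g1 * p[0], g2 * p[1]       # per-hop SNRs (unit noise power)
    return s1 * s2 / (s1 + s2 + 1.0)    # variable-gain AF end-to-end SNR

# Minimize total power subject to the end-to-end SNR constraint
res = minimize(lambda p: p[0] + p[1], x0=[5.0, 5.0],
               bounds=[(1e-6, None)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda p: e2e_snr(p) - target}])
p_opt = res.x
# at the optimum the SNR constraint is tight: no power is wasted
print(p_opt, p_opt.sum(), e2e_snr(p_opt))
```

The same structure, minimum total power with the SNR constraint active at the optimum, is what the paper's closed-form expressions solve analytically.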

  8. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss and the voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss, reduction in the voltage sag problem, etc., and economical factors, like installation and maintenance cost of the DGs. The new formulation proposed is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are simultaneously obtained. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, radial and networked (the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies where the effectiveness of the proposed algorithm is aptly demonstrated.
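The genetic-algorithm machinery used here, selection, crossover, and mutation over a chromosome of per-bus DG sizes, can be sketched on a toy fitness function. The fitness below is a made-up stand-in with the same shape as the paper's objective (a continuous "loss/sag" term plus a per-DG installation cost); it is not the authors' network model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bus, pop_size, gens = 10, 30, 60
target = rng.uniform(0.0, 1.0, n_bus)        # fictitious ideal DG profile

def cost(x):
    loss_term = np.sum((x - target) ** 2)            # stands in for loss/sag
    install_term = 0.1 * np.count_nonzero(x > 0.05)  # per-DG fixed cost
    return loss_term + install_term

pop = rng.uniform(0.0, 1.0, (pop_size, n_bus))       # chromosome: DG size/bus
best_history = []
for _ in range(gens):
    fit = np.array([cost(ind) for ind in pop])
    best_history.append(fit.min())
    order = np.argsort(fit)
    parents = pop[order[: pop_size // 2]]            # elitist truncation
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_bus)                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mutate = rng.random(n_bus) < 0.1             # 10% gene mutation
        child[mutate] = rng.uniform(0.0, 1.0, mutate.sum())
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([cost(ind) for ind in pop])]
```

Keeping the top half of each generation (elitism) guarantees the best cost never increases, which is why GA suits this kind of mixed discrete/continuous placement problem.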

  9. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  10. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    Full Text Available To improve models for accurate projections, data assimilation, an emerging statistical approach to combining models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predicting future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also

  11. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    Science.gov (United States)

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, was prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for electric potential augmented with the constraint of constant power were solved simultaneously using the Newton-Raphson algorithm. Three problems for validation were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant power RFA with a firm mathematical basis and opens a pathway for achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
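The essence of the constant-power constraint can be shown in a scalar toy analogue (the paper couples FEM with a Lagrange multiplier; this is only a fixed-point sketch with invented material and thermal parameters): a conductance that rises with temperature, a temperature set by dissipated power, and an applied voltage chosen so that the delivered power equals the prescription.

```python
import numpy as np

# Toy constant-power problem: find V and T with  V^2 * g(T) = P,
# where conductance g grows with temperature and temperature grows
# with dissipated power (all numbers are hypothetical).
P = 25.0                 # prescribed power, W
g0, beta = 0.02, 0.01    # baseline conductance (S), temp. coefficient (1/K)
T0, R_th = 37.0, 2.0     # baseline temperature (C), thermal resistance (K/W)

V, T = 30.0, T0
for _ in range(100):
    g = g0 * (1.0 + beta * (T - T0))  # temperature-dependent conductance
    V = np.sqrt(P / g)                # enforce the power constraint
    T = T0 + R_th * P                 # steady-state Joule heating

g = g0 * (1.0 + beta * (T - T0))
print(V, T, V * (g * V))   # delivered power V*I equals the prescribed P
```

This mirrors why neither a pure Dirichlet (fixed V) nor pure Neumann (fixed I) condition suffices: V must be re-solved as the conductivity changes so that the product V*I stays at the prescribed power.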

  12. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows taking into account explicitly the different objectives and constraints involved in the problem, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The solution proposed is assessed to study its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
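The conservativeness/efficiency trade-off of a chance constraint is visible even in a scalar inventory sketch (not the paper's MPC formulation; the Gaussian demand model and numbers are invented): requiring P(stockout) <= eps each period deterministically becomes an order-up-to level at the (1 - eps) demand quantile.

```python
import numpy as np
from statistics import NormalDist

# Chance constraint: keep per-period stockout probability below eps by
# ordering up to the (1 - eps) quantile of Gaussian demand (mu, sigma).
mu, sigma, eps = 100.0, 20.0, 0.05           # hypothetical demand model
order_up_to = NormalDist(mu, sigma).inv_cdf(1.0 - eps)
# order_up_to = mu + z(0.95) * sigma, i.e. about mu + 1.645 * sigma

# Receding-horizon simulation: replenish to the target, then observe demand
rng = np.random.default_rng(0)
days, stockouts = 2000, 0
for _ in range(days):
    demand = rng.normal(mu, sigma)
    if demand > order_up_to:                 # demand exceeded stock on hand
        stockouts += 1
service_level = 1.0 - stockouts / days       # should land near 1 - eps
print(order_up_to, service_level)
```

Tightening eps raises the order-up-to level (more conservative, more capital tied up); loosening it frees resources at the cost of more frequent shortages, which is exactly the trade-off the chance constraints expose in the full MPC model.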

  13. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
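The paper's derivation relies on random matrix theory, but the principle can be seen in a scalar analogue, a sketch with hypothetical numbers: for y = a·x + n, the regularized estimator x̂ = a·y/(a² + γ) has a closed-form MSE whose minimizer is the noise-to-signal ratio γ* = s/p.

```python
def mse(gamma, a=2.0, p=1.0, s=0.25):
    # Mean-squared error of the regularized estimator
    # x_hat = a*y / (a**2 + gamma) for the scalar model y = a*x + n,
    # with signal variance p and noise variance s (closed form).
    return (gamma**2 * p + a**2 * s) / (a**2 + gamma)**2

# grid search confirms the minimizer sits at gamma = s/p (here 0.25)
gammas = [i / 1000 for i in range(1, 2000)]
g_best = min(gammas, key=mse)
```

The paper's contribution is finding the analogue of γ* when the model matrix is random, using deterministic equivalents in place of a known closed form.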

  14. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  15. Preservation or Restoration of Segmental and Regional Spinal Lordosis Using Minimally Invasive Interbody Fusion Techniques in Degenerative Lumbar Conditions: A Literature Review.

    Science.gov (United States)

    Uribe, Juan S; Myhre, Sue Lynn; Youssef, Jim A

    2016-04-01

    A literature review. The purpose of this study was to review lumbar segmental and regional alignment changes following treatment with a variety of minimally invasive surgery (MIS) interbody fusion procedures for short-segment, degenerative conditions. An increasing number of lumbar fusions are being performed with minimally invasive exposures, despite a perception that minimally invasive lumbar interbody fusion procedures are unable to affect segmental and regional lordosis. Through a MEDLINE and Google Scholar search, a total of 23 articles were identified that reported alignment following minimally invasive lumbar fusion for degenerative (nondeformity) lumbar spinal conditions to examine aggregate changes in postoperative alignment. Of the 23 studies identified, 28 study cohorts were included in the analysis. Procedural cohorts included MIS ALIF (two), extreme lateral interbody fusion (XLIF) (16), and MIS posterior/transforaminal lumbar interbody fusion (P/TLIF) (11). Across 19 study cohorts and 720 patients, weighted average of lumbar lordosis preoperatively for all procedures was 43.5° (range 28.4°-52.5°) and increased 3.4° (9%) (range -2° to 7.4°) postoperatively (P lordosis increased, on average, by 4° from a weighted average of 8.3° preoperatively (range -0.8° to 15.8°) to 11.2° at postoperative time points (range -0.2° to 22.8°) (P lordosis and change in lumbar lordosis (r = 0.413; P = 0.003), wherein lower preoperative lumbar lordosis predicted a greater increase in postoperative lumbar lordosis. Significant gains in both weighted average lumbar lordosis and segmental lordosis were seen following MIS interbody fusion. None of the segmental lordosis cohorts and only two of the 19 lumbar lordosis cohorts showed decreases in lordosis postoperatively. These results suggest that MIS approaches are able to impact regional and local segmental alignment and that preoperative patient factors can impact the extent of correction gained

  16. Inclusions in diamonds constrain thermo-chemical conditions during Mesozoic metasomatism of the Kaapvaal cratonic mantle

    Science.gov (United States)

    Weiss, Yaakov; Navon, Oded; Goldstein, Steven L.; Harris, Jeff W.

    2018-06-01

    Fluid/melt inclusions in diamonds, which were encapsulated during a metasomatic event and over a short period of time, are isolated from their surrounding mantle, offering the opportunity to constrain changes in the sub-continental lithospheric mantle (SCLM) that occurred during individual thermo-chemical events, as well as the composition of the fluids involved and their sources. We have analyzed a suite of 8 microinclusion-bearing diamonds from the Group I De Beers Pool kimberlites, South Africa, using FTIR, EPMA and LA-ICP-MS. Seven of the diamonds trapped incompatible-element-enriched saline high density fluids (HDFs), carry peridotitic mineral microinclusions, and host substitutional nitrogen almost exclusively in A-centers. This low aggregation state of nitrogen indicates a short mantle residence time and/or a low ambient mantle temperature for these diamonds. A short residence time is favored because elevated thermal conditions prevailed in the South African lithosphere during and following the Karoo flood basalt volcanism at ∼180 Ma; thus the saline metasomatism must have occurred close to the time of kimberlite eruption at ∼85 Ma. Another diamond encapsulated incompatible-element-enriched silicic HDFs and has 25% of its nitrogen content residing in B-centers, implying formation during an earlier and different metasomatic event, likely related to the Karoo magmatism at ca. 180 Ma. Thermometry of mineral microinclusions in the diamonds carrying saline HDFs, based on Mg-Fe exchange between garnet-orthopyroxene (Opx)/clinopyroxene (Cpx)/olivine and the Opx-Cpx thermometer, yields temperatures between 875 and 1080 °C at 5 GPa. These temperatures overlap with conditions recorded by touching inclusion pairs in diamonds from the De Beers Pool kimberlites, which represent the ambient mantle conditions just before eruption, and are altogether lower by 150-250 °C than the P-T gradients recorded by peridotite xenoliths from the same locality. Oxygen fugacity (fO2

  17. Nerve Cells Decide to Orient inside an Injectable Hydrogel with Minimal Structural Guidance.

    Science.gov (United States)

    Rose, Jonas C; Cámara-Torres, María; Rahimi, Khosrow; Köhler, Jens; Möller, Martin; De Laporte, Laura

    2017-06-14

    Injectable biomaterials offer the advantage of a minimally invasive application but mostly lack the structural complexity required to regenerate aligned tissues. Here, we report a new class of tissue regenerative materials that can be injected and form an anisotropic matrix with controlled dimensions using rod-shaped, magnetoceptive microgel objects. The microgels are doped with small quantities of superparamagnetic iron oxide nanoparticles (0.0046 vol %), allowing alignment by external magnetic fields on the millitesla order. The microgels are dispersed in a biocompatible gel precursor and, after injection and orientation, are fixed inside the matrix hydrogel. Although the volume concentration of the microgels is below 3%, at which the geometrical constraint for orientation is still minimal, the generated macroscopic unidirectional orientation is strongly sensed by the cells, resulting in parallel nerve extension. This finding opens a new, minimally invasive route for therapy after spinal cord injury.

  18. How market environment may constrain global franchising in emerging markets

    OpenAIRE

    Baena Graciá, Verónica

    2011-01-01

    Although emerging markets are some of the fastest growing economies in the world and represent countries that are experiencing a substantial economic transformation, little is known about the factors influencing country selection for expansion in those markets. In an attempt to enhance the knowledge that managers and scholars have on franchising expansion, the present study examines how market conditions may constrain international diffusion of franchising in emerging markets. They are: i) ge...

  19. Minimal solution of general dual fuzzy linear systems

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Otadi, M.; Mosleh, M.

    2008-01-01

    Fuzzy linear systems of equations play a major role in several applications in various areas such as engineering, physics and economics. In this paper, we investigate the existence of a minimal solution of general dual fuzzy linear equation systems. Two necessary and sufficient conditions for the existence of a minimal solution are given. Also, some examples in engineering and economics are considered.

  20. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization, where decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. This is followed by a review of the different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
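The selection logic described above can be sketched in a few lines: estimate both performance measures by Monte Carlo, discard designs whose secondary measure violates the constraint, and maximize the primary measure over the rest. The model, spec limit, and candidate designs below are hypothetical.

```python
import random

def simulate(design, n=5000, seed=1):
    # Monte Carlo estimates of a primary measure (mean output, to be
    # maximized) and a secondary measure (rate of out-of-spec output,
    # to be constrained) for a hypothetical design parameter.
    rng = random.Random(seed)
    total = bad = 0.0
    for _ in range(n):
        x = rng.gauss(design, 1.0)
        total += x
        bad += 1.0 if x > 3.5 else 0.0   # spec limit at 3.5
    return total / n, bad / n

designs = [1.0, 2.0, 3.0]
results = {d: simulate(d) for d in designs}
feasible = [d for d in designs if results[d][1] <= 0.05]  # secondary constraint
best = max(feasible, key=lambda d: results[d][0]) if feasible else None
```

Note the trade-off: the design with the best primary measure (3.0) is rejected because its estimated secondary measure violates the constraint, exactly the situation the review addresses.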

  1. Optimization of Vacuum Impregnation with Calcium Lactate of Minimally Processed Melon and Shelf-Life Study in Real Storage Conditions.

    Science.gov (United States)

    Tappi, Silvia; Tylewicz, Urszula; Romani, Santina; Siroli, Lorenzo; Patrignani, Francesca; Dalla Rosa, Marco; Rocculi, Pietro

    2016-10-05

    Vacuum impregnation (VI) is a processing operation that permits the impregnation of porous fruit and vegetable tissues, with faster and more homogeneous penetration of active compounds compared to classical diffusion processes. The objective of this research was to investigate the impact of VI treatment with the addition of calcium lactate on the qualitative parameters of minimally processed melon during storage. To this aim, the work was divided into two parts. Initially, the process parameters were optimized in order to choose the VI conditions best suited to improving the texture characteristics of minimally processed melon; these were then used to impregnate melons for a shelf-life study in real storage conditions. On the basis of a 2³ factorial design, the effects of calcium lactate (CaLac) concentration (between 0% and 5%) and of minimum pressure (P; between 20 and 60 MPa) on color and texture were evaluated. Processing parameters corresponding to a 5% CaLac concentration and a minimum pressure of 60 MPa were chosen for the storage study, during which the modifications of the main qualitative parameters were evaluated. Despite the high variability of the raw material, results showed that VI allowed a better maintenance of texture during storage. Nevertheless, other quality traits were negatively affected by the application of vacuum: impregnated products showed a darker and more translucent appearance on account of the alteration of their structural properties. Moreover, the microbial shelf-life was reduced to 4 days, compared to the 7 days obtained for the control and dipped samples. © 2016 Institute of Food Technologists®.

  2. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-07-01

    An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a hermitian symmetric space and a constrained KP Lax formulation, and we show that these approaches are equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy, and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  3. On gauge fixing and quantization of constrained Hamiltonian systems

    International Nuclear Information System (INIS)

    Dayi, O.F.

    1989-06-01

    In constrained Hamiltonian systems which possess first class constraints, some subsidiary conditions should be imposed for detecting physical observables. This issue and the quantization of the system are clarified. It is argued that the reduced phase space and Dirac methods of quantization generally differ only in the definition of the Hilbert space one should use. For dynamical systems possessing second class constraints, the definition of the physical Hilbert space in the BFV-BRST operator quantization method differs from the usual definition. (author). 18 refs

  4. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    International Nuclear Information System (INIS)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry

    2015-01-01

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments

  5. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    Energy Technology Data Exchange (ETDEWEB)

    Ratcliff, Laura E., E-mail: lratcliff@anl.gov [Argonne Leadership Computing Facility, Argonne National Laboratory, Lemont, Illinois 60439 (United States); Université de Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry [Université de Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France)

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.

  6. Scheduling Aircraft Landings under Constrained Position Shifting

    Science.gov (United States)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
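A brute-force sketch of the CPS idea described above (the paper's method is a dynamic program that scales linearly in the number of aircraft; enumeration is only viable for tiny instances). The separation times below are illustrative, not FAA values.

```python
from itertools import permutations

cls = ['H', 'S', 'H', 'S']   # weight classes in First-Come-First-Served order
sep = {('H', 'H'): 96, ('H', 'S'): 157, ('S', 'H'): 60, ('S', 'S'): 96}
MAX_SHIFT = 1                # CPS: each aircraft moves at most 1 position

def makespan(seq):
    # completion time of the landing sequence (sum of pairwise separations)
    return sum(sep[(cls[a], cls[b])] for a, b in zip(seq, seq[1:]))

def cps_ok(seq):
    # every aircraft lands within MAX_SHIFT positions of its FCFS slot
    return all(abs(pos - ac) <= MAX_SHIFT for pos, ac in enumerate(seq))

best = min((p for p in permutations(range(4)) if cps_ok(p)), key=makespan)
```

Here, reordering within the CPS limit shortens the sequence from 374 s (FCFS) to 277 s by reducing heavy-before-small pairings.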

  7. Should we still believe in constrained supersymmetry?

    International Nuclear Information System (INIS)

    Balazs, Csaba; Buckley, Andy; Carter, Daniel; Farmer, Benjamin; White, Martin

    2013-01-01

    We calculate partial Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the partial Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with M_0 and M_{1/2} below 2 TeV), reducing its posterior odds by approximately two orders of magnitude. This reduction is largely due to substantial Occam factors induced by the LEP and LHC Higgs searches. (orig.)

  8. A Defense of Semantic Minimalism

    Science.gov (United States)

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  9. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    Science.gov (United States)

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is most often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed employing a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
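A sketch of the design principle under a hypothetical one-compartment model C(t) = (D/V)·e^(−kt): with a single uncertain parameter, the minimax criterion reduces to maximizing the Fisher information for k, and plain random search stands in for the paper's genetic algorithm. All parameter values are made up for illustration.

```python
import math, random

def sens_k(t, D=100.0, V=10.0, k=0.2):
    # sensitivity of the concentration C(t) = (D/V)*exp(-k*t) w.r.t. k
    return -(D / V) * t * math.exp(-k * t)

def information(times):
    # scalar Fisher information for k under unit-variance Gaussian noise
    return sum(sens_k(t) ** 2 for t in times)

rng = random.Random(0)
candidates = [sorted(rng.uniform(0.1, 24.0) for _ in range(3))
              for _ in range(5000)]
best_times = max(candidates, key=information)   # 3 samples, fixed cost
```

The search concentrates the three samples near t = 1/k, where the measurement is most informative about k; the paper does the same with a full Fisher information matrix, population priors, and nuisance parameters.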

  10. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  11. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature in the entire working condition, a constrained Hammerstein-Wiener model describing nonlinear chemical processes, such as those in the continuous stirred tank reactor (CSTR), is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The magnitude of the reaction heat produced and its influence on the output are estimated by the Kalman filter. The observation and estimation results are computed via a multistep predictive approach. The actual control variables are computed while considering the constraints of the finite-horizon optimal control problem in a receding-horizon fashion. The simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.
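As a sketch of the estimation component alone (not the authors' full controller): a scalar Kalman filter tracking a nearly constant unmeasured disturbance, such as reaction heat, from noisy readings. Noise levels and the true value are made up for illustration.

```python
import random

def kalman_constant(measurements, q=1e-5, r=0.25):
    # Scalar Kalman filter for a (nearly) constant hidden quantity
    # observed through additive noise. q: process noise variance,
    # r: measurement noise variance.
    x, p = 0.0, 1.0              # initial estimate and its variance
    for z in measurements:
        p += q                   # predict (constant-state model)
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the innovation
        p *= (1.0 - k)
    return x

rng = random.Random(42)
true_d = 3.0
zs = [true_d + rng.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_constant(zs)        # close to 3.0
```

In the paper, such an estimate feeds the multistep predictor, and the MPC layer then optimizes the constrained finite-horizon cost.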

  12. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Full Text Available Abstract Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs have been proposed which can broadly be divided in distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation and the value of one standard error of measurement (SEM. Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
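The distribution-based quantities discussed above have standard closed forms: SEM = SD·√(1 − reliability) and, for a change score at 95% confidence, MDC95 = 1.96·√2·SEM. A sketch with illustrative numbers:

```python
import math

def sem(sd, reliability):
    # standard error of measurement
    return sd * math.sqrt(1.0 - reliability)

def mdc95(sd, reliability):
    # minimal detectable change (95% confidence) for a change score
    return 1.96 * math.sqrt(2.0) * sem(sd, reliability)

change_floor = mdc95(sd=10.0, reliability=0.90)   # about 8.8 points
```

A change smaller than this floor cannot be distinguished from measurement error, which is why an anchor-based MIC is only usable in practice when it exceeds the instrument's MDC.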

  13. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    Science.gov (United States)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

    Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real-time. The hydraulic model is LISFLOOD-FP which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.

  14. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  15. Sufficient Descent Conjugate Gradient Methods for Solving Convex Constrained Nonlinear Monotone Equations

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Full Text Available Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
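A compact sketch of the Solodov-Svaiter hyperplane projection step on a toy monotone system: an affine map with symmetric positive definite Jacobian, using a plain steepest-descent-like direction d = −F(x) in place of the paper's conjugate gradient directions. The system and all tolerances are illustrative.

```python
def F(x):
    # monotone affine map F(x) = A x - b with symmetric positive
    # definite A = [[2, 1], [1, 3]]; the unique root is (1, 1)
    return [2*x[0] + x[1] - 3, x[0] + 3*x[1] - 4]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve(x0=(0.0, 0.0), sigma=1e-4, beta=0.5, tol=1e-9, iters=10000):
    x = list(x0)
    for _ in range(iters):
        fx = F(x)
        if dot(fx, fx) ** 0.5 < tol:
            break
        d = [-v for v in fx]                      # descent direction
        t = 1.0
        while True:                               # Armijo-type line search
            z = [xi + t*di for xi, di in zip(x, d)]
            if -dot(F(z), d) >= sigma * t * dot(d, d):
                break
            t *= beta
        fz = F(z)
        lam = dot(fz, [xi - zi for xi, zi in zip(x, z)]) / dot(fz, fz)
        x = [xi - lam * fzi for xi, fzi in zip(x, fz)]  # hyperplane projection
    return x

root = solve()   # converges to the root (1, 1)
```

The projection step uses only function values, no derivatives, which is what lets methods of this family handle large-scale nonsmooth monotone equations.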

  16. How word-beginnings constrain the pronunciations of word-ends in the reading aloud of English: the phenomena of head- and onset-conditioning

    Directory of Open Access Journals (Sweden)

    Anastasia Ulicheva

    2015-12-01

    Full Text Available Background. A word whose body is pronounced in different ways in different words is body-inconsistent. When we take the unit that precedes the vowel into account for the calculation of body-consistency, the proportion of English words that are body-inconsistent is considerably reduced at the level of corpus analysis, prompting the question of whether humans actually use such head/onset-conditioning when they read.Methods. Four metrics for head/onset-constrained body-consistency were calculated: by the last grapheme of the head, by the last phoneme of the onset, by place and manner of articulation of the last phoneme of the onset, and by manner of articulation of the last phoneme of the onset. Since these were highly correlated, principal component analysis was performed on them.Results. Two out of four resulting principal components explained significant variance in the reading-aloud reaction times, beyond regularity and body-consistency.Discussion. Humans read head/onset-conditioned words faster than would be predicted based on their body-consistency and regularity only. We conclude that humans are sensitive to the dependency between word-beginnings and word-ends when they read aloud, and that this dependency is phonological in nature, rather than orthographic.

  17. Minimally inconsistent reasoning in Semantic Web.

    Science.gov (United States)

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits the advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics that differ only in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning.

  18. Singlet fermionic dark matter with Veltman conditions

    Science.gov (United States)

    Kim, Yeong Gyun; Lee, Kang Young; Nam, Soo-hyeon

    2018-07-01

    We reexamine a renormalizable model of fermionic dark matter with a gauge singlet Dirac fermion and a real singlet scalar which can ameliorate the scalar mass hierarchy problem of the Standard Model (SM). Our model setup is the minimal extension of the SM in which a realistic dark matter (DM) candidate is provided and the cancellation of the one-loop quadratic divergences to the scalar masses can simultaneously be achieved by the Veltman condition (VC). This model extension, although renormalizable, can be considered as an effective low-energy theory valid up to cut-off energies of about 10 TeV. We calculate the one-loop quadratic divergence contributions of the new scalar and fermionic DM singlets, and constrain the model parameters using the VC and the perturbative unitarity conditions. Taking into account the invisible Higgs decay measurement, we show the allowed region of new physics parameters satisfying the recent measurement of the relic abundance. With the obtained parameter set, we predict the elastic scattering cross section of the new singlet fermion off target nuclei for a direct detection of the dark matter. We also perform the full analysis with an arbitrary set of parameters without the VC as a comparison, and discuss the implications of the constraints by the VC in detail.

  19. On the origin of constrained superfields

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-05-06

    In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.

  20. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path, and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
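
    The reflection mechanism at the heart of such models can be sketched in a few lines. The following is a minimal, hypothetical one-dimensional illustration (not the authors' telemetry analysis): an Euler-Maruyama discretization of an Ornstein-Uhlenbeck movement process whose excursions past a barrier are folded back into the domain, the discrete analogue of a reflected SDE. The parameter values and barrier positions are assumptions chosen for the sketch.

```python
import numpy as np

def reflected_ou_path(n_steps, dt, theta, mu, sigma, lower, upper, x0, seed=0):
    """Euler-Maruyama path of an Ornstein-Uhlenbeck movement process,
    reflected at the barriers `lower` and `upper`."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        drift = theta * (mu - x[t])  # attraction toward a home-range centre mu
        prop = x[t] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # Reflection: fold any excursion past a barrier back into the domain.
        while prop < lower or prop > upper:
            prop = 2 * lower - prop if prop < lower else 2 * upper - prop
        x[t + 1] = prop
    return x

path = reflected_ou_path(n_steps=5000, dt=0.01, theta=0.5, mu=0.0,
                         sigma=1.0, lower=-1.0, upper=1.0, x0=0.0)
assert -1.0 <= path.min() and path.max() <= 1.0  # barriers are never crossed
```

    Inference in the paper augments the constrained path with a latent unconstrained one; the simulation above only illustrates the forward (reflection) direction.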

  1. How CMB and large-scale structure constrain chameleon interacting dark energy

    International Nuclear Information System (INIS)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-01-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H₀ tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H₀ value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  2. How CMB and large-scale structure constrain chameleon interacting dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Boriero, Daniel [Fakultät für Physik, Universität Bielefeld, Universitätstr. 25, Bielefeld (Germany); Das, Subinoy [Indian Institute of Astrophisics, Bangalore, 560034 (India); Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H₀ tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H₀ value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  3. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three-volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  4. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and a gauge invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.

  5. Operator approach to solutions of the constrained BKP hierarchy

    International Nuclear Information System (INIS)

    Shen, Hsin-Fu; Lee, Niann-Chern; Tu, Ming-Hsien

    2011-01-01

    The operator formalism for the vector k-constrained BKP hierarchy is presented. We solve the Hirota bilinear equations of the vector k-constrained BKP hierarchy via the method of neutral free fermions. In particular, by choosing a suitable group element of O(∞), we construct rational and soliton solutions of the vector k-constrained BKP hierarchy.

  6. A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2016-08-01

    Full Text Available Increasing attention is being paid to leaf area index (LAI) retrieval in mountainous areas. Mountainous areas present extreme topographic variability, and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS) in support of LAI validation is presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH) sampling strategy. A case study in Hailuogou, Sichuan province, China was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI), land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, which makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than for LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
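
    The trade-off described above, conditioned Latin hypercube representativeness penalized by access cost, can be sketched as a random-swap search over candidate plots. This is a hypothetical illustration of the idea, not the authors' implementation: the synthetic NDVI and slope variables, the slope-derived cost, and the `weight` parameter that regulates the representativeness/cost compromise are all assumptions.

```python
import numpy as np

def cost_constrained_clhs(X, cost, n_sample, weight=1.0, n_iter=5000, seed=0):
    """Random-swap search for a conditioned Latin hypercube sample that
    trades marginal representativeness against implementation cost."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    # Equal-probability stratum edges for each auxiliary variable.
    edges = np.quantile(X, np.linspace(0, 1, n_sample + 1), axis=0)
    strata = np.column_stack([
        np.clip(np.searchsorted(edges[1:-1, j], X[:, j]), 0, n_sample - 1)
        for j in range(k)
    ])

    def objective(idx):
        # cLHS criterion: ideally one sampled point per stratum per variable,
        # plus a weighted penalty on the mean access cost of the sample.
        dev = sum(np.abs(np.bincount(strata[idx, j], minlength=n_sample) - 1).sum()
                  for j in range(k))
        return dev + weight * cost[idx].mean()

    idx = rng.choice(n, n_sample, replace=False)
    best = objective(idx)
    for _ in range(n_iter):
        cand = rng.integers(n)
        if cand in idx:
            continue
        trial = idx.copy()
        trial[rng.integers(n_sample)] = cand   # swap one plot for a candidate
        val = objective(trial)
        if val < best:
            idx, best = trial, val
    return idx, best

# Synthetic site: NDVI and slope as auxiliary variables; cost grows with slope.
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.1, 0.9, 500)
slope = rng.uniform(0.0, 45.0, 500)
X = np.column_stack([ndvi, slope])
plots, score = cost_constrained_clhs(X, cost=slope / 45.0, n_sample=20)
assert len(set(plots.tolist())) == 20
```

    Raising `weight` steers the sample toward cheaper (flatter) plots at the expense of stratum coverage, which mirrors the tunable compromise the abstract describes.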

  7. Minimally inconsistent reasoning in Semantic Web.

    Directory of Open Access Journals (Sweden)

    Xiaowang Zhang

    Full Text Available Reasoning with inconsistencies is an important issue for the Semantic Web, since imperfect information is unavoidable in real applications. To this end, various paraconsistent approaches, valued for their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed for reasoning with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. This paper therefore presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, in which the inconsistencies tolerated during reasoning are minimized so that more reasonable conclusions can be inferred. Several desirable properties are studied, showing that the new semantics inherits the advantages of both non-monotonic reasoning and paraconsistent reasoning. A sound and complete tableau-based algorithm, called multi-valued tableaux, is developed to capture minimally inconsistent reasoning. The tableau algorithm is in fact designed as a framework for multi-valued DLs that accommodates different underlying paraconsistent semantics, differing only in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as that of (classical) description logic reasoning.

  8. Traversable geometric dark energy wormholes constrained by astrophysical observations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Meng, Xin-he [Nankai University, Department of Physics, Tianjin (China); Institute of Theoretical Physics, CAS, State Key Lab of Theoretical Physics, Beijing (China)

    2016-09-15

    In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution behavior of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist by using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find from Fig. 5 of this work that when the effective equation of state parameter ω_X < -1 (i.e., z < 0.109), so that the null energy condition (NEC) is clearly violated, wormholes will exist (open). Subsequently, six specific solutions of static, spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluids is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, theoretically reduce the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology. (orig.)

  9. Traversable geometric dark energy wormholes constrained by astrophysical observations

    International Nuclear Information System (INIS)

    Wang, Deng; Meng, Xin-he

    2016-01-01

    In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution behavior of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist by using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find from Fig. 5 of this work that when the effective equation of state parameter ω_X < -1 (i.e., z < 0.109), so that the null energy condition (NEC) is clearly violated, wormholes will exist (open). Subsequently, six specific solutions of static, spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluids is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, theoretically reduce the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology. (orig.)

  10. The Smoothing Artifact of Spatially Constrained Canonical Correlation Analysis in Functional MRI

    Directory of Open Access Journals (Sweden)

    Dietmar Cordes

    2012-01-01

    Full Text Available A wide range of studies show the capacity of multivariate statistical methods for fMRI to improve mapping of brain activations in a noisy environment. An advanced method uses local canonical correlation analysis (CCA) to encompass a group of neighboring voxels instead of looking at the single voxel time course. The value of a suitable test statistic is used as a measure of activation. It is customary to assign the value to the center voxel; however, this is a choice of convenience, and without constraints it introduces artifacts, especially in regions of strong localized activation. To compensate for these deficiencies, different spatial constraints in CCA have been introduced to enforce dominance of the center voxel. However, even if the dominance condition for the center voxel is satisfied, constrained CCA can still lead to a smoothing artifact, often called the “bleeding artifact of CCA”, in fMRI activation patterns. In this paper a new method is introduced to measure and correct for the smoothing artifact in constrained CCA methods. It is shown that constrained CCA methods corrected for the smoothing artifact lead to more plausible activation patterns in fMRI, as shown using data from a motor task and a memory task.

  11. Robust stability in constrained predictive control through the Youla parameterisations

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2011-01-01

    In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in the face of norm-bounded uncertainties. Under special conditions guarantees are also given for hard input constraints. In more detail, we parameterise the MPC predictions in terms of the primary Youla parameter and use this parameter as the on-line optimisation variable. The uncertainty is parameterised in terms of the dual Youla parameter. Stability can then be guaranteed through small gain...

  12. Dimensionally constrained energy confinement analysis of W7-AS data

    International Nuclear Information System (INIS)

    Dose, V.; Preuss, R.; Linden, W. von der

    1998-01-01

    A recently assembled W7-AS stellarator database has been subjected to dimensionally constrained confinement analysis. The analysis employs Bayesian inference. Dimensional information is taken from the Connor-Taylor (CT) similarity transformation theory, which provides six possible physical scenarios with associated dimensional conditions. Bayesian theory allows the calculation of the probability of each model, and it is found that the present W7-AS data are most probably described by the collisionless high-β case. Probabilities for all models and the associated exponents of a power law scaling function are presented. (author)
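
    The flavor of such dimensionally constrained model comparison can be illustrated with a toy calculation. The sketch below is not the Connor-Taylor analysis itself: it compares a power-law confinement scaling with free exponents against one whose exponents are fixed by an assumed constraint, using the BIC as a crude stand-in for the Bayesian evidence. All data are synthetic, and the exponent values -0.6 and 0.8 are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# Synthetic "discharges": heating power P, magnetic field B, confinement time tau.
P = rng.uniform(0.5, 2.0, n)
B = rng.uniform(1.0, 3.0, n)
log_tau = 0.3 - 0.6 * np.log(P) + 0.8 * np.log(B) + rng.normal(0.0, 0.05, n)

A = np.column_stack([np.ones(n), np.log(P), np.log(B)])

def bic(resid, n_par):
    # Gaussian BIC: n log(RSS/n) + n_par log(n); a rough evidence proxy.
    return n * np.log((resid**2).sum() / n) + n_par * np.log(n)

# Model 1: free power-law exponents (offset plus two exponents fitted).
coef, *_ = np.linalg.lstsq(A, log_tau, rcond=None)
bic_free = bic(log_tau - A @ coef, 3)

# Model 2: exponents fixed by an assumed dimensional constraint; only the
# prefactor remains to be fitted.
fixed = np.array([-0.6, 0.8])
r = log_tau - A[:, 1:] @ fixed
bic_fixed = bic(r - r.mean(), 1)

# Posterior model probabilities under equal priors, from exp(-BIC/2).
bvals = np.array([bic_free, bic_fixed])
w = np.exp(-0.5 * (bvals - bvals.min()))
prob = w / w.sum()
assert abs(prob.sum() - 1.0) < 1e-9
```

    The constrained model is rewarded for its smaller parameter count whenever its fixed exponents describe the data about as well as the freely fitted ones, which is the mechanism by which the dimensional constraints sharpen the model comparison.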

  13. Theories of minimalism in architecture: When prologue becomes palimpsest

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2014-01-01

    Full Text Available This paper examines the modes and conditions of constituting and establishing the architectural discourse on minimalism. Key topics in this discourse are its historical line of development and the analysis of theoretical influences, which comprise connections of recent minimalism with theorizations of various minimal architectural and artistic forms and concepts from the past. The paper focuses in particular on those theoretical relations which link minimalism in architecture with its artistic nominal counterpart, minimal art. These relations are founded on interpretative models of self-referentiality, phenomenological experience and contextualism, which are, superficially observed, common to both the artistic and the architectural minimalist discourses. In this constellation, certain relations on the historical line of minimalism in architecture appear questionable, while others are overlooked. Specifically, postmodern fundamentalism is the architectural direction: (1) in which these three interpretations also existed; (2) from which architectural theorists retroactively appropriated many architects, proclaiming them minimalists; and (3) which established the same relations with modern and postmodern theoretical and socio-historical contexts that minimalism would later establish. In spite of this, the theoretical field of postmodern fundamentalism is surprisingly neglected in the discourse of minimalism in architecture. Instead of being understood as a kind of prologue to minimalism in architecture, postmodern fundamentalism becomes an erased palimpsest over which a different history of minimalism is rewritten, a history in which minimal art occupies the central place.

  14. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.

  15. The cost of proactive interference is constant across presentation conditions.

    Science.gov (United States)

    Endress, Ansgar D; Siddique, Aneela

    2016-10-01

    Proactive interference (PI) severely constrains how many items people can remember. For example, Endress and Potter (2014a) presented participants with sequences of everyday objects at 250ms/picture, followed by a yes/no recognition test. They manipulated PI by either using new images on every trial in the unique condition (thus minimizing PI among items), or by re-using images from a limited pool for all trials in the repeated condition (thus maximizing PI among items). In the low-PI unique condition, the probability of remembering an item was essentially independent of the number of memory items, showing no clear memory limitations; more traditional working memory-like memory limitations appeared only in the high-PI repeated condition. Here, we ask whether the effects of PI are modulated by the availability of long-term memory (LTM) and verbal resources. Participants viewed sequences of 21 images, followed by a yes/no recognition test. Items were presented either quickly (250ms/image) or sufficiently slowly (1500ms/image) to produce LTM representations, either with or without verbal suppression. Across conditions, participants performed better in the unique than in the repeated condition, and better for slow than for fast presentations. In contrast, verbal suppression impaired performance only with slow presentations. The relative cost of PI was remarkably constant across conditions: relative to the unique condition, performance in the repeated condition was about 15% lower in all conditions. The cost of PI thus seems to be a function of the relative strength or recency of target items and interfering items, but relatively insensitive to other experimental manipulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    Full Text Available ...their basic properties and relationship. In Section 3 we present a modal instance of these constructions, which also illustrates with an example how to reason abductively with constrained entailment in a causal or action-oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  17. CAROTENOID RETENTION IN MINIMALLY PROCESSED BIOFORTIFIED GREEN CORN STORED UNDER RETAIL MARKETING CONDITIONS

    Directory of Open Access Journals (Sweden)

    Natália Alves Barbosa

    2015-08-01

    Full Text Available Storing processed food products can cause alterations in their chemical composition. Thus, the objective of this study was to evaluate carotenoid retention in the kernels of minimally processed normal and provitamin A (proVA) biofortified green corn ears that were packaged in polystyrene trays covered with commercial film or in multilayered polynylon packaging material and were stored. Throughout the storage period, the carotenoids were extracted from the corn kernels using organic solvents and were quantified using HPLC. A complete factorial design including three factors (cultivar, packaging and storage period) was applied for analysis. The green kernels of maize cultivars BRS1030 and BRS4104 exhibited similar carotenoid profiles, with zeaxanthin being the main carotenoid. Higher concentrations of the carotenoids lutein, β-cryptoxanthin, and β-carotene, of total carotenoids and of total provitamin A carotenoids were detected in the green kernels of the biofortified BRS4104 maize. The packaging method did not affect carotenoid retention in the kernels of minimally processed green corn ears during the storage period.

  18. Free and constrained symplectic integrators for numerical general relativity

    International Nuclear Information System (INIS)

    Richter, Ronny; Lubich, Christian

    2008-01-01

    We consider symplectic time integrators in numerical general relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively (1+1)-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild spacetime. We compare symplectic and non-symplectic integrators for free evolution, showing very different numerical behavior for nearly conserved quantities in the perturbed Minkowski problem. Further, we compare free and constrained evolution, demonstrating in our examples that enforcing the momentum constraints can turn an unstable free evolution into a stable constrained evolution. This is demonstrated in the stabilization of a perturbed Minkowski problem with Dirac gauge, and in the suppression of the propagation of boundary instabilities into the interior of the domain in Schwarzschild spacetime.
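
    For a separable Hamiltonian H = p²/2 + V(q), the Stoermer-Verlet scheme mentioned above is simply a half kick, a drift, and another half kick. The following minimal sketch (a one-dimensional harmonic oscillator, not the ADM equations of the paper) illustrates the hallmark property motivating its use: the symplectic integrator keeps the energy error bounded, while a non-symplectic explicit Euler step lets the energy drift.

```python
import numpy as np

def stoermer_verlet(q0, p0, grad_V, dt, n_steps):
    """Kick-drift-kick Stoermer-Verlet integrator for H(q, p) = p^2/2 + V(q)."""
    q, p = float(q0), float(p0)
    out = np.empty((n_steps + 1, 2))
    out[0] = q, p
    for i in range(1, n_steps + 1):
        p -= 0.5 * dt * grad_V(q)   # half kick
        q += dt * p                 # drift
        p -= 0.5 * dt * grad_V(q)   # half kick
        out[i] = q, p
    return out

def explicit_euler(q0, p0, grad_V, dt, n_steps):
    """Non-symplectic explicit Euler, for contrast."""
    q, p = float(q0), float(p0)
    out = np.empty((n_steps + 1, 2))
    out[0] = q, p
    for i in range(1, n_steps + 1):
        q, p = q + dt * p, p - dt * grad_V(q)
        out[i] = q, p
    return out

# Harmonic oscillator: V(q) = q^2/2, so grad_V(q) = q and H = (p^2 + q^2)/2.
energy = lambda t: 0.5 * t[:, 1]**2 + 0.5 * t[:, 0]**2
tv = stoermer_verlet(1.0, 0.0, lambda q: q, 0.05, 2000)
te = explicit_euler(1.0, 0.0, lambda q: q, 0.05, 2000)
Ev, Ee = energy(tv), energy(te)
assert np.ptp(Ev) < 1e-3       # symplectic: energy error stays bounded
assert Ee[-1] > 2.0 * Ee[0]    # Euler: energy grows step by step
```

    For this linear problem each Euler step multiplies the energy by exactly (1 + dt²), while Stoermer-Verlet conserves a nearby "shadow" Hamiltonian, which is the behavior behind the very different long-time evolution of nearly conserved quantities noted in the abstract.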

  19. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_{M/B}(N/B)) I/Os for triangulating N points in the plane, where

  20. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second-order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic-hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give an explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations.

  1. A field theory description of constrained energy-dissipation processes

    International Nuclear Information System (INIS)

    Mandzhavidze, I.D.; Sisakyan, A.N.

    2002-01-01

    A field theory description of dissipation processes constrained by a high-symmetry group is given. The formalism is presented using the example of multiple-hadron production processes, where the transition to thermodynamic equilibrium results from the kinetic energy of colliding particles dissipating into hadron masses. The dynamics of these processes is restricted because the constraints responsible for colour charge confinement must be taken into account. We develop a more general S-matrix formulation of the thermodynamics of nonequilibrium dissipative processes and find a necessary and sufficient condition for the validity of this description; this condition is similar to the correlation relaxation condition which, according to Bogolyubov, must apply as the system approaches equilibrium. This situation must physically occur in processes with extremely high multiplicity, at least if the hadron mass is nonzero. We also describe a new strong-coupling perturbation scheme, which is useful for taking symmetry restrictions on the dynamics of dissipation processes into account. We review the literature devoted to this problem.

  2. Minimization of heat slab nodes with higher order boundary conditions

    International Nuclear Information System (INIS)

    Solbrig, C.W.

    1992-01-01

    The accuracy of a numerical solution can be limited by the numerical approximation to the boundary conditions rather than the accuracy of the equations which describe the interior. The study presented in this paper compares the results from two different numerical formulations of the convective boundary condition on the face of a heat transfer slab. The standard representation of the boundary condition in a test problem yielded an unacceptable error even when the heat transfer slab was partitioned into over 300 nodes. A higher order boundary condition representation was obtained by using a second order approximation for the first derivative at the boundary and combining it with the general equation used for inner nodes. This latter formulation produced reasonable results when as few as ten nodes were used
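
    The two boundary closures compared in the abstract can be reproduced on a steady one-dimensional slab with a volumetric heat source and a convective (Robin) condition. The sketch below uses assumed material parameters and is an illustration of the technique, not the paper's transient calculation: it discretizes the convective boundary with either a first-order or a second-order one-sided difference and compares both against the exact quadratic profile. (For this quadratic solution the second-order closure happens to be exact, which makes the contrast stark.)

```python
import numpy as np

def heat_slab_steady(n, second_order, L=1.0, k=1.0, h=5.0, g=10.0, T_inf=0.0):
    """Steady 1-D conduction with volumetric source g, T(0) = 0 and a
    convective condition -k dT/dx = h (T - T_inf) at x = L."""
    dx = L / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0                          # fixed temperature at x = 0
    for i in range(1, n - 1):              # interior nodes: k T'' + g = 0
        A[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
        b[i] = -g * dx**2 / k
    if second_order:
        # second-order one-sided derivative: (3T_n - 4T_{n-1} + T_{n-2})/(2 dx)
        A[-1, -3:] = [k / (2 * dx), -4 * k / (2 * dx), 3 * k / (2 * dx) + h]
    else:
        # first-order one-sided derivative: (T_n - T_{n-1})/dx
        A[-1, -2:] = [-k / dx, k / dx + h]
    b[-1] = h * T_inf
    return np.linalg.solve(A, b)

# Exact quadratic profile T(x) = -(g/2k) x^2 + a x for the same parameters.
L, k, h, g, T_inf, n = 1.0, 1.0, 5.0, 10.0, 0.0, 21
a = (g * L + h * T_inf + h * g * L**2 / (2 * k)) / (k + h * L)
x = np.linspace(0.0, L, n)
exact = -g / (2 * k) * x**2 + a * x

err1 = np.abs(heat_slab_steady(n, False) - exact).max()
err2 = np.abs(heat_slab_steady(n, True) - exact).max()
assert err2 < err1   # the higher-order boundary closure is far more accurate
```

    With only 21 nodes the first-order closure leaves an O(dx) error that pollutes the whole profile, while the second-order closure matches the exact solution to solver precision, mirroring the paper's observation that a better boundary representation, not more nodes, removes the error.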

  3. On relevant boundary perturbations of unitary minimal models

    International Nuclear Information System (INIS)

    Recknagel, A.; Roggenkamp, D.; Schomerus, V.

    2000-01-01

    We consider unitary Virasoro minimal models on the disk with Cardy boundary conditions and discuss deformations by certain relevant boundary operators, analogous to tachyon condensation in string theory. Concentrating on the least relevant boundary field, we can perform a perturbative analysis of renormalization group fixed points. We find that the systems always flow towards stable fixed points which admit no further (non-trivial) relevant perturbations. The new conformal boundary conditions are in general given by superpositions of 'pure' Cardy boundary conditions

  4. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Minimally invasive procedures aim to resolve the disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision based on the literature, exemplified with a case describing reconstruction of the intestinal transit through this incision: a young, HIV-positive male patient in the late postoperative period of ileotyphlectomy, terminal ileostomy and closure of the ascending colon for an acute perforated abdomen due to ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. Access to the cavity was gained through the orifice resulting from the release of the stoma, with a side-to-side ileocolonic anastomosis performed with a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction, and hemostasis, in addition to requiring surgeon dexterity, but without the need for investments in technology; moreover, the learning curve is reported as being lower than that for videolaparoscopy. Laparotomy with minimal incision should be considered a valid and viable option in the treatment of surgical conditions.

  5. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
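As an illustrative aside (a standard textbook operator, not the oblique thresholding introduced in the paper), the scalar proximity map for the ℓ1 term is soft-thresholding, which subspace iterations of this kind generalize:

```python
# Scalar proximity map of t*|x|: soft_threshold(y, t) solves
# argmin_x 0.5*(x - y)**2 + t*abs(x) componentwise.
# (Illustrative only; the paper's oblique thresholding extends this idea
# to subspaces carrying a seminorm constraint.)
def soft_threshold(y, t):
    if y > t:
        return y - t
    if y < -t:
        return y + t
    return 0.0

print([soft_threshold(v, 1.0) for v in (-2.5, -0.3, 0.0, 0.7, 3.0)])
# -> [-1.5, 0.0, 0.0, 0.0, 2.0]
```

Values inside the threshold band are set exactly to zero, which is what makes ℓ1 penalties promote sparsity.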

  6. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given

  7. Determining the Optimal Solution for Quadratically Constrained Quadratic Programming (QCQP) on Energy-Saving Generation Dispatch Problem

    Science.gov (United States)

    Lesmana, E.; Chaerani, D.; Khansa, H. N.

    2018-03-01

    Energy-Saving Generation Dispatch (ESGD) is a scheme made by the Chinese Government in an attempt to minimize the CO2 emission produced by power plants. The scheme responds to global warming, which is primarily caused by excess CO2 in the Earth's atmosphere: while the need for electricity is absolute, the power plants producing it are mostly thermal plants, which emit large amounts of CO2. Several approaches to fulfilling this scheme have been made; one of them, via Minimum Cost Flow, results in a Quadratically Constrained Quadratic Programming (QCQP) formulation. In this paper, the ESGD problem with Minimum Cost Flow in QCQP form is solved using Lagrange's multiplier method
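The Lagrange multiplier idea can be sketched on a toy dispatch problem (my own simplified equality-constrained case; the paper's QCQP additionally carries quadratic constraints):

```python
# Minimize a quadratic generation cost sum(a_i * p_i**2) subject to meeting
# total demand sum(p_i) = d.  Stationarity of the Lagrangian gives
# p_i = lam / (2 * a_i); the multiplier lam is fixed by the demand constraint.
# (Toy example of my own, not the paper's full ESGD model.)
def dispatch(costs, demand):
    inv = sum(1.0 / (2.0 * a) for a in costs)   # sum of 1/(2*a_i)
    lam = demand / inv                          # multiplier from sum(p_i) = d
    return [lam / (2.0 * a) for a in costs]

p = dispatch([1.0, 2.0, 4.0], demand=7.0)
print(p)       # -> [4.0, 2.0, 1.0]  (cheaper units carry more load)
print(sum(p))  # -> 7.0
```

The closed form exists only because the constraint here is linear; quadratic constraints as in the QCQP generally require iterating on the multipliers.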

  8. Large non-Gaussianity in non-minimal inflation

    CERN Document Server

    Gong, Jinn-Ouk

    2011-01-01

    We consider a simple inflation model with a complex scalar field coupled to gravity non-minimally. Both the modulus and the angular directions of the complex scalar are slowly rolling, leading to two-field inflation. The modulus direction becomes flat due to the non-minimal coupling, and the angular direction becomes a pseudo-Goldstone boson from a small breaking of the global U(1) symmetry. We show that large non-Gaussianity can be produced during slow-roll inflation under a reasonable assumption on the initial condition of the angular direction. This scenario may be realized in particle physics models such as the Standard Model with two Higgs doublets.

  9. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    Science.gov (United States)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is sensitive only to the shear wave speed, not to the compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computation resources, and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.

  10. Preventive Security-Constrained Optimal Power Flow Considering UPFC Control Modes

    Directory of Open Access Journals (Sweden)

    Xi Wu

    2017-08-01

    The successful application of the unified power flow controller (UPFC) provides a new control method for the secure and economic operation of power systems. In order to make full use of the UPFC and improve the economic efficiency and static security of a power system, a preventive security-constrained power flow optimization method considering UPFC control modes is proposed in this paper. Firstly, an iterative method considering UPFC control modes is derived for power flow calculation. Taking into account the influence of different UPFC control modes on the distribution of power flow after an N-1 contingency, an optimization model is then constructed with minimal system operation cost and maximal static security margin as the objectives. Based on this model, the particle swarm optimization (PSO) algorithm is utilized to optimize power system operating parameters and UPFC control modes simultaneously. Finally, the standard IEEE 30-bus system is used to demonstrate that the proposed method fully exploits the potential of static control of the UPFC and significantly increases the economic efficiency and static security of the power system.
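A minimal, generic PSO sketch (standard textbook PSO with assumed inertia and acceleration constants, not the paper's exact configuration, which also encodes UPFC control modes) shows how such an optimizer searches parameters without gradients:

```python
# Standard particle swarm optimization on a 2-D quadratic bowl.
# The inertia (0.7) and acceleration (1.5) constants are common textbook
# choices, assumed here for illustration.
import random

def pso(f, dim=2, n=20, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=f)[:]                # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

def sphere(x):
    return sum(v * v for v in x)

best = pso(sphere)
print(best, sphere(best))  # best is close to the origin
```

Because PSO evaluates only the objective, discrete decisions such as a control-mode selection can be folded into the particle encoding, which is what makes it suitable for the mixed optimization described above.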

  11. Fermilab Tevatron and CERN LEP II probes of minimal and string-motivated supergravity models

    International Nuclear Information System (INIS)

    Baer, H.; Gunion, J.F.; Kao, C.; Pois, H.

    1995-01-01

    We explore the ability of the Fermilab Tevatron to probe minimal supersymmetry with high-energy-scale boundary conditions motivated by supersymmetry breaking in the context of minimal and string-motivated supergravity theory. A number of boundary condition possibilities are considered: dilaton-like string boundary conditions applied at the standard GUT unification scale or alternatively at the string scale; and extreme ('no-scale') minimal supergravity boundary conditions imposed at the GUT scale or string scale. For numerous specific cases within each scenario the sparticle spectra are computed and then fed into ISAJET 7.07 so that explicit signatures can be examined in detail. We find that, for some of the boundary condition choices, large regions of parameter space can be explored via same-sign dilepton and isolated trilepton signals. For other choices, the mass reach of Tevatron collider experiments is much more limited. We also compare the mass reach of Tevatron experiments with the corresponding reach at CERN LEP 200

  12. Downstream-Conditioned Maximum Entropy Method for Exit Boundary Conditions in the Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    Javier A. Dottori

    2015-01-01

    A method for modeling outflow boundary conditions in the lattice Boltzmann method (LBM), based on maximization of the local entropy, is presented. The maximization procedure is constrained by macroscopic values and downstream components. The method is applied to fully developed boundary conditions of the Navier-Stokes equations in rectangular channels. Comparisons are made with other alternative methods. In addition, the new downstream-conditioned entropy is studied, and it is found to correlate with the velocity gradient during flow development.
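The constrained entropy maximization at the heart of such methods can be sketched in a simplified scalar setting (my own toy example, not the LBM discretization): among discrete distributions with a prescribed mean, the entropy maximizer takes a Gibbs form, and the Lagrange multiplier can be found by bisection:

```python
# Maximize entropy of p_i over states x_i subject to a prescribed mean.
# The maximizer is p_i ~ exp(-b * x_i); the multiplier b is located by
# bisection, since the resulting mean is strictly decreasing in b.
# (Toy illustration of constrained entropy maximization, not the paper's
# lattice Boltzmann construction.)
import math

def maxent_mean(xs, target_mean, lo=-50.0, hi=50.0):
    def mean_for(b):
        ws = [math.exp(-b * x) for x in xs]
        return sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid          # mean too high: increase b
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    ws = [math.exp(-b * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

print(maxent_mean([0.0, 1.0, 2.0], 1.0))  # symmetric mean -> roughly uniform (b = 0)
```

Adding further macroscopic constraints, as the abstract describes, simply adds multipliers, one per constrained moment.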

  13. Constrained Quadratic Programming and Neurodynamics-Based Solver for Energy Optimization of Biped Walking Robots

    Directory of Open Access Journals (Sweden)

    Liyang Wang

    2017-01-01

    The application of biped robots is always hampered by their high energy consumption. This paper contributes by optimizing the joint torques to decrease the energy consumption without changing the biped gaits. In this work, a constrained quadratic programming (QP) problem for energy optimization is formulated, and a neurodynamics-based solver is presented to solve it. Differing from the existing literature, the proposed neurodynamics-based energy optimization (NEO) strategy minimizes the energy consumption while simultaneously guaranteeing the following three important constraints: (i) the force-moment equilibrium equation of biped robots, (ii) the friction conditions each leg must satisfy on the ground to hold the biped robot without slippage and tipping over, and (iii) the physical limits of the motors. Simulations demonstrate that the proposed strategy is effective for energy-efficient biped walking.
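For illustration, a box-constrained QP like constraint (iii) (motor limits) can be handled by a simple projected-gradient iteration; this is a minimal stand-in of my own, not the paper's neurodynamics-based solver:

```python
# Projected gradient descent for: minimize 0.5*x'Qx - c'x  s.t.  lo <= x <= hi.
# Each iteration takes a gradient step and projects back onto the box,
# the same structure a torque-limit constraint imposes.
def solve_box_qp(Q, c, lo, hi, step=0.1, iters=500):
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the objective: Q x - c
        g = [sum(Q[i][j] * x[j] for j in range(n)) - c[i] for i in range(n)]
        # descent step, then projection onto the box constraints
        x = [min(hi[i], max(lo[i], x[i] - step * g[i])) for i in range(n)]
    return x

# Unconstrained minimizer of 0.5*(x0**2 + x1**2) - 3*x0 - 0.5*x1 is (3, 0.5);
# the box [0, 2] x [0, 2] clips the first coordinate at its upper limit.
x = solve_box_qp([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], [0.0, 0.0], [2.0, 2.0])
print(x)  # approximately [2.0, 0.5]
```

Equality constraints such as (i) and the friction cones in (ii) need a richer projection or multiplier dynamics, which is what the neurodynamic solver provides.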

  14. Variational minimization of atomic and molecular ground-state energies via the two-particle reduced density matrix

    International Nuclear Information System (INIS)

    Mazziotti, David A.

    2002-01-01

    Atomic and molecular ground-state energies are variationally determined by constraining the two-particle reduced density matrix (2-RDM) to satisfy positivity conditions. Because each positivity condition corresponds to correcting the ground-state energies for a class of Hamiltonians with two-particle interactions, these conditions collectively provide a new approach to many-body theory that, unlike perturbation theory, can capture significantly correlated phenomena including the multireference effects of potential-energy surfaces. The D, Q, and G conditions for the 2-RDM are extended through generalized lifting operators inspired from the formal solution of N-representability. These lifted conditions agree with the hierarchy of positivity conditions presented by Mazziotti and Erdahl [Phys. Rev. A 63, 042113 (2001)]. The connection between positivity and the formal solution explains how constraining higher RDMs to be positive semidefinite improves the N representability of the 2-RDM and suggests using pieces of higher positivity conditions that computationally scale like the D condition. With the D, Q, and G conditions as well as pieces of higher positivity the electronic energies for Be, LiH, H2O, and BH are computed through a primal-dual interior-point algorithm for positive semidefinite programming. The variational method produces potential-energy surfaces that are highly accurate even far from the equilibrium geometry where single-reference perturbation-based methods often fail to produce realistic energies

  15. Quantum cosmology of classically constrained gravity

    International Nuclear Information System (INIS)

    Gabadadze, Gregory; Shang Yanwen

    2006-01-01

    In [G. Gabadadze, Y. Shang, hep-th/0506040] we discussed a classically constrained model of gravity. This theory contains known solutions of General Relativity (GR), and admits solutions that are absent in GR. Here we study cosmological implications of some of these new solutions. We show that a spatially-flat de Sitter universe can be created from 'nothing'. This universe has boundaries, and its total energy equals zero. Although the probability to create such a universe is exponentially suppressed, it favors initial conditions suitable for inflation. Then we discuss a finite-energy solution with a nonzero cosmological constant and zero space-time curvature. There is no tunneling suppression to fluctuate into this state. We show that for a positive cosmological constant this state is unstable: it can rapidly transition to a de Sitter universe, providing a new unsuppressed channel for inflation. For a negative cosmological constant the space-time flat solution is stable.

  16. Topologically protected qubits as minimal Josephson junction arrays with non-trivial boundary conditions: A proposal

    Energy Technology Data Exchange (ETDEWEB)

    Cristofano, Gerardo; Marotta, Vincenzo [Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' , and INFN, Sezione di Napoli, Via Cintia, Complesso Universitario M. Sant' Angelo, 80126 Napoli (Italy); Naddeo, Adele [Dipartimento di Fisica ' E.R. Caianiello' , Universita degli Studi di Salerno and CNISM, Unita di Ricerca di Salerno, Via Salvador Allende, 84081 Baronissi (Italy)], E-mail: naddeo@sa.infn.it; Niccoli, Giuliano [Theoretical Physics Group, DESY, NotkeStrasse 85, 22603 Hamburg (Germany)

    2008-11-17

    Recently a one-dimensional closed ladder of Josephson junctions has been studied [G. Cristofano, V. Marotta, A. Naddeo, G. Niccoli, Phys. Lett. A 372 (2008) 2464] within a twisted conformal field theory (CFT) approach [G. Cristofano, G. Maiella, V. Marotta, Mod. Phys. Lett. A 15 (2000) 1679; G. Cristofano, G. Maiella, V. Marotta, G. Niccoli, Nucl. Phys. B 641 (2002) 547] and shown to develop the phenomenon of flux fractionalization [G. Cristofano, V. Marotta, A. Naddeo, G. Niccoli, Eur. Phys. J. B 49 (2006) 83]. That led us to predict the emergence of a topological order in such a system [G. Cristofano, V. Marotta, A. Naddeo, J. Stat. Mech.: Theory Exp. (2005) P03006]. In this Letter we analyze the ground states and the topological properties of fully frustrated Josephson junction arrays (JJA) arranged in a Corbino disk geometry for a variety of boundary conditions. In particular minimal configurations of fully frustrated JJA are considered and shown to exhibit the properties needed in order to build up a solid state qubit, protected from decoherence. The stability and transformation properties of the ground states of the JJA under adiabatic magnetic flux changes are analyzed in detail in order to provide a tool for the manipulation of the proposed qubit.

  17. Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging.

    Directory of Open Access Journals (Sweden)

    Xingjian Yu

    In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data, taken at durations ranging from 10 seconds to minutes according to some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition; besides, such algorithms cannot provide separate information about the structure and about the variation of shape and rate of metabolism, which play a major role in improving the visualization of contrast for some diagnostic requirements. This paper presents a novel low-rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component, while variations between successive frames are abstracted into a sparse component. The resulting nuclear-norm and l1-norm related minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets.

  18. Cosmicflows Constrained Local UniversE Simulations

    Science.gov (United States)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h⁻¹ Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s⁻¹, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h⁻¹ Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h⁻¹ Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.

  19. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ) reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of partial derivatives of Rv with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of [Formula: see text] sub-populations are obtained, and the bounds for optimal solutions are derived for [Formula: see text] sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both [Formula: see text] and [Formula: see text] are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.

  20. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J

    2018-01-01

    PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES....... RESULTS: When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels...

  1. Static elliptic minimal surfaces in AdS{sub 4}

    Energy Technology Data Exchange (ETDEWEB)

    Pastras, Georgios [NCSR ' ' Demokritos' ' , Institute of Nuclear and Particle Physics, Attiki (Greece)

    2017-11-15

    The Ryu-Takayanagi conjecture connects the entanglement entropy in the boundary CFT to the area of open co-dimension two minimal surfaces in the bulk. Especially in AdS{sub 4}, the latter are two-dimensional surfaces, and, thus, solutions of a Euclidean non-linear sigma model on a symmetric target space that can be reduced to an integrable system via Pohlmeyer reduction. In this work, we construct static minimal surfaces in AdS{sub 4} that correspond to elliptic solutions of the reduced system, namely the cosh-Gordon equation, via the inversion of Pohlmeyer reduction. The constructed minimal surfaces comprise a two-parameter family of surfaces that include helicoids and catenoids in H{sup 3} as special limits. Minimal surfaces that correspond to identical boundary conditions are discovered within the constructed family of surfaces and the relevant geometric phase transitions are studied. (orig.)

  2. Gravitational waves in Fully Constrained Formulation in a dynamical spacetime with matter content

    Energy Technology Data Exchange (ETDEWEB)

    Cordero-Carrion, Isabel; Cerda-Duran, Pablo [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, D-85741, Garching (Germany); Ibanez, Jose MarIa, E-mail: chabela@mpa-garching.mpg.de, E-mail: cerda@mpa-garching.mpg.de, E-mail: jose.m.ibanez@uv.es [Departamento de AstronomIa y Astrofisica, Universidad de Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Valencia (Spain)

    2011-09-22

    We analyze numerically the behaviour of the hyperbolic sector of the Fully Constrained Formulation (FCF) (Bonazzola et al. 2004). The numerical experiments allow us to be confident in the performance of the upgraded version of the CoCoNuT code (Dimmelmeier et al. 2005), obtained by replacing the Conformally Flat Condition (CFC), an approximation of the Einstein equations, by FCF. The first gravitational waves in FCF in a dynamical spacetime with matter content will be shown.

  3. Factorization of Constrained Energy K-Network Reliability with Perfect Nodes

    OpenAIRE

    Burgos, Juan Manuel

    2013-01-01

    This paper proves a new general factorization theorem for constrained-energy K-network reliability with perfect nodes. As in the unconstrained case, besides its theoretical mathematical importance, the theorem shows how to do parallel processing in exact network constrained-energy reliability calculations in order to reduce the processing time of this NP-hard problem. Together with a new simple factorization formula for its calculation, we propose a new definition of constrained energy network reliability motiva...

  4. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  5. Conditioning of nuclear reactor fuel

    International Nuclear Information System (INIS)

    1975-01-01

    A method is given for conditioning the fuel of a nuclear reactor core to minimize failure of the fuel cladding, comprising increasing the fuel rod power to a desired maximum power level at a rate below the critical rate that would cause cladding damage. Such conditioning allows subsequent freedom of power changes below and up to said maximum power level with minimized danger of cladding damage. (Auth.)

  6. Cross-constrained problems for nonlinear Schrodinger equation with harmonic potential

    Directory of Open Access Journals (Sweden)

    Runzhang Xu

    2012-11-01

    This article studies a nonlinear Schrödinger equation with harmonic potential by constructing different cross-constrained problems. By comparing these cross-constrained problems, we derive different sharp criteria and different invariant manifolds that separate the global solutions from the blowup solutions. Moreover, we conclude that some manifolds are empty due to the essence of the cross-constrained problems. Besides, we compare the three cross-constrained problems and the three depths of the potential wells. In this way, we explain the gaps in [J. Shu and J. Zhang, Nonlinear Schrödinger equation with harmonic potential, Journal of Mathematical Physics, 47, 063503 (2006)], which were pointed out in [R. Xu and Y. Liu, Remarks on nonlinear Schrödinger equation with harmonic potential, Journal of Mathematical Physics, 49, 043512 (2008)].

  7. Effects of constrained arm swing on vertical center of mass displacement during walking.

    Science.gov (United States)

    Yang, Hyung Suk; Atkins, Lee T; Jensen, Daniel B; James, C Roger

    2015-10-01

    The purpose of this study was to determine the effects of constraining arm swing on the vertical displacement of the body's center of mass (COM) during treadmill walking, and to examine several common gait variables that may account for or mask differences in the body's COM motion with and without arm swing. Participants included 20 healthy individuals (10 male, 10 female; age: 27.8 ± 6.8 years). The body's COM displacement, first and second peak vertical ground reaction forces (VGRFs), lowest VGRF during mid-stance, peak summed bilateral VGRF, lower extremity sagittal joint angles, stride length, and foot contact time were measured with and without arm swing during walking at 1.34 m/s. The body's COM displacement was greater with the arms constrained (arm swing: 4.1 ± 1.2 cm; arms constrained: 4.9 ± 1.2 cm). Ground reaction force data indicated that the COM displacement increased in both double limb and single limb stance, although kinematic patterns appeared visually similar between conditions. Shortened stride length and foot contact time were also observed, although these do not seem to account for the increased COM displacement; a change in arm COM acceleration might have contributed to the difference. These findings indicate that a change in arm swing causes differences in vertical COM displacement, which could increase energy expenditure. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Procedures minimally invasive image-guided

    International Nuclear Information System (INIS)

    Mora Guevara, Alejandro

    2011-01-01

    A literature review focused on minimally invasive procedures has been performed at the Department of Radiology of the Hospital Calderon Guardia. A multidisciplinary team has been assembled for decision making. The materials, possible complications and the available imaging techniques, such as ultrasound, computed tomography and magnetic resonance imaging, have been determined according to the procedure to be performed. The review has supported these medical interventions by securing the best materials, resources and conditions for successful performance of the procedures and good results

  9. Construction schedules slack time minimizing

    Science.gov (United States)

    Krzemiński, Michał

    2017-07-01

    The article presents two models, developed by the author, for minimizing the downtime of work brigades. The models have been developed for construction schedules executed using the uniform work method. Application of flow-shop models is possible and useful for the implementation of large objects which can be divided into plots. The article also presents a condition determining which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the validity of the work on the newly developed models.

  10. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    RNA polymerase (RNAP) and the DNA template must rotate relative to each other during transcription elongation. In the cell, however, the components of the transcription apparatus may be subject to rotary constraints. For instance, the DNA is divided into topological domains that are delineated...... of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...... constrained. We conclude that transcription of a natural bacterial gene may proceed with high efficiency despite the fact that newly synthesized RNA is entangled around the template in the narrow confines of torsionally constrained supercoiled DNA....

  11. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disks around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.

  12. Constraining the ensemble Kalman filter for improved streamflow forecasting

    Science.gov (United States)

    Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James

    2018-05-01

    Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a "typical" hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three-year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several events were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also
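
    The combined mass and flux constraints described above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' code: the function name, array shapes, and the per-state `capacity` and `max_flux` limits are assumptions.

```python
import numpy as np

def constrain_ensemble(prior, updated, capacity, max_flux):
    """Apply mass and flux constraints to EnKF-updated states.

    prior, updated : (n_ens, n_states) arrays of states before/after
                     the Kalman update; capacity and max_flux are
                     illustrative per-state physical limits.
    """
    # Mass constraint: states must stay non-negative and below capacity.
    constrained = np.clip(updated, 0.0, capacity)
    # Flux constraint: limit the rate of water moved by the update
    # so state adjustments stay physically plausible.
    flux = constrained - prior
    flux = np.clip(flux, -max_flux, max_flux)
    return prior + flux
```

    Clipping the flux after the mass clip cannot re-violate the mass bounds: shrinking the update toward zero keeps each state between its prior value and its mass-clipped target.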

  13. Non-unitary neutrino mixing and CP violation in the minimal inverse seesaw model

    International Nuclear Information System (INIS)

    Malinsky, Michal; Ohlsson, Tommy; Xing, Zhi-zhong; Zhang He

    2009-01-01

    We propose a simplified version of the inverse seesaw model, in which only two pairs of the gauge-singlet neutrinos are introduced, to interpret the observed neutrino mass hierarchy and lepton flavor mixing at or below the TeV scale. This 'minimal' inverse seesaw scenario (MISS) is technically natural and experimentally testable. In particular, we show that the effective parameters describing the non-unitary neutrino mixing matrix are strongly correlated in the MISS, and thus, their upper bounds can be constrained by current experimental data in a more restrictive way. The Jarlskog invariants of non-unitary CP violation are calculated, and the discovery potential of such new CP-violating effects in the near detector of a neutrino factory is discussed.

  14. KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR

    Directory of Open Access Journals (Sweden)

    John A. Mercer

    2005-06-01

    Full Text Available It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played a critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since the velocity ranges were not similar between conditions, resulting in the analysis of six subjects. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/m·s-1; SL2.5: -0.003 ± 0.14 BW/m·s-1; p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/m·s-1; SL2.5: 1.39 ± 1.51 %/m·s-1; p < 0.05). Stride length was an important factor

  15. Optimal replacement of residential air conditioning equipment to minimize energy, greenhouse gas emissions, and consumer cost in the US

    International Nuclear Information System (INIS)

    De Kleine, Robert D.; Keoleian, Gregory A.; Kelly, Jarod C.

    2011-01-01

    A life cycle optimization of the replacement of residential central air conditioners (CACs) was conducted in order to identify replacement schedules that minimized three separate objectives: life cycle energy consumption, greenhouse gas (GHG) emissions, and consumer cost. The analysis was conducted for the time period of 1985-2025 for Ann Arbor, MI and San Antonio, TX. Using annual sales-weighted efficiencies of residential CAC equipment, the tradeoff between potential operational savings and the burdens of producing new, more efficient equipment was evaluated. The optimal replacement schedule for each objective was identified for each location and service scenario. In general, minimizing energy consumption required frequent replacement (4-12 replacements), minimizing GHG emissions required fewer replacements (2-5 replacements), and minimizing cost required the fewest replacements (1-3 replacements) over the time horizon. Scenario analyses of different federal efficiency standards, regional standards, and Energy Star purchases were conducted to quantify each policy's impact. For example, a 16 SEER regional standard in Texas was shown to either reduce primary energy consumption by 13%, GHG emissions by 11%, or cost by 6-7% when performing optimal replacement of CACs from 2005 or before. The results also indicate that proper servicing should be a higher priority than optimal replacement to minimize environmental burdens. - Highlights: → Optimal replacement schedules for residential central air conditioners were found. → Minimizing energy required more frequent replacement than minimizing consumer cost. → Significant variation in optimal replacement was observed for Michigan and Texas. → Rebates for altering replacement patterns are not cost effective for GHG abatement. → Maintenance levels were significant in determining the energy and GHG impacts.
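
    The tradeoff the optimization resolves (operational savings from a more efficient new unit versus the cost of buying it) can be illustrated with a toy enumeration over keep/replace decisions. All names and cost numbers below are hypothetical stand-ins for the study's life cycle model, not its data:

```python
from itertools import product

def best_schedule(horizon, purchase, op_cost):
    """Enumerate keep/replace decisions to minimize total cost.

    op_cost(vintage, year) gives the annual operating cost during
    `year` of a unit bought in year `vintage`; newer vintages are
    assumed more efficient.  The initial unit is bought in year 0,
    and decisions cover years 1..horizon-1.
    """
    best = (float("inf"), None)
    for plan in product([0, 1], repeat=horizon - 1):
        vintage, cost = 0, op_cost(0, 0)
        for year, replace in enumerate(plan, start=1):
            if replace:
                vintage, cost = year, cost + purchase
            cost += op_cost(vintage, year)
        best = min(best, (cost, (0,) + plan))
    return best
```

    For a two-year horizon where a replacement costs 1 and cuts the annual operating cost from 5 to 3, replacing after the first year is optimal (total cost 5 + 1 + 3 = 9 versus 10 for keeping).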

  16. Constraining Lipid Biomarker Paleoclimate Proxies in a Small Arctic Watershed

    Science.gov (United States)

    Dion-Kirschner, H.; McFarlin, J. M.; Axford, Y.; Osburn, M. R.

    2017-12-01

    Arctic amplification of climate change renders high-latitude environments unusually sensitive to changes in climatic conditions (Serreze and Barry, 2011). Lipid biomarkers, and their hydrogen and carbon isotopic compositions, can yield valuable paleoclimatic and paleoecological information. However, many variables affect the production and preservation of lipids and their constituent isotopes, including precipitation, plant growth conditions, biosynthesis mechanisms, and sediment depositional processes (Sachse et al., 2012). These variables are particularly poorly constrained for high-latitude environments, where trees are sparse or not present, and plants grow under continuous summer light and cool temperatures during a short growing season. Here we present a source-to-sink study of a single watershed from the Kangerlussuaq region of southwest Greenland. Our analytes from in and around 'Little Sugarloaf Lake' (LSL) include terrestrial and aquatic plants, plankton, modern lake water, surface sediments, and a sediment core. This diverse sample set allows us to fulfill three goals: 1) We evaluate the production of lipids and isotopic signatures in the modern watershed in comparison to modern climate. Our data exhibit genus-level trends in leaf wax production and isotopic composition, and help clarify the difference between terrestrial and aquatic signals. 2) We evaluate the surface sediment of LSL to determine how lipid biomarkers from the watershed are incorporated into sediments. We constrain the relative contributions of terrestrial plants, aquatic plants, and other aquatic organisms to the sediment in this watershed. 3) We apply this modern source-to-sink calibration to the analysis of a 65 cm sediment core record. Our core is organic-rich, and relatively high deposition rates allow us to reconstruct paleoenvironmental changes with high resolution. Our work will help determine the veracity of these common paleoclimate proxies, specifically for research in

  17. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since such characters have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. The stroke width of each connected component was then calculated from the altitudes of those triangles. The experimental results demonstrated the effectiveness of the proposed method.
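
    The altitude-based stroke width estimate is simple to state: for a triangle spanning a stroke, the altitude is twice the triangle's area divided by the base length. A minimal sketch of that calculation (an illustrative reimplementation; the paper's constrained Delaunay pipeline itself is not reproduced here):

```python
import numpy as np

def triangle_altitude(a, b, c):
    """Altitude from vertex c onto edge ab: 2 * area / |ab|.

    Used here as a local stroke-width estimate, in the spirit of the
    paper's altitude-based calculation (names are illustrative).
    """
    a, b, c = map(np.asarray, (a, b, c))
    ab, ac = b - a, c - a
    area = 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])  # 2D cross product
    return 2.0 * area / np.linalg.norm(ab)
```

    For a triangle with base from (0, 0) to (4, 0) and apex (1, 3), the altitude (and hence the local stroke-width estimate) is 3.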

  18. On the notion of Jacobi fields in constrained calculus of variations

    Directory of Open Access Journals (Sweden)

    Massa Enrico

    2016-12-01

    Full Text Available In variational calculus, the minimality of a given functional under arbitrary deformations with fixed end-points is established through an analysis of the so called second variation. In this paper, the argument is examined in the context of constrained variational calculus, assuming piecewise differentiable extremals, commonly referred to as extremaloids. The approach relies on the existence of a fully covariant representation of the second variation of the action functional, based on a family of local gauge transformations of the original Lagrangian and on a set of scalar attributes of the extremaloid, called the corners' strengths [16]. In discussing the positivity of the second variation, a relevant role is played by the Jacobi fields, defined as infinitesimal generators of 1-parameter groups of diffeomorphisms preserving the extremaloids. Along a piecewise differentiable extremal, these fields are generally discontinuous across the corners. A thorough analysis of this point is presented. An alternative characterization of the Jacobi fields as solutions of a suitable accessory variational problem is established.

  19. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level (custodial technicolor) and argue that these models...

  20. 21 CFR 888.3300 - Hip joint metal constrained cemented or uncemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal constrained cemented or uncemented... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3300 Hip joint metal constrained cemented or uncemented prosthesis. (a) Identification. A hip joint metal constrained...

  1. Conditioned pain modulation is minimally influenced by cognitive evaluation or imagery of the conditioning stimulus

    Directory of Open Access Journals (Sweden)

    Bernaba M

    2014-11-01

    Full Text Available Mario Bernaba, Kevin A Johnson, Jiang-Ti Kong, Sean Mackey. Stanford Systems Neuroscience and Pain Laboratory, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA. Purpose: Conditioned pain modulation (CPM) is an experimental approach for probing endogenous analgesia by which one painful stimulus (the conditioning stimulus) may inhibit the perceived pain of a subsequent stimulus (the test stimulus). Animal studies suggest that CPM is mediated by a spino–bulbo–spinal loop using objective measures such as neuronal firing. In humans, pain ratings are often used as the end point. Because pain self-reports are subject to cognitive influences, we tested whether cognitive factors would impact CPM results in healthy humans. Methods: We conducted a within-subject, crossover study of healthy adults to determine the extent to which CPM is affected by (1) threatening and reassuring evaluation and (2) imagery alone of a cold conditioning stimulus. We used a heat stimulus individualized to 5/10 on a visual analog scale as the test stimulus and computed the magnitude of CPM by subtracting the postconditioning rating from the baseline pain rating of the heat stimulus. Results: We found that although evaluation can increase the pain rating of the conditioning stimulus, it did not significantly alter the magnitude of CPM. We also found that imagery of cold pain alone did not result in a statistically significant CPM effect. Conclusion: Our results suggest that CPM is primarily dependent on sensory input, and that the cortical processes of evaluation and imagery have little impact on CPM. These findings lend support for CPM as a useful tool for probing endogenous analgesia through subcortical mechanisms. Keywords: conditioned pain modulation, endogenous analgesia, evaluation, imagery, cold pressor test, CHEPS, contact heat-evoked potential stimulator

  2. A Variant of the Topkis-Veinott Method for Solving Inequality Constrained Optimization Problems

    International Nuclear Information System (INIS)

    Birge, J. R.; Qi, L.; Wei, Z.

    2000-01-01

    In this paper we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z)\{z}, f_0(y) ≠ f_0(z), where f_0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm.
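
    For intuition, the direction-finding subproblem of the classic Topkis-Veinott method can be written as a small linear program (the paper's variant replaces this with a linearly constrained positive semidefinite QP; the sketch below shows only the classic LP, using `scipy.optimize.linprog`):

```python
import numpy as np
from scipy.optimize import linprog

def tv_direction(grad_f, g_vals, grad_g):
    """Classic Topkis-Veinott feasible-direction LP:
        min  eta
        s.t. grad_f . d            <= eta
             g_i(x) + grad_g_i . d <= eta   for each constraint i
             -1 <= d_j <= 1
    Decision variables are z = (d, eta); eta < 0 certifies a
    feasible descent direction d at the current feasible point x."""
    n, m = len(grad_f), len(g_vals)
    c = np.zeros(n + 1)
    c[-1] = 1.0                        # minimize eta
    A = np.zeros((m + 1, n + 1))
    b = np.zeros(m + 1)
    A[0, :n], A[0, -1] = grad_f, -1.0  # grad_f.d - eta <= 0
    for i in range(m):
        A[i + 1, :n] = grad_g[i]       # g_i + grad_g_i.d - eta <= 0
        A[i + 1, -1] = -1.0
        b[i + 1] = -g_vals[i]
    bounds = [(-1, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    return res.x[:n], res.x[-1]
```

    At the feasible point x = (1, 1) of min x1² + x2² s.t. x1 + x2 ≥ 1, the LP returns η = -2/3 with d1 + d2 = -1/3: a strictly negative η, i.e., a feasible descent direction exists.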

  3. Q-deformed systems and constrained dynamics

    International Nuclear Information System (INIS)

    Shabanov, S.V.

    1993-01-01

    It is shown that quantum theories of the q-deformed harmonic oscillator and one-dimensional free q-particle (a free particle on the 'quantum' line) can be obtained by the canonical quantization of classical Hamiltonian systems with commutative phase-space variables and a non-trivial symplectic structure. In the framework of this approach, classical dynamics of a particle on the q-line coincides with the one of a free particle with friction. It is argued that q-deformed systems can be treated as ordinary mechanical systems with the second-class constraints. In particular, second-class constrained systems corresponding to the q-oscillator and q-particle are given. A possibility of formulating q-deformed systems via gauge theories (first-class constrained systems) is briefly discussed. (orig.)

  4. 21 CFR 888.3110 - Ankle joint metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ankle joint metal/polymer semi-constrained... Ankle joint metal/polymer semi-constrained cemented prosthesis. (a) Identification. An ankle joint metal/polymer semi-constrained cemented prosthesis is a device intended to be implanted to replace an ankle...

  5. Constrained physical therapist practice: an ethical case analysis of recommending discharge placement from the acute care setting.

    Science.gov (United States)

    Nalette, Ernest

    2010-06-01

    Constrained practice is routinely encountered by physical therapists and may limit the physical therapist's primary moral responsibility-which is to help the patient to become well again. Ethical practice under such conditions requires a certain moral character of the practitioner. The purposes of this article are: (1) to provide an ethical analysis of a typical patient case of constrained clinical practice, (2) to discuss the moral implications of constrained clinical practice, and (3) to identify key moral principles and virtues fostering ethical physical therapist practice. The case represents a common scenario of discharge planning in acute care health facilities in the northeastern United States. An applied ethics approach was used for case analysis. The decision following analysis of the dilemma was to provide the needed care to the patient as required by compassion, professional ethical standards, and organizational mission. Constrained clinical practice creates a moral dilemma for physical therapists. Being responsive to the patient's needs moves the physical therapist's practice toward the professional ideal of helping vulnerable patients become well again. Meeting the patient's needs is a professional requirement of the physical therapist as moral agent. Acting otherwise requires an alternative position be ethically justified based on systematic analysis of a particular case. Skepticism of status quo practices is required to modify conventional individual, organizational, and societal practices toward meeting the patient's best interest.

  6. The Karush–Kuhn–Tucker optimality conditions in minimum weight design of elastic rotating disks with variable thickness and density

    Directory of Open Access Journals (Sweden)

    Sanaz Jafari

    2011-10-01

    Full Text Available Rotating disks work mostly at high angular velocity. High speed results in large centrifugal forces in the disks and induces large stresses and deformations. Minimizing the weight of such disks yields various benefits, such as lower dead weight and lower cost. In order to attain a reliable analysis, a disk with variable thickness and density is considered. Semi-analytical solutions for the elastic stress distribution in rotating annular disks with uniform and variable thicknesses and densities were obtained under the plane stress assumption by the authors in previous works. The optimum disk profile for minimum weight design is achieved by the Karush–Kuhn–Tucker (KKT) optimality conditions. An inequality constraint equation is used in the optimization to ensure that the maximum von Mises stress is always less than the yield strength of the disk material.
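
    The KKT conditions invoked above can be checked numerically at a candidate optimum. A minimal sketch for a single inequality constraint (the problem data below are a toy example, not the rotating-disk model):

```python
import numpy as np

def kkt_residuals(grad_f, grad_g, g_val, lam):
    """Residuals of the Karush-Kuhn-Tucker conditions for
        min f(x)  s.t.  g(x) <= 0
    stationarity:       grad_f + lam * grad_g = 0
    primal feasibility: g(x) <= 0
    dual feasibility:   lam >= 0
    complementarity:    lam * g(x) = 0
    All four residuals vanish at a KKT point."""
    stationarity = np.asarray(grad_f) + lam * np.asarray(grad_g)
    return (np.linalg.norm(stationarity),
            max(g_val, 0.0),         # primal constraint violation
            max(-lam, 0.0),          # dual (sign) violation
            abs(lam * g_val))        # complementarity gap
```

    For min x1² + x2² subject to 1 - x1 - x2 ≤ 0, the point x = (0.5, 0.5) with multiplier λ = 1 satisfies all four conditions exactly: grad_f = (1, 1), grad_g = (-1, -1), and the constraint is active.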

  7. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    Science.gov (United States)

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and are only suitable for linear or affine nonlinear systems, which is restrictive in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
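
    The value-iteration backbone of such a VIQL scheme can be sketched in tabular form. The paper learns the Q-function from data with a critic neural network; this toy version assumes a known, deterministic transition table and is only meant to show the Q-function update itself:

```python
import numpy as np

def viql(P, R, gamma=0.9, sweeps=200):
    """Value iteration on a Q-function for a small tabular MDP:
        Q(s, a) <- R(s, a) + gamma * max_a' Q(P[s, a], a')
    P[s, a] is the (deterministic) next state.  A toy stand-in for
    the data-based, neural-network Q-function of the paper."""
    n_s, n_a = R.shape
    Q = np.zeros((n_s, n_a))          # an easy-to-realize initial condition
    for _ in range(sweeps):
        Q = R + gamma * Q[P].max(axis=2)
    return Q
```

    On a two-state MDP where action 1 in state 0 earns reward 1 and state 1 is absorbing with zero reward, the iteration converges to the discounted fixed point Q(0, 0) = 0.9, Q(0, 1) = 1.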

  8. Power Conditioning And Distribution Units For 50V Platforms A Flexible And Modular Concept Allowing To Deal With Time Constraining Programs

    Science.gov (United States)

    Lempereur, V.; Liegeois, B.; Deplus, N.

    2011-10-01

    In the frame of its Power Conditioning and Distribution Unit (PCDU) medium power product family, Thales Alenia Space ETCA is currently developing Power Conditioning Unit (PCU) and PCDU products for 50V platform applications. These developments are performed in very schedule-constraining programs. This challenge can be met thanks to the modular PCDU concept, which allows sharing a common heritage from the mechanical and thermal points of view as well as at the electrical function level. The first medium power PCDU application was developed for the Herschel-Planck PCDU and re-used in several other missions (e.g. the GlobalStar2 PCDU, of which we are producing more than 26 units). Based on this heritage, a development plan based on an Electrical Model (EM) (avoiding an Electrical Qualification Model, EQM) can be proposed when the mechanical qualification of the concept covers the environment required in new projects. This first level of heritage reduces the development schedule and activities. In addition, development is also optimized thanks to the re-use of functions designed and qualified in the Herschel-Planck PCDU. This covers internal TM/TC management inside the PCDU, based on a centralized scheduler and an internal high speed serial bus. Finally, thanks to the common architecture of several 50V platforms, based on a fully regulated bus, the S3R (Sequential Shunt Switch Regulator) concept and one (or two) Li-Ion battery(ies), a common PCU/PCDU architecture has allowed the development of modules or functions that are used in several applications. These achievements are discussed with particular emphasis on PCDU architecture trade-offs allowing flexibility of the proposed technical solutions (w.r.t. mono/bi-battery configurations, SA inner capacitance value, output power needs...). The pros and cons of sharing concepts and designs between several applications on 50V platforms are also discussed.

  9. On a minimization of the eigenvalues of Schroedinger operator relatively domains

    International Nuclear Information System (INIS)

    Gasymov, Yu.S.; Niftiev, A.A.

    2001-01-01

    Minimization of eigenvalues plays an important role in the spectral theory of operators. In this work, the problem of minimizing the eigenvalues of the Schroedinger operator with respect to the domain is considered. An algorithm analogous to the conditional gradient method is proposed for the numerical solution of this problem in the general case. The result is generalized to the case of a positive definite completely continuous operator.
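
    The conditional gradient (Frank-Wolfe) method that the proposed algorithm is modeled on keeps iterates feasible by moving toward the minimizer of a linearized objective over the constraint set. A minimal sketch on the probability simplex (an illustrative domain chosen because its linear subproblem has a one-line solution; the paper's setting of domains of the Schroedinger operator is not reproduced):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=500):
    """Conditional-gradient (Frank-Wolfe) method on the probability
    simplex.  The linear minimization oracle picks the vertex with
    the smallest gradient component; the step size 2/(k+2) is the
    standard diminishing schedule."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # linear minimization oracle
        x += 2.0 / (k + 2.0) * (s - x)   # convex step keeps x feasible
    return x
```

    Minimizing ||x - c||² for an interior point c of the simplex, the iterates stay on the simplex throughout and approach c at the usual O(1/k) rate in function value.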

  10. [Minimally invasive coronary artery surgery].

    Science.gov (United States)

    Zalaquett, R; Howard, M; Irarrázaval, M J; Morán, S; Maturana, G; Becker, P; Medel, J; Sacco, C; Lema, G; Canessa, R; Cruz, F

    1999-01-01

    There is a growing interest to perform a left internal mammary artery (LIMA) graft to the left anterior descending coronary artery (LAD) on a beating heart through a minimally invasive access to the chest cavity. To report the experience with minimally invasive coronary artery surgery. Analysis of 11 patients aged 48 to 79 years old with single vessel disease that, between 1996 and 1997, had a LIMA graft to the LAD performed through a minimally invasive left anterior mediastinotomy, without cardiopulmonary bypass. A 6 to 10 cm left parasternal incision was done. The LIMA to the LAD anastomosis was done after pharmacological heart rate and blood pressure control and a period of ischemic pre conditioning. Graft patency was confirmed intraoperatively by standard Doppler techniques. Patients were followed for a mean of 11.6 months (7-15 months). All patients were extubated in the operating room and transferred out of the intensive care unit on the next morning. Seven patients were discharged on the third postoperative day. Duplex scanning confirmed graft patency in all patients before discharge; in two patients, it was confirmed additionally by arteriography. There was no hospital mortality, no perioperative myocardial infarction and no bleeding problems. After follow up, ten patients were free of angina, in functional class I and pleased with the surgical and cosmetic results. One patient developed atypical angina on the seventh postoperative month and a selective arteriography confirmed stenosis of the anastomosis. A successful angioplasty of the original LAD lesion was carried out. A minimally invasive left anterior mediastinotomy is a good surgical access to perform a successful LIMA to LAD graft without cardiopulmonary bypass, allowing a shorter hospital stay and earlier postoperative recovery. However, a larger experience and a longer follow up is required to define its role in the treatment of coronary artery disease.

  11. Constrained Local UniversE Simulations: a Local Group factory

    Science.gov (United States)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDMscenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  12. A Heuristic Algorithm for the Constrained Single-Source Problem with Constrained Customers

    Directory of Open Access Journals (Sweden)

    S. A. Raisi Dehkordi

    2012-09-01

    Full Text Available The Fermat-Weber location problem is to find a point in R^n that minimizes the sum of the weighted Euclidean distances from m given points in R^n. In this paper we consider the Fermat-Weber problem of one new facility with respect to n unknown customers, in order to minimize the sum of the transportation costs between this facility and the customers. We assume that each customer is located in a nonempty convex closed bounded subset of R^n.
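
    The unconstrained Fermat-Weber problem is classically solved by Weiszfeld's fixed-point iteration, sketched below. The paper's constrained variant would additionally project onto each customer's convex set; that projection is omitted here, so this is only the classical baseline:

```python
import numpy as np

def weiszfeld(points, weights, iters=200):
    """Weiszfeld's iteration for the weighted Fermat-Weber point:
        x <- (sum_i w_i p_i / ||x - p_i||) / (sum_i w_i / ||x - p_i||)
    i.e., a distance-weighted average recomputed until convergence."""
    P = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    x = P.mean(axis=0)                # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(P - x, axis=1)
        if np.any(d < 1e-12):         # landed on a data point; stop
            break
        coef = w / d
        x = (coef[:, None] * P).sum(axis=0) / coef.sum()
    return x
```

    With equal weights at the corners of a square the optimum is the center, and when one weight dominates the sum of the others the optimum is that customer's own location, both of which the iteration reproduces.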

  13. Searching for beyond the minimal supersymmetric standard model at the laboratory and in the sky

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ju Min

    2010-09-15

    We study the collider signals as well as Dark Matter candidates in supersymmetric models. We show that the collider signatures from a supersymmetric Grand Unification model based on the SO(10) gauge group can be distinguishable from those from the (constrained) minimal supersymmetric Standard Model, even though they share some common features. The N=2 supersymmetry has the characteristically distinct phenomenology, due to the Dirac nature of gauginos, as well as the extra adjoint scalars. We compute the cold Dark Matter relic density including a class of one-loop corrections. Finally, we discuss the detectability of neutralino Dark Matter candidate of the SO(10) model by the direct and indirect Dark Matter search experiments. (orig.)

  14. Searching for beyond the minimal supersymmetric standard model at the laboratory and in the sky

    International Nuclear Information System (INIS)

    Kim, Ju Min

    2010-09-01

    We study the collider signals as well as Dark Matter candidates in supersymmetric models. We show that the collider signatures from a supersymmetric Grand Unification model based on the SO(10) gauge group can be distinguishable from those from the (constrained) minimal supersymmetric Standard Model, even though they share some common features. The N=2 supersymmetry has the characteristically distinct phenomenology, due to the Dirac nature of gauginos, as well as the extra adjoint scalars. We compute the cold Dark Matter relic density including a class of one-loop corrections. Finally, we discuss the detectability of neutralino Dark Matter candidate of the SO(10) model by the direct and indirect Dark Matter search experiments. (orig.)

  15. Sufficient Descent Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Box-Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2014-01-01

    descent method for the first finite number of steps and then by the conjugate gradient method. Under some appropriate conditions, we show that the algorithm converges globally. Numerical experiments and comparisons using some box-constrained problems from the CUTEr library are reported. The numerical comparisons illustrate that the proposed method is promising and competitive with the well-known L-BFGS-B method.
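A box-constrained first-order scheme of the general flavour described above can be sketched as a projected PRP+ conjugate gradient with Armijo backtracking. This is an assumption-laden generic stand-in, not a reproduction of the paper's algorithm:

```python
def clip_box(x, lo, hi):
    """Project a point onto the box [lo, hi] componentwise."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def prp_box_min(f, grad, x0, lo, hi, iters=500, c1=1e-4):
    """Generic projected PRP+ conjugate gradient with Armijo backtracking on a box."""
    x = clip_box(x0, lo, hi)
    g = grad(x)
    d = [-gi for gi in g]  # initial steepest-descent direction
    for _ in range(iters):
        gTd = sum(gi * di for gi, di in zip(g, d))
        if gTd >= 0:  # restart with steepest descent if d is not a descent direction
            d = [-gi for gi in g]
            gTd = -sum(gi * gi for gi in g)
        fx, t = f(x), 1.0
        # backtracking line search with projection onto the box
        while t > 1e-12 and f(clip_box([xi + t * di for xi, di in zip(x, d)], lo, hi)) > fx + c1 * t * gTd:
            t *= 0.5
        x_new = clip_box([xi + t * di for xi, di in zip(x, d)], lo, hi)
        g_new = grad(x_new)
        # PRP+ beta: max(0, g_new^T (g_new - g) / ||g||^2)
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g))
                   / max(sum(go * go for go in g), 1e-16))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        x, g = x_new, g_new
    return x
```

On a simple quadratic with the unconstrained minimum outside the box, the iterates settle on the boundary point closest to it, as expected for a box-constrained solver.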

  16. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Full Text Available Hospital outpatient departments operate by selling fixed period appointments for different treatments. The challenge they face is to improve profit by determining the mix of full time and part time doctors and by optimally allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period) in a department. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full time and part time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the appointment scheduling decision to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. We then use the chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive satisfaction-based algorithm is employed to convert the bilevel programming into a single level programming, in order to make it solvable. Finally, numerical experiments were conducted to demonstrate the efficiency and effectiveness of the proposed approaches.

  17. Minimal Self-Models and the Free Energy Principle

    Directory of Open Access Journals (Sweden)

    Jakub eLimanowski

    2013-09-01

    Full Text Available The term "minimal phenomenal selfhood" describes the basic, pre-reflective experience of being a self (Blanke & Metzinger, 2009). Theoretical accounts of the minimal self have long recognized the importance and the ambivalence of the body as both part of the physical world, and the enabling condition for being in this world (Gallagher, 2005; Grafton, 2009). A recent account of minimal phenomenal selfhood (MPS; Metzinger, 2004a) centers on the consideration that minimal selfhood emerges as the result of basic self-modeling mechanisms, thereby being founded on pre-reflective bodily processes. The free energy principle (FEP; Friston, 2010) is a novel unified theory of cortical function that builds upon the imperative that self-organizing systems entail hierarchical generative models of the causes of their sensory input, which are optimized by minimizing free energy as an approximation of the log-likelihood of the model. The implementation of the FEP via predictive coding mechanisms and in particular the active inference principle emphasizes the role of embodiment for predictive self-modeling, which has been appreciated in recent publications. In this review, we provide an overview of these conceptions and thereby illustrate the potential power of the FEP in explaining the mechanisms underlying minimal selfhood and its key constituents, multisensory integration, interoception, agency, perspective, and the experience of mineness. We conclude that the conceptualization of MPS can be well mapped onto a hierarchical generative model furnished by the free energy principle and may constitute the basis for higher-level, cognitive forms of self-referral, as well as the understanding of other minds.

  18. Assessment of oscillatory stability constrained available transfer capability

    International Nuclear Information System (INIS)

    Jain, T.; Singh, S.N.; Srivastava, S.C.

    2009-01-01

    This paper utilizes a bifurcation approach to compute the oscillatory stability constrained available transfer capability (ATC) in an electricity market having bilateral as well as multilateral transactions. Oscillatory instability in non-linear systems can be related to Hopf bifurcation. At the Hopf bifurcation, one pair of the critical eigenvalues of the system Jacobian reaches the imaginary axis. A new optimization formulation, including the Hopf bifurcation conditions, has been developed in this paper to obtain the dynamic ATC. An oscillatory stability based contingency screening index, which takes into account the impact of transactions on the severity of a contingency, has been utilized to identify critical contingencies to be considered in determining ATC. The proposed method has been applied for dynamic ATC determination on a 39-bus New England system and a practical 75-bus Indian system, considering composite static load as well as dynamic load models. (author)
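The Hopf condition used here, a complex-conjugate eigenvalue pair of the Jacobian reaching the imaginary axis, can be checked numerically. A hedged sketch that scans a parameter for the crossing, illustrated on the linearization of the Hopf normal form (the toy Jacobian is our assumption, not the paper's power-system model):

```python
import numpy as np

def hopf_crossing(jacobian, mus):
    """Scan parameter values mu; return (mu_lo, mu_hi) bracketing the point where
    a complex-conjugate eigenvalue pair of jacobian(mu) crosses the imaginary axis
    (a necessary condition for a Hopf bifurcation)."""
    prev = None
    for mu in mus:
        lams = np.linalg.eigvals(jacobian(mu))
        complex_lams = lams[np.abs(lams.imag) > 1e-9]  # keep only oscillatory modes
        if complex_lams.size == 0:
            prev = None
            continue
        r = complex_lams.real.max()
        if prev is not None and prev[1] < 0 <= r:
            return prev[0], mu
        prev = (mu, r)
    return None

# Hopf normal form linearization: eigenvalues are exactly mu +/- 1j,
# so the crossing happens at mu = 0.
J = lambda mu: np.array([[mu, -1.0], [1.0, mu]])
```

Scanning `J` over a range of mu brackets the crossing near mu = 0, where the pair mu ± i moves into the right half-plane.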

  19. Minimization and segregation of radioactive wastes

    International Nuclear Information System (INIS)

    1992-07-01

    The report will serve as one of a series of technical manuals providing reference material and direct know-how to staff in radioisotope user establishments and research centres in Member States without nuclear power and the associated range of complex waste management operations. Considerations are limited to the minimization and segregation of wastes, these being the initial steps on which the efficiency of the whole waste management system depends. The minimization and segregation operations are examined in the context of the restricted quantities and predominantly shorter lived activities of wastes from nuclear research and from the production and usage of radioisotopes. Only liquid and solid wastes are considered in the report. Gaseous waste minimization and treatment are specialized subjects and are not examined in this document. Gaseous effluent treatment in facilities handling low and intermediate level radioactive materials has already been the subject of a detailed IAEA report. Management of spent sealed sources has specifically been covered in a previous manual. Conditioned sealed sources must be taken into account in segregation arrangements for interim storage and disposal where exceptional long lived, highly radiotoxic isotopes, particularly radium or americium, are present. These are unlikely ever to be suitable for shallow land burial along with the remaining wastes. 30 refs, 5 figs, 8 tabs

  20. Opportunity Loss Minimization and Newsvendor Behavior

    Directory of Open Access Journals (Sweden)

    Xinsheng Xu

    2017-01-01

    Full Text Available To study the decision bias in newsvendor behavior, this paper introduces an opportunity loss minimization criterion into the newsvendor model with backordering. We apply the Conditional Value-at-Risk (CVaR) measure to hedge against the potential risks from the newsvendor's order decision. We obtain the optimal order quantities for a newsvendor to minimize the expected opportunity loss and the CVaR of opportunity loss. It is proven that the newsvendor's optimal order quantity is related to the density function of market demand when the newsvendor exhibits risk-averse preference, which is inconsistent with the results in Schweitzer and Cachon (2000). The numerical example shows that the optimal order quantity that minimizes CVaR of opportunity loss is bigger than the expected profit maximization (EPM) order quantity for high-profit products and smaller than the EPM order quantity for low-profit products, which differs from the experimental results in Schweitzer and Cachon (2000). A sensitivity analysis of changing the operation parameters of the two optimal order quantities is discussed. Our results confirm that high return implies high risk, while low risk comes with low return. Based on the results, some managerial insights are suggested for the risk management of the newsvendor model with backordering.
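For context, the expected-profit-maximizing (EPM) order quantity that the abstract compares against is the classic critical-fractile solution q* = F^{-1}(c_u / (c_u + c_o)). A sketch under an illustrative uniform-demand assumption (the paper's CVaR-of-opportunity-loss quantity is a different object and is not reproduced here):

```python
def epm_order_quantity(inv_cdf, price, cost, salvage=0.0):
    """Classic expected-profit-maximizing (EPM) newsvendor quantity:
    q* = F^{-1}(cu / (cu + co)), with underage cost cu = price - cost
    and overage cost co = cost - salvage."""
    cu = price - cost
    co = cost - salvage
    return inv_cdf(cu / (cu + co))

# Demand assumed uniform on [0, 100], so F^{-1}(p) = 100 * p (illustrative choice).
uniform_inv_cdf = lambda p: 100.0 * p

q_high = epm_order_quantity(uniform_inv_cdf, price=10.0, cost=3.0)  # high-profit product
q_low = epm_order_quantity(uniform_inv_cdf, price=10.0, cost=8.0)   # low-profit product
```

With mean demand 50, the EPM rule over-orders for the high-profit product (q* = 70) and under-orders for the low-profit one (q* = 20); this is the baseline against which the CVaR quantities above are compared.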

  1. A Digital Hysteresis Current Control for Half-Bridge Inverters with Constrained Switching Frequency

    Directory of Open Access Journals (Sweden)

    Triet Nguyen-Van

    2017-10-01

    Full Text Available This paper proposes a new, robustly adaptive digital hysteresis current control algorithm for half-bridge inverters, which play an important role in electric power conversion and in various electronic systems. The proposed control algorithm is assumed to be implemented on a high-speed Field Programmable Gate Array (FPGA) circuit, using measured data with a high sampling frequency. The hysteresis current band is computed in each switching modulation period based on both the current error and the negative half switching period of the previous modulation period, in addition to the conventionally used voltages measured at the computation instants. The proposed control algorithm is derived by solving an optimization problem in which the switching frequency is always constrained below the desired constant frequency, a property not guaranteed by the conventional method. The optimization also keeps the output current stable around the reference and minimizes power loss. Simulation results show the good performance of the proposed algorithm compared with the conventional one.
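The link between hysteresis band and switching frequency can be illustrated with the standard textbook model of an ideal half-bridge: the rising and falling current slopes set the cycle time, so the band that yields a target frequency follows by simple algebra. This is a generic relation, not the paper's FPGA algorithm; all symbols are illustrative:

```python
def hysteresis_band(v_dc, L, f_sw, v_out):
    """Peak-to-peak current band giving switching frequency f_sw for an ideal
    half-bridge (DC link +/- v_dc/2, inductance L, instantaneous output v_out).

    Rising slope  s1 = (v_dc/2 - v_out) / L
    Falling slope s2 = (v_dc/2 + v_out) / L
    Cycle time    T  = B/s1 + B/s2
    =>            B  = (v_dc**2 / 4 - v_out**2) / (L * v_dc * f_sw)
    """
    return (v_dc ** 2 / 4.0 - v_out ** 2) / (L * v_dc * f_sw)

# Example: 400 V DC link, 1 mH inductor, 10 kHz target frequency.
band_at_zero = hysteresis_band(400.0, 1e-3, 10e3, 0.0)  # reduces to v_dc / (4*L*f_sw)
```

Because the band shrinks as |v_out| grows, a fixed band would let the switching frequency wander, which is why adaptive schemes such as the one above recompute the band every modulation period.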

  2. Effect of non-local equilibrium on minimal thermal resistance porous layered systems

    International Nuclear Information System (INIS)

    Leblond, Genevieve; Gosselin, Louis

    2008-01-01

    In this paper, the cooling of a heat-generating surface by a stacking of porous media (e.g., metallic foam) through which fluid flows parallel to the surface is considered. A two-temperature model is proposed to account for non-local thermal equilibrium (non-LTE). A scale analysis is performed to determine the temperature profiles in the boundary layer regime. The hot spot temperature is minimized with respect to the three design variables of each layer: porosity, pore diameter, and material. Global cost and mass are constrained. The optimization is performed with a hybrid genetic algorithm (GA) including local search to enhance convergence and repeatability. Results demonstrate that the optimized stacks do not operate in LTE. Therefore, we show that assuming LTE might result in underestimation of the hot spot temperature, and in different final designs as well.

  3. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  4. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  5. Volume-constrained optimization of magnetorheological and electrorheological valves and dampers

    Science.gov (United States)

    Rosenfeld, Nicholas C.; Wereley, Norman M.

    2004-12-01

    This paper presents a case study of magnetorheological (MR) and electrorheological (ER) valve design within a constrained cylindrical volume. The primary purpose of this study is to establish general design guidelines for volume-constrained MR valves. Additionally, this study compares the performance of volume-constrained MR valves against similarly constrained ER valves. Starting from basic design guidelines for an MR valve, a method for constructing candidate volume-constrained valve geometries is presented. A magnetic FEM program is then used to evaluate the magnetic properties of the candidate valves. An optimized MR valve is chosen by evaluating non-dimensional parameters describing the candidate valves' damping performance. A derivation of the non-dimensional damping coefficient for valves with both active and passive volumes is presented to allow comparison of valves with differing proportions of active and passive volumes. The performance of the optimized MR valve is then compared to that of a geometrically similar ER valve using both analytical and numerical techniques. An analytical equation relating the damping performances of geometrically similar MR and ER valves as a function of fluid yield stresses and relative active fluid volume is derived, and numerical calculations are provided to compute each valve's damping performance and to validate the analytical results.

  6. Comparison of phase-constrained parallel MRI approaches: Analogies and differences.

    Science.gov (United States)

    Blaimer, Martin; Heim, Marius; Neumann, Daniel; Jakob, Peter M; Kannengiesser, Stephan; Breuer, Felix A

    2016-03-01

    Phase-constrained parallel MRI approaches have the potential for significantly improving the image quality of accelerated MRI scans. The purpose of this study was to investigate the properties of two different phase-constrained parallel MRI formulations, namely the standard phase-constrained approach and the virtual conjugate coil (VCC) concept utilizing conjugate k-space symmetry. Both formulations were combined with image-domain algorithms (SENSE) and a mathematical analysis was performed. Furthermore, the VCC concept was combined with k-space algorithms (GRAPPA and ESPIRiT) for image reconstruction. In vivo experiments were conducted to illustrate analogies and differences between the individual methods. Furthermore, a simple method of improving the signal-to-noise ratio by modifying the sampling scheme was implemented. For SENSE, the VCC concept was mathematically equivalent to the standard phase-constrained formulation and therefore yielded identical results. In conjunction with k-space algorithms, the VCC concept provided more robust results when only a limited amount of calibration data were available. Additionally, VCC-GRAPPA reconstructed images provided spatial phase information with full resolution. Although both phase-constrained parallel MRI formulations are very similar conceptually, there exist important differences between image-domain and k-space domain reconstructions regarding the calibration robustness and the availability of high-resolution phase information. © 2015 Wiley Periodicals, Inc.
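The VCC concept rests on the conjugate (Hermitian) symmetry of k-space: for a real-valued image, S(-k) = conj(S(k)), so conjugating the point-reflected data of a coil yields a consistent "virtual" coil. A small numpy sketch of this construction (illustrative only, not the reconstruction pipeline of the paper):

```python
import numpy as np

def virtual_conjugate_coil(kspace):
    """Form virtual-coil k-space data: point-reflect through the origin and
    conjugate. For a real-valued image this reproduces the original data
    (Hermitian symmetry); for complex images it carries the extra phase
    information exploited by VCC reconstructions."""
    flipped = np.flip(kspace)  # index i -> N-1-i on every axis
    # shift by one to realize i -> (N - i) mod N, keeping the DC sample at index 0
    flipped = np.roll(flipped, 1, axis=tuple(range(kspace.ndim)))
    return np.conj(flipped)

# Real-valued test image: the virtual coil must equal the original coil data.
img = np.random.default_rng(0).standard_normal((8, 8))
k = np.fft.fft2(img)
```

For a complex-valued image the virtual coil no longer equals the original data, and it is precisely this discrepancy that encodes the phase information used by VCC-based methods.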

  7. Probing gravitational non-minimal coupling with dark energy surveys

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing (China); National Tsing Hua University, Department of Physics, Hsinchu (China); National Center for Theoretical Sciences, Hsinchu (China); Lee, Chung-Chi [National Center for Theoretical Sciences, Hsinchu (China); Wu, Yi-Peng [Academia Sinica, Institute of Physics, Taipei (China)

    2017-03-15

    We investigate observational constraints on a specific one-parameter extension to the minimal quintessence model, where the quintessence field acquires a quadratic coupling to the scalar curvature through a coupling constant ξ. The value of ξ is highly suppressed in typical tracker models if the late-time cosmic acceleration is driven at some field values near the Planck scale. We test ξ in a second class of models in which the field value today becomes a free model parameter. We use the combined data from type-Ia supernovae, the cosmic microwave background, baryon acoustic oscillations, the matter power spectrum, and weak lensing measurements, and find a best-fit value ξ > 0.289, where ξ = 0 is excluded outside the 95% confidence region. The effective gravitational constant G{sub eff}, subject to the hint of a non-zero ξ, is constrained to -0.003 < 1 - G{sub eff}/G < 0.033 at the same confidence level on cosmological scales, and it can be narrowed down to 1 - G{sub eff}/G < 2.2 x 10{sup -5} when combined with Solar System tests. (orig.)

  8. Hoelder continuity of energy minimizer maps between Riemannian polyhedra

    International Nuclear Information System (INIS)

    Bouziane, Taoufik

    2004-10-01

    The goal of the present paper is to establish a certain regularity of energy minimizer maps between Riemannian polyhedra. More precisely, we show the Hoelder continuity of local energy minimizers between Riemannian polyhedra when the target spaces have no focal points. With this new result we also complete our existence theorem obtained elsewhere, and consequently we fully generalize, to the case of target polyhedra without focal points (a weaker geometric condition than nonpositive curvature), the Eells-Fuglede existence and regularity theorem, which is the new version of the famous Eells-Sampson theorem. (author)

  9. Loss Minimization Sliding Mode Control of IPM Synchronous Motor Drives

    Directory of Open Access Journals (Sweden)

    Mehran Zamanifar

    2010-01-01

    Full Text Available In this paper, a nonlinear loss minimization control strategy for an interior permanent magnet synchronous motor (IPMSM) based on a newly developed sliding mode approach is presented. This control method enforces speed control of the IPMSM drive and simultaneously ensures loss minimization despite the uncertainties present in the system, such as parameter variations, which have undesirable effects on controller performance except at near-nominal conditions. Simulation results are presented to show the effectiveness of the proposed controller.

  10. A discretized algorithm for the solution of a constrained, continuous ...

    African Journals Online (AJOL)

    A discretized algorithm for the solution of a constrained, continuous quadratic control problem. ... The results obtained show that the Discretized constrained algorithm (DCA) is much more accurate and more efficient than some of these techniques, particularly the FSA. Journal of the Nigerian Association of Mathematical ...

  11. Modelling the flooding capacity of a Polish Carpathian river: A comparison of constrained and free channel conditions

    Science.gov (United States)

    Czech, Wiktoria; Radecki-Pawlik, Artur; Wyżga, Bartłomiej; Hajdukiewicz, Hanna

    2016-11-01

    The gravel-bed Biała River, Polish Carpathians, was heavily affected by channelization and channel incision in the twentieth century. Not only were these impacts detrimental to the ecological state of the river, but they also adversely modified the conditions of floodwater retention and flood wave passage. Therefore, a few years ago an erodible corridor was delimited in two sections of the Biała to enable restoration of the river. In these sections, short, channelized reaches located in the vicinity of bridges alternate with longer, unmanaged channel reaches, which either avoided channelization or in which the channel has widened after the channelization scheme ceased to be maintained. Effects of these alternating channel morphologies on the conditions for flood flows were investigated in a study of 10 pairs of neighbouring river cross sections with constrained and freely developed morphology. Discharges of particular recurrence intervals were determined for each cross section using an empirical formula. The morphology of the cross sections together with data about channel slope and roughness of particular parts of the cross sections were used as input data to the hydraulic modelling performed with the one-dimensional steady-flow HEC-RAS software. The results indicated that freely developed cross sections, usually with multithread morphology, are typified by significantly lower water depth but larger width and cross-sectional flow area at particular discharges than single-thread, channelized cross sections. They also exhibit significantly lower average flow velocity, unit stream power, and bed shear stress. The pattern of differences in the hydraulic parameters of flood flows apparent between the two types of river cross sections varies with the discharges of different frequency, and the contrasts in hydraulic parameters between unmanaged and channelized cross sections are most pronounced at low-frequency, high-magnitude floods. However, because of the deep

  12. Minimal Poems Written in 1979 Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  13. Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.

    Science.gov (United States)

    Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li

    2018-06-01

    State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative for studying secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived such that the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.
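The "linear minimum variance sense" fusion underlying such distributed Kalman fusion estimators is, for independent local estimates, the standard information-weighted combination. A hedged sketch of that basic formula only (the paper's DKFE adds compression and compensation for replay attacks and bandwidth limits, which are not modelled here):

```python
import numpy as np

def fuse_estimates(estimates, covariances):
    """Linear minimum-variance fusion of independent local estimates x_i with
    covariances P_i (information form):
        P = (sum_i P_i^{-1})^{-1},   x = P @ sum_i (P_i^{-1} @ x_i)
    """
    info = sum(np.linalg.inv(P) for P in covariances)
    P = np.linalg.inv(info)
    x = P @ sum(np.linalg.inv(Pi) @ xi for xi, Pi in zip(estimates, covariances))
    return x, P
```

Fusing two unit-variance scalar estimates of 0 and 2 yields the average, 1, with variance 0.5, i.e., the fused estimate is at least as accurate as either local one, which is the motivation for MIFE cited above.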

  14. A Risk-Constrained Multi-Stage Decision Making Approach to the Architectural Analysis of Mars Missions

    Science.gov (United States)

    Kuwata, Yoshiaki; Pavone, Marco; Balaram, J. (Bob)

    2012-01-01

    This paper presents a novel risk-constrained multi-stage decision making approach to the architectural analysis of planetary rover missions. In particular, focusing on a 2018 Mars rover concept, which was considered as part of a potential Mars Sample Return campaign, we model the entry, descent, and landing (EDL) phase and the rover traverse phase as four sequential decision-making stages. The problem is to find a sequence of divert and driving maneuvers such that the rover drive is minimized and the probability of a mission failure (e.g., due to a failed landing) is below a user-specified bound. By solving this problem for several different values of the model parameters (e.g., divert authority), this approach enables rigorous, accurate and systematic trade-offs for the EDL system vs. the mobility system and, more generally, cross-domain trade-offs for the different phases of a space mission. The overall optimization problem can be seen as a chance-constrained dynamic programming problem, with the additional complexity that 1) in some stages the disturbances do not have any probabilistic characterization, and 2) the state space is extremely large (i.e., hundreds of millions of states for trade-offs with high-resolution Martian maps). For this purpose, we solve the problem by performing an unconventional combination of average and minimax cost analysis and by leveraging highly efficient computation tools from the image processing community. Preliminary trade-off results are presented.

  15. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  16. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q. This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different...
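The RCJ predicate can be stated concretely: the smallest circle enclosing p and q is the circle with diameter pq, so a pair qualifies iff no other point lies strictly inside that circle. A brute-force sketch of this predicate (quadratic-time, for illustration only; the paper's contribution is an efficient evaluation, which this does not reproduce):

```python
def ring_constrained_join(P, Q):
    """Brute-force RCJ over 2-D point tuples: the smallest circle enclosing p and q
    has diameter pq, so (p, q) qualifies iff no other point of P or Q lies
    strictly inside that circle."""
    result = []
    for p in P:
        for q in Q:
            cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0      # circle centre
            r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4.0   # squared radius
            others = [o for o in P + Q if o != p and o != q]
            if all((o[0] - cx) ** 2 + (o[1] - cy) ** 2 >= r2 for o in others):
                result.append((p, q))
    return result
```

For example, with P = [(0,0), (10,0)] and Q = [(4,0), (20,0)], the pair ((0,0), (20,0)) is rejected because (10,0) falls inside its diameter circle, while the other three candidate pairs qualify.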

  17. THE PREDICTION OF pH BY GIBBS FREE ENERGY MINIMIZATION IN THE SUMP SOLUTION UNDER LOCA CONDITION OF PWR

    Directory of Open Access Journals (Sweden)

    HYOUNGJU YOON

    2013-02-01

    Full Text Available It is required that the pH of the sump solution be above 7.0 to retain iodine in the liquid phase and be within the material compatibility constraints under LOCA conditions of a PWR. The pH of the sump solution can be determined by conventional chemical equilibrium constants or by the minimization of Gibbs free energy. The latter method, implemented in a computer code called SOLGASMIX-PV, is more convenient than the former since various chemical components can easily be treated under LOCA conditions. In this study, the SOLGASMIX-PV code was modified to accommodate the acidic and basic materials produced by radiolysis reactions and to calculate the pH of the sump solution. When the computed pH was compared with that measured in the ORNL experiment to verify the reliability of the modified code, the error between the two values was within 0.3 pH. Finally, two cases of calculation were performed for SKN 3&4 and UCN 1&2. As results, the pH of the sump solution was between 7.02 and 7.45 for SKN 3&4, and between 8.07 and 9.41 for UCN 1&2. Furthermore, it was found that the radiolysis reactions have an insignificant effect on pH because the relative concentrations of HCl, HNO3, and Cs are very low.

  18. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills, which breaks to N=1 via the electroweak interactions. This is a well defined, economical..., between unparticle physics and Minimal Walking Technicolor. We also consider other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass.

  19. Fast Lagrangian relaxation for constrained generation scheduling in a centralized electricity market

    International Nuclear Information System (INIS)

    Ongsakul, Weerakorn; Petcharaks, Nit

    2008-01-01

    This paper proposes a fast Lagrangian relaxation (FLR) for the constrained generation scheduling (CGS) problem in a centralized electricity market. FLR minimizes the consumer payment rather than the total supply cost, subject to the power balance, spinning reserve, transmission line, and generator operating constraints. The FLR algorithm is improved by a new initialization of the Lagrangian multipliers and their adaptive adjustment. The adaptive subgradient method using high quality initial feasible multipliers requires far fewer iterations to converge, leading to a faster computation time. If congestion exists, an alleviating congestion index is proposed for congestion management. Finally, unit decommitment is performed to prevent excessive spinning reserve. The FLR for CGS is tested on the 4-unit and the IEEE 24-bus reliability test systems. The proposed uniform electricity price results in a lower consumer payment than system marginal price based on uniformly fixed cost amortized allocation, non-uniform price, and electricity price incorporating side payment, leading to a lower electricity price. In addition, observations on the objective functions, a comparison of the pricing schemes, and an interpretation of the Lagrangian multipliers are provided. (author)

  20. Multivariate constrained shape optimization: Application to extrusion bell shape for pasta production

    Science.gov (United States)

    Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco

    2017-10-01

    Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models - capable of simulating the physical, chemical and biological changes occurring during production processes - can be realized and used in place of real prototypes and experiments, which are often time- and money-consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom is considered as part of the unknowns: this implies that the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of OSD are countless. For systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or to a combination of the three. This paper presents one of the possible applications of OSD, in particular how the extrusion bell shape for pasta production can be designed by applying a multivariate constrained shape optimization.

  1. On the optimal identification of tag sets in time-constrained RFID configurations.

    Science.gov (United States)

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time of the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those ones where a single group of tags is assumed to be in the reading area and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. Besides, an identification strategy based on splitting the set of tags in smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.

  2. CP properties of symmetry-constrained two-Higgs-doublet models

    CERN Document Server

    Ferreira, P M; Nachtmann, O; Silva, Joao P

    2010-01-01

    The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.

  3. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael

    2015-01-01

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.
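
    The weighted least-squares characterization can be mimicked numerically. The sketch below reduces a scalar degree-3 Bézier function to degree 2 with both endpoints fixed, solving a discretely weighted least-squares problem for the single interior coefficient. A uniform weight stands in for a Jacobi/Hahn weight; none of this reproduces the paper's closed-form solution.

```python
from math import comb

import numpy as np

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{n,i}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

# Degree-3 Bezier coefficients (scalar-valued for simplicity).
p = np.array([0.0, 2.0, 2.0, 1.0])

t = np.linspace(0.0, 1.0, 201)
f = sum(p[i] * bernstein(3, i, t) for i in range(4))

# Constrained reduction to degree 2: both endpoints are interpolated
# exactly, and the interior coefficient c1 is the weighted least-squares
# minimizer. w = 1 gives the (discrete) uniform-weight case; a Jacobi
# weight w(t) = (1 - t)**alpha * t**beta would be used analogously.
w = np.ones_like(t)
c0, c2 = p[0], p[-1]
residual = f - c0 * bernstein(2, 0, t) - c2 * bernstein(2, 2, t)
basis = bernstein(2, 1, t)
c1 = np.sum(w * basis * residual) / np.sum(w * basis**2)
q = np.array([c0, c1, c2])  # reduced control polygon
```

For this particular example the continuous uniform-weight projection gives c1 = 11/4; the discrete sum reproduces it to a few decimal places.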

  4. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  5. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  6. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Nonnegative matrix factorization (NMF) is a useful tool for learning a basic representation of image data. However, its performance and applicability in real scenarios are limited by the lack of image information. In this paper, we propose a constrained matrix decomposition algorithm for image representation which contains parameters associated with the characteristics of the image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived using the Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved using auxiliary functions. In addition, we provide a method to select the parameters of our semisupervised matrix decomposition algorithm in the experiments. Compared with state-of-the-art approaches, our method with the selected parameters achieves the best classification accuracy on three image data sets.
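
    For reference, the unsupervised baseline that such constrained algorithms extend can be sketched with the classical multiplicative updates for Euclidean NMF. These updates follow from the KKT conditions of the nonnegativity-constrained problem; the α-divergence objective and the label constraints of the paper are not reproduced here, and the data are synthetic.

```python
import numpy as np

# Plain Euclidean NMF, V ~= W @ H with W, H >= 0, via the classical
# Lee-Seung multiplicative updates (KKT-derived). Synthetic data.
rng = np.random.default_rng(0)
V = rng.random((20, 30))        # nonnegative data matrix
r = 5                           # factorization rank
W = rng.random((20, r)) + 0.1   # strictly positive initialization
H = rng.random((r, 30)) + 0.1

eps = 1e-9                      # guards against division by zero
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps both factors nonnegative automatically, which is why no explicit projection step is needed.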

  7. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  8. Constrained customization of non-coplanar beam orientations in radiotherapy of brain tumours

    International Nuclear Information System (INIS)

    Rowbottom, C.G.; Oldham, M.; Webb, S.

    1999-01-01

    A methodology for the constrained customization of non-coplanar beam orientations in radiotherapy treatment planning has been developed and tested on a cohort of five patients with tumours of the brain. The methodology employed a combination of single and multibeam cost functions to produce customized beam orientations. The single-beam cost function was used to reduce the search space for the multibeam cost function, which was minimized using a fast simulated annealing algorithm. The scheme aims to produce well-spaced, customized beam orientations for each patient that produce low dose to organs at risk (OARs). The customized plans were compared with standard plans containing the number and orientation of beams chosen by a human planner. The beam orientation constraint-customized plans employed the same number of treatment beams as the standard plan but with beam orientations chosen by the constrained-customization scheme. Improvements from beam orientation constraint-customization were studied in isolation by customizing the beam weights of both plans using a dose-based downhill simplex algorithm. The results show that beam orientation constraint-customization reduced the maximum dose to the orbits by an average of 18.8 (±3.8, 1SD)% and to the optic nerves by 11.4 (±4.8, 1SD)% with no degradation of the planning target volume (PTV) dose distribution. The mean doses, averaged over the patient cohort, were reduced by 4.2 (±1.1, 1SD)% and 12.4 (±3.1 1SD)% for the orbits and optic nerves respectively. In conclusion, the beam orientation constraint-customization can reduce the dose to OARs, for few-beam treatment plans, when compared with standard treatment plans developed by a human planner. (author)

  9. Constrained bidirectional propagation and stroke segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Mori, S; Gillespie, W; Suen, C Y

    1983-03-01

    A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.

  10. Development of a minimal saponin vaccine adjuvant based on QS-21

    Science.gov (United States)

    Fernández-Tejada, Alberto; Chea, Eric K.; George, Constantine; Pillarsetty, Nagavarakishore; Gardner, Jeffrey R.; Livingston, Philip O.; Ragupathi, Govind; Lewis, Jason S.; Tan, Derek S.; Gin, David Y.

    2014-07-01

    Adjuvants are materials added to vaccines to enhance the immunological response to an antigen. QS-21 is a natural product adjuvant under investigation in numerous vaccine clinical trials, but its use is constrained by scarcity, toxicity, instability and an enigmatic molecular mechanism of action. Herein we describe the development of a minimal QS-21 analogue that decouples adjuvant activity from toxicity and provides a powerful platform for mechanistic investigations. We found that the entire branched trisaccharide domain of QS-21 is dispensable for adjuvant activity and that the C4-aldehyde substituent, previously proposed to bind covalently to an unknown cellular target, is also not required. Biodistribution studies revealed that active adjuvants were retained preferentially at the injection site and the nearest draining lymph nodes compared with the attenuated variants. Overall, these studies have yielded critical insights into saponin structure-function relationships, provided practical synthetic access to non-toxic adjuvants, and established a platform for detailed mechanistic studies.

  11. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  12. The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2017-07-01

    Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a-priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes a geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.
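
    The core similarity measure behind the matching strategy, normalized cross-correlation, can be sketched in a few lines. This is only a basic NCC matcher on synthetic images; SETSM's weighting, image pyramids, RPC geometry and TIN-based search-space reduction are not shown.

```python
import numpy as np

# Minimal normalized cross-correlation (NCC) matcher: find the
# horizontal disparity of a template between two images.

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

rng = np.random.default_rng(1)
left = rng.random((40, 40))
# Synthetic "right" image: the left image shifted 3 px in x.
right = np.roll(left, 3, axis=1)

tpl = left[15:25, 15:25]  # 10x10 template around a candidate point
scores = [ncc(tpl, right[15:25, 15 + d:25 + d]) for d in range(-5, 6)]
best_disparity = range(-5, 6)[np.argmax(scores)]
```

With an exact shift and no distortion the peak score is 1.0 at a disparity of 3; real imagery is where the weighting and the geometric correction described above become necessary.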

  13. Model-based minimization algorithm of a supercritical helium loop consumption subject to operational constraints

    Science.gov (United States)

    Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.

    2017-12-01

    Supercritical helium loops at 4.2 K are the baseline cooling strategy for the superconducting magnets of tokamaks (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays below the working limit all along their length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy the temperature constraints in different ways (playing on the bath temperature and on the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K consumes at least 220 W of electrical power). To find the optimal operational conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It will be seen that changes in the magnet operating temperature (e.g. in case of a change in the plasma configuration) involve large changes in the optimal operating point of the cryodistribution. Recommendations are made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Programme 2014-2018 under Grant 633053.

  14. On a Volume Constrained for the First Eigenvalue of the P-Laplacian Operator

    International Nuclear Information System (INIS)

    Ly, Idrissa

    2009-10-01

    In this paper, we are interested in a shape optimization problem which consists in minimizing the functional that associates to an open set the first eigenvalue of the p-Laplacian operator with homogeneous boundary condition. The minimum is taken among all open subsets with prescribed measure of a given bounded domain. We study an existence result for the associated variational problem. Our technique consists in enlarging the class of admissible functions to the whole space W_0^{1,p}(D), penalizing those functions whose level sets have a measure less than the one required. In fact, we study the minimizers of a family of penalized functionals J_λ, λ > 0, showing that they are Hölder continuous. And we prove that such functions minimize the initial problem provided the penalization parameter λ is large enough. (author)
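
    In standard notation, the constrained problem and its penalized relaxation described above can be written schematically as follows (a reconstruction from the abstract, not the paper's exact statement):

```latex
% First eigenvalue of the p-Laplacian on an open set \Omega \subset D,
% minimized over sets of prescribed measure |\Omega| = c:
\lambda_{1,p}(\Omega)
  = \min_{u \in W_0^{1,p}(\Omega),\ u \neq 0}
    \frac{\int_\Omega |\nabla u|^p \, dx}{\int_\Omega |u|^p \, dx}.

% Penalized relaxation over the whole space W_0^{1,p}(D), where
% |\{u \neq 0\}| denotes the measure of the support of u:
J_\lambda(u)
  = \int_D |\nabla u|^p \, dx
    + \lambda \bigl( |\{\, u \neq 0 \,\}| - c \bigr)^{+},
\qquad u \in W_0^{1,p}(D),\ \int_D |u|^p \, dx = 1.
```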

  15. Slamming Simulations in a Conditional Wave

    DEFF Research Database (Denmark)

    Seng, Sopheak; Jensen, Jørgen Juncher

    2012-01-01

    A study of slamming events in conditional waves is presented in this paper. The ship is sailing in head sea and the motion is solved for under the assumption of rigid-body motion constrained to two degrees of freedom, i.e. heave and pitch. Based on a time domain non-linear strip theory most probable...

  16. Surface states of a system of Dirac fermions: A minimal model

    International Nuclear Information System (INIS)

    Volkov, V. A.; Enaldiev, V. V.

    2016-01-01

    A brief survey is given of theoretical works on surface states (SSs) in Dirac materials. Within the formalism of envelope wave functions and boundary conditions for these functions, a minimal model is formulated that analytically describes surface and edge states of various (topological and nontopological) types in several systems with Dirac fermions (DFs). The applicability conditions of this model are discussed.

  17. Surface states of a system of Dirac fermions: A minimal model

    Energy Technology Data Exchange (ETDEWEB)

    Volkov, V. A., E-mail: volkov.v.a@gmail.com; Enaldiev, V. V. [Russian Academy of Sciences, Kotel’nikov Institute of Radio Engineering and Electronics (Russian Federation)

    2016-03-15

    A brief survey is given of theoretical works on surface states (SSs) in Dirac materials. Within the formalism of envelope wave functions and boundary conditions for these functions, a minimal model is formulated that analytically describes surface and edge states of various (topological and nontopological) types in several systems with Dirac fermions (DFs). The applicability conditions of this model are discussed.

  18. Evolution of quality characteristics of minimally processed asparagus during storage in different lighting conditions.

    Science.gov (United States)

    Sanz, S; Olarte, C; Ayala, F; Echávarri, J F

    2009-08-01

    The effect of different types of lighting (white, green, red, and blue light) on minimally processed asparagus during storage at 4 °C was studied. The gas concentrations in the packages, pH, mesophilic counts, and weight loss were also determined. Lighting caused an increase in physiological activity. Asparagus stored under lighting achieved atmospheres with higher CO2 and lower O2 content than samples kept in the dark. This increase in activity explains the greater deterioration experienced by samples stored under lighting, which clearly affected texture and especially color, accelerating the appearance of greenish hues in the tips and reddish-brown hues in the spears. Exposure to light had a negative effect on the quality parameters of the asparagus and caused a significant reduction in shelf life. Hence, the 11 d shelf life of samples kept in the dark was reduced to only 3 d in samples kept under red and green light, and to 7 d in those kept under white and blue light. However, quality indicators such as the color of the tips and texture behaved significantly better under blue light than under white light, which allows us to state that it is better to use this type of light, or blue-tinted packaging film, for the display of minimally processed asparagus to consumers.

  19. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  20. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Directory of Open Access Journals (Sweden)

    Thadeous J Kacmarczyk

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates; size, position and statistical significance of peak detection; and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  1. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    Science.gov (United States)

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  2. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Science.gov (United States)

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  3. Adler's Zero Condition and a Minimally Symmetric Higgs Boson.

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Long ago Coleman, Callan, Wess and Zumino (CCWZ) constructed the nonlinear sigma model lagrangian based on a general coset G/H. I discuss how the CCWZ lagrangian can be (re)derived using only IR data, by imposing Adler's zero condition in conjunction with the unbroken symmetry group H. Applying the technique to the case of composite Higgs models allows one to derive a universal lagrangian for all models where the Higgs arises as a pseudo-Nambu-Goldstone boson, up to symmetry-breaking effects.

  4. An algorithm for mass matrix calculation of internally constrained molecular geometries

    International Nuclear Information System (INIS)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-01

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial and need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.
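
    One standard way to realize such a construction, consistent with the rank-dependent cases mentioned above but not taken from the paper, is a null-space reduction of the mass matrix under linear velocity-level constraints. The matrices below are invented for illustration.

```python
import numpy as np

# Null-space reduction of a mass matrix under linear constraints
# C @ qdot = 0: the reduced mass matrix acts on the independent
# (constraint-compatible) velocities. The rank of C, obtained from
# its singular values, determines how many coordinates survive.

M = np.diag([2.0, 1.0, 3.0])        # unconstrained mass matrix (toy)
C = np.array([[1.0, -1.0, 0.0]])    # one constraint: qdot_1 = qdot_2

# Null-space basis Z of C via SVD: columns span {v : C @ v = 0}.
_, s, Vt = np.linalg.svd(C)
rank = int(np.sum(s > 1e-12))
Z = Vt[rank:].T

# Reduced mass matrix in the constrained coordinates: kinetic energy
# of any admissible velocity v = Z @ u equals 0.5 * u.T @ M_red @ u.
M_red = Z.T @ M @ Z
```

The same SVD also detects redundant constraints: numerically zero singular values of C increase the dimension of the admissible velocity space, which is how the different rank cases arise.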

  5. An algorithm for mass matrix calculation of internally constrained molecular geometries.

    Science.gov (United States)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-28

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial and need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  6. Minimization of heatwave morbidity and mortality.

    Science.gov (United States)

    Kravchenko, Julia; Abernethy, Amy P; Fawzy, Maria; Lyerly, H Kim

    2013-03-01

    Global climate change is projected to increase the frequency and duration of periods of extremely high temperatures. Both the general populace and public health authorities often underestimate the impact of high temperatures on human health. To highlight the vulnerable populations and illustrate approaches to minimization of health impacts of extreme heat, the authors reviewed the studies of heat-related morbidity and mortality for high-risk populations in the U.S. and Europe from 1958 to 2012. Heat exposure not only can cause heat exhaustion and heat stroke but also can exacerbate a wide range of medical conditions. Vulnerable populations, such as older adults; children; outdoor laborers; some racial and ethnic subgroups (particularly those with low SES); people with chronic diseases; and those who are socially or geographically isolated, have increased morbidity and mortality during extreme heat. In addition to ambient temperature, heat-related health hazards are exacerbated by air pollution, high humidity, and lack of air-conditioning. Consequently, a comprehensive approach to minimize the health effects of extreme heat is required and must address educating the public of the risks and optimizing heatwave response plans, which include improving access to environmentally controlled public havens, adaptation of social services to address the challenges required during extreme heat, and consistent monitoring of morbidity and mortality during periods of extreme temperatures. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  7. Performance potential of mechanical ventilation systems with minimized pressure loss

    DEFF Research Database (Denmark)

    Terkildsen, Søren; Svendsen, Svend

    2013-01-01

    In many locations mechanical ventilation has been the most widely used principle of ventilation over the last 50 years, but the conventional system design must be revised to comply with future energy requirements. This paper examines the options and describes a concept for the design of mechanical ventilation systems with minimal pressure loss and minimal energy use. This can provide comfort ventilation and avoid overheating through increased ventilation and night cooling. Based on this concept, a test system was designed for a fictive office building and its performance was documented using building simulations that quantify fan power consumption, heating demand and indoor environmental conditions. The system was designed with minimal pressure loss in the duct system and heat exchanger. Also, it uses state-of-the-art components such as electrostatic precipitators, diffuse ceiling inlets and demand...

  8. SU-G-BRA-08: Diaphragm Motion Tracking Based On KV CBCT Projections with a Constrained Linear Regression Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wei, J [City College of New York, New York, NY (United States); Chao, M [The Mount Sinai Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies inherent in the dynamic properties of the diaphragm motion (the enabling phenomenology in video compression and encoding techniques), together with the geometric shape of the diaphragm boundary and the associated algebraic constraint, significantly reduced the search space of viable parabolic parameters, which could then be optimized effectively by a constrained linear regression approach on the subsequent projections. The algebraic constraints stipulating the kinetic range of the motion and the spatial constraint preventing unphysical deviations made it possible to obtain the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm over all enrolled patients. This submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution to rendering diaphragm motion and ultimately
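
    The parabolic model fit at the heart of the method can be illustrated with an ordinary least-squares regression on synthetic edge points, plus a crude range check on the fitted vertex. The seed initialization, temporal-redundancy constraints and the actual constrained solver of the paper are not reproduced, and all numbers are invented.

```python
import numpy as np

# Fit the parabolic diaphragm model y = a*x^2 + b*x + c to noisy
# synthetic edge points, then check the fitted vertex against a
# crude motion-range ("kinetic range") interval.

rng = np.random.default_rng(2)
x = np.linspace(-20.0, 20.0, 41)
y_true = 0.05 * x**2 + 0.3 * x + 12.0      # synthetic diaphragm boundary
y = y_true + rng.normal(0.0, 0.2, x.size)  # detected edges + noise

A = np.column_stack([x**2, x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b, c = coef

apex_y = c - b**2 / (4.0 * a)              # vertex height of the parabola
in_range = 0.0 < apex_y < 30.0             # simple admissibility check
```

In the paper's setting the admissibility interval would come from the previous projections (temporal redundancy) rather than a fixed bound, turning the plain regression into a constrained one.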

  9. SU-G-BRA-08: Diaphragm Motion Tracking Based On KV CBCT Projections with a Constrained Linear Regression Optimization

    International Nuclear Information System (INIS)

    Wei, J; Chao, M

    2016-01-01

    Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections with a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies inherent in the dynamic properties of diaphragm motion, the enabling phenomenology in video compression and encoding techniques, were integrated with the geometric shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters, so that the fit can be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraint stipulating the kinetic range of the motion and the spatial constraint preventing unphysical deviations were able to recover the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm across all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution to rendering diaphragm motion and ultimately
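    The constrained fit at the core of this record can be sketched as an ordinary least-squares parabola fit whose coefficients are restricted by temporal redundancy. The snippet below is a simplified illustration, not the authors' implementation: the previous-frame coefficients `prev`, the clipping window `delta`, and the function name are hypothetical, and clipping is a crude stand-in for the paper's constrained regression.

    ```python
    def fit_parabola_constrained(points, prev, delta=(0.05, 0.5, 5.0)):
        # Least-squares fit of y = a*x^2 + b*x + c to detected diaphragm edge
        # points, with each coefficient clipped to a window around the previous
        # projection's fit (the temporal-redundancy constraint).
        sx = [0.0] * 5  # sums of x^0 .. x^4
        sy = [0.0] * 3  # sums of y*x^0 .. y*x^2
        for x, y in points:
            xp = 1.0
            for k in range(5):
                sx[k] += xp
                if k < 3:
                    sy[k] += y * xp
                xp *= x
        # Normal equations (A^T A) p = A^T y for design rows [x^2, x, 1].
        M = [[sx[4], sx[3], sx[2]],
             [sx[3], sx[2], sx[1]],
             [sx[2], sx[1], sx[0]]]
        v = [sy[2], sy[1], sy[0]]
        # Gaussian elimination with partial pivoting.
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(M[r][i]))
            M[i], M[p] = M[p], M[i]
            v[i], v[p] = v[p], v[i]
            for r in range(i + 1, 3):
                f = M[r][i] / M[i][i]
                for col in range(i, 3):
                    M[r][col] -= f * M[i][col]
                v[r] -= f * v[i]
        coef = [0.0, 0.0, 0.0]
        for i in (2, 1, 0):
            coef[i] = (v[i] - sum(M[i][j] * coef[j] for j in range(i + 1, 3))) / M[i][i]
        # Temporal constraint: keep each coefficient near the previous fit.
        return [min(max(coef[k], prev[k] - delta[k]), prev[k] + delta[k])
                for k in range(3)]
    ```

    With clean edge points the unconstrained least-squares solution already lies inside the window, so the clipping leaves it untouched; the constraint only matters on noisy projections.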

  10. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
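    The depth-weighting step described above can be made concrete with a short sketch. This is an illustration rather than the authors' code, assuming a Li-Oldenburg-style weighting whose exponent is matched to the estimated structural index; the function names and default values are hypothetical.

    ```python
    def depth_weights(z_cells, beta=2.0, z0=1.0):
        # Li-Oldenburg-style depth weighting w(z) = (z + z0)^(-beta/2); the
        # exponent beta can be tied to the estimated structural index so that
        # the weighting counteracts the natural depth decay of the
        # potential-field kernel.
        return [(z + z0) ** (-beta / 2.0) for z in z_cells]

    def weighted_model_norm(m, z_cells, beta=2.0, z0=1.0):
        # Depth-weighted model norm ||W m||^2 that is added to the data misfit
        # in the regularized inversion objective.
        return sum((w * mi) ** 2
                   for w, mi in zip(depth_weights(z_cells, beta, z0), m))
    ```

    Because the weights decay with depth, deep cells are penalized less, which prevents the recovered density or magnetization from collapsing toward the surface.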

  11. Dynamically constrained ensemble perturbations – application to tides on the West Florida Shelf

    Directory of Open Access Journals (Sweden)

    F. Lenartz

    2009-07-01

    Full Text Available A method is presented to create an ensemble of perturbations that satisfies linear dynamical constraints. A cost function is formulated defining the probability of each perturbation. It is shown that the perturbations created with this approach take the land-sea mask into account in a similar way as variational analysis techniques. The impact of the land-sea mask is illustrated with an idealized configuration of a barrier island. Perturbations with a spatially variable correlation length can also be created by this approach. The method is applied to a realistic configuration of the West Florida Shelf to create perturbations of the M2 tidal parameters for elevation and depth-averaged currents. The perturbations are weakly constrained to satisfy the linear shallow-water equations. Although the constraint is derived from an idealized assumption, it is shown that this approach is applicable to a non-linear and baroclinic model. The amplitude of spurious transient motions created by constrained perturbations of initial and boundary conditions is significantly lower compared to perturbing the variables independently or to using only the momentum equation to compute the velocity perturbations from the elevation.
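    A generic form of such a cost function, written here as a hedged sketch rather than the paper's exact formulation, combines a covariance term favoring spatially plausible perturbations with a weak penalty on the residual of the linearized dynamics:

    ```latex
    % Sketch of a weak-constraint cost (notation assumed, not the paper's):
    % B is the prior spatial covariance of the perturbation x, C the linearized
    % dynamical operator (here the shallow-water equations), and R sets how
    % strongly the dynamical balance is enforced.
    J(\mathbf{x}) \;=\; \mathbf{x}^{\mathsf T}\,\mathbf{B}^{-1}\,\mathbf{x}
    \;+\; (\mathbf{C}\,\mathbf{x})^{\mathsf T}\,\mathbf{R}^{-1}\,(\mathbf{C}\,\mathbf{x})
    ```

    Perturbations with small J are both smooth and nearly dynamically balanced, which is why they excite far weaker spurious transients than independently drawn perturbations.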

  12. Local climatic conditions constrain soil yeast diversity patterns in Mediterranean forests, woodlands and scrub biome.

    Science.gov (United States)

    Yurkov, Andrey M; Röhl, Oliver; Pontes, Ana; Carvalho, Cláudia; Maldonado, Cristina; Sampaio, José Paulo

    2016-02-01

    Soil yeasts represent a poorly known fraction of the soil microbiome due to limited ecological surveys. Here, we provide the first comprehensive inventory of cultivable soil yeasts in a Mediterranean ecosystem, which is the leading biodiversity hotspot for vascular plants and vertebrates in Europe. We isolated and identified soil yeasts from forested sites of Serra da Arrábida Natural Park (Portugal), representing the Mediterranean forests, woodlands and scrub biome. Both cultivation experiments and the subsequent species richness estimations suggest the highest species richness values reported to date, resulting in a total of 57 and 80 yeast taxa, respectively. These values far exceed those reported for other forest soils in Europe. Furthermore, we assessed the response of yeast diversity to microclimatic environmental factors in biotopes composed of the same plant species but showing a gradual change from humid broadleaf forests to dry maquis. We observed that forest properties constrained by precipitation level had a strong impact on yeast diversity and community structure, and that lower precipitation resulted in an increased number of rare species and decreased evenness values. In conclusion, the structure of soil yeast communities mirrors the environmental factors that affect aboveground phytocenoses, aboveground biomass and plant projective cover.

  13. A Bayesian perspective on age replacement with minimal repair

    International Nuclear Information System (INIS)

    Sheu, S.-H.; Yeh, R.H.; Lin, Y.-B.; Juang, M.-G.

    1999-01-01

    In this article, a Bayesian approach is developed for determining an optimal age replacement policy with minimal repair. By incorporating minimal repair, planned replacement, and unplanned replacement, the mathematical formulas of the expected cost per unit time are obtained for two cases - the infinite-horizon case and the one-replacement-cycle case. For each case, we show that there exists a unique and finite optimal age for replacement under some reasonable conditions. When the failure density is Weibull with uncertain parameters, a Bayesian approach is established to formally express and update the uncertain parameters for determining an optimal age replacement policy. Further, various special cases are discussed in detail. Finally, a numerical example is given
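    For a Weibull failure law with known parameters, the infinite-horizon cost rate and its minimizing age have simple closed forms, sketched below. This is a generic illustration of the underlying replacement model, not the paper's Bayesian formulation, and the cost and Weibull parameter values are hypothetical.

    ```python
    def expected_cost_rate(T, c_m, c_p, beta, eta):
        # Infinite-horizon cost per unit time for age replacement with minimal
        # repair: minimal repairs (cost c_m) at each failure, planned
        # replacement (cost c_p) at age T. For a Weibull(beta, eta) failure
        # law, H(T) = (T/eta)**beta is the expected number of minimal repairs
        # in [0, T], so C(T) = (c_m * H(T) + c_p) / T.
        H = (T / eta) ** beta
        return (c_m * H + c_p) / T

    def optimal_age(c_m, c_p, beta, eta):
        # Setting dC/dT = 0 gives the unique finite optimum for beta > 1:
        # T* = eta * (c_p / (c_m * (beta - 1))) ** (1 / beta).
        return eta * (c_p / (c_m * (beta - 1))) ** (1.0 / beta)
    ```

    The condition beta > 1 (increasing failure rate) is exactly the kind of "reasonable condition" under which a unique finite optimal replacement age exists.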

  14. The properties of retrieval cues constrain the picture superiority effect.

    Science.gov (United States)

    Weldon, M S; Roediger, H L; Challis, B H

    1989-01-01

    In three experiments, we examined why pictures are remembered better than words on explicit memory tests like recall and recognition, whereas words produce more priming than pictures on some implicit tests, such as word-fragment and word-stem completion (e.g., completing -l-ph-nt or ele----- as elephant). One possibility is that pictures are always more accessible than words if subjects are given explicit retrieval instructions. An alternative possibility is that the properties of the retrieval cues themselves constrain the retrieval processes engaged; word fragments might induce data-driven (perceptually based) retrieval, which favors words regardless of the retrieval instructions. Experiment 1 demonstrated that words were remembered better than pictures on both the word-fragment and word-stem completion tasks under both implicit and explicit retrieval conditions. In Experiment 2, pictures were recalled better than words with semantically related extralist cues. In Experiment 3, when semantic cues were combined with word fragments, pictures and words were recalled equally well under explicit retrieval conditions, but words were superior to pictures under implicit instructions. Thus, the inherently data-limited properties of fragmented words limit their use in accessing conceptual codes. Overall, the results indicate that retrieval operations are largely determined by properties of the retrieval cues under both implicit and explicit retrieval conditions.

  15. Color constrains depth in da Vinci stereopsis for camouflage but not occlusion.

    Science.gov (United States)

    Wardle, Susan G; Gillam, Barbara J

    2013-12-01

    Monocular regions that occur with binocular viewing of natural scenes can produce a strong perception of depth--"da Vinci stereopsis." They occur either when part of the background is occluded in one eye, or when a nearer object is camouflaged against a background surface in one eye's view. There has been some controversy over whether da Vinci depth is constrained by geometric or ecological factors. Here we show that the color of the monocular region constrains the depth perceived from camouflage, but not occlusion, as predicted by ecological considerations. Quantitative depth was found in both cases, but for camouflage only when the color of the monocular region matched the binocular background. Unlike previous reports, depth failed even when nonmatching colors satisfied conditions for perceptual transparency. We show that placing a colored line at the boundary between the binocular and monocular regions is sufficient to eliminate depth from camouflage. When both the background and the monocular region contained vertical contours that could be fused, some observers appeared to use fusion, and others da Vinci constraints, supporting the existence of a separate da Vinci mechanism. The results show that da Vinci stereopsis incorporates color constraints and is more complex than previously assumed.

  16. How peer-review constrains cognition: on the frontline in the knowledge sector.

    Science.gov (United States)

    Cowley, Stephen J

    2015-01-01

    Peer-review is neither reliable, fair, nor a valid basis for predicting 'impact': as quality control, peer-review is not fit for purpose. Endorsing the consensus, I offer a reframing: while a normative social process, peer-review also shapes the writing of a scientific paper. In so far as 'cognition' describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as 'symbolizations', replicable patterns that use technologically enabled activity. On this bio-cognitive view, peer-review constrains knowledge-making by writers, editors, reviewers. Authors are prompted to recursively re-aggregate symbolizations to present what are deemed acceptable knowledge claims. How, then, can recursive re-embodiment be explored? In illustration, I sketch how the paper's own content came to be re-aggregated: agonistic review drove reformatting of argument structure, changes in rhetorical ploys and careful choice of wordings. For this reason, the paper's knowledge-claims can be traced to human activity that occurs in distributed cognitive systems. Peer-review is on the frontline in the knowledge sector in that it delimits what can count as knowing. Its systemic nature is therefore crucial to not only discipline-centered 'real' science but also its 'post-academic' counterparts.

  17. Sectors of solutions and minimal energies in classical Liouville theories for strings

    International Nuclear Information System (INIS)

    Johansson, L.; Kihlberg, A.; Marnelius, R.

    1984-01-01

    All classical solutions of the Liouville theory for strings having finite stable minimum energies are calculated explicitly together with their minimal energies. Our treatment automatically includes the set of natural solitonlike singularities described by Jorjadze, Pogrebkov, and Polivanov. Since the number of such singularities is preserved in time, a sector of solutions is not only characterized by its boundary conditions but also by its number of singularities. Thus, e.g., the Liouville theory with periodic boundary conditions has three different sectors of solutions with stable minimal energies containing zero, one, and two singularities. (Solutions with more singularities have no stable minimum energy.) It is argued that singular solutions do not make the string singular and therefore may be included in the string quantization

  18. Characterization of inclusions in terrestrial impact formed zircon: Constraining the formation conditions of Hadean zircon from Jack Hills, Western Australia

    Science.gov (United States)

    Faltys, J. P.; Wielicki, M. M.; Sizemore, T. M.

    2017-12-01

    , associated with impact formed zircon; however, if certain populations of the Jack Hills record appear to share inclusion assemblages with impact formed zircon, this could provide a tool to constrain the frequency and timing of large impactors on early Earth and their possible effects on conditions conducive for the origin of life.

  19. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered.
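    While 2-D capacities generally require bounding techniques such as those in this record, the 1-D analogue of a hard "no neighboring 1s" constraint can be computed exactly as the log of the largest eigenvalue of a transfer matrix. The following is an illustrative 1-D computation, not the paper's 2-D method.

    ```python
    import math

    def capacity_no_adjacent_ones(iters=200):
        # Capacity of binary sequences with no two adjacent 1s: log2 of the
        # dominant eigenvalue of the transfer matrix A = [[1, 1], [1, 0]]
        # (states: previous symbol was 0 / was 1). Estimated by power
        # iteration with infinity-norm normalization to avoid overflow.
        v = [1.0, 1.0]
        lam = 1.0
        for _ in range(iters):
            w = [v[0] + v[1], v[0]]          # w = A v
            lam = max(abs(w[0]), abs(w[1]))
            v = [w[0] / lam, w[1] / lam]
        return math.log2(lam)
    ```

    The dominant eigenvalue is the golden ratio, giving a capacity of about 0.6942 bits per symbol; the 2-D version of the same constraint has no such closed form, which motivates the bounds in the paper.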

  20. Probing gravitational non-minimal coupling with dark energy surveys

    International Nuclear Information System (INIS)

    Geng, Chao-Qiang; Lee, Chung-Chi; Wu, Yi-Peng

    2017-01-01

    We investigate observational constraints on a specific one-parameter extension of the minimal quintessence model, where the quintessence field acquires a quadratic coupling to the scalar curvature through a coupling constant ξ. The value of ξ is highly suppressed in typical tracker models if the late-time cosmic acceleration is driven at field values near the Planck scale. We test ξ in a second class of models in which the field value today becomes a free model parameter. We use the combined data from type-Ia supernovae, the cosmic microwave background, baryon acoustic oscillations, the matter power spectrum and weak lensing measurements, and find a best-fit value ξ = 0.289, where ξ = 0 is excluded outside the 95% confidence region. The effective gravitational constant G_eff subject to the hint of a non-zero ξ is constrained to -0.003 < 1 - G_eff/G < 0.033 at the same confidence level on cosmological scales, and it can be narrowed down to 1 - G_eff/G < 2.2 x 10^-5 when combined with Solar System tests. (orig.)
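    The quadratic non-minimal coupling described here is conventionally written as follows. This is a sketch of the standard form only; the paper's sign and normalization conventions may differ, and the G_eff expression omits scalar-exchange corrections.

    ```latex
    % Conventional quadratic non-minimal coupling (assumed form):
    S=\int d^{4}x\,\sqrt{-g}\left[\frac{M_{\mathrm{pl}}^{2}+\xi\phi^{2}}{2}\,R
    -\frac{1}{2}\,g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi-V(\phi)\right],
    \qquad
    \frac{G_{\mathrm{eff}}}{G}\simeq\frac{M_{\mathrm{pl}}^{2}}{M_{\mathrm{pl}}^{2}+\xi\phi^{2}}.
    ```

    In this convention a positive ξφ² raises the effective Planck mass and hence lowers G_eff, consistent with the mostly positive bound on 1 - G_eff/G quoted in the abstract.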

  1. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)
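    The energy-conserving quadratic interactions mentioned above can be illustrated with the simplest triad system. The sketch below shows the structural constraint only, not the authors' models: the quadratic term B(x, x) satisfies x · B(x, x) = 0 whenever the interaction coefficients sum to zero.

    ```python
    def triad_interaction(x, c=(1.0, -0.5, -0.5)):
        # Quadratic triad nonlinearity B(x, x). Because the coefficients sum
        # to zero, x . B(x, x) = (c1 + c2 + c3) * x1 * x2 * x3 = 0, so the
        # quadratic term neither creates nor destroys energy -- the physics
        # constraint that rules out finite-time blow-up of the regression model.
        x1, x2, x3 = x
        c1, c2, c3 = c
        return (c1 * x2 * x3, c2 * x1 * x3, c3 * x1 * x2)
    ```

    An ad hoc quadratic regression fit would generally produce coefficients violating this sum rule, which is precisely how pathological statistical solutions can arise.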

  2. Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Weishang Gao

    2013-01-01

    Full Text Available Evolutionary algorithms (EAs) have been shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and improper penalties in EAs with penalty functions can lose a global optimum that lies near or on the constrained boundary, and determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiagents guiding exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and density of agents change dynamically according to the emergence of potentially optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolution with multiagents not only effectively avoids the problem of determining a penalty coefficient but also quickly converges to the global optimum near or on the constrained boundary. Examination of the speed and accuracy of Bi-DDEA across benchmark functions shows the proposed method to be effective.
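    A common penalty-free mechanism of this general kind is the feasibility-rule comparison, shown here as a generic sketch rather than the Bi-DDEA operator itself: feasible solutions always dominate infeasible ones, so no penalty coefficient has to be tuned.

    ```python
    def feasibility_better(a, b):
        # Deb-style feasibility rule for constrained EAs. Each candidate is a
        # pair (objective_value, constraint_violation >= 0), minimizing the
        # objective. This is a generic comparison, not the paper's strategy.
        fa, va = a
        fb, vb = b
        if va == 0 and vb == 0:
            return fa < fb      # both feasible: compare objectives
        if va == 0 or vb == 0:
            return va == 0      # a feasible candidate beats an infeasible one
        return va < vb          # both infeasible: smaller violation wins
    ```

    Because comparisons never mix objective and violation on one scale, the selection pressure toward the boundary of the feasible region is preserved without any penalty weighting.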

  3. Risk-constrained self-scheduling of a fuel and emission constrained power producer using rolling window procedure

    International Nuclear Information System (INIS)

    Kazempour, S. Jalal; Moghaddam, Mohsen Parsa

    2011-01-01

    This work presents a methodology for the self-scheduling of a price-taker, fuel- and emission-constrained power producer in day-ahead correlated energy, spinning reserve and fuel markets, achieving a trade-off between expected profit and risk at different risk levels based on Markowitz's seminal work in the area of portfolio selection. A set of uncertainties is considered, including price forecasting errors and available-fuel uncertainty. The latter arises from uncertainty in being called for reserve deployment in the spinning reserve market and in the availability of the power plant. To tackle the price forecasting errors, the variances of energy, spinning reserve and fuel prices, along with their covariances due to market correlation, are taken into account using relevant historical data. To tackle available-fuel uncertainty, a self-scheduling framework referred to as the rolling window is proposed. This risk-constrained self-scheduling framework is formulated and solved as a mixed-integer non-linear programming problem. Numerical results for a case study are also discussed. (author)
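    The Markowitz-style trade-off at the core of the formulation can be sketched as a risk-adjusted objective. This is a toy illustration with hypothetical names and numbers, not the paper's mixed-integer model.

    ```python
    def risk_adjusted_profit(q, mu, cov, beta=0.5):
        # Markowitz-style objective for a price-taker producer: expected
        # profit minus a risk penalty. q are the quantities sold per market
        # (energy, spinning reserve, ...), mu the forecast prices, cov the
        # price covariance matrix (off-diagonal terms capture market
        # correlation), and beta the producer's risk-aversion weight.
        n = len(q)
        expected = sum(q[i] * mu[i] for i in range(n))
        variance = sum(q[i] * cov[i][j] * q[j]
                       for i in range(n) for j in range(n))
        return expected - beta * variance
    ```

    Sweeping beta traces out the expected-profit-versus-risk frontier referred to in the abstract: beta = 0 recovers pure profit maximization, while larger beta favors schedules with lower revenue variance.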

  4. Environmental Conditions Constrain the Distribution and Diversity of Archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    Science.gov (United States)

    Wang, Y.; Boyd, E.; Crane, S.; Lu-Irving, P.; Krabbenhoft, D.; King, S.; Dighton, J.; Geesey, G.; Barkay, T.

    2011-01-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggests that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient.

  5. Environmental conditions constrain the distribution and diversity of archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    Science.gov (United States)

    Wang, Yanping; Boyd, Eric; Crane, Sharron; Lu-Irving, Patricia; Krabbenhoft, David; King, Susan; Dighton, John; Geesey, Gill; Barkay, Tamar

    2011-11-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggests that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient.

  6. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal liability. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

  7. Waste Minimization Policy at the Romanian Nuclear Power Plant

    International Nuclear Information System (INIS)

    Andrei, V.; Daian, I.

    2002-01-01

    The radioactive waste management system at Cernavoda Nuclear Power Plant (NPP) in Romania was designed to maintain acceptable levels of safety for workers and to protect human health and the environment from exposure to unacceptable levels of radiation. In accordance with the terminology of the International Atomic Energy Agency (IAEA), this system consists of the "pretreatment" of solid and organic liquid radioactive waste, which may include part or all of the following activities: collection, handling, volume reduction (by an in-drum compactor, if appropriate), and storage. Gaseous and aqueous liquid wastes are managed according to the "dilute and discharge" strategy. Taking into account the fact that treatment/conditioning and disposal technologies are still not established, waste minimization at the source is a priority environmental management objective, while waste minimization at the disposal stage is presently just a theoretical requirement for future adopted technologies. The necessary operational and maintenance procedures are in place at Cernavoda to minimize the production and contamination of waste. Administrative and technical measures are established to minimize waste volumes. Thus, an annual environmental target of a maximum 30 m3 of radioactive waste volume arising from operation and maintenance has been established. Within the first five years of operations at Cernavoda NPP, this target has been met. The successful implementation of the waste minimization policy has been accompanied by a cost reduction while the occupational doses for plant workers have been maintained at as low as reasonably practicable levels. This paper will describe key features of the waste management system along with the actual experience that has been realized with respect to minimizing the waste volumes at the Cernavoda NPP

  8. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm used starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree and obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the number of path sets, while an OR gate alone always increases the number of cut sets and increases the size of path sets. Other types of logic gates must be described in terms of AND and OR logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates
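    The top-down resolution that MOCUS performs can be illustrated with a short sketch. This is a simplified reimplementation of the idea for AND/OR trees, not the original program.

    ```python
    def mocus_cut_sets(gates, top):
        # Top-down MOCUS-style expansion. `gates` maps a gate name to
        # ("AND" | "OR", [children]); any name not in `gates` is a primary
        # event. Expanding an AND gate grows a cut set in place, while
        # expanding an OR gate multiplies the number of cut sets.
        sets = [{top}]
        while True:
            expanded = False
            new_sets = []
            for s in sets:
                gate = next((e for e in s if e in gates), None)
                if gate is None:
                    new_sets.append(s)      # fully resolved to primary events
                    continue
                expanded = True
                kind, children = gates[gate]
                rest = s - {gate}
                if kind == "AND":
                    new_sets.append(rest | set(children))
                else:  # OR
                    new_sets.extend(rest | {c} for c in children)
            sets = new_sets
            if not expanded:
                break
        # Keep only minimal cut sets: drop any set containing another as a
        # proper subset.
        return [s for s in sets if not any(t < s for t in sets)]
    ```

    For example, with TOP = A OR (B AND C), the minimal cut sets are {A} and {B, C}; if the same event feeds both branches, the minimality filter removes the dominated superset.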

  9. A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions

    International Nuclear Information System (INIS)

    Jian Jinbao; Hu Qingjie; Tang Chunming; Zheng Haiyan

    2007-01-01

    In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints without the second derivatives of the functions of the discussed problems; such a subproblem can be formulated as a second-order cone program, which can be solved by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit computation formula. The algorithm is proved to be globally convergent and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported

  10. Relationship between Air Pollution and Weather Conditions under Complicated Geographical Conditions

    Science.gov (United States)

    Cheng, Q.; Jiang, P.; Li, M.

    2017-12-01

    Air pollution is one of the most serious issues all over the world, especially in megacities whose geographical conditions constrain the diffusion of air pollutants. However, the dynamic mechanism of air pollution diffusion under complicated geographical conditions is still unclear. Research exploring the relationship between air pollution and weather conditions from the perspective of local atmospheric circulations can contribute to solving this problem. We selected three megacities (Beijing, Shanghai and Guangzhou) under different geographical conditions (mountain-plain transition region, coastal alluvial plain and coastal hilly terrain) to explore the relationship between air pollution and weather conditions. An RDA (redundancy analysis) model was used to analyze how the local atmospheric circulation acts on air pollutant diffusion. The results show that there was a positive correlation between the concentration of air pollutants and air pressure, while temperature, precipitation and wind speed had negative correlations with the concentration of air pollutants. Furthermore, geographical conditions, such as topographic relief, have significant effects on the direction, path and intensity of local atmospheric circulation. As a consequence, the air pollutant diffusion modes of cities under various geographical conditions differ from each other.

  11. Constrained systems described by Nambu mechanics

    International Nuclear Information System (INIS)

    Lassig, C.C.; Joshi, G.C.

    1996-01-01

    Using the framework of Nambu's generalised mechanics, we obtain a new description of constrained Hamiltonian dynamics, involving the introduction of another degree of freedom in phase space, and the necessity of defining the action integral on a world sheet. We also discuss the problem of quantizing Nambu mechanics. (authors). 5 refs

  12. Neuroevolutionary Constrained Optimization for Content Creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution ... and survival tasks and are also visually appealing.

  13. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    It describes my work as a summer student at CERN. The report discusses the asymptotic distribution of the likelihood ratio for a total of h parameters, 2 of which are constrained and correlated.

  14. On the convergence of the dynamic series solution of a constrained ...

    African Journals Online (AJOL)

    The one dimensional problem of analysing the dynamic behaviour of an elevated water tower with elastic deflection–control device and subjected to a dynamic load was examined in [2]. The constrained elastic system was modeled as a column carrying a concentrated mass at its top and elastically constrained at a point ...

  15. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as that in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation, in which the inflaton is non-minimally coupled to the gravity. We find that the non-minimal coupling inflation could be eternal in some parameter spaces.

  16. Minimizing the Pervasiveness of Women's Personal Experiences of Gender Discrimination

    Science.gov (United States)

    Foster, Mindi D.; Jackson, Lydia C.; Hartmann, Ryan; Woulfe, Shannon

    2004-01-01

    Given the Rejection-Identification Model (Branscombe, Schmitt, & Harvey, 1999), which shows that perceiving discrimination to be pervasive is a negative experience, it was suggested that there would be conditions under which women would instead minimize the pervasiveness of discrimination. Study 1 (N= 91) showed that when women envisioned…

  17. Euclidean wormholes with minimally coupled scalar fields

    International Nuclear Information System (INIS)

    Ruz, Soumendranath; Modak, Bijan; Debnath, Subhra; Sanyal, Abhik Kumar

    2013-01-01

    A detailed study of quantum and semiclassical Euclidean wormholes for Einstein's theory with a minimally coupled scalar field has been performed for a class of potentials. Massless, constant, massive (quadratic in the scalar field) and inverse (linear) potentials admit the Hawking and Page wormhole boundary condition both in the classically forbidden and allowed regions. An inverse quartic potential has been found to exhibit a semiclassical wormhole configuration. Classical wormholes under a suitable back-reaction leading to a finite radius of the throat, where the strong energy condition is satisfied, have been found for the zero, constant, quadratic and exponential potentials. Treating such classical Euclidean wormholes as an initial condition, a late stage of cosmological evolution has been found to remain unaltered from standard Friedmann cosmology, except for the constant potential which under the back-reaction produces a term like a negative cosmological constant. (paper)

  18. Venus Surface Composition Constrained by Observation and Experiment

    Science.gov (United States)

    Gilmore, Martha; Treiman, Allan; Helbert, Jörn; Smrekar, Suzanne

    2017-11-01

    New observations from the Venus Express spacecraft as well as theoretical and experimental investigation of Venus analogue materials have advanced our understanding of the petrology of Venus melts and the mineralogy of rocks on the surface. The VIRTIS instrument aboard Venus Express provided a map of the southern hemisphere of Venus at ˜1 μm allowing, for the first time, the definition of surface units in terms of their 1 μm emissivity and derived mineralogy. Tessera terrain has lower emissivity than the presumably basaltic plains, consistent with a more silica-rich or felsic mineralogy. Thermodynamic modeling and experimental production of melts with Venera and Vega starting compositions predict derivative melts that range from mafic to felsic. Large volumes of felsic melts require water and may link the formation of tesserae to the presence of a Venus ocean. Low emissivity rocks may also be produced by atmosphere-surface weathering reactions unlike those seen presently. High 1 μm emissivity values correlate to stratigraphically recent flows and have been used with theoretical and experimental predictions of basalt weathering to identify regions of recent volcanism. The timescale of this volcanism is currently constrained by the weathering of magnetite (higher emissivity) in fresh basalts to hematite (lower emissivity) in Venus' oxidizing environment. Recent volcanism is corroborated by transient thermal anomalies identified by the VMC instrument aboard Venus Express. The interpretation of all emissivity data depends critically on understanding the composition of surface materials, kinetics of rock weathering and their measurement under Venus conditions. Extended theoretical studies, continued analysis of earlier spacecraft results, new atmospheric data, and measurements of mineral stability under Venus conditions have improved our understanding of atmosphere-surface interactions. The calcite-wollastonite CO2 buffer has been discounted due, among other things, to

  19. Priority classes and weighted constrained equal awards rules for the claims problem

    DEFF Research Database (Denmark)

    Szwagrzak, Karol

    2015-01-01

    . They are priority-augmented versions of the standard weighted constrained equal awards rules, also known as weighted gains methods (Moulin, 2000): individuals are sorted into priority classes; the resource is distributed among the individuals in the first priority class using a weighted constrained equal awards...... rule; if some of the resource is left over, then it is distributed among the individuals in the second priority class, again using a weighted constrained equal awards rule; the distribution carries on in this way until the resource is exhausted. Our characterization extends to a generalized version...
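    The distribution scheme excerpted above can be sketched directly, assuming the standard weighted constrained equal awards rule within each class: claimant i receives min(c_i, w_i*lam), with lam chosen so the class's allotment is exhausted. Function names and the bisection tolerance are illustrative:

```python
def weighted_cea(claims, weights, E):
    """Weighted constrained equal awards: claimant i gets min(c_i, w_i*lam),
    with lam found by bisection so the awards exactly exhaust E (E < sum of claims)."""
    lo, hi = 0.0, max(c / w for c, w in zip(claims, weights))
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if sum(min(c, w * mid) for c, w in zip(claims, weights)) < E:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    return [min(c, w * lam) for c, w in zip(claims, weights)]

def priority_weighted_cea(classes, E):
    """classes: priority classes in order, each a list of (claim, weight) pairs."""
    awards = []
    for cls in classes:
        claims = [c for c, _ in cls]
        weights = [w for _, w in cls]
        total = sum(claims)
        if E >= total:                  # class fully honoured; remainder carries over
            awards.extend(claims)
            E -= total
        else:                           # resource exhausted within this class
            awards.extend(weighted_cea(claims, weights, E))
            E = 0.0
    return awards
```

    For example, two equal-weight claimants (10 and 20) splitting 12 in one class each receive 6; with the second claimant demoted to a lower class, the first is paid in full and the second receives the remainder.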

  20. Chaotic improved PSO-based multi-objective optimization for minimization of power losses and L index in power systems

    International Nuclear Information System (INIS)

    Chen, Gonggui; Liu, Lilan; Song, Peizhu; Du, Yangwei

    2014-01-01

    Highlights: • New method for MOORPD problem using MOCIPSO and MOIPSO approaches. • Constrain-prior Pareto-dominance method is proposed to meet the constraints. • The limits of the apparent power flow of transmission line are considered. • MOORPD model is built up for MOORPD problem. • The achieved results by MOCIPSO and MOIPSO approaches are better than MOPSO method. - Abstract: Multi-objective optimal reactive power dispatch (MOORPD) seeks to not only minimize power losses, but also improve the stability of power system simultaneously. In this paper, the static voltage stability enhancement is achieved through incorporating L index in MOORPD problem. Chaotic improved PSO-based multi-objective optimization (MOCIPSO) and improved PSO-based multi-objective optimization (MOIPSO) approaches are proposed for solving complex multi-objective, mixed integer nonlinear problems such as minimization of power losses and L index in power systems simultaneously. In MOCIPSO and MOIPSO based optimization approaches, crossover operator is proposed to enhance PSO diversity and improve their global searching capability, and for MOCIPSO based optimization approach, chaotic sequences based on logistic map instead of random sequences is introduced to PSO for enhancing exploitation capability. In the two approaches, constrain-prior Pareto-dominance method (CPM) is proposed to meet the inequality constraints on state variables, the sorting and crowding distance methods are considered to maintain a well distributed Pareto optimal solutions, and moreover, fuzzy set theory is employed to extract the best compromise solution over the Pareto optimal curve. The proposed approaches have been examined and tested in the IEEE 30 bus and the IEEE 57 bus power systems. The performances of MOCIPSO, MOIPSO, and multi-objective PSO (MOPSO) approaches are compared with respect to multi-objective performance measures. The simulation results are promising and confirm the ability of MOCIPSO and
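    The constrain-prior Pareto-dominance comparison (CPM) can be sketched along the lines of Deb-style feasibility rules: feasible solutions beat infeasible ones, infeasible ones compare by total violation, and feasible ones compare by ordinary Pareto dominance. This is an illustrative reading, not necessarily the authors' exact formulation:

```python
def violation(g):
    """Total constraint violation of inequality residuals g (g_i <= 0 means satisfied)."""
    return sum(max(0.0, gi) for gi in g)

def pareto_dominates(f_a, f_b):
    """Minimization: f_a weakly better in every objective and strictly better in one."""
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def constraint_prior_dominates(f_a, g_a, f_b, g_b):
    va, vb = violation(g_a), violation(g_b)
    if va == 0.0 and vb == 0.0:
        return pareto_dominates(f_a, f_b)   # both feasible: ordinary Pareto dominance
    if va == 0.0 or vb == 0.0:
        return va == 0.0                    # feasibility takes priority over objectives
    return va < vb                          # both infeasible: smaller violation wins
```

    In a PSO archive update, this comparison would decide which particles enter the non-dominated set before crowding distance is applied.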

  1. Client's Constraining Factors to Construction Project Management

    African Journals Online (AJOL)

    factors as a significant system that constrains project management success of public and ... finance for the project and prompt payment for work executed; clients .... consideration of the loading patterns of these variables, the major factor is ...

  2. Numerical Estimation of Balanced and Falling States for Constrained Legged Systems

    Science.gov (United States)

    Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H.

    2017-08-01

    Instability and risk of fall during standing and walking are common challenges for biped robots. While existing criteria from state-space dynamical systems approach or ground reference points are useful in some applications, complete system models and constraints have not been taken into account for prediction and indication of fall for general legged robots. In this study, a general numerical framework that estimates the balanced and falling states of legged systems is introduced. The overall approach is based on the integration of joint-space and Cartesian-space dynamics of a legged system model. The full-body constrained joint-space dynamics includes the contact forces and moments term due to current foot (or feet) support and another term due to altered contact configuration. According to the refined notions of balanced, falling, and fallen, the system parameters, physical constraints, and initial/final/boundary conditions for balancing are incorporated into constrained nonlinear optimization problems to solve for the velocity extrema (representing the maximum perturbation allowed to maintain balance without changing contacts) in the Cartesian space at each center-of-mass (COM) position within its workspace. The iterative algorithm constructs the stability boundary as a COM state-space partition between balanced and falling states. Inclusion in the resulting six-dimensional manifold is a necessary condition for a state of the given system to be balanced under the given contact configuration, while exclusion is a sufficient condition for falling. The framework is used to analyze the balance stability of example systems with various degrees of complexities. The manifold for a 1-degree-of-freedom (DOF) legged system is consistent with the experimental and simulation results in the existing studies for specific controller designs. 
The results for a 2-DOF system demonstrate the dependency of the COM state-space partition upon joint-space configuration (elbow-up vs
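    For intuition, the balanced/falling partition of the COM state space can be sketched with a far simpler surrogate than the paper's full joint-space framework: a linear inverted pendulum whose center of pressure is confined to a foot of half-length d. A state (x, v) is balanced when the extrapolated COM ("capture point") x + v/ω0 stays within the support. All numbers below are illustrative:

```python
import numpy as np

G, Z0, D_FOOT = 9.81, 1.0, 0.1      # gravity, COM height, foot half-length (illustrative)
OMEGA = np.sqrt(G / Z0)             # natural frequency of the linear inverted pendulum

def balanced(x, v, d=D_FOOT):
    """Capturability test: the extrapolated COM (capture point) lies in the support."""
    return abs(x + v / OMEGA) <= d

def max_velocity(x, d=D_FOOT):
    """Largest forward COM velocity that is still balanced at COM position x
    (the stability boundary of this 1-DOF surrogate)."""
    return OMEGA * (d - x)
```

    Sweeping x over the support and plotting max_velocity(x) traces the boundary between balanced and falling states, the low-dimensional analogue of the six-dimensional manifold constructed in the paper.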

  3. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    for the discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions...... in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation...

  4. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  5. Constraining the role of anoxygenic phototrophic Fe(II)-oxidizing bacteria in deposition of BIFs

    Science.gov (United States)

    Kappler, A.; Posth, N. R.; Hegler, F.; Wartha, E.; Huelin, S.

    2007-12-01

    Banded Iron Formations (BIFs) are Precambrian sedimentary deposits of alternating iron oxide and silica mineral layers. Their presence in the rock record ca. 3.8-2.2 Ga makes them particularly intriguing formations for the debate over when oxygen became dominant on Earth. The mechanism(s) of BIF deposition is still unclear; suggestions include both abiotic and biotic processes. We are interested in constraining one of these proposed mechanisms: the direct biological oxidation of Fe(II) by anoxygenic Fe(II)-oxidizing photoautotrophs. In order to find the limitations of photoferrotrophic BIF deposition, we take a holistic approach, investigating the oxidation of Fe(II) by modern Fe(II)-oxidizing phototrophs, the precipitation of Fe(III) (hydr)oxides, and the fate of the cell-mineral aggregates in the water column and at the basin floor. Specifically, physiology experiments with Fe(II)-oxidizing phototrophs under various conditions of light intensity, pH, Fe(II) concentration and temperature allow us to determine the environmental limits of such organisms. We carry out precipitation experiments to characterize the sedimentation rates, aggregate size and composition in order to resolve the effect of reactions in the water column. Finally, we simulate the diagenetic fate of these aggregates on the basin floor by placing them in gold capsules under T and P conditions relevant for the Transvaal Supergroup BIFs of South Africa. Recently, we have developed a tank simulating the Archean ocean in which the strains grow in continuous culture and collect the aggregates formed under various geochemical conditions. We aim to model the extent of and limitations to photoferrotrophs in BIF deposition. This information will help constrain whether biotic processes were dominant in the Archean ocean and will offer insight into the evolution of the early biogeosphere.

  6. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

    Logistics 4 Initiative - DoD Hexavalent Chromium Minimization: Non-Chrome Primer. Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy, 2011.

  7. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
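    The classical discrete/discrete constrained least-squares filter described above can be sketched in the frequency domain with a discrete Laplacian as the smoothness operator; the continuous/discrete/continuous refinement that the paper derives is not reproduced here:

```python
import numpy as np

def cls_restore(g, h, gamma):
    """Constrained least-squares restoration (discrete/discrete, circular model):
    F = conj(H) G / (|H|^2 + gamma |P|^2), where P is the discrete Laplacian
    encoding the smoothness criterion and gamma sets the fidelity trade-off."""
    H = np.fft.fft2(h, s=g.shape)                 # zero-padded blur transfer function
    lap = np.zeros(g.shape)
    lap[0, 0] = 4.0                               # circularly wrapped Laplacian kernel
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    P = np.fft.fft2(lap)
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```

    As gamma tends to zero this approaches inverse filtering; larger gamma enforces more smoothness in the restored image.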

  8. A penalty method for PDE-constrained optimization in inverse problems

    International Nuclear Information System (INIS)

    Leeuwen, T van; Herrmann, F J

    2016-01-01

    Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE for several right-hand sides. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method that aims to combine the advantages of both approaches. Our method is based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method indeed reduces some of the nonlinearity of the problem and is less sensitive to the initial iterate. (paper)
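    The penalty formulation can be illustrated on a toy problem in which the "PDE" is a diagonal system diag(m) u = q observed through a restriction operator P: eliminating the state u in closed form leaves a reduced objective in the parameters m alone, which is then minimized. All names, sizes and the optimizer are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Hypothetical toy "PDE" A(m) u = diag(m) u = q, observed through a restriction P.
rng = np.random.default_rng(1)
N = 8
m_true = 1.0 + rng.random(N)
q = rng.normal(size=N)
P = np.eye(N)[:5]                      # only 5 of the 8 state components are measured
d = P @ (q / m_true)                   # noise-free data from the true parameters

lam = 10.0                             # quadratic penalty weight on the PDE residual

def phi(m):
    """Reduced penalty objective: the state u is eliminated in closed form."""
    A = np.diag(m)
    u = np.linalg.solve(P.T @ P + lam * A.T @ A, P.T @ d + lam * A.T @ q)
    return 0.5 * np.sum((P @ u - d) ** 2) + 0.5 * lam * np.sum((A @ u - q) ** 2)

def num_grad(f, m, eps=1e-6):
    """Central-difference gradient of f at m."""
    g = np.zeros_like(m)
    for i in range(m.size):
        e = np.zeros_like(m)
        e[i] = eps
        g[i] = (f(m + e) - f(m - e)) / (2 * eps)
    return g

m = np.ones(N)                         # initial iterate
for _ in range(200):                   # gradient descent with Armijo backtracking
    g = num_grad(phi, m)
    step = 1.0
    while step > 1e-12 and phi(m - step * g) > phi(m) - 1e-4 * step * (g @ g):
        step *= 0.5
    if step > 1e-12:
        m = m - step * g
```

    The key point mirrors the paper: each evaluation of phi solves a linear system in u rather than the PDE exactly, so the search space is effectively larger than in the fully reduced approach.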

  9. Florida Red Tide and Human Health: A Pilot Beach Conditions Reporting System to Minimize Human Exposure

    Science.gov (United States)

    Kirkpatrick, Barbara; Currier, Robert; Nierenberg, Kate; Reich, Andrew; Backer, Lorraine C.; Stumpf, Richard; Fleming, Lora; Kirkpatrick, Gary

    2008-01-01

    With over 50% of the US population living in coastal counties, the ocean and coastal environments have substantial impacts on coastal communities. While many of the impacts are positive, such as tourism and recreation opportunities, there are also negative impacts, such as exposure to harmful algal blooms (HABs) and water-borne pathogens. Recent advances in environmental monitoring and weather prediction may allow us to forecast these potential adverse effects and thus mitigate the negative impact from coastal environmental threats. One example of the need to mitigate adverse environmental impacts occurs on Florida’s west coast, which experiences annual blooms, or periods of exuberant growth, of the toxic dinoflagellate, Karenia brevis. K. brevis produces a suite of potent neurotoxins called brevetoxins. Wind and wave action can break up the cells, releasing toxin that can then become part of the marine aerosol or sea spray. Brevetoxins in the aerosol cause respiratory irritation in people who inhale them. In addition, asthmatics who inhale the toxins report increased upper and lower airway symptoms and experience measurable changes in pulmonary function. Real-time reporting of the presence or absence of these toxic aerosols will allow asthmatics and local coastal residents to make informed decisions about their personal exposures, thus adding to their quality of life. A system to protect public health that combines information collected by an Integrated Ocean Observing System (IOOS) has been designed and implemented in Sarasota and Manatee Counties, Florida. This system is based on real-time reports from lifeguards at the eight public beaches. The lifeguards provide periodic subjective reports of the amount of dead fish on the beach, apparent level of respiratory irritation among beach-goers, water color, wind direction, surf condition, and the beach warning flag they are flying. A key component in the design of the observing system was an easy reporting

  10. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial...... reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals...... are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...

  11. Minimizing the Fluid Used to Induce Fracturing

    Science.gov (United States)

    Boyle, E. J.

    2015-12-01

    The less fluid injected to induce fracturing, the less fluid must be produced back before gas is produced. One method is to inject as fast as possible until the desired fracture length is obtained. Presented is an alternative injection strategy derived by applying optimal control theory to the macroscopic mass balance. The picture is that the fracture is constant in aperture, fluid is injected at a controlled rate at the near end, and the fracture unzips at the far end until the desired length is obtained. The velocity of the fluid is governed by Darcy's law with larger permeability for flow along the fracture length. Fracture growth is monitored through micro-seismicity. Since the fluid is assumed to be incompressible, the rate at which fluid is injected is balanced by the rate of fracture growth and the rate of loss to the bounding rock. Minimizing injected fluid loss to the bounding rock is the same as minimizing total injected fluid. How to change the injection rate so as to minimize the total injected fluid is a problem in optimal control. For a given total length, the variation of the injection rate is determined by variations in the overall time needed to obtain the desired fracture length, the length at any time, and the rate at which the fracture is growing at that time. Optimal control theory leads to a boundary condition and an ordinary differential equation in time whose solution is an injection protocol that minimizes the fluid used under the stated assumptions. That method is to monitor the rate at which the square of the fracture length is growing and adjust the injection rate proportionately.
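    The closing idea, adjusting the injection rate in proportion to the growth rate of the squared fracture length, can be sketched as follows; the gain k and the use of micro-seismic length estimates are illustrative assumptions, not quantities given in the abstract:

```python
import numpy as np

def injection_rate(t, L, k=1.0):
    """Injection protocol proportional to the growth rate of the squared
    fracture length L(t), e.g. from micro-seismic monitoring (hypothetical gain k)."""
    return k * np.gradient(L ** 2, t)

# e.g. diffusive-type growth L ~ sqrt(t) implies a constant injection rate
t = np.linspace(1.0, 10.0, 50)
L = np.sqrt(t)
Q = injection_rate(t, L, k=2.0)
```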

  12. Closed-Loop Control of Constrained Flapping Wing Micro Air Vehicles

    Science.gov (United States)

    2014-03-27

    Predicts forces and moments for the class of flapping wing fliers that makes up most insects and hummingbirds. Large bird and butterfly “clap- and...” Closed-Loop Control of Constrained Flapping Wing Micro Air Vehicles, DISSERTATION, Garrison J. Lindholm, Captain, USAF, AFIT-ENY-DS-14-M-02.

  13. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B-L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  14. Constraining the noncommutative spectral action via astrophysical observations.

    Science.gov (United States)

    Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi

    2010-09-03

    The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.

  15. Energy minimization in medical image analysis: Methodologies and applications.

    Science.gov (United States)

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
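    As a minimal illustration of the gradient descent method listed above, consider denoising a 1-D signal by minimizing a Tikhonov energy (data fidelity plus smoothness); the signal, regularization weight and step size are illustrative:

```python
import numpy as np

def energy(u, f, lam, D):
    """Tikhonov energy: data fidelity plus smoothness of the reconstruction."""
    return 0.5 * np.sum((u - f) ** 2) + 0.5 * lam * np.sum((D @ u) ** 2)

def denoise_gd(f, lam=5.0, step=0.05, iters=500):
    """Gradient descent on E(u) = 1/2||u - f||^2 + lam/2 ||D u||^2 for a 1-D signal."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)             # forward-difference operator
    u = f.copy()
    for _ in range(iters):
        u = u - step * ((u - f) + lam * (D.T @ (D @ u)))   # explicit gradient step
    return u, D

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
f = np.sin(t) + 0.3 * rng.normal(size=t.size)  # noisy observation of a smooth signal
u, D = denoise_gd(f)
```

    The fixed step must stay below 2/L, where L = 1 + 4*lam bounds the Lipschitz constant of the gradient; the faster methods surveyed above (conjugate gradient, proximal schemes) accelerate exactly this kind of iteration.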

  16. On Tree-Constrained Matchings and Generalizations

    NARCIS (Netherlands)

    S. Canzar (Stefan); K. Elbassioni; G.W. Klau (Gunnar); J. Mestre

    2011-01-01

    We consider the following Tree-Constrained Bipartite Matching problem: Given two rooted trees T1 = (V1, E1), T2 = (V2, E2) and a weight function w: V1 × V2 → R+, find a maximum weight matching M between nodes of the two trees, such that

  17. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z0 partial widths, are discussed in detail. Finally, the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  18. Stochastic frequency-security constrained scheduling of a microgrid considering price-driven demand response

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Anvari-Moghaddam, Amjad; Rashidizadeh-Kermani, Homa

    2018-01-01

    not only to maximize the expected profit of MG operator (MGO), but also to minimize the energy payments of customers. To study the effect of uncertain parameters and demand-side participation on system operating conditions, an AC-optimal power flow (AC-OPF) approach is also applied. The proposed stochastic...

  19. Running non-minimal inflation with stabilized inflaton potential

    Energy Technology Data Exchange (ETDEWEB)

    Okada, Nobuchika; Raut, Digesh [University of Alabama, Department of Physics and Astronomy, Alabama (United States)

    2017-04-15

    In the context of the Higgs model involving gauge and Yukawa interactions with the spontaneous gauge symmetry breaking, we consider λφ^4 inflation with non-minimal gravitational coupling, where the Higgs field is identified as the inflaton. Since the inflaton quartic coupling is very small, once quantum corrections through the gauge and Yukawa interactions are taken into account, the inflaton effective potential most likely becomes unstable. In order to avoid this problem, we need to impose stability conditions on the effective inflaton potential, which lead to not only non-trivial relations amongst the particle mass spectrum of the model, but also correlations between the inflationary predictions and the mass spectrum. For concrete discussion, we investigate the minimal B-L extension of the standard model with identification of the B-L Higgs field as the inflaton. The stability conditions for the inflaton effective potential fix the mass ratio amongst the B-L gauge boson, the right-handed neutrinos and the inflaton. This mass ratio also correlates with the inflationary predictions. In other words, if the B-L gauge boson and the right-handed neutrinos are discovered in the future, their observed mass ratio provides constraints on the inflationary predictions. (orig.)

  20. Constraining Proton Lifetime in SO(10) with Stabilized Doublet-Triplet Splitting

    Energy Technology Data Exchange (ETDEWEB)

    Babu, K.S.; /Oklahoma State U.; Pati, Jogesh C.; /SLAC; Tavartkiladze, Zurab; /Oklahoma State U. /Tbilisi, Inst. Phys.

    2011-06-28

    We present a class of realistic unified models based on supersymmetric SO(10) wherein issues related to natural doublet-triplet (DT) splitting are fully resolved. Using a minimal set of low dimensional Higgs fields which includes a single adjoint, we show that the Dimopoulos-Wilczek mechanism for DT splitting can be made stable in the presence of all higher order operators without having pseudo-Goldstone bosons and flat directions. The μ term of order TeV is found to be naturally induced. A Z_2-assisted anomalous U(1)_A gauge symmetry plays a crucial role in achieving these results. The threshold corrections to α_3(M_Z), somewhat surprisingly, are found to be controlled by only a few effective parameters. This leads to a very predictive scenario for proton decay. As a novel feature, we find an interesting correlation between the d = 6 (p → e^+ π^0) and d = 5 (p → ν̄ K^+) decay amplitudes which allows us to derive a constrained upper limit on the inverse rate of the e^+ π^0 mode. Our results show that both modes should be observed with an improvement in the current sensitivity by about a factor of five to ten.

  1. Molecular mechanics calculations of proteins. Comparison of different energy minimization strategies

    DEFF Research Database (Denmark)

    Christensen, I T; Jørgensen, Flemming Steen

    1997-01-01

    A general strategy for performing energy minimization of proteins using the SYBYL molecular modelling program has been developed. The influence of several variables including energy minimization procedure, solvation, dielectric function and dielectric constant have been investigated in order...... to develop a general method, which is capable of producing high quality protein structures. Avian pancreatic polypeptide (APP) and bovine pancreatic phospholipase A2 (BP PLA2) were selected for the calculations, because high quality X-ray structures exist and because all classes of secondary structure...... for this protein. Energy minimized structures of the trimeric PLA2 from Indian cobra (N.n.n. PLA2) were used for assessing the impact of protein-protein interactions. Based on the above mentioned criteria, it could be concluded that using the following conditions: Dielectric constant epsilon = 4 or 20; a distance...

  2. Supply curve bidding of electricity in constrained power networks

    Energy Technology Data Exchange (ETDEWEB)

    Al-Agtash, Salem Y. [Hijjawi Faculty of Engineering; Yarmouk University; Irbid 21163 (Jordan)

    2010-07-15

    This paper presents a Supply Curve Bidding (SCB) approach that complies with the notion of the Standard Market Design (SMD) in electricity markets. The approach considers the demand-side option and Locational Marginal Pricing (LMP) clearing. It iteratively alters Supply Function Equilibria (SFE) model solutions, then chooses the best bid based on market-clearing LMP and network conditions. It has been argued that SCB benefits suppliers more than fixed quantity-price bids: it provides more flexibility and a better opportunity to achieve profitable outcomes over a range of demands. In addition, SCB fits two important criteria: it simplifies evaluating electricity derivatives and captures smooth marginal cost characteristics that reflect actual production costs. The simultaneous inclusion of physical unit constraints and transmission security constraints assures a feasible solution. An IEEE 24-bus system is used to illustrate perturbations of SCB in constrained power networks within the framework of SMD. By searching in the neighborhood of SFE model solutions, suppliers can obtain their best bid offers based on market-clearing LMP and network conditions. In this case, electricity producers can derive their best offering strategy both in the power exchange and the long-term contractual markets within a profitable, yet secure, electricity market. (author)
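
    The LMP clearing idea referred to above can be illustrated with a stylized two-bus example. The numbers below are invented (this is not the paper's IEEE 24-bus setup, and the cheap generator is assumed unlimited): when the tie line binds, the buses separate and each is priced at its local marginal generator.

```python
# Stylized two-bus locational marginal pricing sketch (invented numbers).
# Bus A has a cheap generator; bus B has load and an expensive generator;
# a single tie line with a flow limit connects them.

def two_bus_lmp(load_b, cost_a, cost_b, line_limit):
    flow = min(load_b, line_limit)       # import cheap bus-A energy first
    local = load_b - flow                # remainder served at bus B
    lmp_a = cost_a                       # bus A always priced at its own unit
    lmp_b = cost_b if local > 0 else cost_a
    return flow, local, lmp_a, lmp_b

print(two_bus_lmp(100, 20, 50, 60))    # congested: prices separate, (60, 40, 20, 50)
print(two_bus_lmp(100, 20, 50, 150))   # uncongested: single price, (100, 0, 20, 20)
```

    With the line at its 60 MW limit, the marginal MWh at bus B comes from the 50 $/MWh local unit, so the two LMPs differ; lifting the limit restores a single system price.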

  3. Supply curve bidding of electricity in constrained power networks

    International Nuclear Information System (INIS)

    Al-Agtash, Salem Y.

    2010-01-01

    This paper presents a Supply Curve Bidding (SCB) approach that complies with the notion of the Standard Market Design (SMD) in electricity markets. The approach considers the demand-side option and Locational Marginal Pricing (LMP) clearing. It iteratively alters Supply Function Equilibria (SFE) model solutions, then chooses the best bid based on market-clearing LMP and network conditions. It has been argued that SCB benefits suppliers more than fixed quantity-price bids: it provides more flexibility and a better opportunity to achieve profitable outcomes over a range of demands. In addition, SCB fits two important criteria: it simplifies evaluating electricity derivatives and captures smooth marginal cost characteristics that reflect actual production costs. The simultaneous inclusion of physical unit constraints and transmission security constraints assures a feasible solution. An IEEE 24-bus system is used to illustrate perturbations of SCB in constrained power networks within the framework of SMD. By searching in the neighborhood of SFE model solutions, suppliers can obtain their best bid offers based on market-clearing LMP and network conditions. In this case, electricity producers can derive their best offering strategy both in the power exchange and the long-term contractual markets within a profitable, yet secure, electricity market. (author)

  4. Neutron Powder Diffraction and Constrained Refinement

    DEFF Research Database (Denmark)

    Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.

    1977-01-01

    The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement is...

  5. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan Erbs; Søndergaard, Hans; Ravn, Anders P.

    2013-01-01

    The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations - especially memory consumption - tend to exclude them from being used on a significant class of resource constrained embedded platforms. This paper presents a real-time Java tool chain for such platforms, built by integrating: (1) a lean virtual machine (HVM) without any external dependencies on POSIX-like libraries or other OS functionalities, (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables, and (3) ... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms.

  6. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised for the number of feed forward motifs, neuronal clustering and characteristic path-length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains the small world nature, the saturation of feed forward motifs, and the observed number of driver neurons. The distance constrained model suggests optimal long distance synaptic connections as a key feature specifying control of the network.
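
    The 'driver neurons' above are usually identified via structural controllability, where the minimum number of driver nodes of a directed network is N minus the size of a maximum matching (in the sense of Liu and co-workers' maximum-matching result). A hedged sketch; the five-node wiring diagram below is a made-up toy, not the C. elegans connectome:

```python
# Toy driver-node count for a directed network via structural controllability:
# build a bipartite graph (out-copies -> in-copies of nodes), find a maximum
# matching with augmenting paths, and take N - |matching| (at least 1).

def max_bipartite_matching(adj, n):
    """adj[u] = list of targets v for each directed edge u -> v."""
    match_to = {}                       # in-copy node -> matched out-copy node
    def try_augment(u, seen):
        for v in adj.get(u, []):
            if v in seen:
                continue
            seen.add(v)
            if v not in match_to or try_augment(match_to[v], seen):
                match_to[v] = u
                return True
        return False
    size = 0
    for u in range(n):
        if try_augment(u, set()):
            size += 1
    return size

def num_driver_nodes(adj, n):
    # unmatched nodes must be driven directly; fully matched nets need 1 driver
    return max(n - max_bipartite_matching(adj, n), 1)

# A 5-neuron toy wiring diagram: 0 -> 1 -> 2, 0 -> 3, 3 -> 4
print(num_driver_nodes({0: [1, 3], 1: [2], 3: [4]}, 5))   # → 2
```

    Here the matching {0→1, 1→2, 3→4} has size 3, so two driver nodes suffice to control the toy network.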

  7. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    Science.gov (United States)

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. 
Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and
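
    The augmented Lagrange-multiplier strategy mentioned above can be sketched on a toy problem. This is not Sculpt's force field: the quadratic "energy" and the single linear constraint below are invented for illustration of the inner minimization plus multiplier update.

```python
# Augmented-Lagrangian sketch: minimize a toy energy f(x,y) = (x-1)^2 + (y-2)^2
# subject to the equality constraint g(x,y) = x + y - 2 = 0, by gradient descent
# on L = f + lam*g + (mu/2)*g^2 with periodic multiplier updates.

def solve(mu=10.0, outer=20, inner=2000, lr=1e-3):
    x, y, lam = 0.0, 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):
            g = x + y - 2.0                  # constraint residual
            gx = 2*(x - 1) + lam + mu*g      # dL/dx
            gy = 2*(y - 2) + lam + mu*g      # dL/dy
            x -= lr*gx
            y -= lr*gy
        lam += mu*(x + y - 2.0)              # multiplier update drives g -> 0
    return x, y

x, y = solve()
print(round(x, 3), round(y, 3))   # → 0.5 1.5, the constrained minimum
```

    The constrained minimum is the projection of (1, 2) onto the line x + y = 2, i.e. (0.5, 1.5); the multiplier converges to its exact value lam = 1, so the constraint is satisfied without driving mu to infinity.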

  8. 21 CFR 888.3350 - Hip joint metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/polymer semi-constrained cemented... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3350 Hip joint metal/polymer semi-constrained cemented prosthesis. (a) Identification. A hip joint metal/polymer semi...

  9. 21 CFR 888.3120 - Ankle joint metal/polymer non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ankle joint metal/polymer non-constrained cemented... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3120 Ankle joint metal/polymer non-constrained cemented prosthesis. (a) Identification. An ankle joint metal/polymer non...

  10. Value, Cost, and Sharing: Open Issues in Constrained Clustering

    Science.gov (United States)

    Wagstaff, Kiri L.

    2006-01-01

    Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.
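
    The pairwise constraints discussed above are typically must-link and cannot-link pairs. A minimal sketch in the spirit of COP-KMeans of how an assignment step can respect them; the data, centroids and function names below are invented for illustration:

```python
# Constraint-respecting assignment step: each point joins the nearest centroid
# whose cluster does not violate its must-link / cannot-link constraints.

def violates(point, cluster, assign, must_link, cannot_link):
    for a, b in must_link:               # partners must share a cluster
        other = b if a == point else a if b == point else None
        if other is not None and other in assign and assign[other] != cluster:
            return True
    for a, b in cannot_link:             # partners must not share a cluster
        other = b if a == point else a if b == point else None
        if other is not None and assign.get(other) == cluster:
            return True
    return False

def assign_points(X, centroids, must_link, cannot_link):
    assign = {}
    for i, x in enumerate(X):
        # try clusters from nearest to farthest, skipping violating ones
        order = sorted(range(len(centroids)),
                       key=lambda c: sum((xi - ci) ** 2
                                         for xi, ci in zip(x, centroids[c])))
        for c in order:
            if not violates(i, c, assign, must_link, cannot_link):
                assign[i] = c
                break
    return assign

X = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.2, 0.1)]
centroids = [(0.0, 0.0), (5.0, 5.0)]
# a cannot-link pair forces point 3 away from the cluster of point 0
print(assign_points(X, centroids, must_link=[], cannot_link=[(0, 3)]))
```

    Without the constraint, point 3 would join cluster 0 with its near neighbours; the cannot-link pair pushes it to cluster 1 even though that centroid is farther, which is exactly the behaviour the open questions in the position paper (which constraints help, and when to propagate them) are about.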

  11. Inspection of feasible calibration conditions for UV radiometer detectors with the KI/KIO3 actinometer.

    Science.gov (United States)

    Qiang, Zhimin; Li, Wentao; Li, Mengkai; Bolton, James R; Qu, Jiuhui

    2015-01-01

    UV radiometers are widely employed for irradiance measurements, but their periodical calibrations not only induce an extra cost but also are time-consuming. In this study, the KI/KIO3 actinometer was applied to calibrate UV radiometer detectors at 254 nm with a quasi-collimated beam apparatus equipped with a low-pressure UV lamp, and feasible calibration conditions were identified. Results indicate that a washer constraining the UV light was indispensable, while the size (10 or 50 mL) of a beaker containing the actinometer solution had little influence when a proper washer was used. The absorption or reflection of UV light by the internal beaker wall led to an underestimation or overestimation of the irradiance determined by the KI/KIO3 actinometer, respectively. The proper range of the washer internal diameter could be obtained via mathematical analysis. A radiometer with a longer service time showed a greater calibration factor. To minimize the interference from the inner wall reflection of the collimating tube, calibrations should be conducted at positions far enough away from the tube bottom. This study demonstrates that after the feasible calibration conditions are identified, the KI/KIO3 actinometer can be applied readily to calibrate UV radiometer detectors at 254 nm. © 2014 The American Society of Photobiology.

  12. How peer-review constrains cognition: on the frontline in the knowledge sector

    Science.gov (United States)

    Cowley, Stephen J.

    2015-01-01

    Peer-review is neither reliable, fair, nor a valid basis for predicting ‘impact’: as quality control, peer-review is not fit for purpose. Endorsing the consensus, I offer a reframing: while a normative social process, peer-review also shapes the writing of a scientific paper. In so far as ‘cognition’ describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as ‘symbolizations’, replicable patterns that use technologically enabled activity. On this bio-cognitive view, peer-review constrains knowledge-making by writers, editors, reviewers. Authors are prompted to recursively re-aggregate symbolizations to present what are deemed acceptable knowledge claims. How, then, can recursive re-embodiment be explored? In illustration, I sketch how the paper’s own content came to be re-aggregated: agonistic review drove reformatting of argument structure, changes in rhetorical ploys and careful choice of wordings. For this reason, the paper’s knowledge-claims can be traced to human activity that occurs in distributed cognitive systems. Peer-review is on the frontline in the knowledge sector in that it delimits what can count as knowing. Its systemic nature is therefore crucial to not only discipline-centered ‘real’ science but also its ‘post-academic’ counterparts. PMID:26579064

  13. How peer review constrains cognition: on the frontline in the knowledge sector

    Directory of Open Access Journals (Sweden)

    Stephen John Cowley

    2015-11-01

    Peer-review is neither reliable, fair, nor a valid basis for predicting ‘impact’: as quality control, peer-review is not fit for purpose. Given this consensus, I propose another framing: while a normative social process, peer-review also shapes the flexible behavior called ‘writing’ a scientific paper. In so far as ‘cognition’ describes the enabling conditions for flexible behaviour, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as ‘symbolizations’, replicable patterns that use technologically enabled activity. On this bio-cognitive view, peer-review constrains knowledge-making by writers, editors, reviewers. Authors are prompted to recursively re-aggregate symbolizations to present what are deemed acceptable knowledge claims. How, then, can recursive re-embodiment be explored? In illustration, I sketch how the paper’s own content came to be re-aggregated: agonistic review drove reformatting of argument structure, changes in rhetorical ploys and careful choice of wordings. For this reason, the paper’s knowledge-claims can be traced to human activity that occurs in distributed cognitive systems. Peer-review is on the frontline in the knowledge sector in that it delimits what can count as knowing. Its systemic nature is therefore crucial to not only discipline-centered ‘real’ science but also its ‘post-academic’ counterparts.

  14. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  15. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...

  16. A hybrid electromagnetism-like algorithm for a multi-mode resource-constrained project scheduling problem

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Sadeghi

    2013-08-01

    In this paper, two different sub-problems are considered to solve a resource constrained project scheduling problem (RCPSP), namely (i) assignment of modes to tasks and (ii) scheduling of these tasks in order to minimize the makespan of the project. The modified electromagnetism-like algorithm deals with the first problem to create an assignment of modes to activities. This list is used to generate a project schedule. When a new assignment is made, it is necessary to fix all mode dependent requirements of the project activities and to generate a random schedule with the serial SGS method. A local search will optimize the sequence of the activities. Also in this paper, a new penalty function has been proposed for solutions which are infeasible with respect to non-renewable resources. Performance of the proposed algorithm has been compared with the best algorithms published so far on the basis of CPU-time and number-of-generated-schedules stopping criteria. Reported results indicate excellent performance of the algorithm.
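
    The serial SGS (schedule generation scheme) step referred to above can be sketched as follows. The activity data are invented, and a single renewable resource stands in for the full multi-mode setting: activities are taken in a precedence-feasible order and started at the earliest time where all predecessors have finished and the resource profile stays within capacity.

```python
# Serial schedule generation scheme sketch for a toy RCPSP instance with one
# renewable resource: greedy earliest feasible start per activity, in order.

def serial_sgs(order, dur, pred, req, cap, horizon=100):
    usage = [0] * horizon              # resource usage per time unit
    start, finish = {}, {}
    for a in order:
        # earliest start allowed by precedence
        est = max((finish[p] for p in pred.get(a, [])), default=0)
        t = est
        # push the start right until the resource profile admits the activity
        while any(usage[u] + req[a] > cap for u in range(t, t + dur[a])):
            t += 1
        for u in range(t, t + dur[a]):
            usage[u] += req[a]
        start[a], finish[a] = t, t + dur[a]
    return start, max(finish.values())

dur  = {1: 3, 2: 2, 3: 2}
pred = {3: [1]}                        # activity 3 waits for activity 1
req  = {1: 2, 2: 1, 3: 1}              # one renewable resource, capacity 3
start, makespan = serial_sgs([1, 2, 3], dur, pred, req, cap=3)
print(start, makespan)                 # → {1: 0, 2: 0, 3: 3} 5
```

    Activities 1 and 2 run in parallel (total demand 3 = capacity), while activity 3 is delayed by precedence, giving a makespan of 5; a metaheuristic such as the paper's electromagnetism-like algorithm then searches over activity lists and mode assignments fed into this decoder.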

  17. On-shell constrained M2 variables with applications to mass measurements and topology disambiguation

    Science.gov (United States)

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun

    2014-08-01

    We consider a class of on-shell constrained mass variables that are 3+1 dimensional generalizations of the Cambridge MT2 variable and that automatically incorporate various assumptions about the underlying event topology. The presence of additional on-shell constraints causes their kinematic distributions to exhibit sharper endpoints than the usual MT2 distribution. We study the mathematical properties of these new variables, e.g., the uniqueness of the solution selected by the minimization over the invisible particle 4-momenta. We then use this solution to reconstruct the masses of various particles along the decay chain. We propose several tests for validating the assumed event topology in missing energy events from new physics. The tests are able to determine: 1) whether the decays in the event are two-body or three-body, 2) if the decay is two-body, whether the intermediate resonances in the two decay chains are the same, and 3) the exact sequence in which the visible particles are emitted from each decay chain.
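
    The minimization over invisible momenta that underlies these variables can be illustrated on the plain MT2 case. The event below is a made-up toy and a coarse grid scan stands in for a proper minimizer; the paper's constrained M2 variables add further on-shell conditions to this same optimization.

```python
# Toy MT2 sketch: minimize, over all splittings q1 + q2 of the missing
# transverse momentum, the larger of the two transverse masses.
import math

def mt_sq(pv, qi, mv=0.0, mi=0.0):
    """Squared transverse mass of a visible (pv) / invisible (qi) pair."""
    ev = math.sqrt(mv*mv + pv[0]**2 + pv[1]**2)
    ei = math.sqrt(mi*mi + qi[0]**2 + qi[1]**2)
    return mv*mv + mi*mi + 2.0*(ev*ei - pv[0]*qi[0] - pv[1]*qi[1])

def mt2(pv1, pv2, pmiss, step=1.0, span=60):
    best = float("inf")
    for i in range(-span, span + 1):
        for j in range(-span, span + 1):
            q1 = (i*step, j*step)
            q2 = (pmiss[0] - q1[0], pmiss[1] - q1[1])
            best = min(best, max(mt_sq(pv1, q1), mt_sq(pv2, q2)))
    return math.sqrt(max(best, 0.0))   # guard tiny negative round-off

# Back-to-back massless visibles with no missing momentum: the optimal
# split makes both transverse masses vanish, so MT2 collapses to 0.
print(mt2((30.0, 0.0), (-30.0, 0.0), (0.0, 0.0)))   # → 0.0
```

    With massive invisibles or asymmetric events the minimum is nonzero and its distribution develops the endpoint structure the paper sharpens by imposing additional on-shell constraints on the scanned momenta.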

  18. Tracking error constrained robust adaptive neural prescribed performance control for flexible hypersonic flight vehicle

    Directory of Open Access Journals (Sweden)

    Zhonghua Wu

    2017-02-01

    A robust adaptive neural control scheme based on a back-stepping technique is developed for the longitudinal dynamics of a flexible hypersonic flight vehicle, which is able to ensure that the state tracking error remains confined within the prescribed bounds, in spite of the existing model uncertainties and actuator constraints. Minimal learning parameter technique-based neural networks are used to estimate the model uncertainties; thus, the number of online-updated parameters is greatly reduced, and prior information on the aerodynamic parameters is dispensable. With the utilization of an assistant compensation system, the problem of actuator constraint is overcome. By combining the prescribed performance function and sliding mode differentiator into the neural back-stepping control design procedure, a composite state tracking error constrained adaptive neural control approach is presented, and a new type of adaptive law is constructed. As compared with other adaptive neural control designs for hypersonic flight vehicles, the proposed composite control scheme exhibits not only a low-computation property but also strong robustness. Finally, two comparative simulations are performed to demonstrate the robustness of this neural prescribed performance controller.

  19. Maintaining reduced noise levels in a resource-constrained neonatal intensive care unit by operant conditioning.

    Science.gov (United States)

    Ramesh, A; Denzil, S B; Linda, R; Josephine, P K; Nagapoornima, M; Suman Rao, P N; Swarna Rekha, A

    2013-03-01

    Objective: To evaluate the efficacy of operant conditioning in sustaining reduced noise levels in the neonatal intensive care unit (NICU). Design: Quasi-experimental study on quality of care. Setting: Level III NICU of a teaching hospital in south India. Participants: 26 staff employed in the NICU (7 doctors, 13 nursing staff and 6 nursing assistants). Intervention: Operant conditioning of staff activity for 6 months; this method involves positive and negative reinforcement to condition the staff to modify noise-generating activities. Outcome measures: Comparison of A-weighted noise levels in decibels [dB(A)], which account for noise audible to human ears, before conditioning with levels at 18 and 24 months after conditioning. Results: Operant conditioning for 6 months sustains the reduced noise levels to within 62 dB in the ventilator room (95% CI: 60.4 - 62.2) and isolation room (95% CI: 55.8 - 61.5). In the preterm room, noise can be maintained within 52 dB (95% CI: 50.8 - 52.6). This effect is statistically significant in all the rooms at 18 months (P = 0.001). At 24 months post conditioning there is a significant rebound of noise levels by 8.6, 6.7 and 9.9 dB in the ventilator, isolation and preterm rooms, respectively (P = 0.001). Conclusion: Operant conditioning for 6 months was effective in sustaining reduced noise levels. At 18 months post conditioning, the noise levels were maintained within 62 dB(A), 60 dB(A) and 52 dB(A) in the ventilator, isolation and pre-term rooms, respectively. Conditioning needs to be repeated at 12 months in the ventilator room and at 18 months in the other rooms.

  20. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 data base of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  1. 21 CFR 888.3510 - Knee joint femorotibial metal/polymer constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/polymer constrained... Knee joint femorotibial metal/polymer constrained cemented prosthesis. (a) Identification. A knee joint... of a knee joint. The device limits translation or rotation in one or more planes and has components...

  2. 21 CFR 888.3100 - Ankle joint metal/composite semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ankle joint metal/composite semi-constrained... Ankle joint metal/composite semi-constrained cemented prosthesis. (a) Identification. An ankle joint... ankle joint. The device limits translation and rotation: in one or more planes via the geometry of its...

  3. Optimization of the conditions for the precipitation of thorium oxalate. II. Minimization of the product losses

    International Nuclear Information System (INIS)

    Pazukhin, E.M.; Smirnova, E.A.; Krivokhatskii, A.S.; Pazukhina, Yu.L.; Kiselev, P.P.

    1987-01-01

    The precipitation of thorium as a poorly soluble oxalate was investigated. An equation relating the concentrations of the metal and nitric acid in the initial solution and the amount of precipitant required to minimize the product losses was derived. A graphical solution of the equation is presented for the case where the precipitant is oxalic acid at a concentration of 0.78 M

  4. Minimal Marking: A Success Story

    Science.gov (United States)

    McNeilly, Anne

    2014-01-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

  5. A Lean Framework for Production Control in Complex and Constrained Construction Projects (PC4P)

    DEFF Research Database (Denmark)

    Lindhard, Søren Munch; Wandahl, Søren

    2014-01-01

    Production conditions in construction are different than in the manufacturing industry. First of all, construction is rooted in place and conducted as on-site manufacturing. Secondly, every construction project is unique and a one-of-a-kind production, managed by a temporary organization consisting...... and constrained construction project. Even though several tools have attempted to add structure and to create order to the complex, dynamic, and uncertain context in which construction is conducted, none has yet fully succeeded in providing a robust production control system. With outset in the lean tool Last...

  6. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal

  7. Network-constrained Cournot models of liberalized electricity markets. The devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Department of Applied Economics, Sidgwick Ave., University of Cambridge, CB3 9DE (United Kingdom); Barquin, Julian; Vazquez, Miguel [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, c/Santa Cruz de Marcenado 26-28015 Madrid (Spain); Boots, Maroeska G. [Energy Research Centre of the Netherlands ECN, Badhuisweg 3, 1031 CM Amsterdam (Netherlands); Ehrenmann, Andreas [Judge Institute of Management, University of Cambridge, Trumpington Street, CB2 1AG (United Kingdom); Hobbs, Benjamin F. [Department of Geography and Environmental Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States); Rijkers, Fieke A.M. [Contributed while at ECN, now at Nederlandse Mededingingsautoriteit (NMa), Dte, Postbus 16326, 2500 BH Den Haag (Netherlands)

    2005-05-15

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model.
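
    The Cournot equilibrium concept underlying these models can be illustrated without a network. A minimal best-response sketch with two firms at a single node and invented demand and cost parameters; the papers' models add transmission constraints and conjectures about transmission prices on top of this logic.

```python
# Two-firm Cournot best-response iteration with linear inverse demand
# P = a - b*(q1 + q2) and constant marginal costs c1, c2 (invented numbers).

def cournot(a=100.0, b=1.0, c1=10.0, c2=40.0, iters=200):
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b*q2) / (2*b))   # firm 1 best response to q2
        q2 = max(0.0, (a - c2 - b*q1) / (2*b))   # firm 2 best response to q1
    price = a - b*(q1 + q2)
    return q1, q2, price

q1, q2, p = cournot()
print(round(q1, 2), round(q2, 2), round(p, 2))   # converges to 40.0 10.0 50.0
```

    The iteration converges to the analytic equilibrium q_i* = (a - 2c_i + c_j)/(3b), here (40, 10) with a price of 50; the sensitivity the paper documents arises because adding network constraints changes each firm's conjectured best-response problem.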

  8. Microbial decomposers not constrained by climate history along a Mediterranean climate gradient in southern California.

    Science.gov (United States)

    Baker, Nameer R; Khalili, Banafshe; Martiny, Jennifer B H; Allison, Steven D

    2018-06-01

    Microbial decomposers mediate the return of CO2 to the atmosphere by producing extracellular enzymes to degrade complex plant polymers, making plant carbon available for metabolism. Determining if and how these decomposer communities are constrained in their ability to degrade plant litter is necessary for predicting how carbon cycling will be affected by future climate change. We analyzed mass loss, litter chemistry, microbial biomass, extracellular enzyme activities, and enzyme temperature sensitivities in grassland litter transplanted along a Mediterranean climate gradient in southern California. Microbial community composition was manipulated by caging litter within bags made of nylon membrane that prevent microbial immigration. To test whether grassland microbes were constrained by climate history, half of the bags were inoculated with local microbial communities native to each gradient site. We determined that temperature and precipitation likely interact to limit microbial decomposition in the extreme sites along our gradient. Despite their unique climate history, grassland microbial communities were not restricted in their ability to decompose litter under different climate conditions across the gradient, although microbial communities across our gradient may be restricted in their ability to degrade different types of litter. We did find some evidence that local microbial communities were optimized based on climate, but local microbial taxa that proliferated after inoculation into litterbags did not enhance litter decomposition. Our results suggest that microbial community composition does not constrain C-cycling rates under climate change in our system, but optimization to particular resource environments may act as more general constraints on microbial communities. © 2018 by the Ecological Society of America.

  9. Chance constrained uncertain classification via robust optimization

    NARCIS (Netherlands)

    Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.

    2011-01-01

    This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out

  10. 21 CFR 888.3358 - Hip joint metal/polymer/metal semi-constrained porous-coated uncemented prosthesis.

    Science.gov (United States)

    2010-04-01

    § 888.3358 Hip joint metal/polymer/metal semi-constrained porous-coated uncemented prosthesis. (a) Identification. A hip joint metal/polymer/metal semi-constrained porous-coated uncemented prosthesis is a device...

  11. Appearance of a Minimal Length in e⁺e⁻ Annihilation

    CERN Document Server

    Dymnikova, Irina; Ulbricht, Jürgen

    2014-01-01

    Experimental data reveal with a 5σ significance the existence of a characteristic minimal length l_e = 1.57 × 10⁻¹⁷ cm at the scale E = 1.253 TeV in the annihilation reaction e⁺e⁻ → γγ(γ). Nonlinear electrodynamics coupled to gravity and satisfying the weak energy condition predicts, for an arbitrary gauge-invariant Lagrangian, the existence of a spinning charged electromagnetic soliton that is asymptotically Kerr-Newman for a distant observer, with the gyromagnetic ratio g = 2. Its internal structure includes a rotating equatorial disk of de Sitter vacuum which has the properties of a perfect conductor and an ideal diamagnetic, displays superconducting behavior, supplies the particle with a finite positive electromagnetic mass related to the breaking of space-time symmetry, and gives some idea about the physical origin of a minimal length in annihilation.

  12. Operating envelope to minimize probability of fractures in Zircaloy-2 pressure tubes

    International Nuclear Information System (INIS)

    Azer, N.; Wong, H.

    1994-01-01

    The failure mode of primary concern with CANDU pressure tubes is fast fracture of a through-wall axial crack resulting from delayed hydride crack growth. The application of operating envelopes to minimize the probability of fracture in Zircaloy-2 pressure tubes is demonstrated, based on Zr-2.5%Nb pressure tube experience, and the technical basis for the development of the operating envelopes is summarized. An operating envelope represents an area on the pressure-versus-temperature diagram within which the reactor may be operated without undue concern for pressure tube fracture. The envelopes presented address both normal operating conditions and the condition where a pressure tube leak has been detected. The examples in this paper are prepared to illustrate the methodology and are not intended to be directly applicable to the operation of any specific reactor. The application of operating envelopes to minimize the probability of fracture in 80 mm diameter Zircaloy-2 pressure tubes is discussed, considering both normal operating and leaking pressure tube conditions. 3 refs., 4 figs

  13. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    Science.gov (United States)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

    A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements using only one measurement technique or phase type. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the mechanism.

  14. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Søndergaard, Hans; Ravn, Anders Peter

    2014-01-01

    The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations – especially memory consumption – tend to exclude them from being used on a significant class of resource constrained embedded platforms. The contribution addresses this by integrating the following: (1) a lean virtual machine without any external dependencies on POSIX-like libraries or other OS functionalities; (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables; and (3) ... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms.

  15. 21 CFR 888.3340 - Hip joint metal/composite semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    § 888.3340 Hip joint metal/composite semi-constrained cemented prosthesis. (a) Identification. A hip joint metal... hip joint. The device limits translation and rotation in one or more planes via the geometry of its...

  16. Waste minimization assessment procedure

    International Nuclear Information System (INIS)

    Kellythorne, L.L.

    1993-01-01

    Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented, following a format similar to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used, and which were most valuable, in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Waste streams were then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal as the last viable alternative.

  17. Westinghouse Hanford Company waste minimization actions

    International Nuclear Information System (INIS)

    Greenhalgh, W.O.

    1988-09-01

    Companies that generate hazardous waste materials are now required by national regulations to establish a waste minimization program. Accordingly, in FY88 the Westinghouse Hanford Company formed a waste minimization team organization. The purpose of the team is to assist the company in its efforts to minimize the generation of waste, train personnel on waste minimization techniques, document successful waste minimization efforts, track dollar savings realized, and publicize and administer an employee incentive program. A number of significant actions have been successful, resulting in savings of materials and dollars. The team itself has been successful in establishing some worthwhile minimization projects. This document briefly describes the waste minimization actions that have been successful to date. 2 refs., 26 figs., 3 tabs

  18. Early failure mechanisms of constrained tripolar acetabular sockets used in revision total hip arthroplasty.

    Science.gov (United States)

    Cooke, Christopher C; Hozack, William; Lavernia, Carlos; Sharkey, Peter; Shastri, Shani; Rothman, Richard H

    2003-10-01

    Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), Girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), or cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases: two cementless shells became loose, and in 1 patient the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method of treatment during revision total hip arthroplasty but should be used in select cases only.

  19. Investigation on Capability of Reaming Process using Minimal Quantity Lubrication

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Tosello, Guido; Piska, Miroslav

    2008-01-01

    An investigation on reaming using minimal quantity lubrication (MQL) was carried out with the scope of documenting process capability using a metrological approach. Reaming tests were carried out on austenitic stainless steel, using HSS reamers with different cutting data and lubrication conditions ... depth of cut was employed. The suitability of MQL for reaming was proven under the investigated process conditions, concerning both the quality of the machined holes, in terms of geometrical characteristics and surface finishing, and the process quality, with respect to reaming torque and thrust ...

  20. Groundwater availability as constrained by hydrogeology and environmental flows.

    Science.gov (United States)

    Watson, Katelyn A; Mayer, Alex S; Reeves, Howard W

    2014-01-01

    Groundwater pumping from aquifers in hydraulic connection with nearby streams has the potential to cause adverse impacts by decreasing flows to levels below those necessary to maintain aquatic ecosystems. The recent passage of the Great Lakes-St. Lawrence River Basin Water Resources Compact has brought attention to this issue in the Great Lakes region. In particular, the legislation requires the Great Lakes states to enact measures for limiting water withdrawals that can cause adverse ecosystem impacts. This study explores how both hydrogeologic and environmental flow limitations may constrain groundwater availability in the Great Lakes Basin. A methodology for calculating maximum allowable pumping rates is presented. Groundwater availability across the basin may be constrained by a combination of hydrogeologic yield and environmental flow limitations varying over both local and regional scales. The results are sensitive to factors such as pumping time, regional and local hydrogeology, streambed conductance, and streamflow depletion limits. Understanding how these restrictions constrain groundwater usage and which hydrogeologic characteristics and spatial variables have the most influence on potential streamflow depletions has important water resources policy and management implications. © 2013, National Ground Water Association.
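A maximum allowable pumping rate under a streamflow-depletion limit can be sketched with the classical Glover-Balmer analytical depletion solution. This is an illustrative stand-in, not the paper's actual methodology (which also couples hydrogeologic yield limits and spatially distributed modeling); the parameter values are made up.

```python
# Maximum allowable pumping rate constrained by a streamflow-depletion limit,
# using the Glover-Balmer analytical solution for an idealized stream-aquifer
# system. Illustrative sketch only; parameters are assumptions.
import math

def depletion_fraction(d, T, S, t):
    """Fraction of the pumping rate drawn from the stream after time t.
    d: well-stream distance [m], T: transmissivity [m^2/d],
    S: storativity [-], t: pumping time [d]."""
    return math.erfc(math.sqrt(d * d * S / (4.0 * T * t)))

def max_pumping_rate(depletion_limit, d, T, S, t):
    # The largest pumping rate whose induced depletion stays at the limit.
    return depletion_limit / depletion_fraction(d, T, S, t)

# A well 500 m from the stream; allow at most 1000 m^3/d of depletion
# after one year of continuous pumping.
q = max_pumping_rate(depletion_limit=1000.0, d=500.0, T=500.0, S=0.2, t=365.0)
print(round(q, 1))
```

The depletion fraction approaches 1 for long pumping times or wells close to the stream, so the allowable rate collapses toward the depletion limit itself, mirroring how environmental flow limits dominate hydrogeologic yield near streams.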

  1. Quasicanonical structure of optimal control in constrained discrete systems

    Science.gov (United States)

    Sieniutycz, S.

    2003-06-01

    This paper considers discrete processes governed by difference rather than differential equations for the state transformation. The basic question asked is whether and when Hamiltonian canonical structures are possible in optimal discrete systems. Considering constrained discrete control, general optimization algorithms are derived that constitute suitable theoretical and computational tools when evaluating extremum properties of constrained physical models. The mathematical basis of the general theory is the Bellman method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. Two relatively unknown, powerful optimization algorithms are obtained: an unconventional discrete formalism of optimization based on a Hamiltonian for multistage systems with unconstrained intervals of holdup time, and the time-interval-constrained extension of that formalism. These results are general; namely, one arrives at the discrete canonical Hamilton equations, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory along with all basic results of variational calculus. A vast spectrum of applications of the theory is briefly discussed.
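The backward Bellman recursion for a discrete process governed by a difference equation can be sketched as follows. The state grid, controls, costs, and horizon below are toy assumptions for illustration, not the paper's model.

```python
# Backward Bellman (dynamic programming) recursion for a multistage process
# x_{k+1} = f(x_k, u_k), minimizing summed stage costs plus a terminal cost.
# Toy instance; all data are illustrative assumptions.
def solve_dp(states, controls, transition, stage_cost, terminal_cost, horizon):
    # V[x] holds the optimal cost-to-go from state x.
    V = {x: terminal_cost(x) for x in states}
    policy = []
    for k in reversed(range(horizon)):
        V_new, pi_k = {}, {}
        for x in states:
            best = None
            for u in controls:
                x_next = transition(x, u)
                if x_next not in V:          # transition leaves the grid: infeasible
                    continue
                cost = stage_cost(x, u) + V[x_next]
                if best is None or cost < best[0]:
                    best = (cost, u)
            V_new[x], pi_k[x] = best
        V, policy = V_new, [pi_k] + policy
    return V, policy

states = range(6)                 # discrete state grid
controls = (-1, 0, 1)             # allowed control moves
V, policy = solve_dp(
    states, controls,
    transition=lambda x, u: x + u,            # difference equation
    stage_cost=lambda x, u: x * x + abs(u),   # penalize state and control effort
    terminal_cost=lambda x: 0,
    horizon=4,
)
print(V[3])   # optimal cost-to-go from state 3 → 17 for this toy instance
```

The recursion is exactly the DP structure the paper builds on; the CB stage criterion generalizes it by letting the terminal state vary, which the fixed `terminal_cost` here does not capture.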

  2. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...

  3. q-Deformed KP Hierarchy and q-Deformed Constrained KP Hierarchy

    OpenAIRE

    He, Jingsong; Li, Yinghua; Cheng, Yi

    2006-01-01

    Using the determinant representation of the gauge transformation operator, we have shown that the general form of the $\tau$ function of the $q$-KP hierarchy is a $q$-deformed generalized Wronskian, which includes the $q$-deformed Wronskian as a special case. On the basis of these, we study the $q$-deformed constrained KP ($q$-cKP) hierarchy, i.e. $l$-constraints of the $q$-KP hierarchy. Similar to the ordinary constrained KP (cKP) hierarchy, a large class of solutions of the $q$-cKP hierarchy can be represent...

  4. Minimal but non-minimal inflation and electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Marzola, Luca [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu, Ravila 14c, 50411 Tartu (Estonia)]; Racioppi, Antonio [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia)]

    2016-10-07

    We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet, which plays the role of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10⁻³, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  5. Solving Multi-Resource Constrained Project Scheduling Problem using Ant Colony Optimization

    Directory of Open Access Journals (Sweden)

    Hsiang-Hsi Huang

    2015-01-01

    This paper applies Ant Colony Optimization (ACO) to develop a resource-constrained scheduling model that achieves optimal resource allocation and the shortest project completion time under resource constraints and activity precedence requirements. Resource leveling is also discussed and has to be achieved together with the resource allocation optimization in this research. Test cases and examples adopted from the international test bank were studied to verify the effectiveness of the proposed model. The results showed that the solutions of different cases all perform well within a reasonable time and can be obtained through the ACO algorithm under the same constrained conditions. A program was written for the proposed model that automatically produces the project resource requirement figure once the project duration is solved.
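The feasibility core of such a scheduler, which each ant's activity ordering would be passed through, can be sketched as a serial schedule-generation scheme: place activities at their earliest precedence- and resource-feasible start. The toy project data below are illustrative assumptions; the ACO pheromone machinery is not reproduced.

```python
# Serial schedule-generation scheme for one renewable resource: schedule
# activities in a given priority order at the earliest start satisfying
# precedence and capacity constraints. Toy data; illustrative sketch only.
def serial_sgs(order, duration, demand, preds, capacity):
    usage = {}             # time step -> resource units in use
    finish = {}            # activity -> finish time
    for act in order:
        # earliest start respecting precedence
        est = max((finish[p] for p in preds[act]), default=0)
        t = est
        while True:
            # check resource availability over the whole interval [t, t + dur)
            if all(usage.get(s, 0) + demand[act] <= capacity
                   for s in range(t, t + duration[act])):
                break
            t += 1
        for s in range(t, t + duration[act]):
            usage[s] = usage.get(s, 0) + demand[act]
        finish[act] = t + duration[act]
    return max(finish.values()), finish

duration = {"A": 3, "B": 2, "C": 2, "D": 1}
demand   = {"A": 2, "B": 2, "C": 1, "D": 2}
preds    = {"A": [], "B": [], "C": ["A"], "D": ["B"]}
makespan, _ = serial_sgs(["A", "B", "C", "D"], duration, demand, preds, capacity=3)
print(makespan)   # → 6
```

In an ACO, ants would sample different `order` lists biased by pheromone trails, and the pheromones of orderings with short makespans would be reinforced.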

  6. Constraining estimates of methane emissions from Arctic permafrost regions with CARVE

    Science.gov (United States)

    Chang, R. Y.; Karion, A.; Sweeney, C.; Henderson, J.; Mountain, M.; Eluszkiewicz, J.; Luus, K. A.; Lin, J. C.; Dinardo, S.; Miller, C. E.; Wofsy, S. C.

    2013-12-01

    Permafrost in the Arctic contains large carbon pools that are currently non-labile, but can be released to the atmosphere as polar regions warm. In order to predict future climate scenarios, we need to understand the emissions of these greenhouse gases under varying environmental conditions. This study presents in-situ measurements of methane made on board an aircraft during the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE), which sampled over the permafrost regions of Alaska. Using measurements from May to September 2012, seasonal emission rate estimates of methane from tundra are constrained using the Stochastic Time-Inverted Lagrangian Transport model, a Lagrangian particle dispersion model driven by custom polar-WRF fields. Preliminary results suggest that methane emission rates have not greatly increased since the Arctic Boundary Layer Experiment conducted in southwest Alaska in 1988.

  7. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)

    2012-10-15

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models the rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single layer finite element is adopted to model a three layer active constrained layer damped rotating plate. Unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self sensing sensor and the actuator. A proportional control strategy is then implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  8. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    International Nuclear Information System (INIS)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian

    2012-01-01

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models the rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single layer finite element is adopted to model a three layer active constrained layer damped rotating plate. Unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self sensing sensor and the actuator. A proportional control strategy is then implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  9. Chance-Constrained Guidance With Non-Convex Constraints

    Science.gov (United States)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of
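The deterministic tightening behind a single Gaussian chance constraint can be sketched as follows: requiring P(x ≤ b) ≥ 1 − ε for x ~ N(μ, σ²) is equivalent to μ ≤ b − σΦ⁻¹(1 − ε). This is a standard textbook reduction, shown here for illustration; the decomposition and branch-and-bound machinery of the article are not reproduced.

```python
# Converting one Gaussian chance constraint into a deterministic bound:
# P(x <= b) >= 1 - eps  with  x ~ N(mu, sigma^2)  holds iff
# mu <= b - sigma * Phi^{-1}(1 - eps). Illustrative sketch only.
from statistics import NormalDist

def tightened_bound(b, sigma, eps):
    """Largest admissible mean such that the chance constraint holds."""
    return b - sigma * NormalDist().inv_cdf(1 - eps)

b, sigma, eps = 10.0, 2.0, 0.05
mu_max = tightened_bound(b, sigma, eps)
# Verify by evaluating the chance constraint at the tightened mean:
prob = NormalDist(mu=mu_max, sigma=sigma).cdf(b)
print(round(mu_max, 3), round(prob, 3))   # prob recovers ≈ 0.95
```

Applying one such tightening per state constraint, with the risk budget ε split across them, yields exactly the kind of decomposed, conservatively sufficient constraint set the abstract describes.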

  10. Power Loss Minimization for Transformers Connected in Parallel with Taps Based on Power Chargeability Balance

    Directory of Open Access Journals (Sweden)

    Álvaro Jaramillo-Duque

    2018-02-01

    In this paper, a model and solution approach for minimizing internal power losses in Transformers Connected in Parallel (TCP) with tap-changers is proposed. The model is based on power chargeability balance and seeks to keep the load voltage within an admissible range. To achieve this, tap positions are adjusted in such a way that all TCP are set to similar or equal power chargeability. The main contribution of this paper is the inclusion of several construction features (rated voltage, rated power, voltage ratio, short-circuit impedance and tap steps) in the minimization of power losses in TCP that are not included in previous works. A Genetic Algorithm (GA) is used to solve the proposed model, which is a system of nonlinear equations with discrete decision variables. The GA scans different sets of tap positions with the aim of balancing the power supplied by each transformer to the load. For this purpose, a fitness function is used to minimize two mismatches: the first between the power chargeability of each transformer and a desired chargeability, and the second between the nominal load voltage and the load voltage obtained by changing the tap positions. The proposed method is generalized for any given number of TCP and was implemented for three TCP, demonstrating that power losses are minimized and the load voltage remains within an admissible range.
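The two-term fitness used to rank candidate tap settings can be sketched as below. The function, weights, and per-unit data are illustrative assumptions; in the actual method the loadings and load voltage would come from a power-flow evaluation of each tap combination.

```python
# Two-term fitness for ranking tap settings: chargeability mismatch plus
# load-voltage mismatch. All data and weights are illustrative assumptions.
def fitness(loading, rating, v_load, v_nominal, target_chargeability,
            w1=1.0, w2=1.0):
    # chargeability of each transformer = supplied power / rated power
    charge = [s / r for s, r in zip(loading, rating)]
    term1 = sum((c - target_chargeability) ** 2 for c in charge)
    term2 = (v_load - v_nominal) ** 2
    return w1 * term1 + w2 * term2

# Candidate A balances the two units better than candidate B (per-unit data).
fa = fitness(loading=[0.80, 0.82], rating=[1.0, 1.0], v_load=0.99,
             v_nominal=1.0, target_chargeability=0.81)
fb = fitness(loading=[0.60, 1.00], rating=[1.0, 1.0], v_load=0.97,
             v_nominal=1.0, target_chargeability=0.81)
print(fa < fb)   # the balanced candidate scores lower (better)
```

A GA would evaluate this fitness for each chromosome of discrete tap positions and evolve toward settings where every transformer carries a similar share of the load while the load voltage stays near nominal.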

  11. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    Science.gov (United States)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
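The neighbor constraint on patch time steps can be sketched in one dimension: each patch first takes its locally constrained CFL step, then steps are clamped so no patch runs more than a fixed ratio faster than any neighbor. This is a toy illustration, not the authors' code; the step-ratio limit and data are assumptions.

```python
# Local CFL steps per patch, then an iterative clamp so neighboring patches
# differ by at most a fixed step ratio. Illustrative 1D sketch with assumed
# signal speeds and a ratio of 2 (as in binary local time stepping).
def local_steps(dx, speeds, cfl=0.5):
    # locally constrained step: dt = C * dx / max signal speed in the patch
    return [cfl * dx / s for s in speeds]

def enforce_neighbor_limit(dts, ratio=2.0, max_iter=100):
    dts = list(dts)
    for _ in range(max_iter):
        changed = False
        for i in range(len(dts)):
            for j in (i - 1, i + 1):
                if 0 <= j < len(dts) and dts[i] > ratio * dts[j]:
                    dts[i] = ratio * dts[j]   # clamp to the neighbor's step
                    changed = True
        if not changed:                        # fixed point reached
            break
    return dts

speeds = [1.0, 1.0, 8.0, 1.0, 1.0]        # a fast region in the middle
dts = local_steps(dx=1.0, speeds=speeds)   # [0.5, 0.5, 0.0625, 0.5, 0.5]
print(enforce_neighbor_limit(dts))         # → [0.25, 0.125, 0.0625, 0.125, 0.25]
```

The clamp propagates outward from the fast patch, so the small step "leaks" into neighbors at a bounded rate; this is the discrete analogue of keeping the numerical domain of dependence wide enough for waves crossing patch boundaries.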

  12. The cost-constrained traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
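The coupled selection-and-sequencing structure can be made concrete with a brute-force solver for a tiny instance: choose a subset of cities and an order that maximizes collected value without the tour cost exceeding the budget. The distance matrix, values, and budget are made-up toy data; the thesis's branch-and-bound and knapsack bounds are not reproduced.

```python
# Brute-force CCTSP on a toy instance: enumerate every subset of cities
# (selection) and every ordering of each subset (sequencing), keeping the
# best value whose tour cost fits the budget. Illustrative data only.
from itertools import combinations, permutations

def cctsp_brute(dist, value, budget, depot=0):
    cities = [c for c in range(len(value)) if c != depot]
    best = (0, ())
    for r in range(len(cities) + 1):
        for subset in combinations(cities, r):
            for order in permutations(subset):
                tour = (depot,) + order + (depot,)
                cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
                val = sum(value[c] for c in order)
                if cost <= budget and val > best[0]:
                    best = (val, tour)
    return best

dist = [[0, 4, 5, 7],
        [4, 0, 3, 6],
        [5, 3, 0, 2],
        [7, 6, 2, 0]]
value = [0, 6, 5, 9]
print(cctsp_brute(dist, value, budget=14))   # → (14, (0, 2, 3, 0))
```

Note how the optimum skips the valuable but isolated selection {1, 3} (its cheapest tour costs 17) in favor of {2, 3}, whose cities are near each other: exactly the "neighborhood" effect the abstract describes.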

  13. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization, and discuss in particular its application to airborne electromagnetic data. Airborne surveys produce extremely large datasets, traditionally inverted by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes ... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user.

  14. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  15. Composite Differential Evolution with Modified Oracle Penalty Method for Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Minggang Dong

    2014-01-01

    Motivated by recent advancements in differential evolution and constraint-handling methods, this paper presents a novel modified oracle penalty function-based composite differential evolution (MOCoDE) for constrained optimization problems (COPs). More specifically, the original oracle penalty function approach is modified so as to satisfy the optimization criterion of COPs; the modified oracle penalty function is then incorporated into composite DE. Furthermore, in order to solve more complex COPs with discrete, integer, or binary variables, a discrete variable handling technique is introduced into MOCoDE to solve complex COPs with mixed variables. The method is assessed on eleven constrained optimization benchmark functions and seven well-studied engineering problems from real life. Experimental results demonstrate that MOCoDE achieves competitive performance with respect to some other state-of-the-art approaches in constrained optimization evolutionary algorithms. Moreover, the strengths of the proposed method include few parameters and its ease of implementation, rendering it applicable to real-life problems. Therefore, MOCoDE can be an efficient alternative for solving constrained optimization problems.
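The coupling of differential evolution with a penalty function can be sketched minimally as follows. For simplicity this uses a plain static quadratic penalty rather than the paper's modified oracle penalty, and a simple DE/rand/1 scheme rather than composite trial-vector strategies; the test problem, bounds, and parameter values are illustrative assumptions.

```python
# Minimal differential evolution (DE/rand/1/bin-style) with a static
# quadratic penalty for the constrained toy problem:
#   minimize x0^2 + x1^2  subject to  x0 + x1 >= 1.
# Illustrative sketch only; not the paper's MOCoDE algorithm.
import random

def penalized(x):
    violation = max(0.0, 1.0 - (x[0] + x[1]))   # g(x) = 1 - x0 - x1 <= 0
    return x[0] ** 2 + x[1] ** 2 + 1e3 * violation ** 2

def de(obj, bounds, np_=20, f=0.7, cr=0.9, gens=300, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(np_)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < cr:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                    lo, hi = bounds[d]
                    v = min(max(v, lo), hi)     # keep trial inside the box
                else:
                    v = pop[i][d]               # inherit from the target
                trial.append(v)
            tf = obj(trial)
            if tf <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]

x, fx = de(penalized, bounds=[(-2.0, 2.0)] * 2)
print([round(v, 2) for v in x])   # converges near the optimum (0.5, 0.5)
```

The oracle penalty of the paper replaces the fixed weight (here 1e3) with a single interpretable parameter that adaptively balances objective and constraint violation, which is precisely what makes it attractive over static penalties like this one.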

  16. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with adjacent resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by

  17. Time-constrained project scheduling with adjacent resources

    NARCIS (Netherlands)

    Hurink, Johann L.; Kok, A.L.; Paulus, J.J.; Schutten, Johannes M.J.

    2008-01-01

    We develop a decomposition method for the Time-Constrained Project Scheduling Problem (TCPSP) with Adjacent Resources. For adjacent resources the resource units are ordered and the units assigned to a job have to be adjacent. On top of that, adjacent resources are not required by single jobs, but by

  18. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...

  19. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  20. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    In this paper, we investigate the properties of immersed minimal surfaces inside the symmetric space associated to a subloci of the Hitchin component: the $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

  1. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays g̃ → qq̄χ and g̃ → gχ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e⁺e⁻ colliders in the search for supersymmetry can be directly compared. (orig.)

  2. Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiaoxue Feng

    2014-11-01

    Full Text Available Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing the development of effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of state estimation with a linear constraint to exploit the additional information is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity presented in this paper outperforms the first-order solution, and constrained IMM-EKF obtains superior estimation to IMM-EKF without constraint. Another Brownian motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraint.

  3. Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks

    Science.gov (United States)

    Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng

    2014-01-01

    Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing the development of effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of state estimation with a linear constraint to exploit the additional information is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity presented in this paper outperforms the first-order solution, and constrained IMM-EKF obtains superior estimation to IMM-EKF without constraint. Another Brownian motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraint. PMID:25390408

  4. Constrained state estimation for individual localization in wireless body sensor networks.

    Science.gov (United States)

    Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng

    2014-11-10

    Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing the development of effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of state estimation with a linear constraint to exploit the additional information is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity presented in this paper outperforms the first-order solution, and constrained IMM-EKF obtains superior estimation to IMM-EKF without constraint. Another Brownian motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraint.
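
For the linear-constraint case these records describe, a standard closed form projects the unconstrained filter estimate onto the constraint surface, minimizing the covariance-weighted distance. The sketch below shows only that projection step; the two-node geometry, covariance, and constraint matrix are invented for illustration and are not the papers' experimental setup.

```python
import numpy as np

def project_estimate(x_hat, P, D, d):
    """Project an unconstrained estimate x_hat with covariance P onto the
    linear constraint D x = d (P^{-1}-weighted least squares):
        x_c = x_hat - P D^T (D P D^T)^{-1} (D x_hat - d)."""
    S = D @ P @ D.T
    gain = P @ D.T @ np.linalg.inv(S)
    return x_hat - gain @ (D @ x_hat - d)

# Toy example: two on-body node coordinates whose known geometry fixes
# their difference to 1 (hypothetical numbers).
x_hat = np.array([1.2, 0.1])        # unconstrained filter output
P = np.diag([0.04, 0.01])           # its covariance
D = np.array([[1.0, -1.0]])         # constraint: x0 - x1 = 1
d = np.array([1.0])

x_c = project_estimate(x_hat, P, D, d)
# → approximately [1.12, 0.12]; the projected estimate satisfies x0 - x1 = 1 exactly.
```

The nonlinear case in the abstracts reduces to repeated applications of this step after a first- or second-order Taylor linearization of the constraint.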

  5. Minimal Webs in Riemannian Manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2008-01-01

    For a given combinatorial graph $G$ a {\it geometrization} $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1$-dimensional manifold with an associated metric $g$. In this paper we are concerned with {\it minimal isometric immersions} of geometrized graphs $(G, g)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\em minimal webs}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...

  6. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  7. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996

  8. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation

  9. Super-acceleration from massless, minimally coupled φ⁴

    CERN Document Server

    Onemli, V K

    2002-01-01

    We derive a simple form for the propagator of a massless, minimally coupled scalar in a locally de Sitter geometry of arbitrary spacetime dimension. We then employ it to compute the fully renormalized stress tensor at one- and two-loop orders for a massless, minimally coupled φ⁴ theory which is released in Bunch-Davies vacuum at t=0 in co-moving coordinates. In this system, the uncertainty principle elevates the scalar above the minimum of its potential, resulting in a phase of super-acceleration. With the non-derivative self-interaction the scalar's breaking of de Sitter invariance becomes observable. It is also worth noting that the weak energy condition is violated on cosmological scales. An interesting subsidiary result is that cancelling overlapping divergences in the stress tensor requires a conformal counterterm which has no effect on purely scalar diagrams.

  10. Client's constraining factors to construction project management ...

    African Journals Online (AJOL)

    This study analyzed client-related factors that constrain project management success of public and private sector construction in Nigeria. Issues that concern clients in any project cannot be underestimated, as they are the owners and the initiators of project proposals. It is assumed that success, failure or abandonment of ...

  11. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  12. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
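
The item-selection step these two records describe, maximizing test information at the current ability estimate subject to content constraints, can be caricatured in a few lines. This is a hedged sketch only: the item pool, the 2PL information function inputs, and the greedy area-capping rule are invented for illustration; the papers' actual model assembles a full "shadow test" by 0-1 programming at each step rather than selecting greedily.

```python
import math

# Hypothetical item pool: (discrimination a, difficulty b, content area).
POOL = [(1.2, -0.5, "algebra"), (0.8, 0.0, "algebra"), (1.5, 0.3, "geometry"),
        (1.0, 1.0, "geometry"), (0.9, -1.0, "stats"), (1.4, 0.2, "stats")]

def info(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select(theta, length, max_per_area):
    """Greedy stand-in for constrained test assembly: take the most
    informative items at theta while capping each content area."""
    chosen, counts = [], {}
    for item in sorted(POOL, key=lambda it: -info(it[0], it[1], theta)):
        area = item[2]
        if counts.get(area, 0) < max_per_area and len(chosen) < length:
            chosen.append(item)
            counts[area] = counts.get(area, 0) + 1
    return chosen

test_items = select(theta=0.0, length=3, max_per_area=1)
# With the cap of one item per area, the three areas are each represented once.
```

A production system would re-solve this selection after every administered item as the ability estimate is updated.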

  13. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)


    J. Astrophys. Astr. (2000) 21, 299–302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam* & V. Krishan†, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. *e-mail: mangalam@iiap.ernet.in. †e-mail: vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...

  14. Using 10Be erosion rates and fluvial channel morphology to constrain fault throw rates in the southwestern Sacramento River Valley, California, USA

    Science.gov (United States)

    Cyr, A. J.

    2013-12-01

    activity on the west-vergent Sweitzer fault and the east-vergent blind reverse fault. All of the sampled catchments are underlain exclusively by Tehama Sandstone. Moreover, there are no mapped surface traces of faults in the sampled catchments. This minimizes the possibility that changes in lithologic resistance impact the erosion rates and channel analyses. These analyses, combined with fault geometries derived from published seismic reflection data and structural cross sections, allow us to constrain the throw rates on these faults and thus better evaluate the associated seismic hazard.

  15. 21 CFR 888.3550 - Knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint patellofemorotibial polymer/metal/metal... § 888.3550 Knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis. (a) Identification. A knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis is a device...

  16. 21 CFR 888.3490 - Knee joint femorotibial metal/composite non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/composite non... § 888.3490 Knee joint femorotibial metal/composite non-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/composite non-constrained cemented prosthesis is a device...

  17. Minimal groups increase young children's motivation and learning on group-relevant tasks.

    Science.gov (United States)

    Master, Allison; Walton, Gregory M

    2013-01-01

    Three experiments (N = 130) used a minimal group manipulation to show that just perceived membership in a social group boosts young children's motivation for and learning from group-relevant tasks. In Experiment 1, 4-year-old children assigned to a minimal "puzzles group" persisted longer on a challenging puzzle than children identified as the "puzzles child" or children in a control condition. Experiment 2 showed that this boost in motivation occurred only when the group was associated with the task. In Experiment 3, children assigned to a minimal group associated with word learning learned more words than children assigned an analogous individual identity. The studies demonstrate that fostering shared motivations may be a powerful means by which to shape young children's academic outcomes. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.

  18. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
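
The method this record describes, imposing the material-balance conditions and then minimizing the free energy, can be illustrated with the simplest possible system. A sketch under stated assumptions: the reaction A ⇌ B, the value of ΔG°/RT, and the ternary-search minimizer are all illustrative choices, not taken from the paper.

```python
import math

DG_OVER_RT = -1.0   # ΔG°/RT for A ⇌ B (illustrative value)

def gibbs(xi):
    """Dimensionless total free energy G/RT for A <=> B starting from 1 mol A.
    The mole balance n_A = 1 - xi, n_B = xi encodes the material-balance
    condition; ideal mixing, with mu°_A = 0 and mu°_B/RT = DG_OVER_RT."""
    nA, nB = 1.0 - xi, xi
    return nA * math.log(nA) + nB * math.log(nB) + nB * DG_OVER_RT

def minimize_scalar(f, lo, hi, iters=200):
    """Ternary search for the minimum of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

xi_eq = minimize_scalar(gibbs, 1e-9, 1 - 1e-9)
K = math.exp(-DG_OVER_RT)
# At the free-energy minimum the mole ratio reproduces the equilibrium
# constant: xi_eq / (1 - xi_eq) == K, i.e. DG° = -RT ln K.
```

The pedagogical point of the original method is exactly this agreement: equilibrium constants fall out of free-energy minimization rather than being postulated.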

  19. Exergy optimization of cooling tower for HGSHP and HVAC applications

    International Nuclear Information System (INIS)

    Singh, Kuljeet; Das, Ranjan

    2017-01-01

    Highlights: • Development of new correlations for outlet parameters with all inlet parameters. • Simultaneous achievement of required heat load and minimum exergy destruction. • Multiple combinations of parameters found for same heat load at minimized exergy. • Study useful for optimum control of cooling tower under varying ambient conditions. • Generalized optimization study can be implemented for any mechanical cooling tower. - Abstract: In the present work, a constrained inverse optimization method for building cooling applications is proposed to control the mechanical draft wet cooling tower by minimizing the exergy destruction and satisfying an imposed heat load under varying environmental conditions. The optimization problem is formulated considering the cooling dominated heating, ventilation and air conditioning (HVAC) and hybrid ground source heat pump (HGSHP). As per the requirement, new second degree correlations for the tower outlet parameters (water temperature, air dry and wet-bulb temperatures) with five inlet parameters (dry-bulb temperature, relative humidity, water inlet temperature, water and air mass flow rates) are developed. The Box–Behnken design response surface method is implemented for developing the correlations. Subsequently, the constrained optimization problem is solved using augmented Lagrangian genetic algorithm. This work further developed optimum inlet parameters operating curves for the HGSHP and the HVAC systems under varying environmental conditions aimed at minimizing the exergy destruction along with the fulfillment of the required heat load.

  20. Estimates on the minimal period for periodic solutions of nonlinear second order Hamiltonian systems

    International Nuclear Information System (INIS)

    Yiming Long.

    1994-11-01

    In this paper, we prove a sharper estimate on the minimal period for periodic solutions of autonomous second order Hamiltonian systems under precisely Rabinowitz' superquadratic condition. (author). 20 refs, 1 fig

  1. 21 CFR 888.3320 - Hip joint metal/metal semi-constrained, with a cemented acetabular component, prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/metal semi-constrained, with a... Devices § 888.3320 Hip joint metal/metal semi-constrained, with a cemented acetabular component, prosthesis. (a) Identification. A hip joint metal/metal semi-constrained, with a cemented acetabular...

  2. 21 CFR 888.3330 - Hip joint metal/metal semi-constrained, with an uncemented acetabular component, prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/metal semi-constrained, with an... Devices § 888.3330 Hip joint metal/metal semi-constrained, with an uncemented acetabular component, prosthesis. (a) Identification. A hip joint metal/metal semi-constrained, with an uncemented acetabular...

  3. Portfolio balancing and risk adjusted values under constrained budget conditions

    International Nuclear Information System (INIS)

    MacKay, J.A.; Lerche, I.

    1996-01-01

    For a given hydrocarbon exploration opportunity, the influences of value, cost, success probability and corporate risk tolerance provide an optimal working interest that should be taken in the opportunity in order to maximize the risk adjusted value. When several opportunities are available, but when the total budget is insufficient to take optimal working interest in each, an analytic procedure is given for optimizing the risk adjusted value of the total portfolio; the relevant working interests are also derived based on a cost exposure constraint. Several numerical illustrations are provided to exhibit the use of the method under different budget conditions, and with different numbers of available opportunities. When value, cost, success probability, and risk tolerance are uncertain for each and every opportunity, the procedure is generalized to allow determination of probable optimal risk adjusted value for the total portfolio and, at the same time, the range of probable working interest that should be taken in each opportunity is also provided. The result is that the computations of portfolio balancing can be done quickly in either deterministic or probabilistic manners on a small calculator, thereby providing rapid assessments of opportunities and their worth to a corporation. (Author)
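
The optimal-working-interest idea in this record can be sketched with the Cozzolino-style exponential-utility risk-adjusted value (RAV) that this literature builds on. A hedged illustration: the RAV form, the closed-form stationary point, and the prospect numbers below are standard-textbook assumptions chosen for the sketch, not the paper's own derivation or data.

```python
import math

def rav(W, p, V, C, RT):
    """Exponential-utility risk-adjusted value of taking working interest W
    in a prospect: success probability p, success value V, failure cost C,
    corporate risk tolerance RT (all in the same monetary units)."""
    return -RT * math.log(p * math.exp(-W * V / RT)
                          + (1.0 - p) * math.exp(W * C / RT))

def optimal_w(p, V, C, RT):
    """Interior stationary point of rav(), clipped to the [0, 1] range:
        W* = RT/(V + C) * ln(pV / ((1-p)C))."""
    w = RT / (V + C) * math.log(p * V / ((1.0 - p) * C))
    return min(1.0, max(0.0, w))

# Illustrative prospect: 25% chance of value 100, 75% chance of losing 10,
# risk tolerance 30 (hypothetical numbers).
p, V, C, RT = 0.25, 100.0, 10.0, 30.0
w_star = optimal_w(p, V, C, RT)
# A budget-constrained portfolio, as in the record, would scale such
# per-opportunity interests down until total cost exposure fits the budget.
```

Because rav() is strictly concave in W, the clipped stationary point maximizes the risk-adjusted value over the feasible interest range.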

  4. Synthesis of conformationally constrained peptidomimetics using multicomponent reactions

    NARCIS (Netherlands)

    Scheffelaar, R.; Klein Nijenhuis, R.A.; Paravidino, M.; Lutz, M.; Spek, A.L.; Ehlers, A.W.; de Kanter, F.J.J.; Groen, M.B.; Orru, R.V.A.; Ruijter, E.

    2009-01-01

    A novel modular synthetic approach toward constrained peptidomimetics is reported. The approach involves a highly efficient three-step sequence including two multicomponent reactions, thus allowing unprecedented diversification of both the peptide moieties and the turn-inducing scaffold. The

  5. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product...

  6. 21 CFR 888.3530 - Knee joint femorotibial metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/polymer semi... § 888.3530 Knee joint femorotibial metal/polymer semi-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/polymer semi-constrained cemented prosthesis is a device intended...

  7. 21 CFR 888.3540 - Knee joint patellofemoral polymer/metal semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint patellofemoral polymer/metal semi... § 888.3540 Knee joint patellofemoral polymer/metal semi-constrained cemented prosthesis. (a) Identification. A knee joint patellofemoral polymer/metal semi-constrained cemented prosthesis is a two-part...

  8. 21 CFR 888.3500 - Knee joint femorotibial metal/composite semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/composite semi... § 888.3500 Knee joint femorotibial metal/composite semi-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/composite semi-constrained cemented prosthesis is a two-part...

  9. 21 CFR 888.3520 - Knee joint femorotibial metal/polymer non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint femorotibial metal/polymer non... § 888.3520 Knee joint femorotibial metal/polymer non-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/polymer non-constrained cemented prosthesis is a device intended to...

  10. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Roč. 48, č. 1 (2012), s. 105-122 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf
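
The sample approximation technique named in this record replaces a chance constraint P(g(x, ξ) ≤ 0) ≥ 1 − ε by its empirical counterpart over drawn scenarios. A toy sketch of that substitution for a one-dimensional problem; the distribution, ε, and sample size are illustrative and unrelated to the paper's investment problem.

```python
import math
import random

def sample_approx_quantile(samples, eps):
    """Smallest x with (# samples <= x) / N >= 1 - eps: the sample-average
    approximation of  min x  subject to  P(xi <= x) >= 1 - eps."""
    s = sorted(samples)
    k = math.ceil((1.0 - eps) * len(s)) - 1
    return s[k]

# Scenario set: 200,000 draws of xi ~ N(0, 1).
random.seed(1)
xi = [random.gauss(0.0, 1.0) for _ in range(200_000)]
x_star = sample_approx_quantile(xi, eps=0.05)
# x_star approximates the true 0.95-quantile of N(0, 1), about 1.645,
# and by construction satisfies the empirical chance constraint.
```

The asymptotic-equivalence results the record refers to concern how fast such sample solutions approach the true chance-constrained (or penalty-reformulated) optimum as the scenario count grows.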

  11. Decentralizing constrained-efficient allocations in the Lagos–Wright pure currency economy

    OpenAIRE

    Bajaj, Ayushi; Hu, Tai Wei; Rocheteau, Guillaume; Silva, Mario Rafael

    2017-01-01

    This paper offers two ways to decentralize the constrained-efficient allocation of the Lagos–Wright (2005) pure currency economy. The first way has divisible money, take-it-or-leave-it offers by buyers, and a transfer scheme financed by money creation. If agents are sufficiently patient, the first best is achieved for finite money growth rates. If agents are impatient, the equilibrium allocation approaches the constrained-efficient allocation asymptotically as the money growth rate tends to i...

  12. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Cambridge Univ., Dept. of Applied Economics, Cambridge (United Kingdom); Barquin, Julian; Vazquez, Miguel [Universidad Pontificia Comillas, Inst. de Investigacion Tecnologica, Madrid (Spain); Boots, Maroeska; Rijkers, Fieke A.M. [Energy Research Centre of the Netherlands ECN, Amsterdam (Netherlands); Ehrenmann, Andreas [Cambridge Univ., Judge Inst. of Management, Cambridge (United Kingdom); Hobbs, Benjamin F. [Johns Hopkins Univ., Dept. of Geography and Environmental Engineering, Baltimore, MD (United States)

    2005-05-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  13. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    International Nuclear Information System (INIS)

    Neuhoff, Karsten; Barquin, Julian; Vazquez, Miguel; Boots, Maroeska; Rijkers, Fieke A.M.; Ehrenmann, Andreas; Hobbs, Benjamin F.

    2005-01-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  14. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations.Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  15. Constraining the break of spatial diffeomorphism invariance with Planck data

    Energy Technology Data Exchange (ETDEWEB)

    Graef, L.L.; Benetti, M.; Alcaniz, J.S., E-mail: leilagraef@on.br, E-mail: micolbenetti@on.br, E-mail: alcaniz@on.br [Departamento de Astronomia, Observatório Nacional, R. Gen. José Cristino, 77—São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2017-07-01

    The current most accepted paradigm for early-universe cosmology, the inflationary scenario, shows good agreement with the recent Cosmic Microwave Background (CMB) temperature and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a larger range of red tensor tilt values, favoring blue tilts, which are not predicted by the minimal inflationary models. Recently, it has been shown that assuming spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences, among them the possibility of generating a blue tensor spectrum, which for a certain choice of parameters can recover the specific consistency relation of String Gas Cosmology. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis, taking as reference an extended ΛCDM+tensor perturbation model with a power-law tensor spectrum parametrized by the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = -8 n_t, we obtain strong evidence in favor of the reference model, whereas if the relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation, and String Gas Cosmology.
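
    The power-law parametrization in the analysis is simple enough to write down. A sketch (the scalar amplitude and pivot scale below are typical Planck-style choices, assumed here for illustration, not values from the paper):

```python
A_S = 2.1e-9       # scalar amplitude (typical Planck value, assumed)
K_PIVOT = 0.05     # pivot scale in Mpc^-1 (assumed)

def tensor_spectrum(k, r, n_t=None):
    """Power-law tensor spectrum P_t(k) = r * A_s * (k / k_pivot)**n_t.
    If no tilt is supplied, impose the consistency relation n_t = -r/8."""
    if n_t is None:
        n_t = -r / 8.0
    return r * A_S * (k / K_PIVOT) ** n_t

# With r > 0 the consistency relation forces a red tilt (n_t < 0), so
# power decreases towards smaller scales; a blue tilt needs n_t > 0,
# which is where the SDB scenario departs from minimal inflation.
print(tensor_spectrum(0.05, 0.06) > tensor_spectrum(0.5, 0.06))  # True
```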

  16. Constraining the break of spatial diffeomorphism invariance with Planck data

    Science.gov (United States)

    Graef, L. L.; Benetti, M.; Alcaniz, J. S.

    2017-07-01

    The current most accepted paradigm for early-universe cosmology, the inflationary scenario, shows good agreement with the recent Cosmic Microwave Background (CMB) temperature and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a larger range of red tensor tilt values, favoring blue tilts, which are not predicted by the minimal inflationary models. Recently, it has been shown that assuming spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences, among them the possibility of generating a blue tensor spectrum, which for a certain choice of parameters can recover the specific consistency relation of String Gas Cosmology. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis, taking as reference an extended ΛCDM+tensor perturbation model with a power-law tensor spectrum parametrized by the tensor-to-scalar ratio, r, and the tensor spectral index, n_t. If the inflation consistency relation is imposed, r = -8 n_t, we obtain strong evidence in favor of the reference model, whereas if the relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation, and String Gas Cosmology.

  17. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

    2011-01-01

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K - 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces ℝ(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ …

  18. Local empathy provides global minimization of congestion in communication networks

    Science.gov (United States)

    Meloni, Sandro; Gómez-Gardeñes, Jesús

    2010-11-01

    We present a mechanism to avoid congestion in complex networks based on local knowledge of traffic conditions and the ability of routers to self-coordinate their dynamical behavior. In particular, routers make use of local information about traffic conditions to either reject or accept information packets from their neighbors. We show that when nodes are only aware of their own congestion state they self-organize into a hierarchical configuration that remarkably delays the onset of congestion, although it leads to a sharp, first-order-like congestion transition. We also consider the case when nodes are aware of the congestion state of their neighbors. In this case, we show that empathy between nodes is strongly beneficial to the overall performance of the system, and it is possible to achieve larger values of the critical load together with a smooth, second-order-like transition. Finally, we show that local empathy minimizes the impact of congestion as much as global minimization does. We therefore present an outstanding example of how local dynamical rules can optimize the system's functioning up to the levels reached using global knowledge.
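
    A toy rendition of the local acceptance rule can make the idea concrete. In the sketch below, a node accepts a forwarded packet only while its own queue is below a congestion threshold; otherwise the packet is rejected and stays queued upstream. The topology (full mesh), traffic model, and delivery step are simplifying assumptions of this sketch, not the authors' protocol:

```python
import random

def simulate(n=20, steps=200, threshold=5, p_new=0.3, seed=1):
    """Toy synchronous dynamics: packet generation, delivery, then
    forwarding under the local 'self-aware' acceptance rule."""
    rng = random.Random(seed)
    queues = [0] * n
    rejections = 0
    for _ in range(steps):
        for i in range(n):                 # new packets enter the network
            if rng.random() < p_new:
                queues[i] += 1
        for i in range(n):                 # one packet per node reaches
            if queues[i]:                  # its destination and leaves
                queues[i] -= 1
        for i in range(n):                 # forwarding attempts
            if queues[i] == 0:
                continue
            j = rng.randrange(n)           # random neighbour (full mesh)
            if j != i and queues[j] < threshold:
                queues[i] -= 1             # accepted downstream
                queues[j] += 1
            else:
                rejections += 1            # rejected: stays queued at i
    return queues, rejections

queues, rejections = simulate()
print(max(queues), rejections)
```

    The "empathic" variant of the paper would additionally condition the forwarding decision on the neighbours' congestion state rather than only on the receiver's acceptance.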

  19. GPS-based ionospheric tomography with a constrained adaptive ...

    Indian Academy of Sciences (India)

    A Gaussian weighted function is introduced to constrain the tomography system in the new method. It can resolve the ... the research focus in the fields of space geodesy and ... development of GNSS such as GPS, Glonass, Galileo and Compass, as these ...

  20. Y-12 Plant waste minimization strategy

    International Nuclear Information System (INIS)

    Kane, M.A.

    1987-01-01

    The 1984 Amendments to the Resource Conservation and Recovery Act (RCRA) mandate that waste minimization be a major element of hazardous waste management. In response to this mandate and the increasing costs of waste treatment, storage, and disposal, the Oak Ridge Y-12 Plant developed a waste minimization program encompassing all types of waste. Waste minimization has thus become an integral part of the overall waste management program. Unlike traditional approaches, waste minimization focuses on controlling waste at the beginning of production instead of the end. This approach includes: (1) substituting nonhazardous process materials for hazardous ones, (2) recycling or reusing waste effluents, (3) segregating nonhazardous waste from hazardous and radioactive waste, and (4) modifying processes to generate less waste or less toxic waste. An effective waste minimization program must provide the appropriate incentives for generators to reduce their waste and the necessary support mechanisms to identify opportunities for waste minimization. This presentation focuses on the Y-12 Plant's strategy for implementing a comprehensive waste minimization program, which consists of four major elements: (1) a promotional campaign, (2) process evaluation for waste minimization opportunities, (3) a waste generation tracking system, and (4) an information exchange network. The presentation also examines some accomplishments of the program and issues which remain to be resolved.

  1. 21 CFR 888.3310 - Hip joint metal/polymer constrained cemented or uncemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hip joint metal/polymer constrained cemented or... Hip joint metal/polymer constrained cemented or uncemented prosthesis. (a) Identification. A hip joint... replace a hip joint. The device prevents dislocation in more than one anatomic plane and has components...

  2. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    Science.gov (United States)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land-surface parameterizations that are available. Therefore, this study explores the use of data from a network of catchments under homogeneous land use to generate parameter "libraries" that extend the land-surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and it collects data on streamflow, precipitation, and several weather variables at high temporal resolution. The tropical Andes are a useful case study because the complexity of meteorological and geographical conditions, combined with extremely heterogeneous land use, results in a wide range of hydrological responses. We then calibrated JULES for each land-use type represented in the iMHEA dataset. For the individual land-use types, the results show improved streamflow simulations when using the calibrated parameters instead of default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, hydrological modelling also benefited greatly from constraining parameters with such distributed, citizen-science-generated streamflow data. This study demonstrates regional hydrological modelling and prediction that integrates citizen science with a land surface model; within this framework, data scarcity need no longer be a limiting factor. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.
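
    The calibration step can be illustrated schematically. The sketch below stands in for the real workflow: JULES is replaced by a toy linear-reservoir runoff model, the iMHEA observations by synthetic data, and the calibration by a grid search minimising RMSE against observed streamflow. Model, parameter, and data are all assumptions of this sketch:

```python
def linear_reservoir(precip, k):
    """Toy runoff model: storage fills with rain and drains as q = k * S."""
    s, flows = 0.0, []
    for p in precip:
        s += p
        q = k * s
        s -= q
        flows.append(q)
    return flows

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

precip = [5.0, 0.0, 3.0, 0.0, 0.0, 8.0, 0.0, 0.0]
observed = linear_reservoir(precip, k=0.35)      # synthetic "observations"

# constrain k by grid search against the observed streamflow
best_k = min((k / 100.0 for k in range(1, 100)),
             key=lambda k: rmse(linear_reservoir(precip, k), observed))
print(best_k)  # recovers 0.35
```

    A per-land-use "library" in the spirit of the paper would repeat this fit once per homogeneous-land-use catchment and store the resulting parameter sets.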

  3. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  4. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four-dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity.

  5. Statistical mechanics of budget-constrained auctions

    OpenAIRE

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution,...
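
    The cavity-method message-passing algorithm itself is beyond a snippet, but the problem it solves is easy to state in code. As a baseline, here is the standard greedy heuristic for the off-line AdWords problem: process the queries and give each one to the highest bidder whose remaining budget still covers the bid (all bids and budgets below are illustrative):

```python
def greedy_adwords(bids, budgets, queries):
    """bids[b][q]: bid of bidder b on query q. Assign each query to the
    highest feasible bidder; a bidder is feasible while the bid fits in
    what remains of its budget. Returns (revenue, assignment)."""
    spent = {b: 0.0 for b in budgets}
    revenue, assignment = 0.0, {}
    for q in queries:
        feasible = [(bids[b].get(q, 0.0), b) for b in budgets
                    if spent[b] + bids[b].get(q, 0.0) <= budgets[b]]
        bid, winner = max(feasible, default=(0.0, None))
        if winner is not None and bid > 0:
            spent[winner] += bid
            revenue += bid
            assignment[q] = winner
    return revenue, assignment

bids = {"A": {"q1": 2.0, "q2": 1.0}, "B": {"q1": 1.0, "q2": 3.0}}
revenue, assignment = greedy_adwords(bids, {"A": 2.0, "B": 3.0}, ["q1", "q2"])
print(revenue, assignment)  # 5.0 {'q1': 'A', 'q2': 'B'}
```

    Greedy is myopic; the point of the statistical-mechanics approach is to find near-optimal global assignments on random instances where greedy falls short.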

  6. Factors constraining accessibility and usage of information among ...

    African Journals Online (AJOL)

    Various factors may negatively impact on information acquisition and utilisation. To improve understanding of the determinants of information acquisition and utilisation, this study investigated the factors constraining accessibility and usage of poultry management information in three rural districts of Tanzania. The findings ...

  7. Developing a Coding Scheme to Analyse Creativity in Highly-constrained Design Activities

    DEFF Research Database (Denmark)

    Dekoninck, Elies; Yue, Huang; Howard, Thomas J.

    2010-01-01

    This work is part of a larger project which aims to investigate the nature of creativity and the effectiveness of creativity tools in highly-constrained design tasks. This paper presents the research where a coding scheme was developed and tested with a designer-researcher who conducted two rounds...... of design and analysis on a highly constrained design task. This paper shows how design changes can be coded using a scheme based on creative ‘modes of change’. The coding scheme can show the way a designer moves around the design space, and particularly the strategies that are used by a creative designer...... larger study with more designers working on different types of highly-constrained design task is needed, in order to draw conclusions on the modes of change and their relationship to creativity....

  8. Constraining the Q10 of respiration in water-limited environments

    Science.gov (United States)

    Collins, A.; Ryan, M. G.; Xu, C.; Grossiord, C.; Michaletz, S. T.; McDowell, N. G.

    2016-12-01

    If the current rate of greenhouse gas emissions remains constant over the next few decades, climate change projections forecast atmospheric temperatures increasing by at least 1.1°C by the end of the century. Warmer temperatures are expected to strongly influence the exchange of energy, carbon, and water between plants and the atmosphere. Several studies support the view that terrestrial ecosystems currently act as a major carbon sink; however, warmer temperatures may amplify respiration processes and shift terrestrial ecosystems from a sink to a source of carbon in the future. Most Earth System Models incorporate the temperature dependence of plant respiration (Q10) to estimate and predict respiration processes and the associated carbon fluxes. Using a temperature and precipitation manipulation experiment in natural conditions, we present evidence that this parameter is poorly constrained, especially in water-limited environments. We discuss the utility of the Q10 framework and suggest improvements for this parameter along with trait-based approaches to better resolve models.
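
    The Q10 temperature dependence the abstract refers to is the standard exponential form R(T) = R_ref * Q10^((T - T_ref)/10): respiration multiplies by Q10 for every 10 °C of warming. A one-line sketch (reference values below are illustrative):

```python
def respiration(t_c, r_ref, q10=2.0, t_ref=25.0):
    """Q10 model: R(T) = R_ref * Q10 ** ((T - T_ref) / 10)."""
    return r_ref * q10 ** ((t_c - t_ref) / 10.0)

print(respiration(35.0, r_ref=1.0))  # 2.0: one Q10 doubling over +10 degC
```

    The paper's point is that the exponent base Q10 itself is uncertain under water limitation, so projections inheriting a fixed Q10 can be biased.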

  9. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    International Nuclear Information System (INIS)

    Chen Ximing; Wang Bin; Pan Nana; Gong Yungui

    2011-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.

  10. One-machine job-scheduling with non-constant capacity - Minimizing weighted completion times

    NARCIS (Netherlands)

    Amaddeo, H.F.; Amaddeo, H.F.; Nawijn, W.M.; van Harten, Aart

    1997-01-01

    In this paper an n-job one-machine scheduling problem is considered, in which the machine capacity is time-dependent and jobs are characterized by their work content. The objective is to minimize the sum of weighted completion times. A necessary optimality condition is presented and we discuss some
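
    For the classical constant-capacity version of this objective, the optimal policy is Smith's weighted-shortest-processing-time (WSPT) rule: sequence jobs by decreasing weight-to-processing-time ratio. The paper's time-dependent-capacity case is harder; the sketch below only illustrates the baseline objective, with made-up job data:

```python
def wspt(jobs):
    """jobs: list of (weight, processing_time).
    Returns (sequence, sum of w_j * C_j) under Smith's WSPT rule."""
    order = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
    t, total = 0.0, 0.0
    for w, p in order:
        t += p                  # completion time C_j of this job
        total += w * t
    return order, total

order, objective = wspt([(1, 3), (2, 1), (3, 2)])
print(order, objective)  # [(2, 1), (3, 2), (1, 3)] 17.0
```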

  11. Theoretical calculation of reorganization energy for electron self-exchange reaction by constrained density functional theory and constrained equilibrium thermodynamics.

    Science.gov (United States)

    Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan

    2013-08-22

    Within the framework of constrained density functional theory (CDFT), the diabatic or charge localized states of electron transfer (ET) have been constructed. Based on the diabatic states, inner reorganization energy λin has been directly calculated. For solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm(-1) for TCNE/TCNE(-) and 5939 cm(-1) for TTF/TTF(+) reactions, agreeing well with available experimental results of 7250 cm(-1) and 5810 cm(-1), respectively.
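
    The paper's constrained-equilibrium D-PCM expression for the solvent reorganization energy is not reproduced here; for orientation, the classical Marcus two-sphere estimate is easy to compute. The radii and donor-acceptor distance below are illustrative assumptions, not values fitted to TCNE or TTF:

```python
E2_EV_ANG = 14.3996    # e^2 / (4*pi*eps0), in eV * Angstrom
EV_TO_CM1 = 8065.54    # 1 eV in cm^-1

def marcus_lambda_s(a1, a2, d, eps_op, eps_s):
    """Classical Marcus two-sphere solvent reorganization energy, in eV.
    a1, a2: donor/acceptor radii; d: centre-to-centre distance (Angstrom);
    eps_op, eps_s: optical and static dielectric constants of the solvent."""
    geometry = 1.0 / (2 * a1) + 1.0 / (2 * a2) - 1.0 / d
    pekar = 1.0 / eps_op - 1.0 / eps_s          # Pekar factor of the solvent
    return E2_EV_ANG * geometry * pekar

# acetonitrile: eps_op ~ 1.81, eps_s ~ 35.9; radii/distance are illustrative
lam = marcus_lambda_s(a1=3.5, a2=3.5, d=7.0, eps_op=1.806, eps_s=35.9)
print(lam * EV_TO_CM1)   # solvent reorganization energy in cm^-1
```

    The D-PCM treatment in the paper replaces this spherical-cavity idealization with molecular-shaped cavities and the constrained-equilibrium definition of the residual polarization.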

  12. Applications of a constrained mechanics methodology in economics

    International Nuclear Information System (INIS)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics, on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the rapidly expanding research field of econophysics even at the undergraduate level and (ii) to enable students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations, and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of interpreting the solutions in economics as compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.
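
    The forced-oscillation reading of the Phillips model reduces, in its simplest form, to a damped, driven linear oscillator for national income y(t). A minimal explicit-Euler sketch (all coefficients are illustrative, not the paper's calibration):

```python
import math

def phillips(gamma=0.3, omega=1.0, amp=0.5, drive=0.8, dt=0.01, steps=5000):
    """Explicit Euler for y'' + 2*gamma*y' + omega**2 * y = amp*cos(drive*t):
    damping plays the role of stabilization policy, the forcing that of an
    exogenous demand shock."""
    y, v, ys = 1.0, 0.0, []
    for n in range(steps):
        a = -2.0 * gamma * v - omega ** 2 * y + amp * math.cos(drive * n * dt)
        y += v * dt
        v += a * dt
        ys.append(y)
    return ys

income = phillips()   # damped oscillation settling onto the forced cycle
```

    The transient decays at rate gamma and the income path locks onto the driving frequency, which is the qualitative "business cycle" behaviour the mechanical analogy is meant to convey.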

  13. Applications of a constrained mechanics methodology in economics

    Science.gov (United States)

    Janová, Jitka

    2011-11-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics, on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the rapidly expanding research field of econophysics even at the undergraduate level and (ii) to enable students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations, and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of interpreting the solutions in economics as compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  14. Applications of a constrained mechanics methodology in economics

    Energy Technology Data Exchange (ETDEWEB)

    Janova, Jitka, E-mail: janova@mendelu.cz [Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemedelska 1, 613 00 Brno (Czech Republic)

    2011-11-15

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics, on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the rapidly expanding research field of econophysics even at the undergraduate level and (ii) to enable students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations, and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of interpreting the solutions in economics as compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  15. Extended shadow test approach for constrained adaptive testing

    NARCIS (Netherlands)

    Veldkamp, Bernard P.; Ariel, A.

    2002-01-01

    Several methods have been developed for use on constrained adaptive testing. Item pool partitioning, multistage testing, and testlet-based adaptive testing are methods that perform well for specific cases of adaptive testing. The weighted deviation model and the Shadow Test approach can be more

  16. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
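
    The network's continuous-time dynamics amount to (projected) subgradient flow on the piecewise-linear objective. As a discrete-time sketch of the same idea, here is plain subgradient descent on an unconstrained least-absolute-deviation fit; the constraint-projection step of the paper is omitted and the data are made up:

```python
def lad_subgradient(A, b, lr=0.01, iters=2000):
    """Minimize sum_i |A_i . x - b_i| by fixed-step subgradient descent."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        g = [0.0] * n
        for row, bi in zip(A, b):
            r = sum(aj * xj for aj, xj in zip(row, x)) - bi
            s = (r > 0) - (r < 0)            # sign(r): a subgradient of |r|
            for j in range(n):
                g[j] += s * row[j]
        x = [xj - lr * gj for xj, gj in zip(x, g)]
    return x

# fit y = slope * t with one gross outlier; the L1 objective is robust to it
A = [[1.0], [2.0], [3.0], [4.0]]
b = [2.0, 4.0, 6.0, 40.0]
x = lad_subgradient(A, b)
print(x)  # close to [2.0] despite the outlier at t = 4
```

    With a fixed step the iterate chatters in a small band around the minimizer; the recurrent network of the paper instead guarantees exact finite-time convergence via its hard-limiting activation.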

  17. Conditional uncertainty principle

    Science.gov (United States)

    Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun

    2018-04-01

    We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
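
    In the simplest classical, memoryless special case, the conditional majorization of the paper reduces (roughly speaking) to ordinary majorization of probability vectors: x majorizes y iff every partial sum of the decreasingly sorted x dominates that of y. A small sketch of that check, on illustrative distributions of equal length:

```python
def majorizes(x, y, tol=1e-12):
    """Return True iff x majorizes y (equal-length vectors with equal sums):
    partial sums of the decreasingly sorted x dominate those of y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    sx = sy = 0.0
    for a, b in zip(xs, ys):
        sx += a
        sy += b
        if sx < sy - tol:
            return False
    return abs(sum(x) - sum(y)) < tol

print(majorizes([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))    # True: more "peaked"
print(majorizes([1/3, 1/3, 1/3], [0.5, 0.3, 0.2]))    # False: uniform is majorized by everything
```

    Functions monotone under this partial order are exactly the uncertainty monotones the framework uses to build its conditional uncertainty relations.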

  18. 21 CFR 888.3560 - Knee joint patellofemorotibial polymer/metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Knee joint patellofemorotibial polymer/metal... Devices § 888.3560 Knee joint patellofemorotibial polymer/metal/polymer semi-constrained cemented prosthesis. (a) Identification. A knee joint patellofemorotibial polymer/metal/polymer semi-constrained...

  19. Effects of a Cooperative Learning Strategy on the Effectiveness of Physical Fitness Teaching and Constraining Factors

    Directory of Open Access Journals (Sweden)

    Tsui-Er Lee

    2014-01-01

    The effects of cooperative learning and traditional learning on the effectiveness of physical fitness teaching, and the factors constraining them, were studied under various teaching conditions. Sixty female students in Grades 7–8 were sampled to evaluate their learning of health and physical education (PE) according to the curriculum for Grades 1–9 in Taiwan. The data were collected and analyzed both quantitatively and qualitatively. The overall physical fitness of the cooperative learning group exhibited substantial progress between the pretest and posttest, with the differences in the sit-and-reach and bent-knee sit-up exercises achieving statistical significance. The performance of the cooperative learning group in the bent-knee sit-up and 800 m running exercises far exceeded that of the traditional learning group. Our qualitative data indicated that the number of people grouped before a cooperative learning session, effective administrative support, comprehensive teaching preparation, media reinforcement, constant feedback and introspection regarding cooperative learning strategies, and heterogeneous grouping are constraining factors for teaching PE using cooperative learning strategies. Cooperative learning is considered an effective route for attaining physical fitness among students. PE teachers should consider providing extrinsic motivation to develop learning effectiveness.

  20. Minimally invasive approaches for the treatment of inflammatory bowel disease

    Institute of Scientific and Technical Information of China (English)

    Marco Zoccali; Alessandro Fichera

    2012-01-01

    Despite significant improvements in the medical management of inflammatory bowel disease, many of these patients still require surgery at some point in the course of their disease. Their young age and poor general condition, worsened by aggressive medical treatments, make minimally invasive approaches particularly enticing for this patient population. However, the typical inflammatory changes that characterize these diseases have hindered the wide diffusion of laparoscopy in this setting, which is currently pursued mostly in high-volume referral centers, despite accumulating evidence in the literature supporting the benefits of minimally invasive surgery. The largest body of evidence currently available, for terminal ileal Crohn's disease, shows improved short-term outcomes after laparoscopic surgery, albeit with prolonged operative times. For Crohn's colitis, high-quality evidence supporting laparoscopic surgery is lacking. Encouraging preliminary results have been obtained with the adoption of laparoscopic restorative total proctocolectomy for the treatment of ulcerative colitis. A consensus about patient selection and the need for staging has not yet been reached. Despite the lack of conclusive evidence, a wave of enthusiasm is pushing towards less invasive strategies to further minimize surgical trauma, with single-incision laparoscopic surgery being the most realistic future development.