Minimal constrained supergravity
Directory of Open Access Journals (Sweden)
N. Cribiori
2017-01-01
Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.
Study of constrained minimal supersymmetry
Kane, G L; Kolda, Chris; Roszkowski, Leszek; Wells, James D
1994-01-01
Taking seriously phenomenological indications for supersymmetry, we have made a detailed study of unified minimal SUSY, including effects at the few percent level in a consistent fashion. We report here a general analysis without choosing a particular unification gauge group. We find that the encouraging SUSY unification results of recent years do survive the challenge of a more complete and accurate analysis. Taking into account effects at the 5-10% level leads to several improvements of previous results, and allows us to sharpen our predictions for SUSY in the light of unification. We perform a thorough study of the parameter space. The results form a well-defined basis for comparing the physics potential of different facilities. Very little of the acceptable parameter space has been excluded by LEP or FNAL so far, but a significant fraction can be covered when these accelerators are upgraded. A number of initial applications to the understanding of the SUSY spectrum, detectability of SUSY at LEP II or FNAL...
Constrained minimization of smooth functions using a genetic algorithm
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
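The conversion described above can be sketched on a toy problem (min x² + y² subject to x + y = 1, not from the paper): the KKT stationarity-and-feasibility conditions are turned into a nonnegative residual, which a small real-coded genetic algorithm then minimizes without constraints. The GA operators and parameters here are illustrative assumptions, not the authors' implementation.

```python
import random

def kkt_residual(v):
    x, y, lam = v
    # Stationarity and feasibility residuals for
    #   min x^2 + y^2  s.t.  x + y = 1  (optimum: x = y = 0.5, lam = -1)
    return (2*x + lam)**2 + (2*y + lam)**2 + (x + y - 1)**2

def ga_minimize(f, dim, bounds=(-2.0, 2.0), pop=80, gens=300, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(P, key=f)
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a = min(rng.sample(P, 3), key=f)   # tournament selection
            b = min(rng.sample(P, 3), key=f)
            # Midpoint (blend) crossover plus Gaussian mutation:
            child = [(ai + bi) / 2 + rng.gauss(0.0, 0.05) for ai, bi in zip(a, b)]
            Q.append([min(hi, max(lo, c)) for c in child])
        P = Q
        cand = min(P, key=f)
        if f(cand) < f(best):
            best = cand
    return best

best = ga_minimize(kkt_residual, 3)
```

Because the residual is zero exactly at points satisfying the necessary conditions, the unconstrained GA search drives the iterates toward the constrained minimum.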
Sequential unconstrained minimization algorithms for constrained optimization
Byrne, Charles
2008-02-01
The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊆ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = \overline{D}, the closure of D. We assume that such minimizers exist, and denote one such by \hat{x}. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) - G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(\hat{x}). If the restriction of f(x) to D has bounded level sets, which happens if \hat{x} is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x^*) = f(\hat{x}) for any cluster point x^*. Therefore, if \hat{x} is unique, x^* = \hat{x} and {x^k} → \hat{x}. When \hat{x} is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
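Barrier-function methods are one of the particular cases the abstract lists. A minimal one-dimensional sketch (toy problem min (x-2)², subject to x ≤ 1, chosen for illustration and not taken from the paper): each unconstrained subproblem adds a log-barrier term, and as the barrier weight 1/t shrinks, the unconstrained minimizers approach the constrained solution x̂ = 1 while f(x^k) decreases toward f(x̂) = 1.

```python
def barrier_argmin(t, lo=-10.0, hi=1.0 - 1e-12):
    """Minimize G_t(x) = (x - 2)^2 - (1/t) * log(1 - x) over x < 1
    by bisection on its strictly increasing derivative."""
    for _ in range(200):
        mid = (lo + hi) / 2
        # G_t'(x) = 2(x - 2) + (1/t) / (1 - x), monotone in x on (-inf, 1)
        if 2 * (mid - 2) + (1 / t) / (1 - mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Increasing t weakens the barrier; the sequence of unconstrained
# minimizers climbs toward the constrained minimizer x_hat = 1.
xs = [barrier_argmin(10.0 ** k) for k in range(6)]
fs = [(x - 2) ** 2 for x in xs]
```

Each element of `xs` solves one unconstrained problem, and the sequence of objective values `fs` is decreasing, mirroring the monotonicity result stated in the abstract.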
Utility Constrained Energy Minimization In Aloha Networks
Khodaian, Amir Mahdi; Talebi, Mohammad S
2010-01-01
In this paper we consider the issue of energy efficiency in random access networks and show that optimizing the transmission probabilities of nodes can enhance network performance in terms of energy consumption and fairness. First, we propose a heuristic power control method that improves throughput; then we model the Utility Constrained Energy Minimization (UCEM) problem, in which the utility constraint takes into account single- and multi-node performance. UCEM is modeled as a convex optimization problem, and Sequential Quadratic Programming (SQP) is used to find optimal transmission probabilities. Numerical results show that our method can achieve fairness, reduce energy consumption and enhance the lifetime of such networks.
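A toy two-node slotted-Aloha instance makes the problem shape concrete. Everything here is an illustrative assumption (the throughput requirement, the energy proxy, and the use of exhaustive grid search in place of the paper's SQP): each node transmits with probability p_i, succeeds only when the other stays silent, and we seek the least total transmission probability meeting a per-node throughput constraint.

```python
# Toy UCEM instance (illustrative, not from the paper): two slotted-Aloha
# nodes with transmission probabilities p1, p2. Node i succeeds with
# probability p_i * (1 - p_j); total energy use grows with p1 + p2.
def node_throughput(p_i, p_j):
    return p_i * (1.0 - p_j)

REQ = 0.15          # assumed per-node throughput requirement
steps = 400
best = None
for i in range(steps + 1):
    for j in range(steps + 1):
        p1, p2 = i / steps, j / steps
        if node_throughput(p1, p2) >= REQ and node_throughput(p2, p1) >= REQ:
            energy = p1 + p2                    # proxy for energy consumption
            if best is None or energy < best[0]:
                best = (energy, p1, p2)

energy, p1, p2 = best
```

The symmetric constraint makes the optimum symmetric here, with each node transmitting just often enough to meet its requirement; a convex solver such as SQP would recover the same point without enumerating the grid.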
Algorithms for degree-constrained Euclidean Steiner minimal tree
Institute of Scientific and Technical Information of China (English)
Zhang Jin; Ma Liang; Zhang Liantang
2008-01-01
A new problem of the degree-constrained Euclidean Steiner minimal tree is discussed, which is quite useful in several fields. Although it is slightly different from the traditional degree-constrained minimal spanning tree, it is also NP-hard. Two intelligent algorithms are proposed in an attempt to solve this difficult problem. A series of numerical examples is tested, which demonstrates that the algorithms also work well in practice.
Fast Energy Minimization of large Polymers Using Constrained Optimization
Energy Technology Data Exchange (ETDEWEB)
Todd D. Plantenga
1998-10-01
A new computational technique is described that uses distance constraints to calculate empirical potential energy minima of partially rigid molecules. A constrained minimization algorithm that works entirely in Cartesian coordinates is used. The algorithm does not obey the constraints until convergence, a feature that reduces ill-conditioning and allows constrained local minima to be computed more quickly than unconstrained minima. Computational speedup exceeds the 3-fold factor commonly obtained in constrained molecular dynamics simulations, where the constraints must be strictly obeyed at all times.
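The "constraints only hold at convergence" behavior can be illustrated with a method-of-multipliers sketch on a toy quadratic (min x² + 2y² subject to x + y = 1); this stands in for the paper's molecular-mechanics algorithm and is not its actual implementation. Intermediate iterates violate the constraint, which is only enforced in the limit, and each inner subproblem stays unconstrained and well-conditioned.

```python
# Method of multipliers (augmented Lagrangian) for
#   min x^2 + 2y^2   s.t.  x + y = 1   (optimum: x = 2/3, y = 1/3).
# Stationarity of the augmented Lagrangian gives a 2x2 linear system:
#   d/dx: 2x + lam + mu*(x + y - 1) = 0  =>  (2 + mu)x + mu*y = mu - lam
#   d/dy: 4y + lam + mu*(x + y - 1) = 0  =>  mu*x + (4 + mu)y = mu - lam
# Cramer's rule: det = (2 + mu)(4 + mu) - mu^2 = 8 + 6*mu,
#                x = 4b/det, y = 2b/det with b = mu - lam.
mu, lam = 10.0, 0.0
violations = []
for _ in range(60):
    b = mu - lam
    det = 8.0 + 6.0 * mu
    x, y = 4.0 * b / det, 2.0 * b / det      # exact inner minimizer
    violations.append(abs(x + y - 1.0))       # constraint NOT yet satisfied
    lam += mu * (x + y - 1.0)                 # multiplier update
```

The first iterate misses the constraint by more than 0.1, yet the violation contracts geometrically and vanishes at convergence, exactly the behavior the abstract credits for the speedup.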
Interference Alignment as a Rank Constrained Rank Minimization
Papailiopoulos, Dimitris S
2010-01-01
We show that the maximization of the sum degrees-of-freedom for the static flat-fading multiple-input multiple-output (MIMO) interference channel is equivalent to a rank constrained rank minimization (RCRM) problem, when the signal spaces span all available dimensions. The rank minimization corresponds to maximizing interference alignment (IA) so that interference spans the lowest-dimensional subspace possible. The rank constraints account for the useful signal spaces spanning all available spatial dimensions. In this way, we reformulate all IA requirements as requirements involving ranks. Then, we present a convex relaxation of the RCRM problem, inspired by recent results in compressed sensing and low-rank matrix completion theory that rely on approximating rank with the nuclear norm. We show that the convex envelope of the sum of ranks of the interference matrices is the sum of their corresponding nuclear norms and introduce tractable constraints that are asymptotically equivalent to the rank constraints for the ini...
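The rank-to-nuclear-norm surrogate at the heart of the relaxation can be shown in a few lines. This is a generic numerical sketch with illustrative matrix values, not the paper's interference matrices: the nuclear norm (sum of singular values) is the convex envelope of rank on the unit spectral-norm ball, so minimizing it tends to push singular values to zero.

```python
import numpy as np

def nuclear_norm(A):
    # Sum of singular values: the convex surrogate for rank used in
    # compressed sensing / low-rank matrix completion.
    return float(np.linalg.svd(A, compute_uv=False).sum())

# A rank-1 matrix (illustrative values, not from the paper): its second
# singular value is zero, so rank = 1 while the nuclear norm equals the
# single nonzero singular value.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
rank = int(np.linalg.matrix_rank(A))
nn = nuclear_norm(A)
```

For this A, the singular values are 5 and 0, so the nuclear norm is 5 while the rank is 1; a nuclear-norm objective is convex, whereas rank itself is not.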
Revisiting the target-constrained interference-minimized filter (TCIMF)
Chang, Chein-I.; Ren, Hsuan; Hsueh, Mingkai; Du, Qian; D'Amico, Francis M.; Jensen, James O.
2003-12-01
The Orthogonal Subspace Projection (OSP) and Constrained Energy Minimization (CEM) have been used in hyperspectral target detection and classification. A target-constrained interference-minimized filter (TCIMF) was recently proposed to extend the CEM, improving signal detectability by annihilating undesired target signal sources in the way carried out in the OSP. In this paper, we revisit the TCIMF from a signal processing viewpoint where signals are characterized by three types of information sources: desired target sources and undesired target sources, both of which are provided a priori, and interferers, which are unknown interfering sources. By virtue of such signal decomposition, we can show that the TCIMF is actually a generalization of the OSP and CEM. In particular, we investigate the assumptions made for the OSP and CEM in terms of these three types of signal sources and exploit insights into their filter design. As will be shown in this paper, the OSP and the CEM perform the same tasks by operating on different levels of information, and both can be viewed as special cases of the TCIMF.
An introduction to nonlinear programming. IV - Numerical methods for constrained minimization
Sorenson, H. W.; Koble, H. M.
1976-01-01
An overview is presented of the numerical solution of constrained minimization problems. Attention is given both to primal methods and to indirect methods of solution (linear programs and unconstrained minimizations).
High resolution image reconstruction with constrained, total-variation minimization
Sidky, Emil Y; Duchin, Yuval; Ullberg, Christer; Pan, Xiaochuan
2011-01-01
This work is concerned with applying iterative image reconstruction, based on constrained total-variation minimization, to low-intensity X-ray CT systems that have a high sampling rate. Such systems pose a challenge for iterative image reconstruction, because a very fine image grid is needed to realize the resolution inherent in such scanners. These image arrays lead to under-determined imaging models whose inversion is unstable and can result in undesirable artifacts and noise patterns. There are many possibilities to stabilize the imaging model, and this work proposes a method which may have an advantage in terms of algorithm efficiency. The proposed method introduces additional constraints in the optimization problem; these constraints set to zero high spatial frequency components which are beyond the sensing capability of the detector. The method is demonstrated with an actual CT data set and compared with another method based on projection up-sampling.
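A one-dimensional sketch of constrained total-variation minimization: minimize TV(x) subject to a data-fidelity ball ||x - y||₂ ≤ ε, solved here by projected subgradient steps. The signal values, step size, and the projected-subgradient solver are all illustrative assumptions standing in for the authors' CT reconstruction algorithm.

```python
import math

def tv(x):
    # Total variation of a 1-D signal: sum of absolute differences.
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_subgrad(x):
    g = [0.0] * len(x)
    for i in range(len(x) - 1):
        s = (x[i + 1] > x[i]) - (x[i + 1] < x[i])   # sign of the difference
        g[i] -= s
        g[i + 1] += s
    return g

def project_ball(x, y, eps):
    # Project x onto the constraint set {z : ||z - y||_2 <= eps}.
    d = math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    if d <= eps:
        return x
    return [yi + (xi - yi) * eps / d for xi, yi in zip(x, y)]

# Noisy piecewise-constant data (illustrative values):
y = [0.1, -0.1, 0.1, 1.1, 0.9, 1.1]
eps = 0.3
x = list(y)
best = list(y)                      # track the best feasible iterate
for _ in range(300):
    x = [xi - 0.02 * gi for xi, gi in zip(x, tv_subgrad(x))]
    x = project_ball(x, y, eps)
    if tv(x) < tv(best):
        best = list(x)
```

The result keeps the data within the fidelity ball while lowering the total variation, smoothing the noise without flattening the step in the signal.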
Investigating multiple solutions in the constrained minimal supersymmetric standard model
Energy Technology Data Exchange (ETDEWEB)
Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)
2014-02-07
Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
Constraining non-minimally coupled tachyon fields by Noether symmetry
de Souza, Rudinei C
2008-01-01
A model for a spatially flat homogeneous and isotropic Universe whose gravitational sources are a pressureless matter field and a tachyon field non-minimally coupled to the gravitational field is analyzed. Noether symmetry is used to find the expressions for the potential density and for the coupling function, and it is shown that both must be exponential functions of the tachyon field. Two cosmological solutions are investigated: (i) for the early Universe, whose only source of the gravitational field is a non-minimally coupled tachyon field which behaves as an inflaton and leads to an exponential accelerated expansion, and (ii) for the late Universe, whose gravitational sources are a pressureless matter field and a non-minimally coupled tachyon field which plays the role of dark energy and is responsible for the decelerated-accelerated transition period.
Discontinuous penalty approach with deviation integral for global constrained minimization
Institute of Scientific and Technical Information of China (English)
Liu CHEN; Yi-rong YAO; Quan ZHENG
2009-01-01
of the penalized minimization problems are proven. To implement the algorithm, the cross-entropy method and importance sampling are used based on the Monte-Carlo technique. Numerical tests show the effectiveness of the proposed algorithm.
Constrained Spectral Conditioning for spatial sound level estimation
Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.
2016-11-01
Microphone arrays are utilized in aeroacoustic testing to spatially map the sound emitted from an article under study. Whereas a single microphone allows only the total sound level to be estimated at the measurement location, an array permits differentiation between the contributions of distinct components. The accuracy of these spatial sound estimates produced by post-processing the array outputs is continuously being improved. One way of increasing the estimation accuracy is to filter the array outputs before they become inputs to a post-processor. This work presents a constrained method of linear filtering for microphone arrays which minimizes the total signal present on the array channels while preserving the signal from a targeted spatial location. Thus, each single-channel, filtered output for a given targeted location estimates only the signal from that location, even when multiple and/or distributed sources have been measured simultaneously. The method is based on Conditioned Spectral Analysis and modifies the Wiener-Hopf equation in a manner similar to the Generalized Sidelobe Canceller. This modified form of Conditioned Spectral Analysis is embedded within an iterative loop and termed Constrained Spectral Conditioning. Linear constraints are derived which prevent the cancellation of targeted signal due to random statistical error as well as location error in the sensor and/or source positions. The increased spatial mapping accuracy of Constrained Spectral Conditioning is shown for a simulated dataset of point sources which vary in strength. An experimental point source is used to validate the efficacy of the constraints which yield preservation of the targeted signal at the expense of reduced filtering ability. The beamforming results of a cold, supersonic jet demonstrate the qualitative and quantitative improvement obtained when using this technique to map a spatially-distributed, complex, and possibly coherent sound source.
Algorithm for Delay-Constrained Minimal Cost Group Multicasting
Institute of Scientific and Technical Information of China (English)
SUN Yugeng; WANG Yanlin; YAN Xinfang
2005-01-01
Group multicast routing algorithms satisfying quality-of-service requirements of real-time applications are essential for high-speed networks. A heuristic algorithm was presented for group multicast routing with bandwidth and delay constraints. A new metric was designed as a function of the available bandwidth and delay of a link, and source-specific routing trees for each member were generated in the algorithm by using the metric, satisfying each member's bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the performance of the proposed algorithm with that from the literature. Experimental results show that the algorithm performs better in terms of network cost and in its ability to construct feasible multicast trees for group members. Moreover, the algorithm can avoid link blocking and enhance network behavior efficiently.
Institute of Scientific and Technical Information of China (English)
De Tong ZHU
2008-01-01
We extend the classical affine scaling interior trust region algorithm for the linear constrained smooth minimization problem to the nonsmooth case where the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method in association with a nonmonotonic interior backtracking line search technique for solving the linear constrained LC^1 optimization where the second-order derivative of the objective function is explicitly required to be locally Lipschitzian. The general trust region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model, which requires both first- and second-order information of the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions where twice smoothness of the objective function is not required. Applications of the algorithm to some nonsmooth optimization problems are discussed.
AMG by element agglomeration and constrained energy minimization interpolation
Energy Technology Data Exchange (ETDEWEB)
Kolev, T V; Vassilevski, P S
2006-02-17
This paper studies AMG (algebraic multigrid) methods that utilize energy minimization construction of the interpolation matrices locally, in the setting of element agglomeration AMG. The coarsening in element agglomeration AMG is done by agglomerating fine-grid elements, with coarse element matrices defined by a local Galerkin procedure applied to the matrix assembled from the individual fine-grid element matrices. This local Galerkin procedure involves only the coarse basis restricted to the agglomerated element. To construct the coarse basis, one exploits previously proposed constrained energy minimization procedures, now applied to the local matrix. The constraints are that a given set of vectors should be interpolated exactly, not only globally, but also locally on every agglomerated element. The paper provides algorithmic details, as well as a convergence result based on a "local-to-global" energy bound of the resulting multiple-vector fitting AMG interpolation mappings. A particular implementation of the method is illustrated with a set of numerical experiments.
A constrained, total-variation minimization algorithm for low-intensity X-ray CT
Sidky, Emil Y; Ullberg, Christer; Pan, Xiaochuan
2010-01-01
Purpose: We develop an iterative image-reconstruction algorithm for application to low-intensity computed tomography (CT) projection data, which is based on constrained, total-variation (TV) minimization. The algorithm design focuses on recovering structure on length scales comparable to a detector-bin width. Method: Recovering the resolution on the scale of a detector bin requires that the pixel size be much smaller than the bin width. The resulting image array contains many more pixels than data, and this undersampling is overcome with a combination of Fourier upsampling of each projection and the use of constrained TV-minimization, as suggested by compressive sensing. The presented pseudo-code for solving constrained TV-minimization is designed to yield an accurate solution to this optimization problem within 100 iterations. Results: The proposed image-reconstruction algorithm is applied to a low-intensity scan of a rabbit with a thin wire, to test resolution. The proposed algorithm is compared with filtere...
Newton-Type Greedy Selection Methods for $\\ell_0$-Constrained Minimization.
Yuan, Xiao-Tong; Liu, Qingshan
2017-01-11
We introduce a family of Newton-type greedy selection methods for ℓ0-constrained minimization problems. The basic idea is to construct a quadratic function to approximate the original objective function around the current iterate and solve the constructed quadratic program over the cardinality constraint. The next iterate is then estimated via a line search operation between the current iterate and the solution of the sparse quadratic program. This iterative procedure can be interpreted as an extension of the constrained Newton methods from convex minimization to non-convex ℓ0-constrained minimization. We show that the proposed algorithms converge asymptotically and the rate of local convergence is superlinear up to certain estimation precision. Our methods compare favorably against several state-of-the-art alternatives when applied to sparse logistic regression and sparse support vector machines.
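The key inner operation in such methods is the cardinality-constrained quadratic subproblem. For a separable quadratic it is solved exactly by hard thresholding (keeping the k largest-magnitude entries), so an iterative-hard-thresholding loop on a deliberately simple objective serves as a stand-in sketch; the target vector and the trivial objective f(x) = ½||x - z||² are illustrative assumptions, not the authors' Newton-type method.

```python
def hard_threshold(x, k):
    # Keep the k largest-magnitude entries and zero the rest: this solves
    #   min ||z - x||^2  subject to  ||z||_0 <= k.
    keep = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k]
    z = [0.0] * len(x)
    for i in keep:
        z[i] = x[i]
    return z

def iht(target, k, steps=25, eta=1.0):
    # Gradient step on f(x) = 0.5 * ||x - target||^2, then project onto
    # the cardinality constraint via hard thresholding.
    x = [0.0] * len(target)
    for _ in range(steps):
        grad = [xi - ti for xi, ti in zip(x, target)]
        x = hard_threshold([xi - eta * gi for xi, gi in zip(x, grad)], k)
    return x

x_star = iht([3.0, 0.5, -2.0, 0.1], k=2)
```

For this separable case the iteration lands on the exact ℓ0-constrained minimizer, the two largest-magnitude coordinates of the target, after a single step; on general objectives the same project-after-step structure is what the greedy Newton-type schemes refine.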
AN IMPLEMENTABLE ALGORITHM AND ITS CONVERGENCE FOR GLOBAL MINIMIZATION WITH CONSTRAINTS
Institute of Scientific and Technical Information of China (English)
李善良; 邬冬华; 田蔚文; 张连生
2003-01-01
With the integral-level approach to global optimization, a class of discontinuous penalty functions is proposed to solve constrained minimization problems. In this paper we propose an implementable algorithm by means of the good point set of uniform distribution, which overcomes the drawback of the Monte-Carlo method. Finally, we prove the convergence of the implementable algorithm.
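A one-dimensional sketch of the idea: a discontinuous penalty adds a constant surcharge outside the feasible set, and the penalized objective is evaluated on an equidistributed point set. The toy problem, the surcharge value, and the golden-ratio (Weyl) sequence used as a stand-in for the paper's good point sets are all illustrative assumptions.

```python
import math

def penalized(x, M=1e3):
    # Discontinuous penalty: constant surcharge M outside the feasible
    # set {x >= 0.5}. Toy objective: (x - 0.7)^2, minimized at x = 0.7.
    f = (x - 0.7) ** 2
    return f if x >= 0.5 else f + M

# Equidistributed points on [0, 1]: fractional parts of i * phi, a simple
# low-discrepancy stand-in for the good point sets of the paper.
phi = (math.sqrt(5) - 1) / 2
points = [(i * phi) % 1.0 for i in range(1, 4001)]
x_best = min(points, key=penalized)
```

Because infeasible points carry the surcharge, the argmin over the well-spread point set lands inside the feasible region, close to the constrained minimizer.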
Minimizers of a Class of Constrained Vectorial Variational Problems: Part I
Hajaiej, Hichem
2014-04-18
In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that satisfies neither compactness, monotonicity, nor symmetry properties. Our approach hinges on the concentration-compactness method. In the second part, we will treat orthogonal constrained problems for another class of integrands using the density matrices method. © 2014 Springer Basel.
Existence of Dyons in Minimally Gauged Skyrme Model via Constrained Minimization
Gao, Zhifeng
2011-01-01
We prove the existence of electrically and magnetically charged particlelike static solutions, known as dyons, in the minimally gauged Skyrme model developed by Brihaye, Hartmann, and Tchrakian. The solutions are spherically symmetric, depend on two continuous parameters, and carry unit monopole and magnetic charges but continuous Skyrme charge and non-quantized electric charge induced from the 't Hooft electromagnetism. The problem amounts to obtaining a finite-energy critical point of an indefinite action functional, arising from the presence of electricity and the Minkowski spacetime signature. The difficulty with the absence of the Higgs field is overcome by achieving suitable strong convergence and obtaining uniform decay estimates at singular boundary points so that the negative sector of the action functional becomes tractable.
Logarithmic Minimal Models with Robin Boundary Conditions
Bourgine, Jean-Emile; Tartaglia, Elena
2016-01-01
We consider general logarithmic minimal models ${\\cal LM}(p,p')$, with $p,p'$ coprime, on a strip of $N$ columns with the $(r,s)$ Robin boundary conditions introduced by Pearce, Rasmussen and Tipunin. The associated conformal boundary conditions are labelled by the Kac labels $r\\in{\\Bbb Z}$ and $s\\in{\\Bbb N}$. The Robin vacuum boundary condition, labelled by $(r,s\\!-\\!\\frac{1}{2})=(0,\\mbox{$\\textstyle \\frac{1}{2}$})$, is given as a linear combination of Neumann and Dirichlet boundary conditions. The general $(r,s)$ Robin boundary conditions are constructed, using fusion, by acting on the Robin vacuum boundary with an $(r,s)$-type seam consisting of an $r$-type seam of width $w$ columns and an $s$-type seam of width $d=s-1$ columns. The $r$-type seam admits an arbitrary boundary field which we fix to the special value $\\xi=-\\tfrac{\\lambda}{2}$ where $\\lambda=\\frac{(p'-p)\\pi}{2p'}$ is the crossing parameter. The $s$-type boundary introduces $d$ defects into the bulk. We consider the associated quantum Hamiltoni...
Wormholes minimally violating the null energy condition
Bouhmadi-Lopez, Mariam; Martin-Moruno, Prado
2014-01-01
We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted "the little sibling of the big rip", where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyze the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalisation of the eq...
Institute of Scientific and Technical Information of China (English)
吴斌; 崔洪泉; 郑权
2005-01-01
A class of discontinuous penalty functions was proposed to solve constrained minimization problems with the integral approach to global optimization. m-mean value and v-variance optimality conditions of a constrained and penalized minimization problem were investigated. A nonsequential algorithm was proposed. Numerical examples were given to illustrate the effectiveness of the algorithm.
Wormholes minimally violating the null energy condition
Bouhmadi-López, Mariam; Lobo, Francisco S. N.; Martín-Moruno, Prado
2014-11-01
We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted "the little sibling of the big rip", where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyse the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalization of the equation of state considered above and analyse the respective stability regions. In particular, we obtain a specific wormhole solution with an asymptotic behaviour corresponding to a global monopole.
Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform
Gato-Rivera, Beatriz
1992-01-01
A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.
Predictions for constrained minimal supersymmetry with bottom-$\\tau$ mass unification
Kolda, Christopher; Roszkowski, Leszek; Wells, James D; Kane, G L
1994-01-01
We examine the Constrained Minimal Supersymmetric Standard Model (CMSSM) with an additional requirement of strict b-τ unification in the region of small tan β. We find that the parameter space becomes completely limited below about 1 TeV by physical constraints alone, without a fine-tuning constraint. We study the resulting phenomenological consequences, and point out several ways of falsifying the adopted b-τ unification assumption. We also comment on the effect of a constraint from the non-observation of proton decay.
A COMBINATORIAL PROPERTY OF PALLET-CONSTRAINED TWO MACHINE FLOW SHOP PROBLEM IN MINIMIZING MAKESPAN
Institute of Scientific and Technical Information of China (English)
HOU Sixiang; Han Hoogeveen; Petra Schuurman
2002-01-01
We consider the problem of scheduling n jobs in a pallet-constrained flow shop so as to minimize the makespan. In such a flow shop environment, each job needs a pallet the entire time, from the start of its first operation until the completion of the last operation, and the number of pallets in the shop at any given time is limited by a positive integer K ≤ n. Generally speaking, the optimal schedules may be passing schedules. In this paper, we present a combinatorial property which shows that for two machines and K (K ≥ 3) pallets, there exists a no-passing schedule which is an optimal schedule for n ≤ 2K - 1, and 2K - 1 is tight.
Solving the Resource Constrained Project Scheduling Problem to Minimize the Financial Failure Risk
Directory of Open Access Journals (Sweden)
Zhi Jie Chen
2012-04-01
Full Text Available In practice, a project usually involves cash in- and out-flows associated with each activity. This paper aims to minimize the payment failure risk during project execution for the resource-constrained project scheduling problem (RCPSP). In such models, the money-time value, which is the product of the net cash in-flow and the time length from the completion time of each activity to the project deadline, provides a financial evaluation of project cash availability. The cash availability of a project schedule is defined as the sum of these money-time values over all activities, which is mathematically equivalent to the objective of minimizing total weighted completion time. This paper presents four memetic algorithms (MAs), which differ in the construction of the initial population and the restart strategy, and a double variable neighborhood search algorithm for solving the RCPSP. An experiment is conducted to evaluate the performance of these algorithms based on the same number of solutions calculated using ProGen-generated benchmark instances. The results indicate that the MAs using a regret-based biased sampling rule to generate the initial and restart populations outperform the other algorithms in terms of solution quality.
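The stated equivalence is a one-line identity: since the deadline D and the total net inflow are fixed for a given project, the sum of cf_j (D - C_j) equals D Σ cf_j - Σ cf_j C_j, so maximizing cash availability is the same as minimizing completion times weighted by the net cash in-flows. A quick numerical check with hypothetical data:

```python
def cash_availability(cash_flows, completions, deadline):
    # sum of money-time values: net in-flow times time left to the deadline
    return sum(cf * (deadline - c) for cf, c in zip(cash_flows, completions))

def weighted_completion(cash_flows, completions):
    # total completion time weighted by the net cash in-flows
    return sum(cf * c for cf, c in zip(cash_flows, completions))
```

For any schedule, cash_availability equals deadline * sum(cash_flows) - weighted_completion, so the two objectives differ only by a schedule-independent constant.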
Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem
Directory of Open Access Journals (Sweden)
Xiaomei Zhang
2012-01-01
Full Text Available We present some sufficient global optimality conditions for a special cubic minimization problem with box constraints or binary constraints by extending the global subdifferential approach proposed by V. Jeyakumar et al. (2006). The present conditions generalize the results developed in the work of V. Jeyakumar et al., where a quadratic minimization problem with box constraints or binary constraints was considered. In addition, a special diagonal matrix is constructed, which provides a convenient method for verifying the proposed sufficient conditions. A reformulation of the sufficient conditions then follows. It is worth noting that this reformulation is also applicable to the quadratic minimization problem with box or binary constraints considered in the works of V. Jeyakumar et al. (2006) and Y. Wang et al. (2010). Finally, some examples demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain nonconvex cubic minimization problems.
Constraining Pre-Eruptive Storage Conditions at Nea Kameni, Santorini
Hunt, C. A.; Barclay, J.; Pyle, D. M.
2005-12-01
Santorini is one of the most active volcanic centres on the South Aegean Volcanic Arc. Historic volcanism culminated in the catastrophic eruption in 1600 BC which destroyed the Minoan civilisation and generated a water-filled caldera 8 km long and 4 km wide. Recent activity has been confined to the formation of two intra-caldera shield volcanoes, the Kameni islands, the last eruption occurring in 1950. The Kameni lavas are sparsely porphyritic dacites containing phenocrysts of labradorite, augite, hypersthene and magnetite. Xenocrysts of olivine, anorthite and Mg-rich augite are derived from the disaggregation of mafic xenoliths. The most striking feature of these lavas is their relatively homogeneous nature despite 2000 years of activity. The low variability in silica content (64-68 wt.%) has prompted suggestions that the chamber may be chemically or thermally buffered. Even with numerous petrological studies, little is known about the dynamics of the system. In this instance experimental phase equilibria are unequivocally one of the best ways of establishing the 'real' phenocryst assemblage. This research uses phase equilibria experiments to constrain pre-eruptive storage conditions at Nea Kameni. Using a natural sample from the 1866-1870 Georgios lavas as a starting composition, a series of water-saturated experiments has been undertaken in a rapid-quench cold seal pressure vessel over a temperature and pressure range of 800-900°C and 80-150 MPa. Comparison of the experimental phase equilibria with the natural assemblage suggests magma storage at water-saturated conditions at >900°C and >100 MPa. These results suggest the magma chamber is at a depth of ~2.8-3.5 km, which is comparable to the calculated depth estimates of earlier studies (2-4 km). Further experiments at higher temperatures will constrain the system more closely and these results will also be presented.
Ellis, John; Savage, Christopher; Spanos, Vassilis C
2010-01-01
We evaluate the neutrino fluxes to be expected from neutralino LSP annihilations inside the Sun, within the minimal supersymmetric extension of the Standard Model with supersymmetry-breaking scalar and gaugino masses constrained to be universal at the GUT scale (the CMSSM). We find that there are large regions of typical CMSSM $(m_{1/2}, m_0)$ planes where the LSP density inside the Sun is not in equilibrium, so that the annihilation rate may be far below the capture rate. We show that neutrino fluxes are dependent on the solar model at the 20% level, and adopt the AGSS09 model of Serenelli et al. for our detailed studies. We find that there are large regions of the CMSSM $(m_{1/2}, m_0)$ planes where the capture rate is not dominated by spin-dependent LSP-proton scattering, e.g., at large $m_{1/2}$ along the CMSSM coannihilation strip. We calculate neutrino fluxes above various threshold energies for points along the coannihilation/rapid-annihilation and focus-point strips where the CMSSM yields the correct ...
Energy Technology Data Exchange (ETDEWEB)
Felmy, A.R.
1990-04-01
This document is a user's manual and technical reference for the computerized chemical equilibrium model GMIN. GMIN calculates the chemical composition of systems composed of pure solid phases, solid-solution phases, gas phases, adsorbed phases, and the aqueous phase. In the aqueous phase model, the excess solution free energy is modeled using the equations developed by Pitzer and his coworkers, which are valid to high ionic strengths. The Davies equation can also be used. Activity coefficients for nonideal solid-solution phases are calculated using parameters of a polynomial expansion in mole fraction of the excess free energy of mixing. The free energy of adsorbed-phase species is described by the triple-layer site-binding model. The mathematical algorithm incorporated into GMIN is based upon a constrained minimization of the Gibbs free energy. This algorithm is numerically stable and reliably converges to a free energy minimum. The database for GMIN contains all standard chemical potentials and Pitzer ion-interaction parameters necessary to model the system Na-K-Ca-Mg-H-Cl-SO₄-CO₂-B(OH)₄-H₂O at 25°C.
Directory of Open Access Journals (Sweden)
Zhanpeng Fang
2015-01-01
Full Text Available A topology optimization method is proposed to minimize the resonant response of plates with constrained layer damping (CLD) treatment under specified broadband harmonic excitations. The topology optimization problem is formulated and the square of the displacement resonant response in the frequency domain at a specified point is taken as the objective function. Two sensitivity analysis methods are investigated and discussed. The derivative of the modal damping ratio is not considered in the conventional sensitivity analysis method. An improved sensitivity analysis method considering the derivative of the modal damping ratio is developed to improve the computational accuracy of the sensitivity. The evolutionary structural optimization (ESO) method is used to search for the optimal layout of CLD material on plates. Numerical examples and experimental results show that the optimal layout of CLD treatment on the plate from the proposed topology optimization using either the conventional or the improved sensitivity analysis can reduce the displacement resonant response. However, the optimization using the improved sensitivity analysis produces a higher modal damping ratio and a smaller displacement resonant response than that using the conventional sensitivity analysis.
Heat Flow for the Minimal Surface with Plateau Boundary Condition
Institute of Scientific and Technical Information of China (English)
Kung Ching CHANG; Jia Quan LIU
2003-01-01
The heat flow for the minimal surface under the Plateau boundary condition is defined to be a parabolic variational inequality, and then the existence, uniqueness, regularity, continuous dependence on the initial data and the asymptotics are studied. It is applied as a deformation of the level sets in the critical point theory.
Invasive and minimally invasive surgical techniques for back pain conditions.
Lavelle, William; Carl, Allen; Lavelle, Elizabeth Demers
2007-12-01
This article summarizes current issues related to invasive and minimally invasive surgical techniques for back pain conditions. It describes pain generators and explains theories about how discs fail. The article discusses techniques for treating painful sciatica, painful motion segments, and spinal stenosis. Problems related to current imaging are also presented. The article concludes with a discussion about physical therapy.
Rigie, David
2014-01-01
We explore the use of the recently proposed "total nuclear variation" (TNV) [Rigie2014, Holt2014] as a regularizer for reconstructing multi-channel, spectral CT images. This convex penalty is a natural extension of the total variation (TV) to vector-valued images and has the advantage of encouraging common edge locations and a shared gradient direction among image channels. We show how it can be incorporated into a general, data-constrained reconstruction framework and derive update equations based on the first-order, primal-dual algorithm of Chambolle and Pock. Early simulation studies based on the numerical XCAT phantom indicate that the inter-channel coupling introduced by the TNV leads to better preservation of image features at high levels of regularization, compared to independent, channel-by-channel TV reconstructions.
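The TNV penalty can be written down compactly: at each pixel, stack the channel gradients into a C x 2 Jacobian and sum its singular values (the nuclear norm). A numpy sketch with forward differences and illustrative boundary handling:

```python
import numpy as np

def tnv(img):
    """Total nuclear variation of a multi-channel 2D image
    (shape: channels x H x W). For each pixel, form the C x 2 Jacobian
    of forward differences and sum its singular values (nuclear norm).
    Sketch only; edge handling and scaling may differ from the paper."""
    C, H, W = img.shape
    gx = np.diff(img, axis=2, append=img[:, :, -1:])  # d/dx, zero at right edge
    gy = np.diff(img, axis=1, append=img[:, -1:, :])  # d/dy, zero at bottom edge
    J = np.stack([gx, gy], axis=-1)          # C x H x W x 2
    J = np.moveaxis(J, 0, 2)                 # H x W x C x 2
    s = np.linalg.svd(J.reshape(H * W, C, 2), compute_uv=False)
    return float(s.sum())
```

For a single channel this reduces to the usual isotropic TV (the nuclear norm of a 1 x 2 matrix is its Euclidean norm); for C identical channels the shared-gradient coupling makes the penalty grow like sqrt(C) rather than C, which is what rewards common edges across channels.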
Exploring minimal biotinylation conditions for biosensor analysis using capture chips.
Papalia, Giuseppe; Myszka, David
2010-08-01
Using Biacore's new regenerateable streptavidin capture (CAP) sensor chips, we investigated a number of biotinylation conditions for target ligands. We explored standard amine as well as the less commonly used carboxyl biotinylation methods. We illustrate the time scales required for efficient biotinylation as well as the hazards of overbiotinylation. We evaluated a range of desalting methods, including spin columns, dialysis membranes, and filters. Finally, we tested the effects of common buffer components, such as Tris and glycerol, on the biotinylation process. Together, our results serve as a general guide of the steps to consider when minimally biotinylating a target ligand.
Energy Technology Data Exchange (ETDEWEB)
Nunez, Dario; Zavala, Jesus; Nellen, Lukas; Sussman, Roberto A [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (ICN-UNAM), AP 70-543, Mexico 04510 DF (Mexico); Cabral-Rosetti, Luis G [Departamento de Posgrado, Centro Interdisciplinario de Investigacion y Docencia en Educacion Tecnica (CIIDET), Avenida Universidad 282 Pte., Col. Centro, Apartado Postal 752, C. P. 76000, Santiago de Queretaro, Qro. (Mexico); Mondragon, Myriam, E-mail: nunez@nucleares.unam.mx, E-mail: jzavala@nucleares.unam.mx, E-mail: jzavala@shao.ac.cn, E-mail: lukas@nucleares.unam.mx, E-mail: sussman@nucleares.unam.mx, E-mail: lgcabral@ciidet.edu.mx, E-mail: myriam@fisica.unam.mx [Instituto de Fisica, Universidad Nacional Autonoma de Mexico (IF-UNAM), Apartado Postal 20-364, 01000 Mexico DF (Mexico); Collaboration: For the Instituto Avanzado de Cosmologia, IAC
2008-05-15
We derive an expression for the entropy of a dark matter halo described using a Navarro-Frenk-White model with a core. The comparison of this entropy with that of dark matter in the freeze-out era allows us to constrain the parameter space in mSUGRA models. Moreover, combining these constraints with the ones obtained from the usual abundance criterion and demanding that these criteria be consistent with the 2σ bounds for the abundance of dark matter, 0.112 ≤ Ω_DM h² ≤ 0.122, we are able to clearly identify validity regions among the values of tan β, which is one of the parameters of the mSUGRA model. We found that for the regions of the parameter space explored, small values of tan β are not favored; only for tan β ≈ 50 are the two criteria significantly consistent. In the region where the two criteria are consistent we also found a lower bound for the neutralino mass, m_χ ≥ 141 GeV.
Maximum Entropy and Probability Kinematics Constrained by Conditionals
Directory of Open Access Journals (Sweden)
Stefan Lukits
2015-03-01
Full Text Available Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
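For reference, Jeffrey's updating principle reweights the cells of a partition to their new probabilities while keeping the conditionals within each cell fixed (rigidity); under the new marginals as constraints, this is also the minimum-relative-entropy update. A small sketch with a hypothetical joint distribution:

```python
def jeffrey_update(joint, new_marginals):
    """Jeffrey conditioning on a partition. `joint` maps (cell, outcome)
    pairs to probabilities; `new_marginals` maps each cell to its new
    probability. Conditionals within each cell are preserved (rigidity)."""
    old = {}
    for (cell, _), p in joint.items():
        old[cell] = old.get(cell, 0.0) + p
    return {(cell, o): p * new_marginals[cell] / old[cell]
            for (cell, o), p in joint.items()}
```

For example, moving the partition {B, C} from probabilities (0.4, 0.6) to (0.6, 0.4) rescales every outcome inside B by 0.6/0.4 and every outcome inside C by 0.4/0.6.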
Modeling frictional melt injection to constrain coseismic physical conditions
Sawyer, William J.; Resor, Phillip G.
2017-07-01
Pseudotachylyte, a fault rock formed through coseismic frictional melting, provides an important record of coseismic mechanics. In particular, injection veins formed at a high angle to the fault surface have been used to estimate rupture directivity, velocity, pulse length, stress drop, as well as slip weakening distance and wall rock stiffness. These studies have generally treated injection vein formation as a purely elastic process and have assumed that processes of melt generation, transport, and solidification have little influence on the final vein geometry. Using a pressurized crack model, an analytical approximation of injection vein formation based on dike intrusion, we find that the timescales of quenching and flow propagation may be similar for a subset of injection veins compiled from the Asbestos Mountain Fault, USA, Gole Larghe Fault Zone, Italy, and the Fort Foster Brittle Zone, USA under minimum melt temperature conditions. 34% of the veins are found to be flow limited, with a final geometry that may reflect cooling of the vein before it reaches an elastic equilibrium with the wall rock. Formation of these veins is a dynamic process whose behavior is not fully captured by the analytical approach. To assess the applicability of simplifying assumptions of the pressurized crack we employ a time-dependent finite-element model of injection vein formation that couples elastic deformation of the wall rock with the fluid dynamics and heat transfer of the frictional melt. This finite element model reveals that two basic assumptions of the pressurized crack model, self-similar growth and a uniform pressure gradient, are false. The pressurized crack model thus underestimates flow propagation time by 2-3 orders of magnitude. Flow limiting may therefore occur under a wider range of conditions than previously thought. Flow-limited veins may be recognizable in the field where veins have tapered profiles or smaller aspect ratios than expected. The occurrence and
Proton Decay and Cosmology Strongly Constrain the Minimal SU(5) Supergravity Model
Lopez, Jorge L.; Pois, H.
1993-01-01
We present the results of an extensive exploration of the five-dimensional parameter space of the minimal $SU(5)$ supergravity model, including the constraints of a long enough proton lifetime ($\tau_p>1\times10^{32}\y$) and a small enough neutralino cosmological relic density ($\Omega_\chi h^2_0\le1$). We find that the combined effect of these two constraints is quite severe, although still leaving a small region of parameter space with $m_{\tilde g,\tilde q}<1\TeV$. The allowed values of the proton lifetime extend up to $\tau_p\approx1\times10^{33}\y$ and should be fully explored by the SuperKamiokande experiment. The proton lifetime cut also entails the following mass correlations and bounds: $m_h\lsim100\GeV$, $m_\chi\approx{1\over2}m_{\chi^0_2}\approx0.15\gluino$, $m_{\chi^0_2}\approx m_{\chi^+_1}$, and $m_\chi<85\,(115)\GeV$, $m_{\chi^0_2,\chi^+_1}<165\,(225)\GeV$ for $\alpha_3=0.113\,(0.120)$. Finally, the {\it combined} proton decay and cosmology constraints predict that if $m_h\gsim75\,(80)\...
Obendorf, Hartmut
2009-01-01
The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.
Institute of Scientific and Technical Information of China (English)
De-tong Zhu
2009-01-01
In this paper we extend and improve the classical affine scaling interior-point Newton method for solving nonlinear optimization subject to linear inequality constraints in the absence of the strict complementarity assumption. Introducing a computationally efficient technique and employing an identification function for the definition of the new affine scaling matrix, we propose and analyze a new affine scaling interior-point Newton method which improves the Coleman and Li affine scaling matrix in [2] for solving linear inequality constrained optimization. Local superlinear and quadratic convergence of the proposed algorithm is established under the strong second-order sufficiency condition without assuming strict complementarity of the solution.
Institute of Scientific and Technical Information of China (English)
Chang-yin Zhou; Guo-ping He; Yong-li Wang
2006-01-01
In this paper, we propose a feasible QP-free method for solving nonlinear inequality constrained optimization problems. A new working set is proposed to estimate the active set. Specially, to determine the working set, the new method makes use of the multiplier information from the previous iteration, eliminating the need to compute a multiplier function. At each iteration, two or three reduced symmetric systems of linear equations with a common coefficient matrix involving only constraints in the working set are solved, and when the iterate is sufficiently close to a KKT point, only two of them are involved. Moreover, the new algorithm is proved to be globally convergent to a KKT point under mild conditions. Without assuming strict complementarity, the convergence rate is superlinear under a condition weaker than the strong second-order sufficiency condition. Numerical experiments illustrate the efficiency of the algorithm.
Efficient Constrained Regret Minimization
Mahdavi, Mehrdad; Jin, Rong
2012-01-01
Online learning constitutes a mathematical framework to analyze sequential decision making problems in adversarial environments. The learner repeatedly chooses an action, the environment responds with an outcome, and then the learner receives a reward for the played action. The goal of the learner is to maximize his total reward. However, there are situations in which, in addition to maximizing the cumulative reward, there are some additional constraints/goals on the sequence of decisions that must be satisfied by the learner. For example, in online marketing, simultaneously maximizing the cumulative reward and the number of buyers to take advantage of word-of-mouth advertising for future marketing seems to be a more ambitious goal than only maximizing cumulative reward. As another example, learning from costly expert advice captures more realistic settings than the original setting in applications such as routing in networks with power constraint. In this paper we study an extension to the online le...
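The unconstrained baseline that such constrained formulations extend is online projected gradient descent, which keeps every iterate feasible by projecting after each step. A minimal sketch (the paper's long-term-constraint setting is more elaborate; the names and step-size schedule here are illustrative):

```python
import numpy as np

def online_projected_gd(losses, dim, radius=1.0, eta0=0.1):
    """Online gradient descent with Euclidean projection onto an L2 ball.
    `losses` yields (loss, gradient) function pairs, one per round.
    Illustrative baseline, not the paper's algorithm."""
    x = np.zeros(dim)
    total = 0.0
    for t, (loss, grad) in enumerate(losses, start=1):
        total += loss(x)                     # suffer the loss of round t
        x = x - (eta0 / np.sqrt(t)) * grad(x)
        nrm = np.linalg.norm(x)
        if nrm > radius:                     # project back into the ball
            x *= radius / nrm
        # (regret is measured against the best fixed feasible point)
    return x, total
```

With a decaying step size eta0/sqrt(t), this achieves O(sqrt(T)) regret against any fixed point in the ball, which is the standard benchmark the constrained variants build on.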
Stegeman, Alwin; De Almeida, Andre L. F.
2009-01-01
In this paper, we derive uniqueness conditions for a constrained version of the parallel factor (Parafac) decomposition, also known as canonical decomposition (Candecomp). Candecomp/Parafac (CP) decomposes a three-way array into a prespecified number of outer product arrays. The constraint is that
Cotter, Simon L.
2016-10-01
Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the "fast" and "slow" variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
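The "null space of the generator" step can be illustrated on a toy continuous-time Markov chain: the invariant distribution of the constrained (fast) subsystem is the normalized left null vector of its generator, and effective rates for the slow variables are averages against it. A sketch with made-up rates, not from the paper:

```python
import numpy as np
from scipy.linalg import null_space

# Toy 3-state generator for the fast (constrained) subsystem;
# rows sum to zero. The rate values are illustrative only.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 1.0,  1.0, -2.0]])

v = null_space(Q.T)[:, 0]      # left null vector: pi @ Q = 0
pi = v / v.sum()               # normalize to a probability vector

# effective (averaged) rate of a slow reaction whose propensity a(x)
# depends on the fast state: E_pi[a]
a = np.array([0.0, 1.0, 2.0])
effective_rate = float(pi @ a)
```

For an irreducible generator the null space is one-dimensional and strictly of one sign, so the normalization always yields a valid stationary distribution.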
Initial Condition of Relic Gravitational Waves Constrained by LIGO S6 and Multiple Interferometers
Chen, Jie-Wen; Zhao, Wen; Tong, Ming-Lei
2014-01-01
The relic gravitational wave (RGW) generated during inflation depends on the initial condition via the amplitude, the spectral index $n_t$ and the running index $\alpha_t$. CMB observations so far have only constrained the tensor-scalar ratio $r$, but not $n_t$ nor $\alpha_t$. Complementary to this, the ground-based interferometric detectors working at $\sim 10^2$ Hz are able to constrain the spectral indices that influence the spectrum sensitively at high frequencies. In this work we give a proper normalization of the analytical spectrum at the low frequency end, yielding a modification by a factor of $\sim 1/50$ to the previous treatment. We calculate the signal-noise ratios (SNR) for various ($n_t,\alpha_t$) at fixed $r=0.2$ by S6 of LIGO H-L, and obtain the observational upper limit on the running index $\alpha_t < 0.01364$.
Karagiannakis, N; Pallis, C
2015-01-01
We analyze the parametric space of the constrained minimal supersymmetric standard model with μ > 0 supplemented by a generalized asymptotic Yukawa coupling quasi-unification condition which yields acceptable masses for the fermions of the third family. We impose constraints from the cold dark matter abundance in the universe and its direct detection experiments, B-physics, as well as the masses of the sparticles and the lightest neutral CP-even Higgs boson. Fixing the mass of the latter to its central value from the LHC and taking 40 ≤ tan β ≤ 50, we find a relatively wide allowed parameter space with -11 ≤ A_0/M_{1/2} ≤ 15 and the mass of the lightest sparticle in the range (0.09-1.1) TeV. This sparticle is possibly detectable by present cold dark matter direct search experiments. The required fine-tuning for the electroweak symmetry breaking is much milder than the one needed in the neutralino-stau coannihilation region of the same model.
Flemming, Jens; Hofmann, Bernd
2011-08-01
In this paper, we enlighten the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions already known in Hilbert spaces to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances.
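For orientation, the classical unconstrained Hilbert-space case of Tikhonov regularization has a closed-form minimizer, a useful baseline even though the paper's constrained Banach-space setting does not:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Minimizer of ||A x - b||^2 + alpha * ||x||^2 (classical,
    unconstrained Hilbert-space case; the constrained and Banach-space
    settings treated in the paper have no such closed form)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

As alpha decreases toward zero the solution approaches the least-squares solution; increasing alpha shrinks the norm of the reconstruction, trading data fidelity for stability.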
A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery
Directory of Open Access Journals (Sweden)
WANG Shaoyu
2016-12-01
Full Text Available Remote sensing imagery has abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take the spatial information into account, so the results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thus extracting spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of a pixel's classified posterior probability can be obtained using loopy belief propagation. The experiment shows that the proposed algorithm can effectively maintain the shape features of objects, and the classification accuracy is higher than that of traditional algorithms.
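The baseline being modified is plain fuzzy c-means, which alternates membership and centroid updates with no spatial term; the proposed classifier adds the CRF-based neighborhood constraint on top of this. A compact sketch of the baseline (the initialization here is a crude placeholder):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Plain fuzzy c-means (no spatial/CRF term). X: (n_samples, n_features).
    Alternates membership and centroid updates; init from the first c points."""
    centers = X[:c].astype(float).copy()
    for _ in range(iters):
        # distances of every sample to every center: shape (c, n_samples)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)            # fuzzy memberships, columns sum to 1
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U
```

The CRF-constrained variant in the paper would additionally bias each column of U toward the labels of neighboring pixels before renormalizing.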
How non-zero initial conditions affect the minimality of linear discrete-time systems
Willigenburg, van L.G.; Koning, de W.L.
2008-01-01
From the state-space approach to linear systems, promoted by Kalman, we learned that minimality is equivalent with reachability together with observability. Our past research on optimal reduced-order LQG controller synthesis revealed that if the initial conditions are non-zero, minimality is no longer guaranteed.
THE MINIMAL PROPERTY OF THE CONDITION NUMBER OF INVERTIBLE LINEAR BOUNDED OPERATORS IN BANACH SPACES
Institute of Scientific and Technical Information of China (English)
陈果良; 魏木生
2002-01-01
In this paper we show that in error estimates, the condition number κ(T) of any invertible linear bounded operator T in Banach spaces is minimal. We also extend the Hahn-Banach theorem and other related results.
Constraining Eruptive Conditions From Lava Flow Morphometry: A Case Study With Field Evidence
Bowles, Z. R.; Clarke, A.; Greeley, R.
2007-12-01
Volcanism is widely recognized as one of the primary factors affecting the surfaces of solid planets and satellites throughout the solar system. Basaltic lava is thought to be the most common composition based on observed features typical of basaltic eruptions found on Earth. Lava flows are one of the most easily recognizable landforms on planetary surfaces and their features may provide information about eruption dynamics, lava rheology, and potential hazards. More recently, researchers have taken a multi-faceted approach to combine remote sensing, field observations and quantitative modeling to constrain volcanic activity on Earth and other planets. Here we test a number of published models, including empirically derived relationships from Mt. Etna and Kilauea, models derived from laboratory experiments, and theoretical models previously applied to remote sensing of planetary surfaces, against well-documented eruptions from the literature and field observations. We find that the Graetz (Hulme and Felder, 1977, Phil.Trans., 285, 227 - 234) method for estimating effusion rates compares favorably with published eruption data, while, on the other hand, inverting lava flow length prediction models to estimate effusion rates leads to several orders of magnitude in error. The Graetz method also better constrains eruption duration. Simple radial spreading laws predict Hawaiian lava flow lengths quite well, as do using the thickness of the lava flow front and chilled crust. There was no observed difference between results from models thought to be exclusive to aa or pahoehoe flow fields. Interpreting historic conditions should therefore follow simple relationships to observable morphologies no matter the composition or surface texture. We have applied the most robust models to understand the eruptive conditions and lava rheology of the Batamote Mountains near Ajo, AZ, an eroded shield volcano in southern Arizona. We find effusion rates on the order of 100 - 200 cubic
Institute of Scientific and Technical Information of China (English)
SU BaiLi; LI ShaoYuan; ZHU QuanMin
2009-01-01
Stabilization of the constrained switched nonlinear systems is an attractive research subject. Predictive control can handle variable constraints well and make the system stable. Its stability is typically based on an assumption of initial feasibility of the optimization problem; however the set of initial conditions, starting from where a given predictive formulation is guaranteed to be feasible, is not explicitly characterized. In this paper, a hybrid predictive control method is proposed for a class of switched nonlinear systems with input constraints and unmeasurable states. The main idea is to design a mixed controller using Lyapunov functions and a state observer, which switches appropriately between a bounded feedback controller and a predictive controller, and to give an explicitly characterized set of initial conditions to stabilize each closed-loop subsystem. For the whole switched nonlinear system, a suitable switched law based on the state estimation is designed to orchestrate the transitions between the constituent modes and their respective controllers, and to ensure the whole closed-loop system's stability. The simulation results for a chemical process show the validity of the controller proposed in this paper.
Brannon, Sean; Kankelborg, Charles
2017-08-01
Coronal jets typically appear as thin, collimated structures in EUV and X-ray wavelengths, and are understood to be initiated by magnetic reconnection in the lower corona or upper chromosphere. Plasma that is heated and accelerated upward into coronal jets may therefore carry indirect information on conditions in the reconnection region and current sheet located at the jet base. On 2017 October 14, the Interface Region Imaging Spectrograph (IRIS) and Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO/AIA) observed a series of jet eruptions originating from NOAA AR 12599. The jet structure has a length-to-width ratio that exceeds 50, and remains remarkably straight throughout its evolution. Several times during the observation bright blobs of plasma are seen to erupt upward, ascending and subsequently descending along the structure. These blobs are cotemporal with footpoint and arcade brightenings, which we believe indicates multiple episodes of reconnection at the structure base. Through imaging and spectroscopic analysis of jet and footpoint plasma we determine a number of properties, including the line-of-sight inclination, the temperature and density structure, and lift-off velocities and accelerations of jet eruptions. We use these properties to constrain the geometry of the jet structure and conditions in reconnection region.
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
Stabilization of the constrained switched nonlinear systems is an attractive research subject. Predictive control can handle variable constraints well and make the system stable. Its stability is typically based on an assumption of initial feasibility of the optimization problem; however, the set of initial conditions, starting from where a given predictive formulation is guaranteed to be feasible, is not explicitly characterized. In this paper, a hybrid predictive control method is proposed for a class of switched nonlinear systems with input constraints and unmeasurable states. The main idea is to design a mixed controller using Lyapunov functions and a state observer, which switches appropriately between a bounded feedback controller and a predictive controller, and to give an explicitly characterized set of initial conditions to stabilize each closed-loop subsystem. For the whole switched nonlinear system, a suitable switching law based on the state estimation is designed to orchestrate the transitions between the constituent modes and their respective controllers, and to ensure the whole closed-loop system's stability. The simulation results for a chemical process show the validity of the controller proposed in this paper.
Tong, Ming-Lei; Zhao, Wen; Liu, Jin-Zhong; Zhao, Cheng-Shi; Yang, Ting-Gao
2013-01-01
In the non-standard model of relic gravitational waves (RGWs) generated in the early universe, the theoretical spectrum is mainly described by an amplitude $r$ and a spectral index $\\beta$, the latter usually being determined by the slope of the inflation potential. Pulsar timing arrays (PTAs) data have imposed constraints on the amplitude of strain spectrum for a power-law form as a phenomenological model. Applying these constraints to a generic, theoretical spectrum with $r$ and $\\beta$ as independent parameters, we convert the PTAs constraint into an upper bound on the index $\\beta$, which turns out to be less stringent than those upper bounds from BBN, CMB, and LIGO/VIRGO, respectively. Moreover, it is found that PTAs constrain the non-standard RGWs more stringently than the standard RGWs. If the condition of the quantum normalization is imposed upon a theoretical spectrum of RGWs, $r$ and $\\beta$ become related. With this condition, a minimum requirement of the horizon size during inflation is greater t...
Huang, Kuo-Chan; Tsai, Mu-Jung; Lu, Sin-Ji; Hung, Chun-Hao
2016-01-01
Composite cloud services based on the methodologies of Software as a Service and Service-Oriented Architecture are transforming how people develop and use software. Cloud service providers are confronting the service selection problem when composing composite cloud services. This paper deals with an important type of service selection problem, minimizing the total cost of providing a composite cloud service with respect to the constraints of service level agreement (SLA). Two types of SLA are considered in the study: per-request-based SLA and ratio-based SLA. We present three service selection approaches for dynamic cloud environments where services' performance might vary with time. The first two are iterative compound approaches for per-request-based SLA and the third approach is a one-step method for ratio-based SLA based on Chebyshev's theorem and nonlinear programming. Experimental results show that our approaches outperform the previous method significantly in terms of total cost reduction.
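The role Chebyshev's theorem can play in a ratio-based SLA check can be sketched as follows. The services, costs, latencies, and deadline below are all hypothetical, and we use the two-sided Chebyshev bound for simplicity (a one-sided bound would be tighter); this is only an illustration of the idea, not the paper's method.

```python
import math

# Hypothetical candidates: name -> (cost per request, mean latency, std latency).
services = {
    "s1": (1.0, 80.0, 10.0),
    "s2": (2.5, 50.0, 5.0),
    "s3": (4.0, 30.0, 2.0),
}

def meets_ratio_sla(mean, std, deadline, ratio):
    """Chebyshev: P(|X - mean| >= k*std) <= 1/k**2, so at least a fraction
    1 - 1/k**2 of requests finishes by mean + k*std, whatever the distribution."""
    k = 1.0 / math.sqrt(1.0 - ratio)     # smallest k with 1 - 1/k**2 >= ratio
    return mean + k * std <= deadline

def cheapest_service(deadline, ratio):
    # Minimize cost subject to the distribution-free ratio-based SLA guarantee.
    ok = [(cost, name) for name, (cost, m, s) in services.items()
          if meets_ratio_sla(m, s, deadline, ratio)]
    return min(ok)[1] if ok else None
```

For a 95% ratio SLA, k ≈ 4.47, so a service guarantees the SLA whenever mean + 4.47·std fits inside the deadline, and the cheapest such service is selected.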
Rahmouni, A.; Beidouri, Z.; Benamar, R.
2013-09-01
The purpose of the present paper was the development of a physically discrete model for geometrically nonlinear free transverse constrained vibrations of beams, which may replace, if sufficient degrees of freedom are used, the previously developed continuous nonlinear beam constrained vibration models. The discrete model proposed is an N-Degrees of Freedom (N-dof) system made of N masses placed at the ends of solid bars connected by torsional springs representing the beam flexural rigidity. The large transverse displacements of the bar ends induce a variation in their lengths, giving rise to axial forces modelled by longitudinal springs. The calculations made allowed application of the semi-analytical model developed previously for nonlinear structural vibration involving three tensors, namely the mass tensor mij, the linear rigidity tensor kij and the nonlinearity tensor bijkl. By application of Hamilton's principle and spectral analysis, the nonlinear vibration problem is reduced to a nonlinear algebraic system, examined for increasing numbers of dof. The results obtained by the physically discrete model showed a good agreement and a quick convergence to the equivalent continuous beam model, for various fixed boundary conditions, for both the linear frequencies and the nonlinear backbone curves, and also for the corresponding mode shapes. The model, validated here for the simply supported and clamped ends, may be used in further works to present the flexural linear and nonlinear constrained vibrations of beams with various types of discontinuities in the mass or in the elasticity distributions. The aims were: the development of an adequate discrete model including the effect of the axial strains induced by large displacement amplitudes, which is predominant in geometrically nonlinear transverse constrained vibrations of beams [1]; the investigation of the results such a discrete model may lead to in the case of nonlinear free vibrations; and the development of the analogy between the
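The convergence of a discrete finite-dof model to the continuous beam, reported above for the linear frequencies, can be checked on a minimal example. The sketch below is our own finite-difference toy (not the paper's mass/torsional-spring model): for a nondimensional simply supported Euler-Bernoulli beam, the eigenvalues of the discrete fourth-difference operator converge to the continuum values (nπ)⁴.

```python
import numpy as np

def beam_eigs(N):
    # N interior nodes on (0, 1); pinned-pinned conditions w = w'' = 0 make the
    # discrete fourth-difference operator exactly the square of the standard
    # second-difference matrix D2.
    h = 1.0 / (N + 1)
    D2 = (np.diag(2.0 * np.ones(N))
          - np.diag(np.ones(N - 1), 1)
          - np.diag(np.ones(N - 1), -1)) / h**2
    return np.sort(np.linalg.eigvalsh(D2 @ D2))

# First discrete eigenvalue with 200 dof vs the continuum value pi**4 ~ 97.409.
lam1 = beam_eigs(200)[0]
```

With 200 dof the relative error of the first eigenvalue against π⁴ is already well below 0.1%, mirroring the quick discrete-to-continuous convergence the abstract describes for the linear frequencies.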
Right-Left Approach and Reaching Arm Movements of 4-Month Infants in Free and Constrained Conditions
Morange-Majoux, Francoise; Dellatolas, Georges
2010-01-01
Recent theories on the evolution of language (e.g. Corballis, 2009) emphasize the interest of early manifestations of manual laterality and manual specialization in human infants. In the present study, left- and right-hand movements towards a midline object were observed in 24 infants aged 4 months in a constrained condition, in which the hands…
Necessary and Sufficient Conditions of Solution Uniqueness in ℓ1 Minimization (Preprint)
2012-08-01
… only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, the basis pursuit denoising model, and the Lasso model, as … ways to recognize unique solutions and verify the uniqueness conditions numerically. Keywords: ℓ1 minimization, basis pursuit, LASSO, solution uniqueness.
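The basis pursuit model mentioned in this record can be made concrete with a small sketch. The reformulation below (min ‖x‖₁ s.t. Ax = b as a linear program with x = u − v, u, v ≥ 0) is a standard textbook device, not code from the preprint, and the matrix and right-hand side are our own toy instance whose ℓ1 minimizer is unique by inspection.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: b is generated by the sparse vector x0 = [0, 0, 0, 1].
A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])
b = np.array([1., 1., 1.])

# Basis pursuit as an LP: minimize sum(u) + sum(v) = ||x||_1 with x = u - v.
m, n = A.shape
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])                      # A @ (u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]                      # recovered minimizer
```

For this instance every feasible point has the form (1−t, 1−t, 1−t, t), whose ℓ1 norm 3|1−t| + |t| is uniquely minimized at t = 1, so the LP recovers the sparse generator exactly.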
Directory of Open Access Journals (Sweden)
Bernaba M
2014-11-01
Full Text Available Mario Bernaba, Kevin A Johnson, Jiang-Ti Kong, Sean Mackey
Stanford Systems Neuroscience and Pain Laboratory, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA
Purpose: Conditioned pain modulation (CPM) is an experimental approach for probing endogenous analgesia by which one painful stimulus (the conditioning stimulus) may inhibit the perceived pain of a subsequent stimulus (the test stimulus). Animal studies suggest that CPM is mediated by a spino–bulbo–spinal loop using objective measures such as neuronal firing. In humans, pain ratings are often used as the end point. Because pain self-reports are subject to cognitive influences, we tested whether cognitive factors would impact on CPM results in healthy humans.
Methods: We conducted a within-subject, crossover study of healthy adults to determine the extent to which CPM is affected by (1) threatening and reassuring evaluation and (2) imagery alone of a cold conditioning stimulus. We used a heat stimulus individualized to 5/10 on a visual analog scale as the testing stimulus and computed the magnitude of CPM by subtracting the postconditioning rating from the baseline pain rating of the heat stimulus.
Results: We found that although evaluation can increase the pain rating of the conditioning stimulus, it did not significantly alter the magnitude of CPM. We also found that imagery of cold pain alone did not result in a statistically significant CPM effect.
Conclusion: Our results suggest that CPM is primarily dependent on sensory input, and that the cortical processes of evaluation and imagery have little impact on CPM. These findings lend support for CPM as a useful tool for probing endogenous analgesia through subcortical mechanisms.
Keywords: conditioned pain modulation, endogenous analgesia, evaluation, imagery, cold pressor test, CHEPS, contact heat-evoked potential stimulator
Non-minimal coupling of torsion-matter satisfying null energy condition for wormhole solutions
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul; Rani, Shamaila [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)
2016-12-15
We explore wormhole solutions in a non-minimal torsion-matter coupled gravity by taking an explicit non-minimal coupling between the matter Lagrangian density and an arbitrary function of the torsion scalar. This coupling describes the transfer of energy and momentum between matter and torsion scalar terms. The violation of the null energy condition occurs through an effective energy-momentum tensor incorporating the torsion-matter non-minimal coupling, while normal matter is responsible for supporting the respective wormhole geometries. We consider the energy density in the form of a non-monotonically decreasing function along with two types of models. The first model is analogous to the curvature-matter coupling scenario, that is, the torsion scalar with T-matter coupling, while the second one involves a quadratic torsion term. In both cases, we obtain wormhole solutions satisfying the null energy condition. Also, we find that increasing the value of the coupling constant reduces or eliminates the violation of the null energy condition by matter. (orig.)
Dupuy, Nicolas; Bouaouli, Samira; Mauri, Francesco; Sorella, Sandro; Casula, Michele
2015-06-01
We study the ionization energy, electron affinity, and the π → π∗ (1La) excitation energy of the anthracene molecule, by means of variational quantum Monte Carlo (QMC) methods based on a Jastrow correlated antisymmetrized geminal power (JAGP) wave function, developed on molecular orbitals (MOs). The MO-based JAGP ansatz allows one to rigorously treat electron transitions, such as the HOMO → LUMO one, which underlies the 1La excited state. We present a QMC optimization scheme able to preserve the rank of the antisymmetrized geminal power matrix, thanks to a constrained minimization with projectors built upon symmetry selected MOs. We show that this approach leads to stable energy minimization and geometry relaxation of both ground and excited states, performed consistently within the correlated QMC framework. Geometry optimization of excited states is needed to make a reliable and direct comparison with experimental adiabatic excitation energies. This is particularly important in π-conjugated and polycyclic aromatic hydrocarbons, where there is a strong interplay between low-lying energy excitations and structural modifications, playing a functional role in many photochemical processes. Anthracene is an ideal benchmark to test these effects. Its geometry relaxation energies upon electron excitation are of up to 0.3 eV in the neutral 1La excited state, while they are of the order of 0.1 eV in electron addition and removal processes. Significant modifications of the ground state bond length alternation are revealed in the QMC excited state geometry optimizations. Our QMC study yields benchmark results for both geometries and energies, with values below chemical accuracy if compared to experiments, once zero point energy effects are taken into account.
Energy Technology Data Exchange (ETDEWEB)
Dupuy, Nicolas, E-mail: nicolas.dupuy@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, case 115, 4 place Jussieu, 75252 Paris Cedex 05 (France); Bouaouli, Samira, E-mail: samira.bouaouli@lct.jussieu.fr [Laboratoire de Chimie Théorique, Université Pierre et Marie Curie, case 115, 4 place Jussieu, 75252 Paris Cedex 05 (France); Mauri, Francesco, E-mail: francesco.mauri@impmc.upmc.fr; Casula, Michele, E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, case 115, 4 place Jussieu, 75252 Paris Cedex 05 (France); Sorella, Sandro, E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy)
2015-06-07
We study the ionization energy, electron affinity, and the π → π{sup ∗} ({sup 1}L{sub a}) excitation energy of the anthracene molecule, by means of variational quantum Monte Carlo (QMC) methods based on a Jastrow correlated antisymmetrized geminal power (JAGP) wave function, developed on molecular orbitals (MOs). The MO-based JAGP ansatz allows one to rigorously treat electron transitions, such as the HOMO → LUMO one, which underlies the {sup 1}L{sub a} excited state. We present a QMC optimization scheme able to preserve the rank of the antisymmetrized geminal power matrix, thanks to a constrained minimization with projectors built upon symmetry selected MOs. We show that this approach leads to stable energy minimization and geometry relaxation of both ground and excited states, performed consistently within the correlated QMC framework. Geometry optimization of excited states is needed to make a reliable and direct comparison with experimental adiabatic excitation energies. This is particularly important in π-conjugated and polycyclic aromatic hydrocarbons, where there is a strong interplay between low-lying energy excitations and structural modifications, playing a functional role in many photochemical processes. Anthracene is an ideal benchmark to test these effects. Its geometry relaxation energies upon electron excitation are of up to 0.3 eV in the neutral {sup 1}L{sub a} excited state, while they are of the order of 0.1 eV in electron addition and removal processes. Significant modifications of the ground state bond length alternation are revealed in the QMC excited state geometry optimizations. Our QMC study yields benchmark results for both geometries and energies, with values below chemical accuracy if compared to experiments, once zero point energy effects are taken into account.
Directory of Open Access Journals (Sweden)
Angela Hsiang-Ling Chen
2016-09-01
Full Text Available Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improve outcomes. Based on assumptions and simplifications, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or they propose solution procedures not easily applicable or even feasible in real-life situations. In this study, we solve a more true-to-life and complex model, the Multimode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified by relying only on information that is available for every project regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the solved lower bound obtained was equal to the best-known optimum.
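The minimal and maximal time lags that define MRCPSP/max can be illustrated with a tiny hand-made instance (ours, not the study's). A minimal lag d(i,j) imposes S_j ≥ S_i + d; a maximal lag D(i,j) can be rewritten as S_i ≥ S_j − D, i.e. a negative-weight arc in the other direction. Earliest start times are then longest paths in the resulting (possibly cyclic) graph, computable by Bellman-Ford-style relaxation when the lags are consistent.

```python
# Hypothetical instance: arcs (i, j, w) encode the constraint S_j >= S_i + w.
arcs = [
    ("start", "A", 0),
    ("start", "B", 0),
    ("A", "C", 3),          # minimal lag: C starts at least 3 after A
    ("B", "C", 2),
    ("C", "end", 4),
    ("C", "A", -6),         # maximal lag 6 from A to C, as a negative back-arc
]
nodes = {"start", "A", "B", "C", "end"}

S = {v: 0 for v in nodes}            # earliest starts, anchored at S[start] = 0
for _ in range(len(nodes)):          # |V| relaxation rounds suffice if consistent
    for i, j, w in arcs:
        S[j] = max(S[j], S[i] + w)
makespan = S["end"]
```

Here C must wait for both predecessors (earliest start 3), the maximal lag is satisfied without delaying A, and the makespan lower bound is 7. A real MRCPSP/max solver adds mode choices and resource constraints on top of exactly this temporal core.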
Experiments to constrain the garnet-talc join for metapelitic material at eclogite-facies conditions
Chmielowski, Reia M.; Poli, Stefano; Fumagalli, Patrizia
2010-05-01
Increasing pressure due to the subduction of mica-dominated sediments results in a loss of biotite as garnet-talc becomes a stable assemblage. While this transition is observed in natural samples, it has not yet been well constrained experimentally. Previous experimental investigations into metapelitic compositions at the University of Milan (Poli and Schmidt 2002, Ferri et al., 2009) indicated that further work in the range of 600-700 °C, 2-3 GPa was required to elucidate this tie-line transition. The assemblages leading to garnet-talc stability through tie-line flip reactions include biotite-chlorite, biotite-chloritoid, and biotite-kyanite. Furthermore the mutual stability of garnet-chlorite and chloritoid-biotite at relatively high pressure conditions below the garnet-talc field is reevaluated. Current investigations on two synthetic compositions (NM, NP) in the model metapelitic system CaO-K2O-FeO-MgO-Al2O3-SiO2-H2O are carried out in a piston cylinder apparatus at pressures and temperatures up to 2.7 GPa and 740 °C. Experiments are buffered with graphite, and are generally run under fluid saturated conditions. Two capsules, one of each composition, are included within the pressure chamber for each experiment. The NM composition is representative of metapelites and the NP composition is representative of metagreywackes. Experiments are characterized by XRD, BSE images and EMPA. The following summary includes both current investigations and the above mentioned previous work, undertaken on the same chemical compositions. All assemblages also contain quartz, white mica, fluid ± zoisite or lawsonite. The assemblage garnet-chlorite-chloritoid ± staurolite is present at 500 °C at pressures of 1.4 and 1.6 GPa. The assemblage biotite-staurolite-chlorite is present at 600 °C, 1.2 GPa and at 625 °C, 1.4 GPa. The assemblage biotite-chloritoid-chlorite is present at 600 °C for pressures ≥ 1.3 GPa and ≤ 1.7 GPa. The assemblage garnet-chloritoid-biotite is
Analyses of an air conditioning system with entropy generation minimization and entransy theory
Yan-Qiu, Wu; Li, Cai; Hong-Juan, Wu
2016-06-01
In this paper, based on the generalized heat transfer law, an air conditioning system is analyzed with the entropy generation minimization and the entransy theory. Taking the coefficient of performance (denoted as COP) and the heat flow rate Q_out, which is released into the room, as the optimization objectives, we discuss the applicabilities of the entropy generation minimization and entransy theory to the optimizations. Five numerical cases are presented. Combining the numerical results and theoretical analyses, we can conclude that the optimization applicabilities of the two theories are conditional. If Q_out is the optimization objective, larger entransy increase rate always leads to larger Q_out, while smaller entropy generation rate does not. If we take COP as the optimization objective, neither the entropy generation minimization nor the concept of entransy increase is always applicable. Furthermore, we find that the concept of entransy dissipation is not applicable for the discussed cases. Project supported by the Youth Programs of Chongqing Three Gorges University, China (Grant No. 13QN18).
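The two bookkeeping quantities compared above can be made concrete with a one-line numerical illustration. For a steady heat flow Q conducted from temperature T1 to T2, the textbook expressions are S_gen = Q(1/T2 − 1/T1) for the entropy generation rate and G_dis = Q(T1 − T2) for the entransy dissipation rate; the numbers below are our own air-conditioning-like values, not taken from the paper's cases.

```python
# Hypothetical steady heat flow: Q in W, temperatures in K (our assumptions).
Q, T1, T2 = 100.0, 310.0, 295.0

S_gen = Q * (1.0 / T2 - 1.0 / T1)    # entropy generation rate, W/K
G_dis = Q * (T1 - T2)                # entransy dissipation rate, W*K
```

Both quantities vanish for reversible transfer (T1 = T2) and grow with the temperature gap, but they weight that gap differently (harmonically vs linearly), which is one reason the two optimization criteria can disagree, as the abstract reports.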
Nur, Rusdi; Suyuti, Muhammad Arsyad; Susanto, Tri Agus
2017-06-01
Aluminum is widely utilized in the industrial sector. It has several advantages, i.e. good flexibility and formability, high corrosion resistance, and high electrical and heat conductivity. Despite these characteristics, however, pure aluminum is rarely used because of its lack of strength; thus, most of the aluminum used in the industrial sectors is in the form of alloys. Sustainable machining links the transformation of input materials and energy/power demand to the finished goods produced, and machining processes have environmental effects owing to their power consumption. The cutting conditions have been optimized to minimize the cutting power, which is the power consumed for cutting. This paper presents an experimental study of sustainable machining of Al-11%Si base alloy that was operated without any cooling system to assess the capacity in reducing power consumption. The cutting force was measured and the cutting power was calculated. Both cutting force and cutting power were analyzed and modeled by using the central composite design (CCD). The results of this study indicated that the cutting speed affects machining performance and that optimum cutting conditions have to be determined, while sustainable machining can be pursued in terms of minimizing power consumption and cutting force. The model developed from this study can be used for process evaluation and optimization to determine optimal cutting conditions for the performance of the whole process.
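The central composite design (CCD) used above to model cutting force and power has a fixed geometry in coded units that is easy to sketch. The snippet below generates a generic two-factor CCD (factorial corners, axial points, center point); the choice of two factors and of the rotatable axial distance is ours for illustration, not necessarily the study's design.

```python
import itertools

# Generic k-factor CCD in coded units (k = 2 is our illustrative choice).
k = 2
alpha = (2 ** k) ** 0.25             # rotatable axial distance = (2^k)^(1/4)

# 2^k factorial corner points at coded levels -1 / +1.
factorial = [tuple(map(float, p)) for p in itertools.product([-1, 1], repeat=k)]

# 2k axial ("star") points at +/- alpha on each axis.
axial = []
for j in range(k):
    for a in (-alpha, alpha):
        p = [0.0] * k
        p[j] = a
        axial.append(tuple(p))

center = [(0.0,) * k]                # center point (replicated in practice)
design = factorial + axial + center  # 2^k + 2k + 1 distinct design points
```

Fitting a second-order response surface (e.g. cutting power vs speed and feed in coded units) at these points is what lets CCD estimate curvature and locate optimum cutting conditions.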
Minimizing the ill-conditioning in the analysis by gamma radiation
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Halisson Alberdan C.; Melo, Silvio de Barros; Dantas, Carlos; Lima, Emerson Alexandre; Silva, Ricardo Martins; Moreira, Icaro Valgueiro M., E-mail: hacc@cin.ufpe.br, E-mail: sbm@cin.ufpe.br, E-mail: rmas@cin.ufpe.br, E-mail: ivmm@cin.ufpe.br, E-mail: ccd@ufpe.br, E-mail: eal@cin.ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Meric, Ilker, E-mail: lker.Meric@ift.uib.no [University Of Bergen (Norway)
2015-07-01
A non-invasive method which can be employed for elemental analysis is the Prompt-Gamma Neutron Activation Analysis. The aim is to estimate the mass fractions of the different constituent elements present in the unknown sample, basing the estimates on the energies of all the photopeaks in their spectra. Two difficulties arise in this approach: the constituents are unknown, and the composed spectrum of the unknown sample is a nonlinear combination of the spectra of its constituents (which are called libraries). An iterative method that has become popular is the Monte Carlo Library Least Squares. One limitation of this method is that the amount of noise present in the spectra is not negligible, and the magnitude differences in the photon counting produce a bad conditioning in the covariance matrix employed by the least squares method, affecting the numerical stability of the method. A method for minimizing the numerical instability provoked by noisy spectra is proposed. Corresponding parts of different spectra are selected so as to minimize the condition number of the resulting covariance matrix. This idea is supported by the assumption that the unknown spectrum is a linear combination of its constituents' spectra, and the fact that the number of constituents is so small (typically five of them). The selection of spectrum parts is done through Greedy Randomized Adaptive Search Procedures, where the cost function is the condition number that derives from the covariance matrix produced out of the selected parts. A QR factorization is also applied to the final covariance matrix to reduce further its condition number, transferring part of its bad conditioning to the basis conversion matrix. (author)
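The GRASP idea described above (greedy construction with a randomized restricted candidate list, cost = condition number of the covariance matrix built from the selected parts) can be sketched on synthetic data. Everything below is our own toy (random "spectrum segments", five libraries, arbitrary sizes), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
segments = rng.normal(size=(30, 5))   # 30 candidate spectrum parts, 5 libraries

def cond_of(rows):
    # Condition number of the covariance matrix built from the chosen parts.
    X = segments[sorted(rows)]
    return np.linalg.cond(X.T @ X)

def grasp_select(k, restarts=20, rcl_size=3):
    best, best_cond = None, np.inf
    for _ in range(restarts):
        chosen = set()
        while len(chosen) < k:
            # Greedy scores for every remaining candidate part...
            cands = sorted((cond_of(chosen | {i}), i)
                           for i in range(len(segments)) if i not in chosen)
            # ...then a randomized pick from the restricted candidate list.
            _, pick = cands[rng.integers(0, min(rcl_size, len(cands)))]
            chosen.add(pick)
        c = cond_of(chosen)
        if c < best_cond:
            best, best_cond = chosen, c
    return best, best_cond

subset, c = grasp_select(k=8)
```

The randomized RCL is what distinguishes GRASP from a plain greedy pass: each restart explores a different near-greedy construction, and the best subset over all restarts is kept. A local-search phase (swapping parts in and out) would normally follow each construction.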
Directory of Open Access Journals (Sweden)
Aaron L. Leppin
2015-01-01
Full Text Available An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients’ lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice.
Leppin, Aaron L; Montori, Victor M; Gionfriddo, Michael R
2015-01-29
An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients' lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice.
Yurkov, Andrey M; Röhl, Oliver; Pontes, Ana; Carvalho, Cláudia; Maldonado, Cristina; Sampaio, José Paulo
2016-02-01
Soil yeasts represent a poorly known fraction of the soil microbiome due to limited ecological surveys. Here, we provide the first comprehensive inventory of cultivable soil yeasts in a Mediterranean ecosystem, which is the leading biodiversity hotspot for vascular plants and vertebrates in Europe. We isolated and identified soil yeasts from forested sites of Serra da Arrábida Natural Park (Portugal), representing the Mediterranean forests, woodlands and scrub biome. Both cultivation experiments and the subsequent species richness estimations suggest the highest species richness values reported to date, resulting in a total of 57 and 80 yeast taxa, respectively. These values far exceed those reported for other forest soils in Europe. Furthermore, we assessed the response of yeast diversity to microclimatic environmental factors in biotopes composed of the same plant species but showing a gradual change from humid broadleaf forests to dry maquis. We observed that forest properties constrained by precipitation level had strong impact on yeast diversity and on community structure and lower precipitation resulted in an increased number of rare species and decreased evenness values. In conclusion, the structure of soil yeast communities mirrors the environmental factors that affect aboveground phytocenoses, aboveground biomass and plant projective cover.
Kosmidis, Kosmas; Karalis, Vangelis; Argyrakis, Panos; Macheras, Panos
2004-09-01
Two different approaches were used to study the kinetics of the enzymatic reaction under heterogeneous conditions to interpret the unusual nonlinear pharmacokinetics of mibefradil. Firstly, a detailed model based on the kinetic differential equations is proposed to study the enzymatic reaction under spatial constraints and in vivo conditions. Secondly, Monte Carlo simulations of the enzyme reaction in a two-dimensional square lattice, placing special emphasis on the input and output of the substrate, were applied to mimic in vivo conditions. Both the mathematical model and the Monte Carlo simulations for the enzymatic reaction reproduced the classical Michaelis-Menten (MM) kinetics in homogeneous media and unusual kinetics in fractal media. Based on these findings, a time-dependent version of the classic MM equation was developed for the rate of change of the substrate concentration in disordered media and was successfully used to describe the experimental plasma concentration-time data of mibefradil and derive estimates for the model parameters. The unusual nonlinear pharmacokinetics of mibefradil originates from the heterogeneous conditions in the reaction space of the enzymatic reaction. The modified MM equation can describe the pharmacokinetics of mibefradil as it is able to capture the heterogeneity of the enzymatic reaction in disordered media.
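The flavor of a time-dependent MM equation can be sketched numerically. The specific functional form below, with a power-law time decay of the effective maximal rate (the usual fractal-kinetics signature), is our assumption for illustration, as are all parameter values; the paper's fitted equation and mibefradil parameters are not reproduced here.

```python
# Assumed time-dependent Michaelis-Menten law (illustrative, not the paper's fit):
#   dC/dt = -Vmax * t**(-h) * C / (Km + C),   0 < h < 1 in disordered media.
Vmax, Km, h = 1.0, 0.5, 0.3          # hypothetical parameters
C, t, dt = 10.0, 1e-3, 1e-3          # start just after t = 0 to avoid t**(-h) blow-up

history = []
while t < 20.0:
    rate = Vmax * t ** (-h) * C / (Km + C)   # elimination rate slows as t grows
    C = max(C - dt * rate, 0.0)              # forward-Euler step, clamped at 0
    history.append(C)
    t += dt
```

Because the effective Vmax decays with time, the concentration falls faster than classical MM kinetics early on and slower late, the kind of curvature a constant-parameter MM model cannot capture.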
Geometric constrained variational calculus. II: The second variation (Part I)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2016-10-01
Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.
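For orientation, the classical unconstrained, single-degree-of-freedom template that this covariant constrained treatment generalizes can be written as follows (standard calculus-of-variations material, not the paper's formulation):

```latex
% For S[q] = \int_a^b L(t, q, \dot q)\, dt, the second variation along an
% extremal, for variations \eta vanishing at the endpoints, reads
\[
\delta^2 S[\eta] = \int_a^b \left( P\,\dot\eta^{\,2} + 2Q\,\eta\,\dot\eta + R\,\eta^2 \right) dt,
\qquad
P = \frac{\partial^2 L}{\partial \dot q^2},\quad
Q = \frac{\partial^2 L}{\partial q\,\partial \dot q},\quad
R = \frac{\partial^2 L}{\partial q^2}.
\]
% Minimality requires the Legendre condition P \ge 0 together with the absence
% of conjugate points, i.e. of nontrivial solutions of the Jacobi equation
\[
\frac{d}{dt}\!\left( P\,\dot\eta + Q\,\eta \right) - \left( Q\,\dot\eta + R\,\eta \right) = 0
\]
% vanishing at both endpoints.
```

The paper's contribution is the fully covariant, gauge-transformed analogue of these objects for constrained variational problems, with Jacobi fields playing the same role in the sufficiency analysis.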
Conditions for the Trivers-Willard Hypothesis to be Valid: a Minimal Population-Genetic Model
Indian Academy of Sciences (India)
N. V. Joshi
2000-04-01
The very insightful Trivers-Willard hypothesis, proposed in the early 1970s, states that females in good physiological condition are more likely to produce male offspring when the variance of reproductive success among males is high. The hypothesis has inspired a number of studies over the last three decades aimed at its experimental verification, and many of them have found adequate supportive evidence in its favour. Theoretical investigations, on the other hand, have been few, perhaps because formulating a population-genetic model for describing the Trivers-Willard hypothesis turns out to be surprisingly complex. The present study is aimed at using a minimal population-genetic model to explore one specific scenario, namely how the preference for a male offspring by females in good condition is altered when $g$, the proportion of such females in the population, changes from a low to a high value. As expected, when the proportion of females in good condition in the population is low, i.e. for low values of $g$, the Trivers-Willard (TW) strategy goes to fixation against the equal-investment strategy. This holds true up to $g_\mathrm{max}$, a critical value of $g$, above which the two strategies coexist, but the proportion of the TW strategy steadily decreases as $g$ increases to unity. Similarly, when the effect of well-endowed males attaining a disproportionately high number of matings is more pronounced, the TW strategy is more likely to go to fixation. Interestingly, the success of the TW strategy has a complex dependence on the variance of the physiological condition of females. If the difference between the two types of conditions is not large, the TW strategy is favoured, and its success becomes more likely as the difference increases. However, beyond a critical value of the difference, the TW strategy is less and less likely to succeed as the difference becomes larger. Possible reasons for these effects are discussed.
Tanemura, M.; Chida, Y.
2016-09-01
Many control system design problems are formulated as the minimization of a performance index under BMI conditions. A minimization problem expressed in terms of LMIs, however, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied how to transform various control design problems into convex minimization problems expressed as LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes design problems of state-feedback gains for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
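The LMI idea the abstract relies on can be illustrated with the simplest control LMI, Lyapunov stability. This sketch is not the paper's method; the matrices are illustrative and it only shows why fixing part of a matrix inequality turns it into an easily solvable linear problem:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# The simplest LMI in control is Lyapunov stability: dx/dt = A x is
# stable iff some P > 0 satisfies A^T P + P A < 0.  Fixing the
# right-hand side to a chosen -Q < 0 turns the matrix inequality into
# a *linear* equation, solvable without an LMI solver.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)                     # any symmetric positive definite Q

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so passing a = A^T yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
```

A positive definite P here certifies stability; general BMI problems arise when both P and a feedback gain are unknown, which is the nonconvex case the paper addresses.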
Constraining the Detailed Balance Condition in Horava Gravity with Cosmic Accelerating Expansion
Chiang, Chien-I; Chen, Pisin
2010-01-01
In 2009 Ho\v{r}ava proposed a power-counting renormalizable quantum gravity theory. Afterwards, a term in the action that softly violates the detailed balance condition was considered in an attempt to obtain a more realistic theory in the IR limit. This term is proportional to $\omega R^{(3)}$, where $\omega$ is a constant parameter and $R^{(3)}$ is the spatial Ricci scalar. In this paper we derive constraints on this IR-modified Ho\v{r}ava theory using late-time cosmic accelerating expansion observations. We obtain a lower bound on $|\omega|$ that is nontrivial and depends on $\Lambda_W$, the cosmological constant of the three-dimensional spatial action in Ho\v{r}ava gravity. We find that to preserve the detailed balance condition, one needs to fine-tune $\Lambda_W$ such that $- 2.29\times 10^{-4}< c^2 \Lambda_W/(H^2_0 \Omega_{DE}) - 2 < 0$, where $H_0$ and $\Omega_{DE}$ are the Hubble parameter and the dark energy density fraction in the present epoch, respectively. On the other hand, i...
Taylor, Richard J. M.; Kirkland, Christopher L.; Clark, Chris
2016-11-01
High-temperature metamorphic rocks are the result of numerous chemical and physical processes that occur during a potentially long-lived thermal evolution. These rocks chart the sequence of events during an orogenic episode including heating, cooling, exhumation and melt interaction, all of which may be interpreted through the elemental and isotopic characteristics of accessory minerals such as zircon, monazite and rutile. Developments in imaging and in situ chemical analysis have resulted in an increasing amount of information being extracted from these accessory phases. The refractory nature of these minerals, combined with both their use as geochronometers and tracers of metamorphic mineral reactions, has made them the focus of many studies of granulite-facies terrains. In such studies the primary aim is often to determine the timing and conditions of the peak of metamorphism, and high-temperature metasedimentary rocks may seem ideal for this purpose. For example, pelites typically contain an abundance of accessory minerals in a variety of bulk compositions, are melt-bearing, and may have endured extreme conditions that facilitate diffusion and chemical equilibrium. However, complexities arise due to the heterogeneous nature of these rocks on all scales, driven by both the composition of the protolith and metamorphic differentiation. In addition to lithological heterogeneity, the closure temperatures for both radiogenic isotopes and chemical thermometers vary between different accessory minerals. This apparent complexity can be useful as it permits a wide range of temperature and time (T-t) information to be recovered from a single rock sample. In this review we cover: 1) characteristic internal textures of accessory minerals in high temperature rocks; 2) the interpretation of zircon and monazite age data in relation to high temperature processes; 3) rare earth element partitioning; 4) trace element thermometry; 5) the incorporation of accessory mineral growth
Shivak, J. N.; Banerjee, N.; Flemming, R. L.
2013-12-01
We report the results of a comparative study of the crustal environmental conditions recorded by several Martian meteorites (Nakhla, Los Angeles, and Zagami). Though no samples have yet been returned from Mars, numerous meteorites are known, and these provide the only samples of the Martian crust currently available for study. Terrestrial basalts and other mafic igneous rocks are analogous in many ways to much of the Martian crust, as evidenced by the composition of known Martian meteorites and measurements from planetary missions [1]. Microorganisms are known to thrive in the terrestrial geosphere and make use of many different substrates within rock in the subsurface of the Earth [2]. The action of aqueous solutions in the Martian crust has been well established through the study of alteration mineral assemblages present in many Martian meteorites, such as the nakhlites [3]. Aqueous activity in terrestrial chemolithoautotrophic habitats provides numerous energy and nutrient sources for microbes [4], suggesting the potential for habitable endolithic environments in Martian rocks. Fayalite in Nakhla has experienced extensive aqueous alteration to reddish-brown 'iddingsite' material within a pervasive fracture system. Textural imaging shows the replacement of primary olivine with various alteration phases and infiltration of this alteration front into host grains. Geochemical analysis of the alteration material shows the addition of iron and silica and removal of magnesium during alteration. Novel in situ micro-XRD and Raman spectroscopy of this material reveal a new assemblage consisting of iron oxides, smectite clays, carbonates, and a minor serpentine component. The alteration mineral assemblage here differs from several that have been previously reported [4,5], allowing for a reevaluation of the environmental conditions during fluid action. Los Angeles and Zagami show no evidence of aqueous activity, though their primary basaltic mineralogies show many
Shanskiy, Merrit; Vollmer, Elis; Penu, Priit
2015-04-01
restrictions on study sites imposed by nature conservation, based on map data about nature-protected objects and buffer zones, or by forming restricted areas around those objects. The results will indicate the utilization possibilities and the most sustainable scenarios for the different land-use cases. Moreover, the possible changes in soil functioning according to site-specific soil conditions will be discussed and presented.
Constrained variational calculus: the second variation (part I)
Massa, Enrico; Pagani, Enrico; Luria, Gianvittorio
2010-01-01
This paper is a direct continuation of arXiv:0705.2362. The Hamiltonian aspects of the theory are further developed. Within the framework provided by the first paper, the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A necessary and sufficient condition for minimality is proved.
Energy Technology Data Exchange (ETDEWEB)
Cristofano, Gerardo; Marotta, Vincenzo [Dipartimento di Scienze Fisiche, Universita di Napoli 'Federico II', and INFN, Sezione di Napoli, Via Cintia, Complesso Universitario M. Sant'Angelo, 80126 Napoli (Italy); Naddeo, Adele [Dipartimento di Fisica 'E.R. Caianiello', Universita degli Studi di Salerno and CNISM, Unita di Ricerca di Salerno, Via Salvador Allende, 84081 Baronissi (Italy)], E-mail: naddeo@sa.infn.it; Niccoli, Giuliano [Theoretical Physics Group, DESY, Notkestrasse 85, 22603 Hamburg (Germany)
2008-11-17
Recently a one-dimensional closed ladder of Josephson junctions has been studied [G. Cristofano, V. Marotta, A. Naddeo, G. Niccoli, Phys. Lett. A 372 (2008) 2464] within a twisted conformal field theory (CFT) approach [G. Cristofano, G. Maiella, V. Marotta, Mod. Phys. Lett. A 15 (2000) 1679; G. Cristofano, G. Maiella, V. Marotta, G. Niccoli, Nucl. Phys. B 641 (2002) 547] and shown to develop the phenomenon of flux fractionalization [G. Cristofano, V. Marotta, A. Naddeo, G. Niccoli, Eur. Phys. J. B 49 (2006) 83]. That led us to predict the emergence of a topological order in such a system [G. Cristofano, V. Marotta, A. Naddeo, J. Stat. Mech.: Theory Exp. (2005) P03006]. In this Letter we analyze the ground states and the topological properties of fully frustrated Josephson junction arrays (JJA) arranged in a Corbino disk geometry for a variety of boundary conditions. In particular minimal configurations of fully frustrated JJA are considered and shown to exhibit the properties needed in order to build up a solid state qubit, protected from decoherence. The stability and transformation properties of the ground states of the JJA under adiabatic magnetic flux changes are analyzed in detail in order to provide a tool for the manipulation of the proposed qubit.
Kirkpatrick, Barbara; Currier, Robert; Nierenberg, Kate; Reich, Andrew; Backer, Lorraine C; Stumpf, Richard; Fleming, Lora; Kirkpatrick, Gary
2008-08-25
With over 50% of the US population living in coastal counties, the ocean and coastal environments have substantial impacts on coastal communities. While many of the impacts are positive, such as tourism and recreation opportunities, there are also negative impacts, such as exposure to harmful algal blooms (HABs) and water borne pathogens. Recent advances in environmental monitoring and weather prediction may allow us to forecast these potential adverse effects and thus mitigate the negative impact from coastal environmental threats. One example of the need to mitigate adverse environmental impacts occurs on Florida's west coast, which experiences annual blooms, or periods of exuberant growth, of the toxic dinoflagellate, Karenia brevis. K. brevis produces a suite of potent neurotoxins called brevetoxins. Wind and wave action can break up the cells, releasing toxin that can then become part of the marine aerosol or sea spray. Brevetoxins in the aerosol cause respiratory irritation in people who inhale them. In addition, asthmatics who inhale the toxins report increased upper and lower airway symptoms and experience measurable changes in pulmonary function. Real-time reporting of the presence or absence of these toxic aerosols will allow asthmatics and local coastal residents to make informed decisions about their personal exposures, thus adding to their quality of life. A system to protect public health that combines information collected by an Integrated Ocean Observing System (IOOS) has been designed and implemented in Sarasota and Manatee Counties, Florida. This system is based on real-time reports from lifeguards at the eight public beaches. The lifeguards provide periodic subjective reports of the amount of dead fish on the beach, apparent level of respiratory irritation among beach-goers, water color, wind direction, surf condition, and the beach warning flag they are flying. A key component in the design of the observing system was an easy reporting pathway for
Martens, F.M.J.; Heesakkers, J.P.F.A.; Rijkhoff, N.J.M.
2011-01-01
STUDY DESIGN: Experimental. OBJECTIVES: Electrical stimulation of the dorsal genital nerves (DGN) suppresses involuntary detrusor contractions (IDCs) in patients with neurogenic detrusor overactivity (DO). The feasibility of minimally invasive electrode implantation near the DGN and the effectiveness
Czech, Wiktoria; Radecki-Pawlik, Artur; Wyżga, Bartłomiej; Hajdukiewicz, Hanna
2016-11-01
The gravel-bed Biała River, Polish Carpathians, was heavily affected by channelization and channel incision in the twentieth century. Not only were these impacts detrimental to the ecological state of the river, but they also adversely modified the conditions of floodwater retention and flood wave passage. Therefore, a few years ago an erodible corridor was delimited in two sections of the Biała to enable restoration of the river. In these sections, short, channelized reaches located in the vicinity of bridges alternate with longer, unmanaged channel reaches, which either avoided channelization or in which the channel has widened after the channelization scheme ceased to be maintained. Effects of these alternating channel morphologies on the conditions for flood flows were investigated in a study of 10 pairs of neighbouring river cross sections with constrained and freely developed morphology. Discharges of particular recurrence intervals were determined for each cross section using an empirical formula. The morphology of the cross sections together with data about channel slope and roughness of particular parts of the cross sections were used as input data to the hydraulic modelling performed with the one-dimensional steady-flow HEC-RAS software. The results indicated that freely developed cross sections, usually with multithread morphology, are typified by significantly lower water depth but larger width and cross-sectional flow area at particular discharges than single-thread, channelized cross sections. They also exhibit significantly lower average flow velocity, unit stream power, and bed shear stress. The pattern of differences in the hydraulic parameters of flood flows apparent between the two types of river cross sections varies with the discharges of different frequency, and the contrasts in hydraulic parameters between unmanaged and channelized cross sections are most pronounced at low-frequency, high-magnitude floods. However, because of the deep
Eddy Current Minimizing Flow Plug for Use in Flow Conditioning and Flow Metering
England, John Dwight (Inventor); Kelley, Anthony R. (Inventor)
2015-01-01
An eddy-current-minimizing flow plug has open flow channels formed between the plug's inlet and outlet. Each open flow channel includes (i) a first portion that originates at the inlet face and converges to a location within the plug that is downstream of the inlet, and (ii) a second portion that originates within the plug and diverges to the outlet. The diverging second portion is approximately twice the length of the converging first portion. The plug is devoid of planar surface regions at its inlet and outlet, and in fluid flow planes of the plug that are perpendicular to the given direction of a fluid flowing therethrough.
Directory of Open Access Journals (Sweden)
Natália Alves Barbosa
2015-08-01
Full Text Available Storing processed food products can cause alterations in their chemical compositions. Thus, the objective of this study was to evaluate carotenoid retention in the kernels of minimally processed normal and provitamin A (proVA) biofortified green corn ears that were packaged in polystyrene trays covered with commercial film or in multilayered polynylon packaging material and were stored. Throughout the storage period, the carotenoids were extracted from the corn kernels using organic solvents and were quantified using HPLC. A complete factorial design with three factors (cultivar, packaging, and storage period) was applied for the analysis. The green kernels of maize cultivars BRS1030 and BRS4104 exhibited similar carotenoid profiles, with zeaxanthin being the main carotenoid. Higher concentrations of the carotenoids lutein, β-cryptoxanthin, and β-carotene, the total carotenoids and the total provitamin A carotenoids were detected in the green kernels of the biofortified BRS4104 maize. The packaging method did not affect carotenoid retention in the kernels of minimally processed green corn ears during the storage period.
Sanz, S; Olarte, C; Ayala, F; Echávarri, J F
2009-08-01
The effect of different types of lighting (white, green, red, and blue light) on minimally processed asparagus during storage at 4 °C was studied. The gas concentrations in the packages, pH, mesophilic counts, and weight loss were also determined. Lighting caused an increase in physiological activity. Asparagus stored under lighting developed atmospheres with higher CO2 and lower O2 content than samples kept in the dark. This increase in activity explains the greater deterioration experienced by samples stored under lighting, which clearly affected texture and especially color, accelerating the appearance of greenish hues in the tips and reddish-brown hues in the spears. Exposure to light had a negative effect on the quality parameters of the asparagus and caused a significant reduction in shelf life. Hence, the 11 d shelf life of samples kept in the dark was reduced to only 3 d in samples kept under red and green light, and to 7 d in those kept under white and blue light. However, quality indicators such as the color of the tips and texture showed significantly better behavior under blue light than with white light, which allows us to state that it is better to use this type of light or blue-tinted packaging film for the display of minimally processed asparagus to consumers.
SH$^c$ Realization of Minimal Model CFT: Triality, Poset and Burge Condition
Fukuda, Masayuki; Matsuo, Yutaka; Zhu, Rui-Dong
2015-01-01
Recently an orthogonal basis of $\\mathcal{W}_N$-algebra (AFLT basis) labeled by $N$-tuple Young diagrams was found in the context of 4D/2D duality. Recursion relations among the basis are summarized in the form of an algebra $\\mathrm{SH}^{c}$ which is universal for any $N$. It includes an infinite number of commuting operators which are diagonal on the basis. In this paper, we study the level-rank duality between the minimal models from SH$^c$. It is shown that the nonvanishing states in both systems are described by $N$ or $M$ Young diagrams with the rows of boxes appropriately shuffled. The analysis demonstrates that $\\mathrm{SH}^{c}$ has triality symmetry for some specific choices of parameters. The reshuffling of rows implies there exists partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by $\\mathr...
Lucivero, G; Romano, C; Ferraraccio, F; Sellitto, A; De Fanis, U; Giunta, R; Guarino, A; Auriemma, P P; Benincasa, M; Iovino, F
2011-01-01
Breast involvement is a rare event in SLE patients. The most frequent presentation is lupus panniculitis with skin erythema, tenderness, and parenchymal nodules. However, when breast masses are detected in SLE patients without significant superficial inflammation, it is mandatory to rule out breast carcinoma. Here, we report the case of a 47-year-old woman with an 18-year-long history of SLE, who presented with a suspicious breast mass. Since surgical trauma has been reported to be able to exacerbate breast inflammation in lupus mastitis, an ultrasound-guided minimally invasive Mammotome biopsy was performed to obtain tissue samples for histological and immunohistochemical examinations. Histology was consistent with lupus mastitis. The patient was already on mycophenolate mofetil and hydroxychloroquine. At the latest follow-up visit 6 years later, no progression of the breast lesion was observed.
Constrained superfields in supergravity
Energy Technology Data Exchange (ETDEWEB)
Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)
2016-02-16
We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.
DEFF Research Database (Denmark)
Lauridsen, M M; Poulsen, L; Rasmussen, C K
2016-01-01
Many chronic medical conditions are accompanied by cognitive disturbances but these have only to a very limited extent been psychometrically quantified. An exception is liver cirrhosis where hepatic encephalopathy is an inherent risk and mild forms are diagnosed by psychometric tests. The preferred....../15, p psychometrically measurable cognitive deficits, whereas those with ESRF or DMII had not. This adds to the understanding of the clinical consequences of chronic heart- and lung disease, and implies...... that the psychometric tests should be interpreted with great caution in cirrhosis patients with heart- or lung comorbidity....
In vitro storage of cedar shoot cultures under minimal growth conditions.
Renau-Morata, Begoña; Arrillaga, Isabel; Segura, Juan
2006-07-01
We developed procedures for slow-growth storage of Cedrus atlantica and Cedrus libani microcuttings of juvenile and adult origin, noting factors favouring the extension of subculture intervals. Microcuttings could be stored effectively for up to 6 months at 4 °C and reduced light intensity, provided that they were grown on a diluted modified MS medium. The addition of 6% mannitol to the storage media negatively affected the survival and multiplication capacity of the cultures. The slow-growth storage conditions used in our experiments did not induce remarkable effects on either RAPD variability or average DNA methylation in the species.
Godon, Christian; Teulon, Jean-Marie; Odorico, Michael; Basset, Christian; Meillan, Matthieu; Vellutini, Luc; Chen, Shu-Wen W; Pellequer, Jean-Luc
2016-12-23
A recurrent interrogation when imaging soft biomolecules using atomic force microscopy (AFM) is the putative deformation of molecules leading to a bias in recording true topographical surfaces. Deformation of biomolecules comes from three sources: sample instability, adsorption to the imaging substrate, and crushing under tip pressure. To disentangle these causes, we measured the maximum height of a well-known biomolecule, the tobacco mosaic virus (TMV), under eight different experimental conditions, positing that the maximum height value is a specific indicator of sample deformation. Six basic AFM experimental factors were tested: imaging in air (AIR) versus in liquid (LIQ), imaging on flat minerals (MICA) versus flat organic surfaces (self-assembled monolayers, SAM), and imaging forces with oscillating tapping mode (TAP) versus PeakForce tapping (PFT). The results show that the most critical parameter in accurately measuring the height of TMV in air is the substrate. In a liquid environment, regardless of the substrate, the most critical parameter is the imaging mode. Most importantly, the expected TMV height values were obtained when imaging with the PeakForce tapping mode, either in liquid or in air, provided that self-assembled monolayers were used as the substrate. This study unambiguously explains previous poor results of imaging biomolecules on mica in air and suggests alternative methodologies for depositing soft biomolecules on well organized self-assembled monolayers.
Institute of Scientific and Technical Information of China (English)
唐军强
2014-01-01
The conditional extreme values of multivariable functions under equality constraints were investigated, starting from the method of Lagrange multipliers. The necessary condition for the existence of conditional extreme values was obtained from the theory of linear equations. Its application to optimization theory is discussed: the optimal solution is obtained from this necessary condition by converting inequality constraints into equality constraints.
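The Lagrange-multiplier necessary condition described above can be sketched symbolically. The example below is illustrative, not from the paper: it extremizes f = x + y on the unit circle by requiring all partial derivatives of the Lagrangian to vanish.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x + y                     # objective
g = x**2 + y**2 - 1           # equality constraint g = 0

# Necessary condition for a conditional extremum: all partial
# derivatives of the Lagrangian L = f - lam*g vanish (the derivative
# with respect to lam reproduces the constraint itself).
L = f - lam * g
eqs = [sp.diff(L, v) for v in (x, y, lam)]
sols = sp.solve(eqs, (x, y, lam), dict=True)

# Candidate extrema are (sqrt(2)/2, sqrt(2)/2) and its negative;
# evaluating f at each candidate picks out the constrained maximum.
vals = [s[x] + s[y] for s in sols]
```

The stationarity system yields the finitely many candidate points; comparing objective values among them selects the constrained optimum, which is the role the necessary condition plays in the paper's optimization application.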
Rasmussen, Jorgen
2011-01-01
We construct new Yang-Baxter integrable boundary conditions in the lattice approach to the logarithmic minimal model WLM(1,p) giving rise to reducible yet indecomposable representations of rank 1 in the continuum scaling limit. We interpret these W-extended Kac representations as finitely-generated W-extended Feigin-Fuchs modules over the triplet W-algebra W(p). The W-extended fusion rules of these representations are inferred from the recently conjectured Virasoro fusion rules of the Kac representations in the underlying logarithmic minimal model LM(1,p). We also introduce the modules contragredient to the W-extended Kac modules and work out the correspondingly-extended fusion algebra. Our results are in accordance with the Kazhdan-Lusztig dual of tensor products of modules over the restricted quantum universal enveloping algebra $\\bar{U}_q(sl_2)$ at $q=e^{\\pi i/p}$. Finally, polynomial fusion rings isomorphic with the various fusion algebras are determined, and the corresponding Grothendieck ring of charact...
Directory of Open Access Journals (Sweden)
André Cyr
2014-07-01
Full Text Available We demonstrate the operant conditioning (OC) learning process within a basic bio-inspired robot controller paradigm, using an artificial spiking neural network (ASNN) with minimal component count as an artificial brain. In biological agents, OC results in behavioral changes that are learned from the consequences of previous actions, using progressive prediction adjustment triggered by reinforcers. In a robotics context, virtual and physical robots may benefit from a similar learning skill when facing unknown environments with no supervision. In this work, we demonstrate that a simple ASNN can efficiently realise many OC scenarios. The elementary learning kernel that we describe relies on a few critical neurons, synaptic links and the integration of habituation and spike-timing dependent plasticity (STDP) as learning rules. Using four tasks of incremental complexity, our experimental results show that such a minimal neural component set may be sufficient to implement many OC procedures. Hence, with the described bio-inspired module, OC can be implemented in a wide range of robot controllers, including those with limited computational resources.
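A pair-based STDP rule of the kind used as a learning ingredient here can be sketched in a few lines. Parameter values and names below are illustrative assumptions, not those of the authors' controller:

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    delta_t = t_post - t_pre (milliseconds).  Pre-before-post
    (delta_t > 0) potentiates the synapse; post-before-pre depresses
    it.  Amplitudes and time constant are illustrative only."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

# Repeated causal pre->post pairings strengthen the synapse.
w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 45.0), (80.0, 85.0)]:
    w += stdp_dw(t_post - t_pre)
w = min(max(w, 0.0), 1.0)      # clip the weight to its bounds
```

In an OC setting such a rule, gated by a reinforcer signal and combined with habituation, lets the consequence of an action shift the synaptic weights that drive future behavior.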
Xu, Ke; Wu, Qiong; Xie, Yongqiang; Tang, Ming; Fu, Songnian; Liu, Deming
2017-02-20
The 2-μm optical band has gained much attention recently due to its potential applications in optical fiber communication systems. One constraint in this wavelength region is that the electrical bandwidth of components like modulators and photodetectors is limited by the immature manufacturing technologies. Here we experimentally demonstrated high-speed signal generation and transmission under a bandwidth-constrained scenario at 2 μm. It is enabled by the direct-detection optical filter bank multicarrier (FBMC) modulation technique with constant amplitude zero autocorrelation (CAZAC) equalization. We achieved a single-wavelength 80 Gbit/s data rate using the 16-QAM FBMC modulation format, which is, to the best of our knowledge, the highest single-channel bit rate at 2 μm. The signal is transmitted through a 100 m long solid-core fiber designed for single-mode transmission at 2 μm. The measured bit error rates of the signals are below the forward error correction limit of 3.8 × 10^-3, and the 100 m fiber transmission brings negligible penalty.
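The CAZAC property underlying the equalization scheme can be illustrated with a Zadoff-Chu sequence, a standard CAZAC family; the abstract does not specify which sequence the authors used, so this is only a sketch of the property itself:

```python
import numpy as np

def zadoff_chu(n, root=1):
    """Zadoff-Chu sequence of odd length n with gcd(root, n) = 1:
    a classic CAZAC (constant amplitude zero autocorrelation) family."""
    k = np.arange(n)
    return np.exp(-1j * np.pi * root * k * (k + 1) / n)

z = zadoff_chu(63)
amplitude = np.abs(z)                          # constant: all ones
sidelobes = [abs(np.vdot(z, np.roll(z, lag)))  # cyclic autocorrelation
             for lag in range(1, 63)]          # vanishes at nonzero lags
```

Constant amplitude keeps the peak-to-average power low, and the flat autocorrelation makes such sequences well suited as training symbols for channel estimation and equalization.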
Dvorak, Christopher C.; Horn, Biljana N.; Puck, Jennifer M.; Adams, Stuart; Veys, Paul; Czechowicz, Agnieszka; Cowan, Morton J.
2014-01-01
For infants with severe combined immunodeficiency (SCID) the ideal conditioning regimen before allogeneic hematopoietic cell transplantation (HCT) would omit cytotoxic chemotherapy to minimize short- and long-term complications. We performed a prospective pilot trial with alemtuzumab monotherapy to overcome NK-cell mediated immunologic barriers to engraftment. We enrolled 4 patients who received CD34-selected haploidentical cells, two of whom failed to engraft donor T cells. The 2 patients who engrafted had delayed T cell reconstitution, despite rapid clearance of circulating alemtuzumab. Although well-tolerated, alemtuzumab failed to overcome immunologic barriers to donor engraftment. Furthermore, alemtuzumab may slow T cell development in patients with SCID in the setting of a T-cell depleted graft. PMID:24977928
Dvorak, Christopher C; Horn, Biljana N; Puck, Jennifer M; Czechowicz, Agnieszka; Shizuru, Judy A; Ko, Rose M; Cowan, Morton J
2014-09-01
For infants with SCID, the ideal conditioning regimen before allogeneic HCT would omit cytotoxic chemotherapy to minimize short- and long-term complications. We performed a prospective pilot trial with G-CSF plus plerixafor given to the host to mobilize HSC from their niches. We enrolled six patients who received CD34-selected haploidentical cells and one who received T-replete matched unrelated BM. All patients receiving G-CSF and plerixafor had generally poor CD34(+) cell and Lin(-) CD34(+) CD38(-) CD90(+) CD45RA(-) HSC mobilization, and developed donor T cells, but no donor myeloid or B-cell engraftment. Although well tolerated, G-CSF plus plerixafor alone failed to overcome physical barriers to donor engraftment.
Institute of Scientific and Technical Information of China (English)
SUN Churen
2005-01-01
It is difficult to judge whether a given point is a global maximizer of an unconstrained optimization problem. This paper deals with this problem by considering global information via integrals and gives a necessary and sufficient condition for judging whether a given point is a global maximizer of an unconstrained optimization problem. An algorithm is offered based on this condition, and finally two test problems are verified with the offered algorithm.
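The integral-type global optimality idea can be sketched with a Monte Carlo check: at a global maximizer the "excess" integrand max(f(x) - f(x*), 0) vanishes identically over the domain. This is an illustration of the idea only, not the paper's exact criterion or algorithm:

```python
import numpy as np

def excess_integral(f, x_star, lo, hi, n=20000, seed=0):
    """Monte Carlo estimate of  integral of max(f(x) - f(x*), 0)  over
    [lo, hi].  If x_star is a global maximizer the integrand is
    identically zero, so the estimate vanishes; a positive value
    certifies that some sampled point does better than x_star."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, n)
    excess = np.maximum(f(xs) - f(x_star), 0.0)
    return (hi - lo) * excess.mean()

f = lambda x: -(x - 1.0) ** 2      # global maximizer at x = 1
at_max = excess_integral(f, 1.0, -5.0, 5.0)    # zero: x=1 is global
not_max = excess_integral(f, 0.0, -5.0, 5.0)   # positive: x=0 is not
```

A sampling estimate can only refute global optimality, not prove it; the paper's contribution is an exact integral condition that is both necessary and sufficient.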
Directory of Open Access Journals (Sweden)
HYOUNGJU YOON
2013-02-01
Full Text Available It is required that the pH of the sump solution be above 7.0, to retain iodine in the liquid phase, and remain within material compatibility constraints under LOCA conditions in a PWR. The pH of the sump solution can be determined from conventional chemical equilibrium constants or by minimization of the Gibbs free energy. The latter method, implemented in a computer code called SOLGASMIX-PV, is more convenient than the former because various chemical components can easily be treated under LOCA conditions. In this study, the SOLGASMIX-PV code was modified to accommodate the acidic and basic materials produced by radiolysis reactions and to calculate the pH of the sump solution. When the computed pH was compared with that measured in the ORNL experiment to verify the reliability of the modified code, the two values agreed to within 0.3 pH units. Finally, calculations were performed for two cases, SKN 3&4 and UCN 1&2. The pH of the sump solution was between 7.02 and 7.45 for SKN 3&4, and between 8.07 and 9.41 for UCN 1&2. Furthermore, it was found that the radiolysis reactions have an insignificant effect on pH because the relative concentrations of HCl, HNO3, and Cs are very low.
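For a single weak acid, the conventional equilibrium-constant route reduces to solving a charge balance. A minimal sketch (illustrative acid and concentrations, not the SOLGASMIX-PV species set):

```python
import math

def equilibrium_pH(Ca, Ka, Cb, Kw=1e-14):
    """Solve the charge balance [Na+] + [H+] = [A-] + [OH-] for a weak acid
    HA (total concentration Ca, dissociation constant Ka) mixed with Cb of
    strong base, by bisection on [H+].  SOLGASMIX-PV instead minimizes the
    total Gibbs free energy over all species, which scales to the many
    components present in post-LOCA sump water."""
    def balance(h):
        a_minus = Ca * Ka / (Ka + h)       # dissociated acid
        return Cb + h - a_minus - Kw / h   # monotonically increasing in h
    lo, hi = 1e-14, 1.0
    for _ in range(200):
        mid = math.sqrt(lo * hi)           # bisect in log space
        if balance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

# a half-neutralized weak acid with pKa ~4.74 buffers near pH 4.74
print(round(equilibrium_pH(Ca=0.1, Ka=1.8e-5, Cb=0.05), 2))
```

The Gibbs-minimization approach gives the same answer for this toy system, but handles arbitrary mixtures without writing a balance equation per species.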
Energy Technology Data Exchange (ETDEWEB)
Yoon, Hyoung Ju [Dept. of Nuclear Engineering, University of Kyunghee, Seoul (Korea, Republic of)
2013-02-15
It is required that the pH of the sump solution should be above 7.0 to retain iodine in a liquid phase and be within the material compatibility constraints under LOCA condition of PWR. The pH of the sump solution can be determined by conventional chemical equilibrium constants or by the minimization of Gibbs free energy. The latter method developed as a computer code called SOLGASMIX-PV is more convenient than the former since various chemical components can be easily treated under LOCA conditions. In this study, SOLGASMIX-PV code was modified to accommodate the acidic and basic materials produced by radiolysis reactions and to calculate the pH of the sump solution. When the computed pH was compared with measured by the ORNL experiment to verify the reliability of the modified code, the error between two values was within 0.3 pH. Finally, two cases of calculation were performed for the SKN 3 and 4 and UCN 1 and 2. As results, pH of the sump solution for the SKN 3 and 4 was between 7.02 and 7.45, and for the UCN 1 and 2 plant between 8.07 and 9.41. Furthermore, it was found that the radiolysis reactions have insignificant effects on pH because the relative concentrations of HCl, HNO3, and Cs are very low.
Stone, Jordan M.
In this thesis I discuss probes of small spatial scales around young stars and protostars and around the supermassive black hole at the galactic center. I begin by describing adaptive optics-fed infrared spectroscopic studies of nascent and newborn binary systems. Binary star formation is a significant mode of star formation that could be responsible for the production of a majority of the galactic stellar population. Better characterization of the binary formation mechanism is important for better understanding many facets of astronomy, from proper estimates of the content of unresolved populations, to stellar evolution and feedback, to planet formation. My work revealed episodic accretion onto the more massive component of the pre-main sequence binary system UY Aur. I also showed changes in the accretion onto the less massive component, revealing contradictory indications of the change in accretion rate when considering disk-based and shock-based tracers. I suggested two scenarios to explain the inconsistency. First, increased accretion should alter the disk structure, puffing it up. This change could obscure the accretion shock onto the central star if the disk is highly inclined. Second, if accretion through the disk is impeded before it makes it all the way onto the central star, then increased disk tracers of accretion would not be accompanied by increased shock tracers. In this case mass must be piling up at some radius in the disk, possibly supplying the material for planet formation or a future burst of accretion. My next project focused on characterizing the atmospheres of very low-mass companions to nearby young stars. Whether these objects form in an extension of the binary-star formation mechanism to very low masses or they form via a different process is an open question. Different accretion histories should result in different atmospheric composition, which can be constrained with spectroscopy. I showed that 3-4 μm spectra of a sample of these
DEFF Research Database (Denmark)
Frandsen, Mads Toudal
2007-01-01
I report on our construction and analysis of the effective low-energy Lagrangian for the Minimal Walking Technicolor (MWT) model. The parameters of the effective Lagrangian are constrained by imposing modified Weinberg sum rules and by imposing a value for the S parameter estimated from the underlying Technicolor theory. The constrained effective Lagrangian allows for an inverted vector vs. axial-vector mass spectrum in a large part of the parameter space.
Jacox, Michael G.; Hazen, Elliott L.; Bograd, Steven J.
2016-06-01
In Eastern Boundary Current systems, wind-driven upwelling drives nutrient-rich water to the ocean surface, making these regions among the most productive on Earth. Regulation of productivity by changing wind and/or nutrient conditions can dramatically impact ecosystem functioning, though the mechanisms are not well understood beyond broad-scale relationships. Here, we explore bottom-up controls during the California Current System (CCS) upwelling season by quantifying the dependence of phytoplankton biomass (as indicated by satellite chlorophyll estimates) on two key environmental parameters: subsurface nitrate concentration and surface wind stress. In general, moderate winds and high nitrate concentrations yield maximal biomass near shore, while offshore biomass is positively correlated with subsurface nitrate concentration. However, due to nonlinear interactions between the influences of wind and nitrate, bottom-up control of phytoplankton cannot be described by either one alone, nor by a combined metric such as nitrate flux. We quantify optimal environmental conditions for phytoplankton, defined as the wind/nitrate space that maximizes chlorophyll concentration, and present a framework for evaluating ecosystem change relative to environmental drivers. The utility of this framework is demonstrated by (i) elucidating anomalous CCS responses in 1998-1999, 2002, and 2005, and (ii) providing a basis for assessing potential biological impacts of projected climate change.
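The notion of optimal environmental conditions, the region of wind/nitrate space that maximizes chlorophyll, can be sketched with simple two-dimensional binning. The data, bin widths, and function below are toy stand-ins, not the study's method:

```python
from collections import defaultdict

def optimal_conditions(samples, wind_bin=0.02, no3_bin=2.0):
    """Bin chlorophyll observations in wind-stress/nitrate space and return
    the lower edges of the bin with the highest mean chlorophyll: a toy
    version of the 'optimal environmental conditions' surface.
    `samples` is a list of (wind_stress, nitrate, chlorophyll) tuples."""
    bins = defaultdict(list)
    for wind, no3, chl in samples:
        key = (int(wind // wind_bin), int(no3 // no3_bin))
        bins[key].append(chl)
    best = max(bins, key=lambda k: sum(bins[k]) / len(bins[k]))
    return (best[0] * wind_bin, best[1] * no3_bin)

# hypothetical (wind stress, nitrate, chlorophyll) observations
data = [(0.03, 11.0, 1.2), (0.05, 15.0, 3.4), (0.05, 14.0, 3.0), (0.11, 15.0, 0.8)]
print(optimal_conditions(data))  # (0.04, 14.0): moderate wind, high nitrate
```

The real analysis fits a smooth chlorophyll response over the wind/nitrate plane; the argmax-bin idea above is only the discrete skeleton of that framework.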
Jacox, Michael G; Hazen, Elliott L; Bograd, Steven J
2016-06-09
In Eastern Boundary Current systems, wind-driven upwelling drives nutrient-rich water to the ocean surface, making these regions among the most productive on Earth. Regulation of productivity by changing wind and/or nutrient conditions can dramatically impact ecosystem functioning, though the mechanisms are not well understood beyond broad-scale relationships. Here, we explore bottom-up controls during the California Current System (CCS) upwelling season by quantifying the dependence of phytoplankton biomass (as indicated by satellite chlorophyll estimates) on two key environmental parameters: subsurface nitrate concentration and surface wind stress. In general, moderate winds and high nitrate concentrations yield maximal biomass near shore, while offshore biomass is positively correlated with subsurface nitrate concentration. However, due to nonlinear interactions between the influences of wind and nitrate, bottom-up control of phytoplankton cannot be described by either one alone, nor by a combined metric such as nitrate flux. We quantify optimal environmental conditions for phytoplankton, defined as the wind/nitrate space that maximizes chlorophyll concentration, and present a framework for evaluating ecosystem change relative to environmental drivers. The utility of this framework is demonstrated by (i) elucidating anomalous CCS responses in 1998-1999, 2002, and 2005, and (ii) providing a basis for assessing potential biological impacts of projected climate change.
Directory of Open Access Journals (Sweden)
Thadeous J Kacmarczyk
Full Text Available Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing, including false discovery rates; the size, position, and statistical significance of detected peaks; and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects, and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well-characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.
Cyr, André; Boukadoum, Mounir; Thériault, Frédéric
2014-01-01
In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and from recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and originates from the integration of habituation and spike-timing-dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors.
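The two learning rules named above can each be written in a few lines. The parameter values below are illustrative, not those of the paper's micro-circuit:

```python
import math

def stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Spike-timing-dependent plasticity: potentiate when the presynaptic
    spike precedes the postsynaptic spike (dt = t_post - t_pre > 0),
    depress otherwise.  Amplitudes and time constant are illustrative."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def habituate(response, rate=0.05):
    """Habituation: repeated stimulation progressively damps the response."""
    return response * (1.0 - rate)

w = 0.5                                  # initial synaptic weight
for dt in (5.0, 5.0, -5.0):              # two causal pairings, one anti-causal
    w += stdp(dt)

r = 1.0                                  # initial response amplitude
for _ in range(3):                       # three repeated stimulations
    r = habituate(r)

print(round(w, 3), round(r, 3))
```

In the paper's kernel these rules act inside a spiking circuit rather than on scalar weights, but the weight and response updates have this general shape.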
Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J
2015-12-01
Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures.
Directory of Open Access Journals (Sweden)
Anastasia Ulicheva
2015-12-01
Full Text Available Background. A word whose body is pronounced in different ways in different words is body-inconsistent. When we take the unit that precedes the vowel into account for the calculation of body-consistency, the proportion of English words that are body-inconsistent is considerably reduced at the level of corpus analysis, prompting the question of whether humans actually use such head/onset-conditioning when they read. Methods. Four metrics for head/onset-constrained body-consistency were calculated: by the last grapheme of the head, by the last phoneme of the onset, by place and manner of articulation of the last phoneme of the onset, and by manner of articulation of the last phoneme of the onset. Since these were highly correlated, principal component analysis was performed on them. Results. Two out of four resulting principal components explained significant variance in the reading-aloud reaction times, beyond regularity and body-consistency. Discussion. Humans read head/onset-conditioned words faster than would be predicted based on their body-consistency and regularity only. We conclude that humans are sensitive to the dependency between word-beginnings and word-ends when they read aloud, and that this dependency is phonological in nature, rather than orthographic.
Minimal Pairs: Minimal Importance?
Brown, Adam
1995-01-01
This article argues that minimal pairs do not merit as much attention as they receive in pronunciation instruction. There are other aspects of pronunciation that are of greater importance, and there are other ways of teaching vowel and consonant pronunciation. (13 references) (VWL)
Energy Technology Data Exchange (ETDEWEB)
De Kleine, Robert D. [Center for Sustainable Systems, School of Natural Resources and Environment, University of Michigan, 440 Church St., Dana Bldg., Ann Arbor, MI 48109-1041 (United States); Keoleian, Gregory A., E-mail: gregak@umich.edu [Center for Sustainable Systems, School of Natural Resources and Environment, University of Michigan, 440 Church St., Dana Bldg., Ann Arbor, MI 48109-1041 (United States); Kelly, Jarod C. [Center for Sustainable Systems, School of Natural Resources and Environment, University of Michigan, 440 Church St., Dana Bldg., Ann Arbor, MI 48109-1041 (United States)
2011-06-15
A life cycle optimization of the replacement of residential central air conditioners (CACs) was conducted in order to identify replacement schedules that minimized three separate objectives: life cycle energy consumption, greenhouse gas (GHG) emissions, and consumer cost. The analysis was conducted for the time period 1985-2025 for Ann Arbor, MI and San Antonio, TX. Using annual sales-weighted efficiencies of residential CAC equipment, the tradeoff between potential operational savings and the burdens of producing new, more efficient equipment was evaluated. The optimal replacement schedule for each objective was identified for each location and service scenario. In general, minimizing energy consumption required frequent replacement (4-12 replacements), minimizing GHG emissions required fewer replacements (2-5), and minimizing cost required the fewest replacements (1-3) over the time horizon. Scenario analyses of different federal efficiency standards, regional standards, and Energy Star purchases were conducted to quantify each policy's impact. For example, a 16 SEER regional standard in Texas was shown to reduce primary energy consumption by 13%, GHG emissions by 11%, or cost by 6-7% when performing optimal replacement of CACs from 2005 or before. The results also indicate that proper servicing should be a higher priority than optimal replacement to minimize environmental burdens. Highlights: optimal replacement schedules for residential central air conditioners were found; minimizing energy required more frequent replacement than minimizing consumer cost; significant variation in optimal replacement was observed for Michigan and Texas; rebates for altering replacement patterns are not cost effective for GHG abatement; and maintenance levels were significant in determining the energy and GHG impacts.
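The underlying tradeoff, keep running an old inefficient unit or pay the embodied burden of a new one, is naturally posed as a small dynamic program. The numbers below are hypothetical toy data, not the study's life cycle inventory:

```python
from functools import lru_cache

def min_lifecycle_energy(horizon, annual_use, embodied):
    """Minimize total life-cycle energy over replacement schedules.
    annual_use[i] = yearly use-phase energy of a unit installed in year i
    (newer units are more efficient); embodied = energy to produce one
    unit.  Returns (total energy, tuple of replacement years)."""
    @lru_cache(None)
    def dp(year, installed):
        if year == horizon:
            return (0.0, ())
        # option 1: keep the current unit for this year
        keep_cost, keep_plan = dp(year + 1, installed)
        keep = (annual_use[installed] + keep_cost, keep_plan)
        # option 2: replace now (pay embodied energy, then run the new unit)
        rep_cost, rep_plan = dp(year + 1, year)
        rep = (embodied + annual_use[year] + rep_cost, (year,) + rep_plan)
        return min(keep, rep)
    return dp(0, 0)

# 10-year horizon; efficiency improves every year; production costs 8 units
use = [10.0 - 0.5 * i for i in range(10)]
total, replacements = min_lifecycle_energy(10, use, 8.0)
print(total, replacements)
```

With these toy numbers a single mid-horizon replacement is optimal; raising the embodied burden pushes the schedule toward fewer replacements, which is the qualitative pattern the study reports across its energy, GHG, and cost objectives.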
Institute of Scientific and Technical Information of China (English)
2007-01-01
In this paper, we are concerned with the partial regularity of weak solutions of energy-minimizing p-harmonic maps under the controllable growth condition. We obtain interior partial regularity by the p-harmonic approximation method, together with the technique used to obtain decay estimates for certain degenerate elliptic equations and for the obstacle problem by Tan and Yan. In particular, we directly obtain the optimal regularity.
Constrained simulation of the Bullet Cluster
Energy Technology Data Exchange (ETDEWEB)
Lage, Craig; Farrar, Glennys, E-mail: csl336@nyu.edu [Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003 (United States)
2014-06-01
In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full two-dimensional (2D) observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with Navarro-Frenk-White dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocities. The study gives insight into the astrophysical processes at play during a galaxy cluster merger, and constrains the strength and coherence length of the magnetic fields. The techniques developed here to create realistic, stable, triaxial clusters, and to utilize the totality of the 2D image data, will be applicable to future simulation studies of other merging clusters. This approach of constrained simulation, when applied to well-measured systems, should be a powerful complement to present tools for understanding X-ray clusters and their magnetic fields, and the processes governing their formation.
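The figure of merit described above, a chi-squared computed over full 2D maps rather than a handful of extracted features, reduces to a per-pixel sum. The small arrays below stand in for the lensing and X-ray maps:

```python
def chi_squared(observed, model, sigma):
    """Pixel-by-pixel chi-squared figure of merit between a 2D observed map
    and a simulated map, given a per-pixel uncertainty map sigma."""
    total = 0.0
    for obs_row, mod_row, sig_row in zip(observed, model, sigma):
        for o, m, s in zip(obs_row, mod_row, sig_row):
            total += ((o - m) / s) ** 2
    return total

# stand-in 2x2 maps; the real maps are full images
obs = [[1.0, 2.0], [3.0, 4.0]]
mod = [[1.1, 2.0], [2.8, 4.4]]
sig = [[0.1, 0.1], [0.2, 0.2]]
print(chi_squared(obs, mod, sig))  # 1 + 0 + 1 + 4 = 6.0 (up to rounding)
```

Minimizing this sum over the simulation's initial conditions uses every pixel of the data, which is what distinguishes the constrained-simulation approach from matching only subcluster centroid positions.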
Pietersen, CY; Bosker, FJ; Posterna, F; den Boer, JA
2006-01-01
Many fear conditioning studies use electric shock as the aversive stimulus. The intensity of shocks varies throughout the literature. In this study, shock intensities ranging from 0 to 1.5 mA were used, and the effects on the rats assessed by both behavioural and biochemical stress parameters. Resul
Pietersen, C.Y.; Bosker, F.J; Posterna, F.; Den Boer, J.A.
2006-01-01
Many fear conditioning studies use electric shock as the aversive stimulus. The intensity of shocks varies throughout the literature. In this study, shock intensities ranging from 0 to 1.5 mA were used, and the effects on the rats assessed by both behavioural and biochemical stress parameters. Resul
Wemmenhove, Ellen; van Valenberg, Hein J F; Zwietering, Marcel H; van Hooijdonk, Toon C M; Wells-Bennik, Marjon H J
2016-09-01
Minimal inhibitory concentrations (MICs) of undissociated lactic acid were determined for six different Listeria monocytogenes strains at 30 °C and in a pH range of 4.2-5.8. Small increments in pH and acid concentrations were used to accurately establish the growth/no growth limits of L. monocytogenes for these acids. The MICs of undissociated lactic acid in the pH range of 5.2-5.8 were generally higher than at pH 4.6 for the different L. monocytogenes strains. The average MIC of undissociated lactic acid was 5.0 (SD 1.5) mM in the pH range 5.2-5.6, which is relevant to Gouda cheese. Significant differences in MICs of undissociated lactic acid were found between strains of L. monocytogenes at a given pH, with a maximum observed level of 9.0 mM. Variations in MICs were mostly due to strain variation. In the pH range 5.2-5.6, the MICs of undissociated lactic acid were not significantly different at 12 °C and 30 °C. The average MICs of undissociated acetic acid, citric acid, and propionic acid were 19.0 (SD 6.5) mM, 3.8 (SD 0.9) mM, and 11.0 (SD 6.3) mM, respectively, for the six L. monocytogenes strains tested in the pH range 5.2-5.6. Variations in MICs of these organic acids for L. monocytogenes were also mostly due to strain variation. The generated data contribute to improved predictions of growth/no growth of L. monocytogenes in cheese and other foods containing these organic acids.
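Relating a MIC expressed as undissociated acid to the total acid present at a given pH follows from the Henderson-Hasselbalch relation. A small sketch, taking the commonly cited pKa of 3.86 for lactic acid (an assumption, not a value from this study):

```python
def undissociated_mM(total_mM, pH, pKa):
    """Concentration of the undissociated (protonated) form of a weak acid:
    [HA] = total / (1 + 10^(pH - pKa))   (Henderson-Hasselbalch)."""
    return total_mM / (1.0 + 10.0 ** (pH - pKa))

PKA_LACTIC = 3.86  # commonly cited pKa of lactic acid (assumed here)

# total lactic acid needed to reach the ~5.0 mM undissociated MIC at pH 5.4
total = 5.0 * (1.0 + 10.0 ** (5.4 - PKA_LACTIC))
print(round(total, 1))
```

This conversion is why a fixed MIC of the undissociated form corresponds to very different total acid levels across the pH 4.2-5.8 range studied.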
MERIT FUNCTION AND GLOBAL ALGORITHM FOR BOX CONSTRAINED VARIATIONAL INEQUALITIES
Institute of Scientific and Technical Information of China (English)
张立平; 高自友; 赖炎连
2002-01-01
The authors consider optimization methods for box constrained variational inequalities. First, they study the KKT-conditions problem derived from the original problem. A merit function for the KKT-conditions problem is proposed, and some desirable properties of this merit function are obtained. Through the merit function, the original problem is reformulated as a minimization problem with simple constraints. The authors then show that any stationary point of the optimization problem is a solution of the original problem. Finally, a descent algorithm is presented for the optimization problem, and global convergence is shown.
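One common merit function of the kind the authors describe is the squared natural residual, which vanishes exactly at solutions of the box-constrained VI. The sketch below is illustrative and not necessarily the paper's construction:

```python
def natural_residual(x, F, lower, upper):
    """Natural residual of the box-constrained VI: x solves VI(F, [l, u])
    iff r(x) = x - mid(l, x - F(x), u) = 0, where mid is the componentwise
    median (projection onto the box)."""
    def mid(lo, v, hi):
        return min(max(lo, v), hi)
    return [xi - mid(lo, xi - Fi, hi)
            for xi, Fi, lo, hi in zip(x, F(x), lower, upper)]

def merit(x, F, lower, upper):
    """Squared norm of the natural residual: zero exactly at solutions."""
    return sum(r * r for r in natural_residual(x, F, lower, upper))

# F(x) = x - 1 on the box [0, 2]: the unique solution is x = 1
F = lambda x: [xi - 1.0 for xi in x]
print(merit([1.0], F, [0.0], [2.0]))      # 0.0 at the solution
print(merit([0.0], F, [0.0], [2.0]) > 0)  # True away from it
```

Minimizing such a merit function over the box turns the VI into exactly the kind of simply constrained minimization the abstract describes.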
Uribe, Juan S; Myhre, Sue Lynn; Youssef, Jim A
2016-04-01
A literature review. The purpose of this study was to review lumbar segmental and regional alignment changes following treatment with a variety of minimally invasive surgery (MIS) interbody fusion procedures for short-segment, degenerative conditions. An increasing number of lumbar fusions are being performed with minimally invasive exposures, despite a perception that minimally invasive lumbar interbody fusion procedures are unable to affect segmental and regional lordosis. Through a MEDLINE and Google Scholar search, a total of 23 articles were identified that reported alignment following minimally invasive lumbar fusion for degenerative (nondeformity) lumbar spinal conditions, to examine aggregate changes in postoperative alignment. Of the 23 studies identified, 28 study cohorts were included in the analysis. Procedural cohorts included MIS ALIF (two), extreme lateral interbody fusion (XLIF) (16), and MIS posterior/transforaminal lumbar interbody fusion (P/TLIF) (11). Across 19 study cohorts and 720 patients, the weighted average lumbar lordosis preoperatively for all procedures was 43.5° (range 28.4°-52.5°) and increased 3.4° (9%) (range -2° to 7.4°) postoperatively. Segmental lordosis increased, on average, by 4°, from a weighted average of 8.3° preoperatively (range -0.8° to 15.8°) to 11.2° at postoperative time points (range -0.2° to 22.8°). There was a significant correlation between preoperative lumbar lordosis and change in lumbar lordosis (r = 0.413; P = 0.003), wherein lower preoperative lumbar lordosis predicted a greater increase in postoperative lumbar lordosis. Significant gains in both weighted average lumbar lordosis and segmental lordosis were seen following MIS interbody fusion. None of the segmental lordosis cohorts and only two of the 19 lumbar lordosis cohorts showed decreases in lordosis postoperatively. These results suggest that MIS approaches are able to impact regional and local segmental alignment, and that preoperative patient factors can impact the extent of correction gained.
Constrained Simulation of the Bullet Cluster
Lage, Craig
2013-01-01
In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and 0.5 - 2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full 2D observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with NFW dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocitie...
Constrained Optimization of Discontinuous Systems
Y.M. Ermoliev; V.I. Norkin
1996-01-01
In this paper we extend the results of Ermoliev, Norkin and Wets [8] and Ermoliev and Norkin [7] to the case of constrained discontinuous optimization problems. In contrast to [7], attention is concentrated on the proof of general optimality conditions for problems with nonconvex feasible sets. An easily implementable random search technique is also proposed.
Wang, Y.; Boyd, E.; Crane, S.; Lu-Irving, P.; Krabbenhoft, D.; King, S.; Dighton, J.; Geesey, G.; Barkay, T.
2011-01-01
The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggests that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient. © 2011 Springer Science+Business Media, LLC.
Herrera-Aguilar, Alfredo; Mora-Luna, Refugio Rigel; Quiros, Israel
2011-01-01
We consider warped five-dimensional thick braneworlds with four-dimensional Poincare invariance originated from bulk scalar matter non-minimally coupled to gravity plus a Gauss-Bonnet term. The background field equations as well as the perturbed equations are investigated. A relationship between 4D and 5D Planck masses is studied in general terms. By imposing finiteness of the 4D Planck mass and regularity of the geometry, the localization properties of the tensor modes of the perturbed geometry are analyzed to first order, for a wide class of solutions. In order to explore the gravity localization properties for this model, the normalizability condition for the lowest level of the tensor fluctuations is analyzed. It is found that for the examined class of solutions, gravity in 4 dimensions is recovered if and only if the curvature invariants are regular and the 4D Planck mass is finite. It turns out that both the addition of the Gauss-Bonnet term and the non-minimal coupling between the scalar field and grav...
Constrained Multiobjective Biogeography Optimization Algorithm
Directory of Open Access Journals (Sweden)
Hongwei Mo
2014-01-01
Full Text Available Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA.
Constrained multiobjective biogeography optimization algorithm.
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA.
Proton Decay in Minimal Supersymmetric SU(5)
Bajc, Borut; Perez, Pavel Fileviez; Senjanovic, Goran
2002-01-01
We systematically study proton decay in the minimal supersymmetric SU(5) grand unified theory. We find that although the available parameter space of soft masses and mixings is quite constrained, the theory is still in accord with experiment.
Method of constrained global optimization
Energy Technology Data Exchange (ETDEWEB)
Altschuler, E.L.; Williams, T.J.; Ratner, E.R.; Dowla, F.; Wooten, F. (Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551 (United States) Department of Applied Physics, Stanford University, Stanford, California 94305 (United States) Department of Applied Science, University of California, Davis/Livermore, P.O. Box 808, Livermore, California 94551 (United States))
1994-04-25
We present a new method for optimization: constrained global optimization (CGO). CGO iteratively uses a Glauber spin flip probability and the Metropolis algorithm. The spin flip probability allows changing only the values of variables contributing excessively to the function to be minimized. We illustrate CGO with two problems---Thomson's problem of finding the minimum-energy configuration of unit charges on a spherical surface, and a problem of assigning offices---for which CGO finds better minima than other methods. We think CGO will apply to a wide class of optimization problems.
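The flavor of CGO can be sketched on a toy separable quadratic (a Python illustration, not the authors' implementation; the logistic form of the flip probability, the step size, and the temperature are assumptions): variables whose contribution to the cost is above average are preferentially perturbed, and each move is screened by the Metropolis rule.

```python
import math
import random

def cgo_minimize(contrib_fns, x, steps=2000, T=0.1, step=0.1, seed=1):
    """Toy CGO-style loop: a Glauber-type probability selects the variables
    contributing most to the cost; Metropolis accepts or rejects each move.
    contrib_fns[i](x) is coordinate i's contribution to the total cost."""
    rng = random.Random(seed)
    total = lambda v: sum(f(v) for f in contrib_fns)
    E = total(x)
    for _ in range(steps):
        c = [f(x) for f in contrib_fns]          # contributions (one sweep)
        mean_c = sum(c) / len(c)
        for i in range(len(x)):
            # flip probability grows with coordinate i's excess contribution
            p_flip = 1.0 / (1.0 + math.exp(-(c[i] - mean_c) / T))
            if rng.random() < p_flip:
                old = x[i]
                x[i] += rng.uniform(-step, step)
                E_new = total(x)
                if E_new <= E or rng.random() < math.exp((E - E_new) / T):
                    E = E_new                    # accept the move
                else:
                    x[i] = old                   # reject and restore
    return x, E
```

For the actual Thomson problem the contributions would be each charge's share of the Coulomb energy, with moves constrained to the sphere.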
Energy Technology Data Exchange (ETDEWEB)
Hughes, Philip A.; Aller, Margo F.; Aller, Hugh D., E-mail: phughes@umich.edu, E-mail: mfa@umich.edu, E-mail: haller@umich.edu [Astronomy Department, University of Michigan, Ann Arbor, MI 48109-1107 (United States)
2015-02-01
We analyze the shock-in-jet models for the γ-ray flaring blazars 0420-014, OJ 287, and 1156+295 presented in Paper I, quantifying how well the modeling constrains internal properties of the flow (low-energy spectral cutoff, partition between random and ordered magnetic field), the flow dynamics (quiescent flow speed and orientation), and the number and strength of the shocks responsible for radio-band flaring. We conclude that well-sampled, multifrequency polarized flux light curves are crucial for defining source properties. We argue for few, if any, low-energy particles in these flows, suggesting no entrainment and efficient energization of jet material, and for approximate energy equipartition between the random and ordered magnetic field components, suggesting that the ordered field is built by nontrivial dynamo action from the random component, or that the latter arises from a jet instability that preserves the larger-scale, ordered flow. We present evidence that the difference between orphan radio-band (no γ-ray counterpart) and non-orphan flares is due to more complex shock interactions in the latter case.
Tsiplova, Kate; Pullenayegum, Eleanor; Cooke, Tim; Xie, Feng
2016-12-01
The purpose of the study is to estimate the EQ-5D-derived health utilities associated with selected chronic conditions (hypertension, heart disease, arthritis, asthma or COPD, cancer, diabetes, chronic back pain, and anxiety or depression) and to estimate minimally important differences (MID) based on the Commonwealth Fund Survey of Sicker Adults in Canada. We used a cross-sectional survey of 3765 sick adults in Canada conducted in 2011 by the Commonwealth Fund. Health utilities were calculated for the entire sample and for each of the eight chronic health conditions. An ordinary least squares regression was used to estimate the utility decrement associated with these conditions with and without adjustment for socio-demographic factors. The MIDs were estimated using the anchor- and distribution-based methods. The adjusted utility decrement varied from 0.028 (95 % confidence interval (CI) -0.049, -0.008) for diabetes to 0.124 (95 % CI -0.142, -0.105) for anxiety or depression. The anchor-based MID for the entire group was 0.044 (95 % CI 0.025, 0.062), and the distribution-based MID for the entire group was 0.091. The condition-specific MIDs using the distribution-based method ranged from 0.089 for cancer to 0.108 for asthma or COPD. The MID estimated by the distribution-based method was larger than the MID estimated by the anchor-based method, indicating that the choice of method matters. The impact of arthritis, anxiety or depression, and chronic back pain on health utility was substantial, all exceeding or approximating the MID estimated using either anchor- or distribution-based methods.
Petculescu, Andi; Riner, Joshua
2010-10-01
Usually, the energy released as air-coupled sound following a collision is dismissed as negligible. The goal of this Letter is to quantify this small but measurable quantity, since it can be useful to impact studies. Measurements of sound radiation from binary collisions of polypropylene balls were performed in order to constrain the fraction of incident energy radiated as sound in air. In the experiments, one ball is released from rest, directly above a stationary target ball. The transient acoustic waveforms are detected by a microphone rotated about the impact point at a radius of 10 cm. The sound pressure was measured as a function of the polar angle θ (the azimuthal symmetry of the problem was verified by rotating the microphone in the horizontal plane). The angular pattern has two main lobes that are asymmetric with respect to the impact plane. This asymmetry is ascribable to interference and/or scattering effects. Gaps in the acoustic measurements at the "poles" (i.e., around 0° and 180°) pose a challenge similar to that of extrapolating the cosmic microwave background in the galactic "cut." The data were continued into the gaps by polynomial interpolation rather than least-squares fitting, a choice dictated by the accuracy of the reconstructed pattern. The acoustic energy radiated during the impact, estimated by multiplying the collision time by the sound intensity integrated over a spherical surface centered at the impact point, is found to be four orders of magnitude smaller than the incident energy (0.23 μJ versus 1.6 mJ).
The recursion operator for a constrained CKP hierarchy
Li, Chuanzhong; Tian, Kelei; He, Jingsong; Cheng, Yi
2010-01-01
This paper constructs a recursion operator for the 1-constrained CKP hierarchy and uses it to prove that the 1-constrained CKP hierarchy can be reduced to the mKdV hierarchy under the condition $q=r$.
Geometric constrained variational calculus. III: The second variation (Part II)
Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico
2016-03-01
The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.
Energy Technology Data Exchange (ETDEWEB)
Gilson, Erik P. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States)]. E-mail: egilson@pppl.gov; Chung, Moses [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Dorf, Mikhail [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Efthimion, Philip C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Grote, David P. [Lawrence Livermore National Laboratory, University of California, Livermore, CA 94550 (United States); Majeski, Richard [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Startsev, Edward A. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States)
2007-07-01
The Paul Trap Simulator Experiment (PTSX) is a compact laboratory Paul trap that simulates propagation of a long, thin charged-particle bunch coasting through a multi-kilometer-long magnetic alternating-gradient (AG) transport system by putting the physicist in the frame-of-reference of the beam. The transverse dynamics of particles in both systems are described by the same sets of equations, including all nonlinear space-charge effects. The time-dependent quadrupolar voltages applied to the PTSX confinement electrodes correspond to the axially dependent magnetic fields applied in the AG system. This paper presents the results of experiments in which the amplitude of the applied confining voltage is changed over the course of the experiment in order to transversely compress a beam with an initial depressed tune ν/ν₀ ≈ 0.9. Both instantaneous and smooth changes are considered. Particular emphasis is placed on determining the conditions that minimize the emittance growth and, generally, the number of particles that are found at large radius (so-called halo particles) after the beam compression. The experimental data are also compared with the results of particle-in-cell (PIC) simulations performed with the WARP code.
Impulsive differential inclusions with constraints
Directory of Open Access Journals (Sweden)
Tzanko Donchev
2006-05-01
Full Text Available In the paper, we study weak invariance of differential inclusions with non-fixed-time impulses under compactness-type assumptions. When the right-hand side is one-sided Lipschitz, an extension of the well-known relaxation theorem is proved. In this case, necessary and sufficient conditions for strong invariance of upper semicontinuous systems are also obtained. Some properties of the solution set of the impulsive system (without constraints) in an appropriate topology are investigated.
Adaptive Alternating Minimization Algorithms
Niesen, Urs; Wornell, Gregory
2007-01-01
The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables or equivalently of finding a point in the intersection of two sets. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in ...
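The two-set version of the algorithm is easy to demonstrate: alternately project onto each set and iterate. A minimal sketch follows, using two lines in the plane whose intersection is the point (1, 1); the particular sets are illustrative assumptions, not taken from the paper.

```python
def alternating_projections(p, proj_a, proj_b, iters=60):
    """Classical alternating projections: for closed convex sets with a
    common point, the iterates converge to the intersection."""
    for _ in range(iters):
        p = proj_b(proj_a(p))
    return p

# Two convex sets in the plane: the horizontal line y = 1 and the diagonal y = x.
proj_y1 = lambda p: (p[0], 1.0)                              # project onto y = 1
proj_diag = lambda p: ((p[0] + p[1]) / 2, (p[0] + p[1]) / 2)  # project onto y = x
```

Starting from the origin, the iterates converge geometrically to (1, 1), the unique point in both sets. The adaptive variant studied in the paper would let the sets themselves change between iterations.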
On optimal solutions of the constrained $\ell_0$ regularization and its penalty problem
Zhang, Na; Li, Qia
2017-02-01
The constrained {{\\ell}0} regularization plays an important role in sparse reconstruction. A widely used approach for solving this problem is the penalty method, of which the least square penalty problem is a special case. However, the connections between global minimizers of the constrained {{\\ell}0} problem and its penalty problem have never been studied in a systematic way. This work provides a comprehensive investigation on optimal solutions of these two problems and their connections. We give detailed descriptions of optimal solutions of the two problems, including existence, stability with respect to the parameter, cardinality and strictness. In particular, we find that the optimal solution set of the penalty problem is piecewise constant with respect to the penalty parameter. Then we analyze in-depth the relationship between optimal solutions of the two problems. It is shown that, in the noisy case the least square penalty problem probably has no common optimal solutions with the constrained {{\\ell}0} problem for any penalty parameter. Under a mild condition on the penalty function, we establish that the penalty problem has the same optimal solution set as the constrained {{\\ell}0} problem when the penalty parameter is sufficiently large. Based on the conditions, we further propose exact penalty problems for the constrained {{\\ell}0} problem. Finally, we present a numerical example to illustrate our main theoretical results.
Constrained Geodesic Centers of a Simple Polygon
Oh, Eunjin; Son, Wanbin; Ahn, Hee-Kap
2016-01-01
For any two points in a simple polygon P, the geodesic distance between them is the length of the shortest path contained in P that connects them. A geodesic center of a set S of sites (points) with respect to P is a point in P that minimizes the geodesic distance to its farthest site. In many realistic facility location problems, however, the facilities are constrained to lie in feasible regions. In this paper, we show how to compute the geodesic centers constrained to a set of line segment...
Evolutionary constrained optimization
Deb, Kalyanmoy
2015-01-01
This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single- and multi-objective optimization; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...
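The penalty-function methodology mentioned above can be stated in two lines: replace the constrained objective with f(x) plus μ times the total constraint violation, so any evolutionary algorithm for unconstrained problems applies unchanged. A minimal sketch, using the common static quadratic penalty (one choice among those the literature surveys):

```python
def penalized(f, ineq_constraints, mu):
    """Static quadratic penalty for inequality constraints g(x) <= 0:
    returns an unconstrained objective f(x) + mu * sum of squared violations."""
    def fp(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)
        return f(x) + mu * violation
    return fp
```

An evolutionary algorithm then minimizes `fp` directly; μ trades off feasibility against objective quality, and adaptive schemes for μ are one of the topics covered in the book.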
Directory of Open Access Journals (Sweden)
Sergio Delpino
2008-06-01
Full Text Available In the Sierra de Valle Fértil, evidence of granulite-facies metamorphism has been preserved both in the constituent mineral associations and in deformation mechanisms in minerals from biotite-garnet and cordierite-sillimanite gneisses, cordierite and garnet-cordierite migmatites, metagabbros, metatonalites-metadiorites and mafic dikes. The main recognized deformation mechanisms are: (1) quartz: (a) dynamic recrystallization of quartz-feldspar boundaries, (b) combination of basal and prism [c] slip; (2) K-feldspar: grain-boundary migration recrystallization; (3) plagioclase: combination of grain-boundary migration recrystallization and subgrain rotation recrystallization; (4) cordierite: subgrain rotation recrystallization; (5) hornblende: grain-boundary migration recrystallization. Preliminary geothermometry on gabbroic rocks and the construction of an appropriate petrogenetic grid allow us to establish temperatures in the range 800-850 °C and pressures under 5 kbar for the metamorphic climax. Estimated metamorphic peak conditions, preliminary geothermobarometry on specific lithologic types, and textural relationships together indicate a counter-clockwise P-T path for the metamorphic evolution of the rocks of the area. Ductile deformation of phases resulting from anatexis linked to the metamorphic climax indicates that the higher-temperature ductile event recognized in the study area took place after the metamorphic peak. Evidence of ductile deformation of cordierite within its stability field and the presence of chessboard extinction in quartz (only possible above the Qtzα/Qtzβ transformation curve) both indicate temperatures above 700 °C at pressures greater than 5 kbar. Based on the established P-T trajectory and the characteristics described above, it can be concluded that the deformation mechanisms affecting the Sierra de Valle Fértil rocks were developed entirely within the granulite-facies field.
Barbieri, Riccardo; Harigaya, Keisuke
2016-01-01
In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z2 parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z2 breaking, can generate the Z2 breaking in the Higgs sector necessary for the Twin Higgs mechanism, and has constrained and correlated signals in invisible Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z2 breaking from the vacuum expectation values of B-L breaking fields are also discussed.
On Constrained Facility Location Problems
Institute of Scientific and Technical Information of China (English)
Wei-Lin Li; Peng Zhang; Da-Ming Zhu
2008-01-01
Given m facilities each with an opening cost, n demands, and the distance between every demand and facility, the Facility Location problem finds a solution which opens some facilities and connects every demand to an opened facility such that the total cost of the solution is minimized. The k-Facility Location problem further requires that the number of opened facilities is at most k, where k is a parameter given in the instance of the problem. We consider Facility Location problems satisfying that, for every demand, the ratio of the longest distance to facilities to the shortest distance to facilities is at most ω, where ω is a predefined constant. Using the local search approach with a scaling technique and an error control technique, for any arbitrarily small constant ε > 0, we give a polynomial-time approximation algorithm for the ω-constrained Facility Location problem with approximation ratio 1 + √(ω+1) + ε, which significantly improves the previous best known ratio (ω + 1)/α for some 1 ≤ α ≤ 2, and a polynomial-time approximation algorithm for the ω-constrained k-Facility Location problem with approximation ratio ω + 1 + ε. On the aspect of approximation hardness, we prove that unless NP ⊆ DTIME(n^O(log log n)), the ω-constrained Facility Location problem cannot be approximated within 1 + √(ω-1), which slightly improves the previous best known hardness result 1.243 + 0.316 ln(ω - 1). Experimental results on standard test instances of the Facility Location problem show that our algorithm also performs well in practice.
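The plain local search scheme that the paper refines (before scaling and error control) can be sketched as open/close/swap moves, accepted whenever they reduce total cost. This is a brute-force illustration of that generic scheme, not the paper's algorithm.

```python
def fl_cost(open_set, open_cost, dist):
    """Total cost: opening costs plus each demand's distance to its
    nearest open facility. dist[d][f] is demand d's distance to facility f."""
    return (sum(open_cost[f] for f in open_set)
            + sum(min(dist[d][f] for f in open_set) for d in range(len(dist))))

def local_search(open_cost, dist):
    """First-improvement local search over open/close/swap moves,
    starting from all facilities open."""
    m = len(open_cost)
    current = set(range(m))
    improved = True
    while improved:
        improved = False
        neighbors = []
        for f in range(m):
            neighbors.append(current | {f})                  # open f
            if len(current) > 1:
                neighbors.append(current - {f})              # close f
        for f_in in range(m):
            for f_out in current:
                neighbors.append((current - {f_out}) | {f_in})  # swap
        for cand in neighbors:
            if cand and fl_cost(cand, open_cost, dist) < fl_cost(current, open_cost, dist):
                current, improved = cand, True
                break
    return current, fl_cost(current, open_cost, dist)
```

On bounded-ratio instances (the ω constraint above), local optima of this scheme are provably close to the global optimum, which is what the paper's analysis sharpens.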
Piazza, Federico; Schücker, Thomas
2016-04-01
The minimal requirement for cosmography—a non-dynamical description of the universe—is a prescription for calculating null geodesics, and time-like geodesics as a function of their proper time. In this paper, we consider the most general linear connection compatible with homogeneity and isotropy, but not necessarily with a metric. A light-cone structure is assigned by choosing a set of geodesics representing light rays. This defines a "scale factor" and a local notion of distance, as that travelled by light in a given proper time interval. We find that the velocities and relativistic energies of free-falling bodies decrease in time as a consequence of cosmic expansion, but at a rate that can be different than that dictated by the usual metric framework. By extrapolating this behavior to photons' redshift, we find that the latter is in principle independent of the "scale factor". Interestingly, redshift-distance relations and other standard geometric observables are modified in this extended framework, in a way that could be experimentally tested. An extremely tight constraint on the model, however, is represented by the blackbody-ness of the cosmic microwave background. Finally, as a check, we also consider the effects of a non-metric connection in a different set-up, namely, that of a static, spherically symmetric spacetime.
AN INTERIOR TRUST REGION ALGORITHM FOR NONLINEAR MINIMIZATION WITH LINEAR CONSTRAINTS
Institute of Scientific and Technical Information of China (English)
Jian-guo Liu
2002-01-01
An interior trust-region-based algorithm for linearly constrained minimization problems is proposed and analyzed. This algorithm is similar to trust region algorithms for unconstrained minimization: a trust region subproblem on a subspace is solved in each iteration. We establish that the proposed algorithm has convergence properties analogous to those of the trust region algorithms for unconstrained minimization. Namely, every limit point of the generated sequence satisfies the Karush-Kuhn-Tucker (KKT) conditions and at least one limit point satisfies second order necessary optimality conditions. In addition, if one limit point is a strong local minimizer and the Hessian is Lipschitz continuous in a neighborhood of that point, then the generated sequence converges globally to that point at a rate of at least 2-step quadratic. We are mainly concerned with the theoretical properties of the algorithm in this paper. Implementation issues and adaptation to large-scale problems will be addressed in a future report.
Choosing health, constrained choices.
Chee Khoon Chan
2009-12-01
In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena, for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption that merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices also offers a conceptual bridge between responsibilization and population health that could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.
Constrained optimization using CODEQ
Energy Technology Data Exchange (ETDEWEB)
Omran, Mahamed G.H. [Department of Computer Science, Gulf University for Science and Technology, P.O. Box 7207, Hawally 32093 (Kuwait)], E-mail: omran.m@gust.edu.kw; Salman, Ayed [Computer Engineering Department, Kuwait University, P.O. Box 5969, Safat 13060 (Kuwait)], E-mail: ayed@eng.kuniv.edu.kw
2009-10-30
Many real-world optimization problems are constrained problems that involve equality and inequality constraints. CODEQ is a new, parameter-free meta-heuristic algorithm that is a hybrid of concepts from chaotic search, opposition-based learning, differential evolution and quantum mechanics. The performance of the proposed approach when applied to five constrained benchmark problems is investigated and compared with other approaches proposed in the literature. The experiments conducted show that CODEQ provides excellent results with the added advantage of no parameter tuning.
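One of CODEQ's ingredients, opposition-based learning, is simple to illustrate in isolation: for each candidate in box bounds [lo, hi], also evaluate its "opposite" lo + hi − x and keep the better of the pair. The sketch below shows just this step; the chaotic search, differential evolution, and quantum-inspired components of CODEQ are omitted.

```python
def opposition_step(pop, fitness, lo, hi):
    """Opposition-based learning: compare each candidate against its
    opposite point in the box [lo, hi] and keep whichever is fitter
    (lower fitness is better here)."""
    survivors = []
    for x in pop:
        opposite = [lo + hi - xi for xi in x]
        survivors.append(x if fitness(x) <= fitness(opposite) else opposite)
    return survivors
```

Checking the opposite point costs one extra evaluation per candidate but can jump the population across the search box, which is why several parameter-free heuristics adopt it.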
Arveson, W
1995-01-01
It is known that every semigroup of normal completely positive maps of a von Neumann algebra can be "dilated" in a particular way to an E_0-semigroup acting on a larger von Neumann algebra. The E_0-semigroup is not uniquely determined by the completely positive semigroup; however, it is unique (up to conjugacy) provided that certain conditions of minimality are met. Minimality is a subtle property, and it is often not obvious if it is satisfied for specific examples even in the simplest case where the von Neumann algebra is $\mathcal{B}(H)$. In this paper we clarify these issues by giving a new characterization of minimality in terms of projective cocycles and their limits. Our results are valid for semigroups of endomorphisms acting on arbitrary von Neumann algebras with separable predual.
The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms
Institute of Scientific and Technical Information of China (English)
HE Zhong-qiu; LI Dao-ben
2003-01-01
This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1-3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are deduced subject to a new constraint condition. They are both implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms. Simulation results show their performance comparisons.
Locally minimal topological groups
Außenhofer, Lydia; Dikranjan, Dikran; Domínguez, Xabier
2009-01-01
A Hausdorff topological group $(G,\\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\\tau.$ Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all minimal groups. Motivated by the fact that locally compact NSS groups are Lie groups, we study the connection between local minimality and the NSS property, establishing that under certain conditions, locally minimal NSS groups are metrizable. A symmetric subset of an abelian group containing zero is said to be a GTG set if it generates a group topology in an analogous way as convex and symmetric subsets are unit balls for pseudonorms on a vector space. We consider topological groups which have a neighborhood basis at zero consisting of GTG sets. Examples of these locally GTG groups are: locally pseudo--convex spaces, groups uniformly free from small subgroups (...
Sharp spatially constrained inversion
DEFF Research Database (Denmark)
Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest;
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted … by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes … the results are compatible with the data and, at the same time, favor sharp transitions. The focusing strategy can also be used to constrain the 1D solutions laterally, guaranteeing that lateral sharp transitions are retrieved without losing resolution. By means of real and synthetic datasets, sharp…
On Quantum Channel Estimation with Minimal Resources
Zorzi, M; Ferrante, A
2011-01-01
We determine the minimal experimental resources that ensure a unique solution in the estimation of trace-preserving quantum channels with both direct and convex optimization methods. A convenient parametrization of the constrained set is used to develop a globally converging Newton-type algorithm that ensures a physically admissible solution to the problem. Numerical simulations are provided to support the results, and indicate that the minimal experimental setting is sufficient to guarantee good estimates.
Suwono, A.; Indartono, Y. S.; Irsyad, M.; Al-Afkar, I. C.
2015-09-01
One way to resolve the energy problem is to increase the efficiency of energy use. The air conditioning system is one piece of equipment that needs to be considered, because it is the biggest energy user in the commercial building sector. Research currently under development includes the use of phase change materials (PCM) as thermal energy storage (TES) in air conditioning systems to reduce energy consumption. Salt hydrates have great potential for development because of their high latent heat and thermal conductivity. This study tested a calcium chloride salt hydrate in a chiller-type air conditioning system. Thermal characteristics were examined using the temperature history (T-history) test and differential scanning calorimetry (DSC). The test results showed that the thermal characteristics of the salt hydrate include a high latent heat and a phase-change temperature in accordance with the evaporator temperature. The use of salt hydrates in a chiller-type air conditioning system can reduce energy consumption by 51.5%.
Energy Technology Data Exchange (ETDEWEB)
Davidon, W.C.; Nazareth, L.
1977-08-01
A derivative-free implementation of Davidon's Optimally Conditioned Method for unconstrained optimization is described, and computational experience on a set of test problems is given. 3 tables.
Energy Technology Data Exchange (ETDEWEB)
Aller, M. F.; Hughes, P. A.; Aller, H. D.; Latimer, G. E. [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109-1042 (United States); Hovatta, T., E-mail: mfa@umich.edu [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States)
2014-08-10
To investigate parsec-scale jet flow conditions during GeV γ-ray flares detected by the Fermi Large Area Telescope, we obtained centimeter-band total flux density and linear polarization monitoring observations from 2009.5 through 2012.5 with the 26 m Michigan radio telescope for a sample of core-dominated blazars. We use these data to constrain radiative transfer simulations incorporating propagating shocks oriented at an arbitrary angle to the flow direction in order to set limits on the jet flow and shock parameters during flares temporally associated with γ-ray flares in 0420–014, OJ 287, and 1156+295; these active galactic nuclei exhibited the expected signature of shocks in the linear polarization data. Both the number of shocks comprising an individual radio outburst (3 and 4) and the range of the compression ratios of the individual shocks (0.5-0.8) are similar in all three sources; the shocks are found to be forward-moving with respect to the flow. While simulations incorporating transverse shocks provide good fits for 0420–014 and 1156+295, oblique shocks are required for modeling the OJ 287 outburst, and an unusually low value of the low-energy cutoff of the radiating particles' energy distribution is also identified. Our derived viewing angles and shock speeds are consistent with independent Very Long Baseline Array results. While a random component dominates the jet magnetic field, as evidenced by the low fractional linear polarization, reproducing the observed spectral character requires that a significant fraction of the magnetic field energy be in an ordered axial component. Both straight and low pitch angle helical field lines are viable scenarios.
Processing Constrained K Closest Pairs Query in Spatial Databases
Institute of Scientific and Technical Information of China (English)
LIU Xiaofeng; LIU Yunsheng; XIAO Yingyuan
2006-01-01
In this paper, the constrained K closest pairs query is introduced, which retrieves the K closest pairs satisfying a given spatial constraint from two datasets. For datasets indexed by R-trees in spatial databases, three algorithms are presented for answering this kind of query. Among them, the two-phase Range+Join and Join+Range algorithms adopt the strategy of changing the execution order of the range and closest-pairs queries, while the constrained heap-based algorithm utilizes extended distance functions to prune the search space and minimize the pruning distance. Experimental results show that the constrained heap-based algorithm has better applicability and performance than the two-phase algorithms.
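To make the query semantics concrete, here is a brute-force sketch in plain Python; the point format, the rectangular region, and the convention that both points of a pair must lie inside it are assumptions for illustration. The paper's actual algorithms avoid this quadratic scan by pruning through R-tree indexes.

```python
import heapq

def in_region(p, region):
    (xmin, ymin), (xmax, ymax) = region
    return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax

def constrained_k_closest_pairs(A, B, k, region):
    """Return the k closest pairs (a, b) with a in A, b in B, where both
    points lie inside the rectangular constraint region. Distances are
    squared Euclidean. Brute force for illustration only."""
    heap = []  # max-heap of size k, distances negated
    for a in A:
        if not in_region(a, region):
            continue
        for b in B:
            if not in_region(b, region):
                continue  # pair violates the spatial constraint
            d = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
            if len(heap) < k:
                heapq.heappush(heap, (-d, a, b))
            elif -heap[0][0] > d:
                heapq.heapreplace(heap, (-d, a, b))
    return sorted((-nd, a, b) for nd, a, b in heap)
```

The bounded max-heap mirrors the "pruning distance" idea: once k candidates are held, any pair farther than the current worst is discarded.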
Fast alternating projection methods for constrained tomographic reconstruction.
Liu, Li; Han, Yongxin; Jin, Mingwu
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method uses projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections onto convex sets (FS-POCS) to find the solution in the intersection of the convex constraints of bounded TV function, bounded data-fidelity error, and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. Breaking constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality and quantification.
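The alternating-projection idea can be sketched in a few lines of numpy. This toy version is an assumption-laden stand-in for FS-POCS: it uses only two convex sets, nonnegativity and an identity-operator data-fidelity ball, instead of the paper's TV/fidelity/nonnegativity intersection with a full CT system matrix.

```python
import numpy as np

def pocs(b, eps, iters=100):
    """Alternately project onto two convex sets until a point in their
    intersection is (approximately) reached:
      C1 = {x : x >= 0}             (nonnegativity)
      C2 = {x : ||x - b||_2 <= eps} (data-fidelity ball, identity operator)
    Toy illustration only; not the paper's FS-POCS."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = np.maximum(x, 0.0)        # project onto C1
        r = x - b
        nrm = np.linalg.norm(r)
        if nrm > eps:                 # project onto C2 if outside it
            x = b + r * (eps / nrm)
    return x
```

When the intersection is nonempty, the iterates converge to a feasible point; this is the behavior FS-POCS exploits, with the extra sets giving the physically meaningful parameterization.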
Lectures on Constrained Systems
Date, Ghanashyam
2010-01-01
These lecture notes were prepared as a basic introduction to the theory of constrained systems, which is how the fundamental forces of nature appear in their Hamiltonian formulation. Only a working knowledge of the Lagrangian and Hamiltonian formulations of mechanics is assumed. The notes are based on a set of eight lectures given at the Refresher Course for College Teachers held at IMSc during May-June 2005, and are submitted to the arXiv for easy access by a wider body of students.
Constraining entropic cosmology
Energy Technology Data Exchange (ETDEWEB)
Koivisto, Tomi S. [Institute for Theoretical Physics and the Spinoza Institute, Utrecht University, Leuvenlaan 4, Postbus 80.195, 3508 TD Utrecht (Netherlands); Mota, David F. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zumalacárregui, Miguel, E-mail: t.s.koivisto@uu.nl, E-mail: d.f.mota@astro.uio.no, E-mail: miguelzuma@icc.ub.edu [Institute of Cosmos Sciences (ICC-IEEC), University of Barcelona, Marti i Franques 1, E-08028 Barcelona (Spain)
2011-02-01
It has recently been proposed that the interpretation of gravity as an emergent, entropic phenomenon might have nontrivial implications for cosmology. Here several such approaches are investigated, and the underlying assumptions that must be made in order to constrain them with the BBN, SneIa, BAO and CMB data are clarified. The present models of inflation or dark energy are ruled out by the data. Constraints are derived on phenomenological parameterizations of modified Friedmann equations, and some features of entropic scenarios regarding the growth of perturbations, the no-go theorem for entropic inflation and the possible violation of the Bekenstein bound for the entropy of the Universe are discussed and clarified.
Symmetrically Constrained Compositions
Beck, Matthias; Lee, Sunyoung; Savage, Carla D
2009-01-01
Given integers $a_1, a_2, \ldots, a_n$, with $a_1 + a_2 + \cdots + a_n \geq 1$, a symmetrically constrained composition $\lambda_1 + \lambda_2 + \cdots + \lambda_n = M$ of $M$ into $n$ nonnegative parts is one that satisfies each of the $n!$ constraints $\{\sum_{i=1}^n a_i \lambda_{\pi(i)} \geq 0 : \pi \in S_n\}$. We show how to compute the generating function of these compositions, combining methods from partition theory, permutation statistics, and lattice-point enumeration.
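For small n and M the definition can be checked by brute force; this sketch enumerates compositions and tests all n! constraints directly (illustration only, not the paper's generating-function method):

```python
from itertools import permutations

def is_symmetrically_constrained(lam, a):
    """Check every constraint sum_i a_i * lam_{pi(i)} >= 0, pi in S_n."""
    return all(sum(ai * lam[p] for ai, p in zip(a, pi)) >= 0
               for pi in permutations(range(len(lam))))

def count(a, M):
    """Count symmetrically constrained compositions of M into
    len(a) nonnegative parts by exhaustive enumeration."""
    n = len(a)
    def parts(m, k):  # all compositions of m into k nonnegative parts
        if k == 1:
            yield (m,)
            return
        for first in range(m + 1):
            for rest in parts(m - first, k - 1):
                yield (first,) + rest
    return sum(1 for lam in parts(M, n) if is_symmetrically_constrained(lam, a))
```

For example, with a = (1, 1, -1) the n! constraints collapse to λ_k ≤ M/2 for every k.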
CONSTRAINED RATIONAL CUBIC SPLINE AND ITS APPLICATION
Institute of Scientific and Technical Information of China (English)
Qi Duan; Huan-ling Zhang; Xiang Lai; Nan Xie; Fu-hua (Frank) Cheng
2001-01-01
In this paper, a kind of rational cubic interpolation function with linear denominator is constructed. Constrained interpolation, with constraints on the shape of the interpolating curves and on the second-order derivative of the interpolating function, is studied using this interpolation, and as a consequence the convex interpolation conditions are derived.
Kerner, Boris S.
2016-09-01
We have revealed general physical conditions for the maximization of the network throughput at which free flow conditions are ensured, i.e., traffic breakdown cannot occur anywhere in the traffic or transportation network. A physical measure of the network, the network capacity, is introduced that characterizes general features of the network with respect to the maximization of the network throughput. The network capacity also allows us to give a general proof of the deterioration of the traffic system that occurs when dynamic traffic assignment is performed in a network based on the classical Wardrop user equilibrium (UE) and system optimum (SO) equilibrium.
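The UE-versus-SO deterioration mentioned above can be seen in the classical Pigou two-route example, which is standard textbook material rather than the paper's network capacity analysis:

```python
def total_cost(x):
    """Total travel time in Pigou's two-route network: flow x on the
    congestible route with per-unit cost x, flow 1 - x on the fixed
    route with per-unit cost 1; total demand is normalized to 1."""
    return x * x + (1.0 - x)

# Wardrop user equilibrium: drivers switch routes until no one gains,
# which pushes all traffic onto the congestible route (its cost, 1,
# then matches the alternative), giving total cost 1.0.
ue_cost = total_cost(1.0)

# System optimum: minimize x^2 + (1 - x), i.e. 2x - 1 = 0, so half the
# traffic takes each route and total cost drops to 0.75.
so_cost = total_cost(0.5)
```

The gap between ue_cost and so_cost is the selfish-routing inefficiency that a network-capacity argument must account for.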
Space Constrained Dynamic Covering
Antonellis, Ioannis; Dughmi, Shaddin
2009-01-01
In this paper, we identify a fundamental algorithmic problem that we term space-constrained dynamic covering (SCDC), arising in many modern-day web applications, including ad-serving and online recommendation systems in eBay and Netflix. Roughly speaking, SCDC applies two restrictions to the well-studied Max-Coverage problem: Given an integer k, X={1,2,...,n} and I={S_1, ..., S_m}, S_i a subset of X, find a subset J of I, such that |J| <= k and the union of S in J is as large as possible. The two restrictions applied by SCDC are: (1) Dynamic: At query-time, we are given a query Q, a subset of X, and our goal is to find J such that the intersection of Q with the union of S in J is as large as possible; (2) Space-constrained: We don't have enough space to store (and process) the entire input; specifically, we have o(mn), sometimes even as little as O((m+n)polylog(mn)) space. The goal of SCDC is to maintain a small data structure so as to answer most dynamic queries with high accuracy. We present algorithms a...
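When space is not constrained, the query-time objective reduces to greedy max coverage; a minimal in-memory sketch follows, with the paper's o(mn)-space machinery deliberately omitted:

```python
def greedy_cover(sets, query, k):
    """Greedily choose at most k of the given sets to cover as much of
    `query` as possible. This sketches only the query-time objective of
    SCDC; the paper's contribution is answering such queries in o(mn)
    space, which this in-memory version does not attempt."""
    query = set(query)
    chosen, covered = [], set()
    for _ in range(k):
        gains = [len((s & query) - covered) for s in sets]
        best = max(range(len(sets)), key=gains.__getitem__)
        if gains[best] == 0:
            break  # nothing left in the query can be covered
        chosen.append(best)
        covered |= (sets[best] & query)
    return chosen, covered
```

Greedy achieves the classic (1 - 1/e) approximation for max coverage; the hard part of SCDC is preserving something like this guarantee with sublinear space.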
Directory of Open Access Journals (Sweden)
Smalley John V
2006-05-01
Full Text Available Abstract Background The acquisition of high-quality DNA for use in phylogenetic and molecular population genetic studies is a primary concern for evolutionary and genetic researchers. Many non-destructive DNA sampling methods have been developed and are used with a variety of taxa in applications ranging from genetic stock assessment to molecular forensics. Results The authors have developed a field sampling method for obtaining high-quality DNA from sunfish (Lepomis) and other freshwater fish that employs a variation on the buccal swab method and results in the collection of DNA suitable for PCR amplification and polymorphism analysis. Additionally, since the circumstances of storage are always a concern for field biologists, the authors have tested the potential storage conditions of swabbed samples and whether those conditions affect DNA extraction and PCR amplification. It was found that samples stored at room temperature in the dark for over 200 days could still yield DNA suitable for PCR amplification and polymorphism detection. Conclusion These findings suggest that valuable molecular genetic data may be obtained from tissues that have not been treated or stored under optimal field conditions. Furthermore, it is clear that the lack of adequately low temperatures during transport and long term storage should not be a barrier to anyone wishing to engage in field-based molecular genetic research.
Huston, Marshall W; van Til, Niek P; Visser, Trudi P; Arshad, Shazia; Brugman, Martijn H; Cattoglio, Claudia; Nowrouzi, Ali; Li, Yuedan; Schambach, Axel; Schmidt, Manfred; Baum, Christopher; von Kalle, Christof; Mavilio, Fulvio; Zhang, Fang; Blundell, Mike P; Thrasher, Adrian J; Verstegen, Monique M A; Wagemaker, Gerard
2011-10-01
Clinical trials have demonstrated the potential of ex vivo hematopoietic stem cell gene therapy to treat X-linked severe combined immunodeficiency (SCID-X1) using γ-retroviral vectors, leading to immune system functionality in the majority of treated patients without pretransplant conditioning. The success was tempered by insertional oncogenesis in a proportion of the patients. To reduce the genotoxicity risk, a self-inactivating (SIN) lentiviral vector (LV) with improved expression of a codon optimized human interleukin-2 receptor γ gene (IL2RG) cDNA (coγc), regulated by its 1.1 kb promoter region (γcPr), was compared in efficacy to the viral spleen focus forming virus (SF) and the cellular phosphoglycerate kinase (PGK) promoters. Pretransplant conditioning of Il2rg(-/-) mice resulted in long-term reconstitution of T and B lymphocytes, normalized natural antibody titers, humoral immune responses, ConA/IL-2 stimulated spleen cell proliferation, and polyclonal T-cell receptor gene rearrangements with a clear integration preference of the SF vector for proto-oncogenes, contrary to the PGK and γcPr vectors. We conclude that SIN lentiviral gene therapy using coγc driven by the γcPr or PGK promoter corrects the SCID phenotype, potentially with an improved safety profile, and that low-dose conditioning proved essential for immune competence, allowing for a reduced threshold of cell numbers required.
Nogami, Hirofumi; Arai, Shozo; Okada, Hironao; Zhan, Lan; Itoh, Toshihiro
2017-03-27
Monitoring rumen conditions in cows is important because a dysfunctional rumen may cause death. Sub-acute ruminal acidosis (SARA) is a typical disease in cows, characterized by repeated periods of low ruminal pH. SARA is regarded as a trigger for rumen atony, rumenitis, and abomasal displacement, which may cause death. In previous studies, rumen conditions were evaluated by wireless sensor nodes with pH measurement capability. The primary advantage of the pH sensor is its ability to continuously measure ruminal pH. However, these sensor nodes have short lifetimes because they are limited by the finite volume of the internal liquid of the reference electrode. We therefore attempt to evaluate the rumen condition, in a setting mimicking rumen atony, using wireless sensor nodes with three-axis accelerometers. The theoretical life span of such sensor nodes depends mainly on the transmission frequency of the acceleration data and the size of the battery; the proposed sensor nodes are 30.0 mm in diameter and 70.0 mm in length and have a life span of over 600 days. Using the sensor nodes, we compare rumen motility measured by a force transducer with the three-axis accelerometer data. As a result, we can detect the distinctive movement of rumen atony.
Directory of Open Access Journals (Sweden)
M. Venkatesulu
1996-01-01
Full Text Available Solutions of initial value problems associated with a pair of ordinary differential systems (L1, L2) defined on two adjacent intervals I1 and I2 and satisfying certain interface-spatial conditions at the common end (interface) point are studied.
QCD strings as constrained grassmannian sigma model
Viswanathan, K S; Viswanathan, K S; Parthasarathy, R
1995-01-01
We present calculations of the effective action of the string world sheet in R^3 and R^4, utilizing its correspondence with the constrained Grassmannian sigma model. Minimal surfaces describe the dynamics of open strings while harmonic surfaces describe that of closed strings. The one-loop effective action for these is calculated with instanton and anti-instanton backgrounds, representing N-string interactions at tree level. The effective action is found to be the partition function of a classical modified Coulomb gas in the confining phase, with a dynamically generated mass gap.
Oberacker, V E
2015-01-01
In this manuscript we provide an outline of the numerical methods used in implementing the density constrained time-dependent Hartree-Fock (DC-TDHF) method and provide a few examples of its application to nuclear fusion. In this approach, dynamic microscopic calculations are carried out on a three-dimensional lattice and there are no adjustable parameters, the only input is the Skyrme effective NN interaction. After a review of the DC-TDHF theory and the numerical methods, we present results for heavy-ion potentials $V(R)$, coordinate-dependent mass parameters $M(R)$, and precompound excitation energies $E^{*}(R)$ for a variety of heavy-ion reactions. Using fusion barrier penetrabilities, we calculate total fusion cross sections $\\sigma(E_\\mathrm{c.m.})$ for reactions between both stable and neutron-rich nuclei. We also determine capture cross sections for hot fusion reactions leading to the formation of superheavy elements.
Constrained Sparse Galerkin Regression
Loiseau, Jean-Christophe
2016-01-01
In this work, we demonstrate the use of sparse regression techniques from machine learning to identify nonlinear low-order models of a fluid system purely from measurement data. In particular, we extend the sparse identification of nonlinear dynamics (SINDy) algorithm to enforce physical constraints in the regression, leading to energy conservation. The resulting models are closely related to Galerkin projection models, but the present method does not require the use of a full-order or high-fidelity Navier-Stokes solver to project onto basis modes. Instead, the most parsimonious nonlinear model is determined that is consistent with observed measurement data and satisfies necessary constraints. The constrained Galerkin regression algorithm is implemented on the fluid flow past a circular cylinder, demonstrating the ability to accurately construct models from data.
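The regression at the core of SINDy can be sketched as sequentially thresholded least squares. This is the unconstrained baseline only; the paper's contribution, the energy-conserving equality constraints on the coefficients, is omitted here.

```python
import numpy as np

def stlsq(theta, y, lam=0.1, iters=10):
    """Sequentially thresholded least squares: alternately fit by least
    squares and zero out coefficients with magnitude below lam. theta is
    the library of candidate terms evaluated on the data; y is the
    measured derivative. Unconstrained SINDy sketch, not the paper's
    constrained Galerkin regression."""
    xi = np.linalg.lstsq(theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < lam
        xi[small] = 0.0
        big = ~small
        if not big.any():
            break
        xi[big] = np.linalg.lstsq(theta[:, big], y, rcond=None)[0]
    return xi
```

Given y = 2x and a library {x, x², 1}, the procedure recovers the sparse coefficient vector (2, 0, 0).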
Constrained space camera assembly
Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.
1999-01-01
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
Constrained Task Assignment and Scheduling On Networks of Arbitrary Topology
Jackson, Justin Patrick
This dissertation develops a framework to address centralized and distributed constrained task assignment and task scheduling problems. The framework is used to establish properties of these problems that can be exploited, to develop effective solution algorithms, and to prove important properties such as correctness, completeness and optimality. The centralized task assignment and task scheduling problem treated here is expressed as a vehicle routing problem with the goal of optimizing mission time subject to mission constraints on task precedence and agent capability. The algorithm developed to solve this problem is able to coordinate vehicle (agent) timing for task completion. This class of problems is NP-hard and analytical guarantees on solution quality are often unavailable. This dissertation develops a technique for determining solution quality that can be used on a large class of problems and does not rely on traditional analytical guarantees. For distributed problems, several agents must communicate to collectively solve a distributed task assignment and task scheduling problem. The distributed task assignment and task scheduling algorithms developed here allow for the optimization of constrained military missions in situations where the communication network may be incomplete and only locally known. Two problems are developed. The distributed task assignment problem incorporates communication constraints that must be satisfied; this is the Communication-Constrained Distributed Assignment Problem. A novel distributed assignment algorithm, the Stochastic Bidding Algorithm, solves this problem. The algorithm is correct, probabilistically complete, and has linear average-case time complexity. The distributed task scheduling problem addressed here is to minimize mission time subject to arbitrary predicate mission constraints; this is the Minimum-time Arbitrarily-constrained Distributed Scheduling Problem. The Optimal Distributed Non-sequential Backtracking Algorithm
Cost minimization and asset pricing
Robert G. Chambers; John Quiggin
2005-01-01
A cost-based approach to asset-pricing equilibrium relationships is developed. A cost function induces a stochastic discount factor (pricing kernel) that is a function of random output, prices, and capital stock. By eliminating opportunities for arbitrage between financial markets and the production technology, firms minimize the current cost of future consumption. The first-order conditions for this cost minimization problem generate the stochastic discount factor. The cost-based approach i...
Constrained model predictive control, state estimation and coordination
Yan, Jun
guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in the dynamics, own self-measurement noises, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate to apply. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance and the information quality in the way that large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)
Verde, Licia; Pigozzo, Cassio; Heavens, Alan F; Jimenez, Raul
2016-01-01
We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the $\\Lambda$CDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95\\% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter $\\Omega_{\\rm MR} < 0.006$ and extra radiation parameterised as extra effective neutrino species $2.3 < N_{\\rm eff} < 3.2$ when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond $\\Lambda$CDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way ...
Institute of Scientific and Technical Information of China (English)
朱德通
2000-01-01
Using the generalized Fenchel duality theory derived here, we obtain generalized duality forms and theorems, together with the related Kuhn-Tucker conditions, for the minimization of a convex quadratic function subject to convex quadratic constraints. We further establish the dual programming and optimality conditions of the Celis-Dennis-Tapia trust region subproblem.
Distance-constrained grid colouring
Directory of Open Access Journals (Sweden)
Aszalós László
2016-06-01
Full Text Available Distance-constrained colouring is a mathematical model of the frequency assignment problem. This colouring can be treated as an optimization problem, so we can use the toolbox of optimization to solve concrete problems. In this paper, we show the performance of two methods for distance-constrained grid colouring that perform well in map colouring.
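A common concrete instance of distance-constrained colouring is L(2,1)-labelling: adjacent vertices get labels differing by at least 2, vertices at distance 2 get distinct labels. A greedy sketch on a grid graph follows; the greedy strategy is an assumption for illustration, not necessarily one of the two methods benchmarked in the paper.

```python
def l21_labels(rows, cols):
    """Greedy L(2,1)-labelling of a rows x cols grid graph. In a grid,
    graph distance equals Manhattan distance, so the distance
    constraints can be checked directly on coordinates."""
    labels = {}
    for r in range(rows):
        for c in range(cols):
            lab = 0
            while True:
                ok = True
                for (nr, nc), l in labels.items():
                    d = abs(nr - r) + abs(nc - c)
                    if d == 1 and abs(l - lab) < 2:
                        ok = False   # adjacent: labels must differ by >= 2
                        break
                    if d == 2 and l == lab:
                        ok = False   # distance 2: labels must differ
                        break
                if ok:
                    break
                lab += 1             # try the next smallest label
            labels[(r, c)] = lab
    return labels
```

Greedy gives a valid labelling but not necessarily one using the minimum span, which is what the optimization view targets.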
Quantizing Constrained Systems New Perspectives
Kaplan, L; Heller, E J
1997-01-01
We consider quantum mechanics on constrained surfaces which have non-Euclidean metrics and variable Gaussian curvature. The old controversy about the ambiguities involving terms in the Hamiltonian of order hbar^2 multiplying the Gaussian curvature is addressed. We set out to clarify the matter by considering constraints to be the limits of large restoring forces as the constraint coordinates deviate from their constrained values. We find additional ambiguous terms of order hbar^2 involving freedom in the constraining potentials, demonstrating that the classical constrained Hamiltonian or Lagrangian cannot uniquely specify the quantization: the ambiguity of directly quantizing a constrained system is inherently unresolvable. However, there is never any problem with a physical quantum system, which cannot have infinite constraint forces and always fluctuates around the mean constraint values. The issue is addressed from the perspectives of adiabatic approximations in quantum mechanics, Feynman path integrals, a...
Power-constrained supercomputing
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound
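A heavily simplified sketch of schedule selection under a power bound: the operating points, phase times, and brute-force search below are all invented for illustration, whereas the dissertation formulates the real problem as an LP/ILP over DVFS states and OpenMP thread counts per phase.

```python
from itertools import product

# Hypothetical per-phase operating points as (power_watts, speedup);
# the numbers are made up for this example.
PHASES = [
    [(40, 1.0), (80, 1.3), (100, 1.5)],   # options for phase 0
    [(40, 1.0), (70, 1.4), (90, 1.6)],    # options for phase 1
]
BASE_TIME = [10.0, 8.0]  # seconds per phase at speedup 1.0

def best_schedule(power_cap):
    """Brute-force the schedule (one operating point per phase) that
    minimizes total runtime subject to the time-averaged power staying
    at or below power_cap."""
    best, best_time = None, float("inf")
    for combo in product(*PHASES):
        times = [BASE_TIME[i] / s for i, (_, s) in enumerate(combo)]
        total = sum(times)
        energy = sum(p * t for (p, _), t in zip(combo, times))
        if energy / total <= power_cap and total < best_time:
            best, best_time = combo, total
    return best, best_time
```

With a loose cap the fastest operating points win; tightening the cap forces the schedule down to low-power points, which is exactly the trade-off the LP formulation optimizes at scale.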
Synthesis of constrained analogues of tryptophan
Directory of Open Access Journals (Sweden)
Elisabetta Rossi
2015-10-01
Full Text Available A Lewis acid-catalysed diastereoselective [4 + 2] cycloaddition of vinylindoles and methyl 2-acetamidoacrylate, leading to methyl 3-acetamido-1,2,3,4-tetrahydrocarbazole-3-carboxylate derivatives, is described. Treatment of the obtained cycloadducts under hydrolytic conditions results in the preparation of a small library of compounds bearing the free amino acid function at C-3 and pertaining to the class of constrained tryptophan analogues.
Directory of Open Access Journals (Sweden)
Mihaela Holobiuc
2009-12-01
Full Text Available In the last decades plants have had to cope with the warming of the climate. As a consequence of this process, more than half of all plant species could become vulnerable or threatened by 2080. Romania has a high plant diversity, with endemic and endangered plant species, so biodiversity conservation measures are necessary. The integrated approach to biodiversity conservation involves both in situ and ex situ strategies. Among ex situ methods of conservation, besides the traditional ones (including field and botanic collections and seed banks), in vitro tissue culture techniques offer a viable alternative. Germplasm collections can efficiently preserve species (of economic, scientific and conservation importance), at the same time being a source of plant material for international exchanges and for reintroduction into native habitats. The "in vitro gene banking" term refers to in vitro tissue cultures from many accessions of a target species and involves the collection of plant material from the field or from native habitats and the elaboration of sterilization, micropropagation and maintenance protocols. These collections have to be maintained in optimal conditions and characterized morphologically and genetically. The aim of our work was to characterize the response of the plant material to a minimal in vitro growth protocol for achieving medium-term cultures, as a prerequisite for establishing an active gene bank for two rare Caryophyllaceae taxa: Dianthus spiculifolius and D. glacialis ssp. gelidus. Among the factors previously tested for medium-term preservation in the genus Dianthus, mannitol proved to be the most efficient for achieving minimal growth cultures. In vitro, the cultures were evaluated for growth, regenerability and enzyme activity (POX, SOD, CAT) as a response to the preservation conditions in the incipient phase of the initiation of the in vitro collection. The two species considered in this study showed a
Constraining QGP properties with CHIMERA
Garishvili, Irakli; Abelev, Betty; Cheng, Michael; Glenn, Andrew; Soltz, Ron
2011-10-01
Understanding essential properties of strongly interacting matter is arguably the most important goal of the relativistic heavy-ion programs both at RHIC and the LHC. In particular, constraining observables such as the ratio of shear viscosity to entropy density, η/s, the initial temperature, T_init, and the energy density is of critical importance. For this purpose we have developed CHIMERA, the Comprehensive Heavy Ion Model Reporting and Evaluation Algorithm. CHIMERA is designed to facilitate global statistical comparison of results from our multi-stage hydrodynamics/hadron cascade model of heavy ion collisions to the key soft observables (HBT, elliptic flow, spectra) measured at RHIC and the LHC. Within this framework, data representing multiple measurements from different experiments are compiled into a single format. One of the unique features of CHIMERA is that, in addition to taking into account statistical errors, it also treats different types of systematic uncertainties. The hydrodynamics/hadron cascade model used in the framework incorporates different initial state conditions, pre-equilibrium flow, the UVH2+1 viscous hydro model, Cooper-Frye freezeout, and the UrQMD hadronic cascade model. The sensitivity of the observables to the equation of state (EoS) is explored using several EoS's in the hydrodynamic evolution. The latest results from CHIMERA, including data from the LHC, will be presented.
Minimal Coleman-Weinberg theory explains the diphoton excess
DEFF Research Database (Denmark)
Antipin, Oleg; Mojaza, Matin; Sannino, Francesco
2016-01-01
It is possible to delay the hierarchy problem, by replacing the standard Higgs sector by the Coleman-Weinberg mechanism, and at the same time ensure perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, minimal models of this type require the introduction of an extra singlet scalar further coupled to new fermions. In this constrained setup the Higgs mass was close to the observed value and the new scalar mass was below a TeV scale. Here we first extend the previous analysis by taking into account the important difference between running mass and pole mass of the scalar states. We then investigate whether these theories can account for the 750 GeV excess in diphotons observed by the LHC collaborations. New QCD-colored fermions in the TeV mass range coupled to the new scalar state are needed to describe the excess. We further show, by explicit...
Logarithmic superconformal minimal models
Pearce, Paul A.; Rasmussen, Jørgen; Tartaglia, Elena
2014-05-01
The higher fusion level logarithmic minimal models {\cal LM}(P,P';n) have recently been constructed as the diagonal GKO cosets {(A_1^{(1)})_k\oplus (A_1^{(1)})_n}/{(A_1^{(1)})_{k+n}} where n ≥ 1 is an integer fusion level and k = nP/(P′ − P) − 2 is a fractional level. For n = 1, these are the well-studied logarithmic minimal models {\cal LM}(P,P')\equiv {\cal LM}(P,P';1). For n ≥ 2, we argue that these critical theories are realized on the lattice by n × n fusion of the n = 1 models. We study the critical fused lattice models {\cal LM}(p,p')_{n\times n} within a lattice approach and focus our study on the n = 2 models. We call these logarithmic superconformal minimal models {\cal LSM}(p,p')\equiv {\cal LM}(P,P';2) where P = |2p − p′|, P′ = p′ and p, p′ are coprime. These models share the central charges c=c^{P,P';2}=\frac{3}{2}\big(1-{2(P'-P)^2}/{P P'}\big) of the rational superconformal minimal models {\cal SM}(P,P'). Lattice realizations of these theories are constructed by fusing 2 × 2 blocks of the elementary face operators of the n = 1 logarithmic minimal models {\cal LM}(p,p'). Algebraically, this entails the fused planar Temperley-Lieb algebra which is a spin-1 Birman-Murakami-Wenzl tangle algebra with loop fugacity β^2 = [x]_3 = x^2 + 1 + x^{-2} and twist ω = x^4 where x = e^{iλ} and λ = (p′ − p)π/p′. The first two members of this n = 2 series are superconformal dense polymers {\cal LSM}(2,3) with c=-\frac{5}{2}, β^2 = 0 and superconformal percolation {\cal LSM}(3,4) with c = 0, β^2 = 1. We calculate the bulk and boundary free energies analytically. By numerically studying finite-size conformal spectra on the strip with appropriate boundary conditions, we argue that, in the continuum scaling limit, these lattice models are associated with the logarithmic superconformal models {\cal LM}(P,P';2). For system size N, we propose finitized Kac character formulae of the form q^{-{c^{P,P';2}}/{24}+\Delta ^{P,P';2} _{r
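The central charges quoted for the first two members can be verified directly from the formula c^{P,P';2} = (3/2)(1 − 2(P′−P)²/(PP′)) with P = |2p − p′| and P′ = p′; a small exact-arithmetic check:

```python
from fractions import Fraction

def central_charge(p, pp):
    """c^{P,P';2} = (3/2) * (1 - 2(P'-P)^2 / (P P')) for the
    logarithmic superconformal minimal model LSM(p, p'),
    with P = |2p - p'| and P' = p'."""
    P, Pp = abs(2 * p - pp), pp
    return Fraction(3, 2) * (1 - Fraction(2 * (Pp - P) ** 2, P * Pp))

dense_polymers = central_charge(2, 3)  # superconformal dense polymers LSM(2,3)
percolation = central_charge(3, 4)     # superconformal percolation LSM(3,4)
```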
Increasingly minimal bias routing
Energy Technology Data Exchange (ETDEWEB)
Bataineh, Abdulla; Court, Thomas; Roweth, Duncan
2017-02-21
A system and algorithm configured to generate diversity at the traffic source, so that packets are uniformly distributed over all of the available paths, while increasing the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases to prefer non-minimal paths at the injection point, but to increasingly prefer minimal paths as the packet proceeds, referred to herein as Increasing Minimal Bias (IMB).
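The bias schedule above can be sketched as a hop-dependent random path choice. The linear schedule and the function below are illustrative assumptions; the patent does not specify the exact weighting.

```python
import random

def choose_path(hop, max_hops, minimal_paths, nonminimal_paths, rng=random):
    """Pick a next-hop path with a bias toward minimal paths that grows
    linearly with the number of hops already taken (hypothetical schedule:
    0 at the injection point, 1 at the final hop)."""
    p_minimal = hop / max_hops
    if minimal_paths and (not nonminimal_paths or rng.random() < p_minimal):
        return rng.choice(minimal_paths)
    return rng.choice(nonminimal_paths)
```

At injection (`hop=0`) a non-minimal path is always preferred when one exists; at the last hop (`hop=max_hops`) a minimal path is always taken.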
How peer-review constrains cognition
DEFF Research Database (Denmark)
Cowley, Stephen
2015-01-01
Peer-review is neither reliable, fair, nor a valid basis for predicting ‘impact’: as quality control, peer-review is not fit for purpose. Endorsing the consensus, I offer a reframing: while a normative social process, peer-review also shapes the writing of a scientific paper. Insofar as ‘cognition’ describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as ‘symbolizations’, replicable patterns that use technologically enabled activity. On this bio-cognitive view, peer-review constrains knowledge-making by writers, editors and reviewers. Authors are prompted to recursively re-aggregate symbolizations to present what are deemed acceptable knowledge claims. How, then, can recursive re-embodiment be explored? In illustration, I sketch how the paper’s own content...
Consistency of trace norm minimization
Bach, Francis
2007-01-01
Regularization by the sum of singular values, also referred to as the trace norm, is a popular technique for estimating low rank rectangular matrices. In this paper, we extend some of the consistency results of the Lasso to provide necessary and sufficient conditions for rank consistency of trace norm minimization with the square loss. We also provide an adaptive version that is rank consistent even when the necessary condition for the non adaptive version is not fulfilled.
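The core computational step behind trace-norm regularization is soft-thresholding of singular values. The sketch below shows this proximal operator on a small matrix; it is a generic illustration of the technique, not the consistency analysis of the paper.

```python
import numpy as np

def trace_norm_prox(X, tau):
    """Proximal operator of tau * (trace norm): soft-threshold the
    singular values. Driving small singular values to zero is what
    makes trace-norm regularization produce low-rank estimates."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_thr) @ Vt

# A rank-1 matrix with a small perturbation is driven back to low rank:
A = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0]) + 0.01 * np.eye(3)
denoised = trace_norm_prox(A, tau=0.5)
```

By Weyl's inequality the perturbed second singular value is below 0.01, so thresholding at 0.5 removes it and `denoised` has rank 1.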
Constraining the dark side with observations
Energy Technology Data Exchange (ETDEWEB)
Diez-Tejedor, Alberto [Dpto. de Fisica Teorica, Universidad del PaIs Vasco, Apdo. 644, 48080, Bilbao (Spain)
2007-05-15
The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe to infer some of the properties of the unseen material. We work within Unified Dark Matter models, in which both dark matter and dark energy arise as the result of a single unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies, we show that we are restricted to purely kinetic actions, previously studied in cosmology by Scherrer. Finally, we arrive at a simple action that fits both cosmological and astrophysical observations.
Energy Technology Data Exchange (ETDEWEB)
Rodriguez Rodriguez; Marco Helio [Comision Federal de Electricidad, Gerencia de Proyectos Geotermoelectricos, Residencia General de Cerro Prieto, Mexicali, Baja California (Mexico)]. E-mail: marco.rodriguez01@cfe.gob.mx
2009-01-15
Minimal thermodynamic conditions in the Cerro Prieto geothermal reservoir for steam production are defined, taking into account the minimal acceptable steam production at the surface and considering a range of mixture enthalpies for different well depths, which allows proper assessment of the impacts of changes in reservoir fluid pressure and enthalpy. Factors able to influence steam production are discussed; they have to be considered when deciding whether or not to drill or repair a well in a particular area of the reservoir. These evaluations become much more relevant in view of the huge thermodynamic changes that have occurred at the Cerro Prieto geothermal reservoir since its development started in 1973, which have led to abandoning some steam-producing areas of the field.
Canonical symmetry properties of the constrained singular generalized mechanical system
Institute of Scientific and Technical Information of China (English)
李爱民; 江金环; 李子平
2003-01-01
Based on generalized Appell-Chetaev constraint conditions, and taking into account the inherent constraints of a singular Lagrangian, the generalized canonical equations for a general mechanical system with a singular higher-order Lagrangian and subsidiary constraints are formulated. The canonical symmetries in phase space for such a system are studied, and the Noether theorem and its inverse theorem in the generalized canonical formalism are established.
Remarks on a benchmark nonlinear constrained optimization problem
Institute of Scientific and Technical Information of China (English)
Luo Yazhong; Lei Yongjun; Tang Guojin
2006-01-01
Remarks are made on a benchmark nonlinear constrained optimization problem. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.
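The Kuhn-Tucker (KKT) optimality check mentioned above can be illustrated numerically. The sketch below evaluates the KKT residuals for a toy problem (min x² subject to 1 − x ≤ 0, with optimum x = 1 and multiplier λ = 2), not the benchmark problem of the abstract.

```python
import numpy as np

def kkt_residual(grad_f, grads_g, g_vals, lam):
    """Residuals of the KKT conditions at a candidate optimum of
    min f(x) s.t. g_i(x) <= 0: stationarity, primal feasibility,
    dual feasibility, and complementary slackness. All four residuals
    are zero at a KKT point."""
    stationarity = grad_f + sum(l * g for l, g in zip(lam, grads_g))
    return (np.linalg.norm(stationarity),
            max(0.0, *g_vals),                              # g_i <= 0
            max(0.0, *(-l for l in lam)),                   # lambda_i >= 0
            max(abs(l * g) for l, g in zip(lam, g_vals)))   # lambda_i g_i = 0

res = kkt_residual(np.array([2.0]),     # grad f = 2x at x = 1
                   [np.array([-1.0])],  # grad g = -1
                   [0.0],               # g(1) = 0 (active)
                   [2.0])               # multiplier
```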
Minimal flavour violation and anomalous top decays
Energy Technology Data Exchange (ETDEWEB)
Faller, Sven; Mannel, Thomas [Theoretische Physik 1, Department Physik, Universitaet Siegen, D-57068 Siegen (Germany); Gadatsch, Stefan [Nikhef, National Institute for Subatomatic Physics, P.O. Box 41882, 1009 Amsterdam (Netherlands)
2013-07-01
Any experimental evidence of anomalous top-quark couplings would open a window to physics beyond the Standard Model (SM). However, all current flavour data indicate that nature is close to ''minimal flavour violation'', i.e. the pattern of flavour violation is given by the CKM matrix, including the hierarchy of parameters. In this talk we present results of a conceptual test of minimal flavour violation for anomalous charged as well as flavour-changing top-quark couplings. Our analysis is embedded in a two-Higgs-doublet model of type II (2HDM-II). Including renormalization effects, we calculate the top decay rates taking into account anomalous couplings constrained by minimal flavour violation.
Locally minimal topological groups
Außenhofer, Lydia; Chasco, María Jesús; Dikranjan, Dikran; Domínguez, Xabier
2009-01-01
A Hausdorff topological group $(G,\\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\\tau.$ Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all minimal groups. Motivated by the fact that locally compact NSS groups are Lie groups, we study the connection between local minimality and the ...
Multivariable controller for discrete stochastic amplitude-constrained systems
Directory of Open Access Journals (Sweden)
Hannu T. Toivonen
1983-04-01
Full Text Available A sub-optimal multivariable controller for discrete stochastic amplitude-constrained systems is presented. In the approach, the regulator structure is restricted to the class of linear saturated feedback laws. The stationary covariances of the controlled system are evaluated by approximating the stationary probability distribution of the state by a Gaussian distribution. An algorithm for minimizing a quadratic loss function is given, and examples are presented to illustrate the performance of the sub-optimal controller.
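The restricted controller class described above, linear state feedback passed through an amplitude saturation, can be sketched as follows. The gain matrix here is an arbitrary illustration, not Toivonen's covariance-approximation design.

```python
import numpy as np

def saturated_feedback(L, u_max):
    """Return the saturated linear state-feedback law u = sat(-L x),
    clipping each input component to the amplitude constraint
    |u_i| <= u_max."""
    def control(x):
        return np.clip(-L @ np.asarray(x), -u_max, u_max)
    return control

# Hypothetical gain for a 2-state, 1-input system, saturated at |u| <= 1.
u = saturated_feedback(np.array([[2.0, 0.5]]), u_max=1.0)
```

For a state well inside the linear region the law acts as plain feedback; for large states the amplitude constraint clips the command.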
Giribet, Gaston; Vásquez, Yerko
2015-01-01
Minimal massive gravity (MMG) is an extension of three-dimensional topologically massive gravity that, when formulated about anti-de Sitter space, resolves the tension between bulk and boundary unitarity from which other three-dimensional models suffer. We study this theory at the chiral point, i.e. at the point of parameter space where one of the central charges of the dual conformal field theory vanishes. We investigate the nonlinear regime of the theory, meaning that we study exact solutions to the MMG field equations that are not Einstein manifolds. We exhibit a large class of solutions of this type, which behave asymptotically in different manners. In particular, we find analytic solutions that represent two-parameter deformations of extremal Bañados-Teitelboim-Zanelli (BTZ) black holes. These geometries behave asymptotically as solutions of the so-called log gravity and, despite the weakened fall-off close to the boundary, have finite mass and finite angular momentum, which we compute. We also find time-dependent deformations of BTZ that obey Brown-Henneaux asymptotic boundary conditions. The existence of such solutions shows that the Birkhoff theorem does not hold in MMG at the chiral point. Other peculiar features of the theory at the chiral point, such as the degeneracy it exhibits in the decoupling limit, are discussed.
Rank-sparsity constrained, spectro-temporal reconstruction for retrospectively gated, dynamic CT
Clark, D. P.; Lee, C. L.; Kirsch, D. G.; Badea, C. T.
2015-03-01
Relative to prospective projection gating, retrospective projection gating for dynamic CT applications allows fast imaging times, minimizing the potential for physiological and anatomic variability. Preclinically, fast imaging is attractive due to the rapid clearance of low-molecular-weight contrast agents and the rapid heart rate of rodents. Clinically, retrospective gating is relevant for intraoperative C-arm CT. More generally, retrospective sampling provides an opportunity for significant reduction in x-ray dose within the framework of compressive sensing theory and sparsity-constrained iterative reconstruction. Even so, CT reconstruction from projections with random temporal sampling is a very poorly conditioned inverse problem, requiring high-fidelity regularization to minimize variability in the reconstructed results. Here, we introduce a novel data acquisition and regularization strategy for spectro-temporal (5D) CT reconstruction from retrospectively gated projections. We show that, by taking advantage of the rank-sparse structure and the separability of the temporal and spectral reconstruction sub-problems, solving each sub-problem independently effectively allows both problems to be solved together. In this paper, we show 4D simulation results (2D + 2 energies + time) using the proposed technique and compare them with two competing techniques: spatio-temporal total variation minimization and prior image constrained compressed sensing. We also show in vivo, 5D (3D + 2 energies + time) myocardial injury data acquired in a mouse, reconstructing 20 data sets (10 phases, 2 energies) and performing material decomposition from data acquired over a single rotation (360°, dose: ~60 mGy).
Anomalies of minimal { N }=(0,1) and { N }=(0,2) sigma models on homogeneous spaces
Chen, Jin; Cui, Xiaoyi; Shifman, Mikhail; Vainshtein, Arkady
2017-01-01
We study chiral anomalies in { N }=(0,1) and (0,2) two-dimensional minimal sigma models defined on the generic homogeneous spaces G/H. Such minimal theories contain only (left) chiral fermions and in certain cases are inconsistent because of ‘incurable’ anomalies. We explicitly calculate the anomalous fermionic effective action and show how to remedy it by adding a series of local counterterms. In this procedure, we derive a local anomaly matching condition, which is demonstrated to be equivalent to the well-known global topological constraint on {p}1(G/H), the first Pontryagin class. More importantly, we show that these local counterterms further modify and constrain ‘curable’ chiral models, some of which, for example, flow to the nontrivial infrared superconformal fixed point. Finally, we also observe an interesting relation between { N }=(0,1) and (0,2) two-dimensional minimal sigma models and supersymmetric gauge theories.
Khimshiashvili, G.; Siersma, D.
2001-01-01
We describe the structure of minimal round functions on closed surfaces and three-folds. The minimal possible number of critical loops is determined and typical non-equisingular round function germs are interpreted in the spirit of isolated line singularities. We also discuss a version of Lusternik-
Lightweight cryptography for constrained devices
DEFF Research Database (Denmark)
Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco
2014-01-01
Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...
Torsional Rigidity of Minimal Submanifolds
DEFF Research Database (Denmark)
Markvorsen, Steen; Palmer, Vicente
2006-01-01
We prove explicit upper bounds for the torsional rigidity of extrinsic domains of minimal submanifolds $P^m$ in ambient Riemannian manifolds $N^n$ with a pole $p$. The upper bounds are given in terms of the torsional rigidities of corresponding Schwarz symmetrizations of the domains in warped … for the torsional rigidity are actually attained and give conditions under which the geometric average of the stochastic mean exit time for Brownian motion at infinity is finite.
Institute of Scientific and Technical Information of China (English)
ZHU Detong
2006-01-01
In this paper, we propose a new trust-region projected Hessian algorithm with a nonmonotonic backtracking interior point technique for linearly constrained optimization. By performing the QR decomposition of an affine scaling equality constraint matrix, the subproblem conducted in the algorithm is changed into the general trust-region subproblem defined by minimizing a quadratic function subject only to an ellipsoidal constraint. By using both the trust-region strategy and the line-search technique, each iteration switches to a backtracking interior point step generated by the trust-region subproblem. The global convergence and fast local convergence rates for the proposed algorithm are established under reasonable assumptions. A nonmonotonic criterion is used to speed up convergence in some ill-conditioned cases.
W' and Z' limits for Minimal Walking Technicolor
DEFF Research Database (Denmark)
R. Andersen, Jeppe; Hapola, Tuomas; Sannino, Francesco
2012-01-01
We interpret the recent data on non-observation of Z'- and W'-bosons, reported by CMS, within Minimal Walking Technicolor models and use them to constrain the couplings and spectrum of the theory. We provide the reach for both exclusion and possible observation for the LHC with 5 fb^-1 at 7 Te...
What is minimally invasive dentistry?
Ericson, Dan
2004-01-01
Minimally Invasive Dentistry is the application of "a systematic respect for the original tissue." This implies that the dental profession recognizes that an artifact is of less biological value than the original healthy tissue. Minimally invasive dentistry is a concept that can embrace all aspects of the profession. The common denominator is tissue preservation, preferably by preventing disease from occurring and intercepting its progress, but also by removing and replacing with as little tissue loss as possible. It does not suggest that we make small fillings to restore incipient lesions or surgically remove impacted third molars without symptoms as routine procedures. The introduction of predictable adhesive technologies has led to a giant leap in interest in minimally invasive dentistry. The concept bridges the traditional gap between prevention and surgical procedures, which is just what dentistry needs today. The evidence base for survival of restorations clearly indicates that restoring teeth is a temporary palliative measure that is doomed to fail if the disease that caused the condition is not addressed properly. Today, the means, motives and opportunities for minimally invasive dentistry are at hand, but incentives are definitely lacking. Patients and third parties seem to be convinced that the only things that count are replacements; namely, they are prepared to pay for a filling but not for a procedure that can help avoid having one.
Energy Technology Data Exchange (ETDEWEB)
Peyton, B.W.
1999-07-01
When minimum orderings proved too difficult to deal with, Rose, Tarjan, and Lueker instead studied minimal orderings and how to compute them (Algorithmic aspects of vertex elimination on graphs, SIAM J. Comput., 5:266-283, 1976). This paper introduces an algorithm that is capable of computing much better minimal orderings much more efficiently than the algorithm in Rose et al. The new insight is a way to use certain structures and concepts from modern sparse Cholesky solvers to re-express one of the basic results in Rose et al. The new algorithm begins with any initial ordering and then refines it until a minimal ordering is obtained. It is simple to obtain high-quality, low-cost minimal orderings by using fill-reducing heuristic orderings as initial orderings for the algorithm. We examine several such initial orderings in some detail.
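Minimality of an ordering is judged by the fill it creates during elimination. A minimal sketch for evaluating the fill of a given elimination ordering on a graph (only the evaluation step, not the refinement algorithm of the paper):

```python
def elimination_fill(adj, order):
    """Count the fill edges created by eliminating vertices in `order`.
    `adj` maps each vertex to its set of neighbours. Eliminating a vertex
    pairwise-connects its not-yet-eliminated neighbours into a clique;
    any edge added this way is fill."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    fill = set()
    eliminated = set()
    for v in order:
        live = [w for w in adj[v] if w not in eliminated]
        for i, a in enumerate(live):
            for b in live[i + 1:]:
                if b not in adj[a]:
                    adj[a].add(b); adj[b].add(a)
                    fill.add(frozenset((a, b)))
        eliminated.add(v)
    return len(fill)

# Path graph 1-2-3-4: eliminating an endpoint first creates no fill,
# eliminating an interior vertex first creates fill.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```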
Gonzalez-Lopez, Jesus E.; Garcia, Veronica A.
2010-01-01
In this work we introduce a new and richer class of finite order Markov chain models and address the following model selection problem: find the Markov model with the minimal set of parameters (minimal Markov model) which is necessary to represent a source as a Markov chain of finite order. Let $M$ be the order of the chain and $A$ the finite alphabet. To determine the minimal Markov model, we define an equivalence relation on the state space $A^{M}$, such that all sequences of size $M$ with the same transition probabilities are put in the same category. In this way we have one set of $(|A|-1)$ transition probabilities for each category, obtaining a model with a minimal number of parameters. We show that the model can be selected consistently using the Bayesian information criterion.
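Order selection by the Bayesian information criterion can be sketched as below. This scores plain Markov models of each order; the paper's minimal Markov model additionally merges contexts with equal transition probabilities, which is not implemented here.

```python
import math
from collections import Counter

def markov_bic(seq, order, alphabet_size):
    """BIC score (log-likelihood minus complexity penalty) of a Markov
    chain of the given order fit to `seq` by maximum likelihood."""
    trans, ctx = Counter(), Counter()
    for i in range(order, len(seq)):
        c = tuple(seq[i - order:i])
        trans[(c, seq[i])] += 1
        ctx[c] += 1
    loglik = sum(n * math.log(n / ctx[c]) for (c, _), n in trans.items())
    n_params = len(ctx) * (alphabet_size - 1)
    return loglik - 0.5 * n_params * math.log(len(seq) - order)

# A strongly alternating sequence prefers order 1 over order 0.
seq = "ababababababababababababab"
best = max([0, 1], key=lambda m: markov_bic(seq, m, 2))
```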
Ruled Laguerre minimal surfaces
Skopenkov, Mikhail
2011-10-30
A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to the graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.
Trends in PDE constrained optimization
Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan
2014-01-01
Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics. The book is divided into five sections on “Constrained Optimization, Identification and Control”...
Taxation and finance constrained firms
Iris Claus
2006-01-01
This paper develops an open economy model to assess the long-run effects of taxation when firms are finance constrained. Finance constraints arise because of imperfect information between borrowers and lenders: only borrowers (firms) can costlessly observe actual returns from production. Imperfect information and finance constraints magnify the effects of taxation. A reduction (rise) in income taxation increases (lowers) firms' internal funds and their ability to access external finance to ex...
Institute of Scientific and Technical Information of China (English)
胡邓平; 泽军; 吴爱华; 刘湛
2015-01-01
Aimed at the problem that sink marks easily appear in frame-shaped thin-wall injection-molded parts with grids, an optimization of the injection molding process parameters for an air-conditioner front frame, based on minimizing the sink-mark index, was carried out. First, a 3D geometric model of the air-conditioner front frame was built, and the gating system and cooling channels were designed. Based on Moldflow numerical simulation and a four-level orthogonal experiment L16(4^5), injection time, mold temperature, melt temperature, relative packing pressure and packing time were taken as design variables. The influence of each parameter on the sink-mark index was obtained by range analysis and variance analysis, and the optimal combination of process parameters was determined, lowering the sink-mark index to 2.159%. Finally, the method was validated by injection molding experiments. This provides a new route to low-cost, high-quality design of frame-shaped thin-wall injection-molded parts.
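The range analysis used above can be sketched for a single factor of an orthogonal experiment: average the response at each factor level and take the spread of the level means as that factor's influence. The data below are made up for illustration, not the Moldflow results of the study.

```python
def range_analysis(levels, responses):
    """Range analysis of one factor in an orthogonal experiment:
    return max(level mean) - min(level mean), a measure of the
    factor's influence on the response."""
    by_level = {}
    for lvl, y in zip(levels, responses):
        by_level.setdefault(lvl, []).append(y)
    means = {lvl: sum(ys) / len(ys) for lvl, ys in by_level.items()}
    return max(means.values()) - min(means.values())

# Hypothetical sink-mark indices for one 4-level factor, 2 runs per level.
melt_temp_levels = [1, 1, 2, 2, 3, 3, 4, 4]
sink_index = [3.1, 3.3, 2.8, 2.6, 2.3, 2.1, 2.4, 2.6]
```

Repeating this for every factor column of the L16(4^5) table ranks the factors by influence, which is how the optimal parameter combination is read off.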
Delivery Time Reduction for Order-Constrained Applications using Binary Network Codes
Douik, Ahmed; Karim, Mohammad S.; Sadeghi, Parastoo; Sorour, Sameh
2016-01-01
Consider a radio access network wherein a base-station is required to deliver a set of order-constrained messages to a set of users over independent erasure channels. This paper studies the delivery time reduction problem using instantly decodable network coding (IDNC). Motivated by time-critical and order-constrained applications, the delivery time is defined, at each transmission, as the number of undelivered messages. The delivery time minimization problem being computationally intractable...
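Under one plausible reading of the order constraint (a message counts as delivered only when it and all messages before it in the required order are decoded), the per-transmission delivery time can be computed as:

```python
def delivery_time(required_order, decoded):
    """Number of undelivered messages: a message is delivered only if
    it is decoded AND every message preceding it in `required_order`
    is delivered (in-order delivery). `decoded` maps message -> bool."""
    undelivered = 0
    in_order = True
    for msg in required_order:
        if not decoded.get(msg, False):
            in_order = False          # a gap blocks everything after it
        if not in_order:
            undelivered += 1
    return undelivered
```

Note that a decoded message behind a gap still counts as undelivered, which is what distinguishes this metric from a plain completion count.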
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
THE MINIMAL OPERATOR AND WEIGHTED INEQUALITIES FOR MARTINGALES
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
In this article the authors introduce the minimal operator on martingale spaces, discuss some one-weight and two-weight inequalities for the minimal operator, and characterize the conditions under which the inequalities hold.
Constrained Transport vs. Divergence Cleanser Options in Astrophysical MHD Simulations
Lindner, Christopher C.; Fragile, P.
2009-01-01
In previous work, we presented results from global numerical simulations of the evolution of black hole accretion disks using the Cosmos++ GRMHD code. In those simulations we solved the magnetic induction equation using an advection-split form, which is known not to satisfy the divergence-free constraint. To minimize the build-up of divergence error, we used a hyperbolic cleanser function that simultaneously damped the error and propagated it off the grid. We have since found that this method produces qualitatively and quantitatively different behavior in high magnetic field regions than results published by other research groups, particularly in the evacuated funnels of black-hole accretion disks where Poynting-flux jets are reported to form. The main difference between our earlier work and that of other groups is their use of constrained-transport schemes to preserve a divergence-free magnetic field. Therefore, to study these differences directly, we have implemented a constrained transport scheme in Cosmos++. Because Cosmos++ uses a zone-centered, finite-volume method, we cannot use the traditional staggered-mesh constrained transport scheme of Evans & Hawley. Instead we must implement a more general scheme; we chose the Flux-CT scheme as described by Toth. Here we present comparisons of results using the divergence-cleanser and constrained transport options in Cosmos++.
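The divergence error that the cleanser damps, and that a constrained transport scheme keeps at machine zero, can be monitored with a simple finite-difference diagnostic. This is an illustrative 2-D sketch, not Cosmos++ code.

```python
import numpy as np

def div_b(bx, by, dx, dy):
    """Finite-difference divergence of a 2-D magnetic field sampled on
    a grid: (dBx/dx + dBy/dy) via one-sided differences. A constrained
    transport update keeps this quantity at machine zero; a cleanser
    only damps it."""
    return ((bx[1:, :-1] - bx[:-1, :-1]) / dx +
            (by[:-1, 1:] - by[:-1, :-1]) / dy)

# A uniform field is exactly divergence-free.
bx = np.ones((4, 4))
by = np.ones((4, 4))
```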
Damping characteristics of active-passive hybrid constrained-layer treated beam structures
Liu, Yanning; Wang, Kon-Well
2000-04-01
A new configuration of surface damping treatments, Active- Passive Hybrid Constrained Layer (HCL) damping, is analyzed and experimentally investigated. The purpose is to improve the performance of the current active constrained layer (ACL) and passive constrained layer (PCL) treatments by mixing passive and active materials in the constraining layer. In HCL, the viscoelastic material is constrained by an active-passive hybrid constraining layer -- the active part is made of PZT ceramics, and the passive part can be selected by the designer to meet different requirements, such as higher damping performance or lighter weight. The active and passive constraining parts are mechanically connected such that the displacement and force are continuous at the connecting surface, but are isolated electrically so the passive constraining part will not affect the function of its active counterpart. Following a generic study of the HCL concept by the authors earlier, the purpose of this paper is to illustrate and validate the HCL performance through both numerical and experimental investigations on a beam structure. The governing equations and boundary conditions of an HCL treated beam are derived and a finite element model is formulated. Tabletop tests with cantilever beam specimens are used for the experimental study. The new hybrid constrained layer is found to have the advantages of both ACL and PCL. By selecting a stiffer passive constraining material and an optimal active-to-passive length ratio, the HCL can achieve better closed-loop and open-loop performances than the treatment with a pure active constraining layer.
Constrained Stochastic Extended Redundancy Analysis.
DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco
2015-06-01
We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA).
Enablers and constrainers to participation
DEFF Research Database (Denmark)
Desjardins, Richard; Milana, Marcella
2007-01-01
with constraining and enabling elements so as to raise participation among otherwise disadvantaged groups. To begin addressing this question, consideration is given to different types of constraints and different types of policies. These are brought together within a broad demand and supply framework, so as to construct a tool for analyzing the targeting of adult learning policy, with regard to both its coverage and expected consequences. Our aim is to develop a means for a more in-depth analysis of the match-mismatch of public policy and persisting constraints to participation.
Constraining Lorentz violation with cosmology.
Zuntz, J A; Ferreira, P G; Zlosnik, T G
2008-12-31
The Einstein-aether theory provides a simple, dynamical mechanism for breaking Lorentz invariance. It does so within a generally covariant context and may emerge from quantum effects in more fundamental theories. The theory leads to a preferred frame and can have distinct experimental signatures. In this Letter, we perform a comprehensive study of the cosmological effects of the Einstein-aether theory and use observational data to constrain it. Allied to previously determined consistency and experimental constraints, we find that an Einstein-aether universe can fit experimental data over a wide range of its parameter space, but requires a specific rescaling of the other cosmological densities.
Constrained and regularized system identification
Directory of Open Access Journals (Sweden)
Tor A. Johansen
1998-04-01
Full Text Available Prior knowledge can be introduced into system identification problems in terms of constraints on the parameter space, or regularizing penalty functions in a prediction error criterion. The contribution of this work is mainly an extension of the well known FPE (Final Prediction Error) statistic to the case when the system identification problem is constrained and contains a regularization penalty. The FPECR statistic (Final Prediction Error with Constraints and Regularization) is of potential interest as a criterion for selection of both regularization parameters and structural parameters such as model order.
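The combination of a prediction-error criterion with a regularizing penalty that the abstract describes can be illustrated with a minimal sketch. The ARX model order, the simulated system, and the ridge weight `lam` below are hypothetical choices for illustration only; the FPECR statistic itself is not computed here.

```python
import numpy as np

def identify_arx_ridge(y, u, na=1, nb=1, lam=1e-3):
    """Regularized prediction-error identification of an ARX(na, nb) model:
    minimize ||y - Phi theta||^2 + lam * ||theta||^2 over the parameters."""
    n = max(na, nb)
    rows = [[y[t - i] for i in range(1, na + 1)] +
            [u[t - j] for j in range(1, nb + 1)]
            for t in range(n, len(y))]
    Phi = np.asarray(rows)
    # Closed-form solution of the penalized least-squares criterion.
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                           Phi.T @ y[n:])

# Simulate y[t] = 0.8 y[t-1] + 0.5 u[t-1] + noise, then recover the parameters.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()
theta = identify_arx_ridge(y, u)
```

With a small penalty the estimate stays close to the true parameters; increasing `lam` trades variance for bias, which is exactly the trade-off a criterion like FPECR is meant to arbitrate.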
Constrained spheroids for prolonged hepatocyte culture.
Tong, Wen Hao; Fang, Yu; Yan, Jie; Hong, Xin; Hari Singh, Nisha; Wang, Shu Rui; Nugraha, Bramasta; Xia, Lei; Fong, Eliza Li Shan; Iliescu, Ciprian; Yu, Hanry
2016-02-01
Liver-specific functions in primary hepatocytes can be maintained over extended duration in vitro using spheroid culture. However, the undesired loss of cells over time is still a major unaddressed problem, which consequently generates large variations in downstream assays such as drug screening. In static culture, the turbulence generated by medium change can cause spheroids to detach from the culture substrate. Under perfusion, the momentum generated by Stokes force similarly results in spheroid detachment. To overcome this problem, we developed a Constrained Spheroids (CS) culture system that immobilizes spheroids between a glass coverslip and an ultra-thin porous Parylene C membrane, both surface-modified with poly(ethylene glycol) and galactose ligands for optimum spheroid formation and maintenance. In this configuration, cell loss was minimized even when perfusion was introduced. When compared to the standard collagen sandwich model, hepatocytes cultured as CS under perfusion exhibited significantly enhanced hepatocyte functions such as urea secretion, and CYP1A1 and CYP3A2 metabolic activity. We propose the use of the CS culture as an improved culture platform to current hepatocyte spheroid-based culture systems.
Should we still believe in constrained supersymmetry?
Balázs, Csaba; Carter, Daniel; Farmer, Benjamin; White, Martin
2012-01-01
We calculate Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with $M_0$ and $M_{1/2}$ below 2 TeV), reducing its posterior odds by a factor of approximately ...
Constraining New Physics with D meson decays
Energy Technology Data Exchange (ETDEWEB)
Barranco, J.; Delepine, D.; Gonzalez Macias, V. [Departamento de Física, División de Ciencias e Ingeniería, Universidad de Guanajuato, Campus León, León 37150 (Mexico); Lopez-Lozano, L. [Departamento de Física, División de Ciencias e Ingeniería, Universidad de Guanajuato, Campus León, León 37150 (Mexico); Área Académica de Matemáticas y Física, Universidad Autónoma del Estado de Hidalgo, Carr. Pachuca-Tulancingo Km. 4.5, C.P. 42184, Pachuca, HGO (Mexico)
2014-04-04
Latest lattice results on D form factors evaluated from first principles show that the Standard Model (SM) branching ratio predictions for the leptonic D{sub s}→ℓν{sub ℓ} decays and the semileptonic SM branching ratios of the D{sup 0} and D{sup +} meson decays are in good agreement with the world average experimental measurements. This makes it possible to test New Physics hypotheses and to derive bounds on several models beyond the SM. Using the observed leptonic and semileptonic branching ratios for the D meson decays, we performed a combined analysis to constrain non-standard interactions which mediate the cs{sup ¯}→lν{sup ¯} transition. This is done either in a model-independent way, through the corresponding Wilson coefficients, or in a model-dependent way, by finding the respective bounds on the relevant parameters for some models beyond the Standard Model. In particular, we obtain bounds for the Two Higgs Doublet Model Type-II and Type-III, the Left-Right model, the Minimal Supersymmetric Standard Model with explicit R-parity violation, and Leptoquarks. Finally, we estimate the transverse polarization of the lepton in the D{sup 0} decay and find that it can be as high as P{sub T}=0.23.
iBGP and Constrained Connectivity
Dinitz, Michael
2011-01-01
We initiate the theoretical study of the problem of minimizing the size of an iBGP overlay in an Autonomous System (AS) in the Internet subject to a natural notion of correctness derived from the standard "hot-potato" routing rules. For both natural versions of the problem (where we measure the size of an overlay by either the number of edges or the maximum degree) we prove that it is NP-hard to approximate to a factor better than $\\Omega(\\log n)$ and provide approximation algorithms with ratio $\\tilde{O}(\\sqrt{n})$. In addition, we give a slightly worse $\\tilde{O}(n^{2/3})$-approximation based on primal-dual techniques that has the virtue of being both fast and good in practice, which we show via simulations on the actual topologies of five large Autonomous Systems. The main technique we use is a reduction to a new connectivity-based network design problem that we call Constrained Connectivity. In this problem we are given a graph $G=(V,E)$, and for every pair of vertices $u,v \\in V$ we are given a set $S(u,...
Rose, Sean; Sidky, Emil Y; Pan, Xiaochuan
2016-01-01
This article is intended to supplement our 2015 paper in Medical Physics titled "Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization", in which ordered subsets methods were employed to perform total-variation constrained data-discrepancy minimization for image reconstruction in X-ray computed tomography. Here we provide details regarding implementation of the ordered subsets algorithms and suggestions for selection of algorithm parameters. Detailed pseudo-code is included for every algorithm implemented in the original manuscript.
Minimal dispersion refractive index profiles.
Feit, M D
1979-09-01
The analogy between optics and quantum mechanics is exploited by considering a 2-D quantum system whose Schroedinger equation is closely related to the wave equation for light propagation in an optical fiber. From this viewpoint, Marcatili's condition for minimal-dispersion refractive-index profiles, and the Olshansky-Keck formula for rms pulse spreading in an alpha-profile fiber may be derived without recourse to the WKB approximation. Besides affording physical insight into these results, the present approach points out a possible limitation in their application to real fibers.
Risk minimization through portfolio replication
Ciliberti, S.; Mézard, M.
2007-05-01
We use a replica approach to deal with portfolio optimization problems. A given risk measure is minimized using empirical estimates of asset value correlations. We study the phase transition which happens when the time series is too short with respect to the size of the portfolio. We also study the noise sensitivity of portfolio allocation when this transition is approached. We consider explicitly the cases where the absolute deviation and the conditional value-at-risk are chosen as the risk measure. We show how the replica method can be applied to a wide range of risk measures, and can deal with various types of time series correlations, including realistic ones with volatility clustering.
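The absolute-deviation risk measure mentioned above admits a linear-programming formulation, which the following toy sketch illustrates (this is not the replica calculation). The two-asset data, the long-only constraint, and the full-investment constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def min_abs_dev_portfolio(returns):
    """Long-only weights minimizing the empirical mean absolute deviation of
    portfolio returns. LP variables: N weights w followed by T slacks d,
    with d_t >= |(r_t - mean) . w| enforced by two inequality blocks."""
    T, N = returns.shape
    centered = returns - returns.mean(axis=0)
    c = np.concatenate([np.zeros(N), np.ones(T) / T])  # minimize mean |dev|
    A_ub = np.vstack([np.hstack([centered, -np.eye(T)]),
                      np.hstack([-centered, -np.eye(T)])])
    b_ub = np.zeros(2 * T)
    A_eq = np.hstack([np.ones((1, N)), np.zeros((1, T))])  # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (N + T))
    return res.x[:N]

# A nearly riskless asset should absorb all the weight.
rng = np.random.default_rng(1)
T = 100
safe = np.full((T, 1), 0.01)                      # constant return
risky = 0.01 + 0.05 * rng.standard_normal((T, 1)) # volatile return
w = min_abs_dev_portfolio(np.hstack([safe, risky]))
```

The phase transition studied in the paper appears in this setting when `T` becomes comparable to the number of assets, at which point the empirical optimum becomes highly noise-sensitive.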
Minimally invasive periodontal therapy.
Dannan, Aous
2011-10-01
Minimally invasive dentistry is a concept that preserves dentition and supporting structures. Minimally invasive procedures in periodontal treatment are largely confined to periodontal surgery, where they represent alternative approaches developed to allow less extensive manipulation of surrounding tissues than conventional procedures, while accomplishing the same objectives. In this review, the concept of minimally invasive periodontal surgery (MIPS) is first explained. An electronic search for all studies regarding the efficacy and effectiveness of MIPS between 2001 and 2009 was conducted. For this purpose, suitable key words from Medical Subject Headings on PubMed were used to extract the required studies. All studies are summarized and their important results highlighted. Preliminary data from case cohorts and from many studies reveal that the microsurgical access flap, in terms of MIPS, has a high potential to seal the healing wound from the contaminated oral environment by achieving and maintaining primary closure. Soft tissues are mostly preserved and minimal gingival recession is observed, an important feature for meeting the demands of the patient and the clinician in the esthetic zone. However, although the potential efficacy of MIPS in the treatment of deep intrabony defects has been proved, larger studies are required to confirm and extend the reported positive preliminary outcomes.
Logarithmic Superconformal Minimal Models
Pearce, Paul A; Tartaglia, Elena
2013-01-01
The higher fusion level logarithmic minimal models LM(P,P';n) have recently been constructed as the diagonal GKO cosets (A_1^{(1)})_k ⊕ (A_1^{(1)})_n / (A_1^{(1)})_{k+n} where n>0 is an integer fusion level and k=nP/(P'-P)-2 is a fractional level. For n=1, these are the logarithmic minimal models LM(P,P'). For n>1, we argue that these critical theories are realized on the lattice by n×n fusion of the n=1 models. For n=2, we call them logarithmic superconformal minimal models LSM(p,p') where P=|2p-p'|, P'=p' and p,p' are coprime, and they share the central charges of the rational superconformal minimal models SM(P,P'). Their mathematical description entails the fused planar Temperley-Lieb algebra which is a spin-1 BMW tangle algebra with loop fugacity β_2=x^2+1+x^{-2} and twist ω=x^4 where x=e^{i(p'-p)π/p'}. Examples are superconformal dense polymers LSM(2,3) with c=-5/2, β_2=0 and superconformal percolation LSM(3,4) with c=0, β_2=1. We calculate the free energies analytically. By numerical...
Prostate resection - minimally invasive
MedlinePlus medical encyclopedia entry: //medlineplus.gov/ency/article/007415.htm
Constrained traffic regulation in variable-length packet networks
Karumanchi, Ashok; Varadarajan, Sridhar; Rao, Kalyan; Talabattula, Srinivas
2004-02-01
The availability of high bandwidth in optical networks coupled with the evolution of applications such as video on demand and telemedicine create a clear need for providing quality-of-service (QoS) guarantees in optical networks. Proliferation of the IP-over-WDM model in these networks requires the network to provide QoS guarantees for variable-length packets. In this context, we address the problem of constrained traffic regulation--traffic regulation with buffer and delay constraints--in variable-length packet networks. We use the filtering theory under max-plus (max, +) algebra to address this problem. For a constrained traffic-regulation problem with maximum tolerable delay and maximum buffer size, the traffic regulator that generates g-regular output traffic minimizing the number of discarded packets is a concatenation of the f clipper and the minimal g regulator. f is a function of g, maximum delay, and maximum buffer size. The f clipper is a bufferless device, which drops the packets as necessary so that its output is f regular. The minimal g regulator is a buffered device that delays packets as necessary so that its output is g regular. The g regulator is a linear shift-invariant filter with impulse response g, under the (max, +) algebra.
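The abstract describes the minimal g regulator as a linear shift-invariant filter under the (max,+) algebra, which can be made concrete with a small sketch. The particular impulse response g[j] = j·tau (enforcing a minimum spacing tau between departures) and the arrival times are illustrative assumptions, not the paper's general construction.

```python
def max_plus_conv(a, g):
    """(max,+) convolution d[n] = max_k ( a[k] + g[n-k] ): the output of a
    shift-invariant (max,+) filter with impulse response g applied to the
    sequence of packet arrival times a."""
    return [max(a[k] + g[i - k] for k in range(i + 1)) for i in range(len(a))]

# Minimal-spacing regulator: g[j] = j * tau delays packets just enough that
# consecutive departures are at least tau apart (a simple g-regular output).
tau = 2.0
arrivals = [0.0, 0.5, 1.0, 6.0]
g = [j * tau for j in range(len(arrivals))]
departures = max_plus_conv(arrivals, g)
print(departures)  # [0.0, 2.0, 4.0, 6.0]
```

Note how the burst at times 0.0-1.0 is smoothed out while the late packet at 6.0 passes undelayed; a clipper, by contrast, would drop packets rather than delay them.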
Energy Technology Data Exchange (ETDEWEB)
Capella, Antonio [Instituto de Matematicas, Universidad Nacional Autonoma de Mexico (Mexico); Mueller, Stefan [Hausdorff Center for Mathematics and Institute for Applied Mathematics, Universitaet Bonn (Germany); Otto, Felix [Max Planck Institute for Mathematics in the Sciences, Leipzig (Germany)
2012-08-15
A mathematical description of transformation processes in magnetic shape memory alloys (MSMA) under applied stresses and external magnetic fields needs a combination of micromagnetics and continuum elasticity theory. In this note, we discuss the so-called constrained theories, i.e., models where the state described by the pair (linear strain, magnetization) is at every point of the sample constrained to assume one of only finitely many values (that reflect the material symmetries). Furthermore, we focus on large body limits, i.e., models that are formulated in terms of (local) averages of a microstructured state, as the one proposed by DeSimone and James. We argue that the effect of an interfacial energy associated with the twin boundaries survives on the level of the large body limit in form of a (local) rigidity of twins. This leads to an alternative (i.e., with respect to reference 1) large body limit. The new model has the advantage of qualitatively explaining the occurrence of a microstructure with charged magnetic walls, as observed in SPP experiments in reference 2. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Maximum entropy production: can it be used to constrain conceptual hydrological models?
Directory of Open Access Journals (Sweden)
M. C. Westhoff
2013-08-01
Full Text Available In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in the literature, generally little guidance has been given on how to apply the principle. The aim of this paper is to use the maximum power principle – which is closely related to MEP – to constrain parameters of a simple conceptual (bucket) model. Although we had to conclude that conceptual bucket models could not be constrained with respect to maximum power, this study sheds more light on how to use and how not to use the principle. Several of these issues have been correctly applied in other studies, but have not been explained or discussed as such. While other studies were based on resistance formulations, where the quantity to be optimized is a linear function of the resistance to be identified, our study shows that the approach also works for formulations that are only linear in the log-transformed space. Moreover, we showed that parameters describing process thresholds or influencing boundary conditions cannot be constrained. We furthermore conclude that, in order to apply the principle correctly, the model should be (1) physically based, i.e. fluxes should be defined as a gradient divided by a resistance; (2) the optimized flux should have a feedback on the gradient, i.e. the influence of boundary conditions on gradients should be minimal; (3) the temporal scale of the model should be chosen in such a way that the parameter that is optimized is constant over the modelling period; (4) the fluxes can only be correctly optimized when the correct feedbacks are implemented; and (5) there should be a trade-off between two or more fluxes. Although our application of the maximum power principle did
How well do different tracers constrain the firn diffusivity profile?
Directory of Open Access Journals (Sweden)
C. M. Trudinger
2013-02-01
Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH_{3}CCl_{3}, HFCs and ^{14}CO_{2} are most useful for constraining molecular diffusivity, while δ^{15}N_{2} is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO_{2} age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.
How well do different tracers constrain the firn diffusivity profile?
Directory of Open Access Journals (Sweden)
C. M. Trudinger
2012-07-01
Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in some cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH_{3}CCl_{3}, HFCs and ^{14}CO_{2} are most useful for constraining molecular diffusivity, while δ^{15}N_{2} is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO_{2} age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a single firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to allow quantification of the uncertainties.
Use of Traffic Intent Information by Autonomous Aircraft in Constrained Operations
Wing, David J.; Barmore, Bryan E.; Krishnamurthy, Karthik
2002-01-01
This paper presents findings of a research study designed to provide insight into the issue of intent information exchange in constrained en-route air-traffic operations and its effect on pilot decision-making and flight performance. The piloted simulation was conducted in the Air Traffic Operations Laboratory at the NASA Langley Research Center. Two operational modes for autonomous flight management were compared under conditions of low and high operational complexity (traffic and airspace hazard density). The tactical mode was characterized primarily by the use of traffic state data for conflict detection and resolution and a manual approach to meeting operational constraints. The strategic mode involved the combined use of traffic state and intent information, provided the pilot an additional level of alerting, and allowed an automated approach to meeting operational constraints. Operational constraints applied in the experiment included separation assurance, schedule adherence, airspace hazard avoidance, flight efficiency, and passenger comfort. The strategic operational mode was found to be effective in reducing unnecessary maneuvering in conflict situations where the intruder's intended maneuvers would resolve the conflict. Conditions of high operational complexity and vertical maneuvering resulted in increased proliferation of conflicts, but both operational modes exhibited characteristics of stability based on observed conflict proliferation rates of less than 30 percent. Scenario case studies illustrated the need for maneuver flight restrictions to prevent the creation of new conflicts through maneuvering and the need for an improved user interface design that appropriately focuses the pilot's attention on conflict prevention information. Pilot real-time assessment of maximum workload indicated minimal sensitivity to operational complexity, providing further evidence that pilot workload is not the limiting factor for feasibility of an en-route distributed
A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization
Directory of Open Access Journals (Sweden)
Zhijun Luo
2014-01-01
Full Text Available A new parallel variable distribution algorithm based on an interior-point SSLE algorithm is proposed for solving inequality constrained optimization problems, under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
Minimal hepatic encephalopathy.
Zamora Nava, Luis Eduardo; Torre Delgadillo, Aldo
2011-06-01
The term minimal hepatic encephalopathy (MHE) refers to the subtle changes in cognitive function, electrophysiological parameters, cerebral neurochemical/neurotransmitter homeostasis, cerebral blood flow, metabolism, and fluid homeostasis that can be observed in patients with cirrhosis who have no clinical evidence of hepatic encephalopathy; the prevalence is as high as 84% in patients with hepatic cirrhosis. Physicians generally do not perceive this complication of cirrhosis, and the diagnosis can only be made with neuropsychological tests and other special measurements such as evoked potentials, or imaging studies such as positron emission tomography. Diagnosis of minimal hepatic encephalopathy may have prognostic and therapeutic implications in cirrhotic patients. The present review explores the clinical, therapeutic, diagnostic, and prognostic aspects of this complication.
Minimal triangulations of simplotopes
Seacrest, Tyler
2009-01-01
We derive lower bounds for the size of simplicial covers of simplotopes, which are products of simplices. These also serve as lower bounds for triangulations of such polytopes, including triangulations with interior vertices. We establish that a minimal triangulation of a product of two simplices is given by a vertex triangulation, i.e., one without interior vertices. For products of more than two simplices, we produce bounds for products of segments and triangles. Our analysis yields linear programs that arise from considerations of covering exterior faces and exploiting the product structure of these polytopes. Aside from cubes, these are the first known lower bounds for triangulations of simplotopes with three or more factors. We also construct a minimal triangulation for the product of a triangle and a square, and compare it to our lower bound.
DEFF Research Database (Denmark)
Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco
2011-01-01
We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the underlying dynamics is preferred to be near conformal. We discover that the compositeness scale of inflation is of the order of the grand unified energy scale.
Bachas, C; Wiese, K J; Bachas, Constantin; Doussal, Pierre Le; Wiese, Kay Joerg
2006-01-01
We study minimal surfaces which arise in wetting and capillarity phenomena. Using conformal coordinates, we reduce the problem to a set of coupled boundary equations for the contact line of the fluid surface, and then derive simple diagrammatic rules to calculate the non-linear corrections to the Joanny-de Gennes energy. We argue that perturbation theory is quasi-local, i.e. that all geometric length scales of the fluid container decouple from the short-wavelength deformations of the contact line. This is illustrated by a calculation of the linearized interaction between contact lines on two opposite parallel walls. We present a simple algorithm to compute the minimal surface and its energy based on these ideas. We also point out the intriguing singularities that arise in the Legendre transformation from the pure Dirichlet to the mixed Dirichlet-Neumann problem.
Allanach, B C; Tunstall, Lewis C; Voigt, A; Williams, A G
2013-01-01
We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a $\\mathbb{Z}_{3}$ symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as the case where general $\\mathbb{Z}_{3}$ violating (denoted as $\\,\\mathbf{\\backslash}\\mkern-11.0mu{\\mathbb{Z}}_{3}$) terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper se...
On Minimal Constraint Networks
Gottlob, Georg
2011-01-01
In a minimal binary constraint network, every tuple of a constraint relation can be extended to a solution. It was conjectured that computing a solution to such a network is NP complete. We prove this conjecture true and show that the problem remains NP hard even in case the total domain of all values that may appear in the constraint relations is bounded by a constant.
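The minimality property in this abstract (every tuple of every constraint relation extends to a full solution) can be checked directly by brute force on small instances, which also makes the definition concrete. The helper names and the equality-constraint example below are hypothetical illustrations; no claim is made about the NP-hardness construction.

```python
from itertools import product

def solutions(domains, constraints):
    """All assignments satisfying every binary constraint.
    domains: list of value lists; constraints: {(i, j): set of (vi, vj)}."""
    return [assign for assign in product(*domains)
            if all((assign[i], assign[j]) in rel
                   for (i, j), rel in constraints.items())]

def is_minimal(domains, constraints):
    """A network is minimal iff the projection of the solution set onto each
    constrained pair (i, j) equals the constraint relation itself."""
    sols = solutions(domains, constraints)
    return all({(s[i], s[j]) for s in sols} == rel
               for (i, j), rel in constraints.items())

doms = [[0, 1], [0, 1], [0, 1]]
eq = {(0, 0), (1, 1)}
# Equality on every pair: each allowed tuple extends to (0,0,0) or (1,1,1).
minimal_net = {(0, 1): eq, (1, 2): eq, (0, 2): eq}
# Adding the tuple (0, 1) to one relation leaves it unsupported by any solution.
non_minimal_net = {(0, 1): eq | {(0, 1)}, (1, 2): eq, (0, 2): eq}
```

The hardness result in the paper says that even when a network is promised to be minimal, actually producing one of the guaranteed solutions remains NP-hard; the brute-force check above is of course exponential.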
Constrained Allocation Flux Balance Analysis
Mori, Matteo; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo
2016-01-01
New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferr...
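The effect of adding a single allocation constraint to an FBA-style linear program can be sketched on a deliberately tiny two-pathway toy model (this is not CAFBA itself; the yields, proteome costs, and budget below are made-up numbers chosen only to reproduce the respiration-to-overflow crossover qualitatively).

```python
from scipy.optimize import linprog

def allocation_fba_toy(V_up, y=(1.0, 0.5), w=(0.02, 0.005), phi_max=0.04):
    """Choose respiration flux v_r and fermentation flux v_f to maximize
    growth y_r*v_r + y_f*v_f subject to a carbon-uptake bound
    v_r + v_f <= V_up and a proteome-allocation budget
    w_r*v_r + w_f*v_f <= phi_max (respiration is proteome-costly)."""
    y_r, y_f = y
    w_r, w_f = w
    c = [-y_r, -y_f]                 # linprog minimizes, so negate yields
    A_ub = [[1.0, 1.0], [w_r, w_f]]  # uptake bound, allocation budget
    b_ub = [V_up, phi_max]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return float(res.x[0]), float(res.x[1])

# Slow growth: the budget is slack and the high-yield pathway wins.
v_r_slow, v_f_slow = allocation_fba_toy(V_up=1.0)
# Fast growth: the budget binds and low-cost fermentation carries overflow.
v_r_fast, v_f_fast = allocation_fba_toy(V_up=4.0)
```

Despite its crudeness, the toy shows the qualitative CAFBA result: a crossover from pure respiration at slow growth to a mixed state with fermentative overflow once the proteome budget binds.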
Bagging constrained equity premium predictors
DEFF Research Database (Denmark)
Hillebrand, Eric; Lee, Tae-Hwy; Medeiros, Marcelo
2014-01-01
The literature on excess return prediction has considered a wide array of estimation schemes, among them unrestricted and restricted regression coefficients. We consider bootstrap aggregation (bagging) to smooth parameter restrictions. Two types of restrictions are considered: positivity of the regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. Monte Carlo simulations show that forecast gains can be achieved in realistic sample sizes for the stock...
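The mechanics of bagging a sign-restricted predictor can be sketched in a few lines: on each bootstrap sample the slope is truncated at zero, and averaging over resamples smooths the hard truncation. The simulated data, the number of resamples `B`, and the seed are illustrative assumptions, not the paper's Monte Carlo design.

```python
import numpy as np

def bagged_constrained_forecast(x, y, x_new, B=200, seed=0):
    """Bagging a sign-restricted predictive regression: on each bootstrap
    sample, fit y = a + b*x, impose the restriction b >= 0 by truncation,
    and average the resulting forecasts at x_new."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(B):
        idx = rng.integers(0, n, n)              # bootstrap resample
        b, a = np.polyfit(x[idx], y[idx], 1)     # slope, intercept
        b = max(b, 0.0)                          # positivity restriction
        preds.append(a + b * x_new)
    return float(np.mean(preds))

# Data with a mildly positive slope; the restriction only occasionally binds.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = 0.5 * x + 0.1 * rng.standard_normal(200)
forecast = bagged_constrained_forecast(x, y, x_new=1.0)
```

When the true coefficient is near the boundary, the truncation binds on many resamples and the bagged forecast lies strictly between the restricted and unrestricted ones, which is the smoothing effect the paper exploits.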
Exploring constrained quantum control landscapes
Moore, Katharine W.; Rabitz, Herschel
2012-10-01
The broad success of optimally controlling quantum systems with external fields has been attributed to the favorable topology of the underlying control landscape, where the landscape is the physical observable as a function of the controls. The control landscape can be shown to contain no suboptimal trapping extrema upon satisfaction of reasonable physical assumptions, but this topological analysis does not hold when significant constraints are placed on the control resources. This work employs simulations to explore the topology and features of the control landscape for pure-state population transfer with a constrained class of control fields. The fields are parameterized in terms of a set of uniformly spaced spectral frequencies, with the associated phases acting as the controls. This restricted family of fields provides a simple illustration for assessing the impact of constraints upon seeking optimal control. Optimization results reveal that the minimum number of phase controls necessary to assure a high yield in the target state has a special dependence on the number of accessible energy levels in the quantum system, revealed from an analysis of the first- and second-order variation of the yield with respect to the controls. When an insufficient number of controls and/or a weak control fluence are employed, trapping extrema and saddle points are observed on the landscape. When the control resources are sufficiently flexible, solutions producing the globally maximal yield are found to form connected "level sets" of continuously variable control fields that preserve the yield. These optimal yield level sets are found to shrink to isolated points on the top of the landscape as the control field fluence is decreased, and further reduction of the fluence turns these points into suboptimal trapping extrema on the landscape. Although constrained control fields can come in many forms beyond the cases explored here, the behavior found in this paper is illustrative of
Formal language constrained path problems
Energy Technology Data Exchange (ETDEWEB)
Barrett, C.; Jacob, R.; Marathe, M.
1997-07-08
In many path-finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by applications such as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and a destination for a traveler whose mode choice is specified as a context-free language is solvable in polynomial time; when the mode choice is specified as a regular language, they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary, they obtain a polynomial-time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm, given in [SJB97], takes exponential time in the worst case.
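The polynomial-time result for regular languages is usually obtained by running Dijkstra's algorithm on the product of the labeled graph with a DFA for the language. A minimal sketch (the graph, weights and language below are invented for illustration):

```python
import heapq

def regular_constrained_shortest_path(edges, dfa, start, goal, q0, accepting):
    """Dijkstra on the product of a labeled graph and a DFA.

    edges: dict node -> list of (neighbor, label, weight)
    dfa:   dict (state, label) -> state  (missing pairs mean "rejected")
    Returns the cost of the cheapest start->goal walk whose label sequence
    is accepted by the DFA, or None if no such walk exists.
    """
    pq = [(0.0, start, q0)]
    best = {(start, q0): 0.0}
    while pq:
        c, v, q = heapq.heappop(pq)
        if v == goal and q in accepting:
            return c
        if c > best.get((v, q), float("inf")):
            continue
        for w, label, wt in edges.get(v, []):
            q2 = dfa.get((q, label))
            if q2 is None:
                continue
            nxt = c + wt
            if nxt < best.get((w, q2), float("inf")):
                best[(w, q2)] = nxt
                heapq.heappush(pq, (nxt, w, q2))
    return None

# Language a*b: DFA state 0 is the start, state 1 accepts after a single 'b'.
dfa = {(0, "a"): 0, (0, "b"): 1}
edges = {
    "s": [("m", "a", 1.0), ("t", "b", 10.0)],
    "m": [("t", "b", 1.0)],
}
cost = regular_constrained_shortest_path(edges, dfa, "s", "t", 0, {1})  # 2.0 via s-a->m-b->t
```

The product state space has size |V| x |Q|, which is what yields the polynomial bound for regular languages; context-free constraints require a CFL-reachability style dynamic program instead.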
Seth, Punit P; Siwkowski, Andrew; Allerson, Charles R; Vasquez, Guillermo; Lee, Sam; Prakash, Thazha P; Kinberger, Garth; Migawa, Michael T; Gaus, Hans; Bhat, Balkrishen; Swayze, Eric E
2008-01-01
Antisense drug discovery technology is a powerful method to modulate gene expression in animals and represents a novel therapeutic platform.(1) We have previously demonstrated that replacing 2'-O-methoxyethyl (MOE, 2) residues in second-generation antisense oligonucleotides (ASOs) with LNA (3) nucleosides improves the potency of some ASOs in animals. However, this was accompanied by a significant increase in the risk of hepatotoxicity.(2) We hypothesized that replacing LNA with novel nucleoside monomers that combine the structural elements of MOE and LNA might mitigate the toxicity of LNA while maintaining potency. To this end we designed and prepared novel nucleoside analogs 4 (S-constrained MOE, S-cMOE) and 5 (R-constrained MOE, R-cMOE), where the ethyl chain of the 2'-O-MOE moiety is constrained back to the 4' position of the furanose ring. As part of the SAR series, we also prepared nucleoside analogs 7 (S-constrained ethyl, S-cEt) and 8 (R-constrained ethyl, R-cEt), where the methoxymethyl group in the cMOE nucleosides was replaced with a methyl substituent. A highly efficient synthesis of the nucleoside phosphoramidites with minimal chromatographic purification was developed starting from inexpensive, commercially available starting materials. Biophysical evaluation revealed that the cMOE and cEt modifications hybridize complementary nucleic acids with the same affinity as LNA while greatly increasing nuclease stability. Biological evaluation of oligonucleotides containing the cMOE and cEt modifications in animals indicated that all of them possessed superior potency compared to second-generation MOE ASOs and a greatly improved toxicity profile compared to LNA.
Eulerian Formulation of Spatially Constrained Elastic Rods
Huynen, Alexandre
Slender elastic rods are ubiquitous in nature and technology. For a vast majority of applications, the rod deflection is restricted by an external constraint and a significant part of the elastic body is in contact with a stiff constraining surface. The research work presented in this doctoral dissertation formulates a computational model for the solution of elastic rods constrained inside or around frictionless tube-like surfaces. The segmentation strategy adopted to cope with this complex class of problems consists in sequencing the global problem into, comparatively simpler, elementary problems either in continuous contact with the constraint or contact-free between their extremities. Within the conventional Lagrangian formulation of elastic rods, this approach is however associated with two major drawbacks. First, the boundary conditions specifying the locations of the rod centerline at both extremities of each elementary problem lead to the establishment of isoperimetric constraints, i.e., integral constraints on the unknown length of the rod. Second, the assessment of the unilateral contact condition requires, in principle, the comparison of two curves parametrized by distinct curvilinear coordinates, viz. the rod centerline and the constraint axis. Both conspire to burden the computations associated with the method. To streamline the solution along the elementary problems and rationalize the assessment of the unilateral contact condition, the rod governing equations are reformulated within the Eulerian framework of the constraint. The methodical exploration of both types of elementary problems leads to specific formulations of the rod governing equations that stress the profound connection between the mechanics of the rod and the geometry of the constraint surface. The proposed Eulerian reformulation, which restates the rod local equilibrium in terms of the curvilinear coordinate associated with the constraint axis, describes the rod deformed configuration
Power Absorption by Closely Spaced Point Absorbers in Constrained Conditions
DEFF Research Database (Denmark)
De Backer, G.; Vantorre, M.; Beels, C.;
2010-01-01
The performance of an array of closely spaced point absorbers is numerically assessed in a frequency-domain model. Each point absorber is restricted to the heave mode and is assumed to have its own linear power take-off (PTO) system. Unidirectional irregular incident waves are considered, representing the wave climate at Westhinder on the Belgian Continental Shelf. The impact of slamming, stroke and force restrictions on the power absorption is evaluated and optimal PTO parameters are determined. For multiple bodies, optimal control parameters (CP) are not only dependent on the incoming waves...
Energy Constrained Hierarchical Task Scheduling Algorithm for Mobile Grids
Directory of Open Access Journals (Sweden)
Arjun Singh
2014-05-01
Full Text Available In mobile grids, scheduling computation tasks and communication transactions onto the target architecture is an important problem when a mobile grid environment and a pre-selected architecture are given. Even though scheduling is a traditional topic, almost all previous work focuses on maximizing performance through the scheduling process. The algorithms developed this way are not suitable for real-time embedded applications, in which the main objective is to minimize the energy consumption of the system under tight performance constraints. This paper presents an energy-constrained hierarchical task scheduling algorithm for mobile grids to minimize the power consumption of the mobile nodes. A task is rescheduled when the mobile node moves beyond the transmission range. The performance is estimated in terms of average delay and packet delivery ratio across nodes and flows, analysed using the NS-2 simulator.
Shape minimization of the dissipated energy in dyadic trees
De La Sablonière, Xavier Dubois; Privat, Yannick
2010-01-01
In this paper, we study the role of boundary conditions on the optimal shape of a dyadic tree in which a Newtonian fluid flows. Our optimization problem consists in finding the shape of the tree that minimizes the viscous energy dissipated by the fluid under a volume constraint, under the assumption that the total flow of the fluid is conserved throughout the structure. These hypotheses model situations where a fluid is transported from a source towards a 3D domain into which the transport network also spans. Such situations could be encountered in organs such as the lungs and the vascular networks. Two fluid regimes are studied: (i) a low-flow regime (Poiseuille) in trees with an arbitrary number of generations, using a matricial approach, and (ii) a nonlinear flow regime (Navier-Stokes, moderate regime with a Reynolds number of 100) in trees of two generations, using shape derivatives in an augmented Lagrangian algorithm coupled with a 2D/3D finite element code to solve the Navier-Stokes equations. It relie...
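In the low-flow (Poiseuille) regime, the volume-constrained dissipation minimum at a single symmetric bifurcation can be checked numerically; the classical result (Murray's law) gives a child-to-parent radius ratio of 2^(-1/3). A sketch with scipy, using illustrative units and a single bifurcation rather than the paper's full matricial treatment:

```python
import numpy as np
from scipy.optimize import minimize

MU, Q = 1.0, 1.0          # viscosity and total (parent) flow, arbitrary units
L0 = L1 = 1.0             # segment lengths

def dissipation(r):
    """Poiseuille dissipation of one symmetric bifurcation: parent r[0], children r[1]."""
    r0, r1 = r
    R = lambda length, radius: 8.0 * MU * length / (np.pi * radius ** 4)
    return Q ** 2 * R(L0, r0) + 2.0 * (Q / 2.0) ** 2 * R(L1, r1)

def volume(r):
    r0, r1 = r
    return np.pi * (L0 * r0 ** 2 + 2.0 * L1 * r1 ** 2)

V0 = volume([1.0, 1.0])   # fix the total vessel volume
res = minimize(dissipation, x0=[1.0, 1.0], method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda r: volume(r) - V0}],
               bounds=[(0.1, 5.0), (0.1, 5.0)])
ratio = res.x[1] / res.x[0]   # Murray's law predicts 2**(-1/3), about 0.794
```

The Lagrange condition for this problem gives flow proportional to radius cubed in every segment, which with halved flow in each child yields the cube-root-of-two ratio recovered by the optimizer.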
Minimally Invasive Parathyroidectomy
Directory of Open Access Journals (Sweden)
Lee F. Starker
2011-01-01
Full Text Available Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection, causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.
Susič, Vasja
2016-06-01
A realistic model in the class of renormalizable supersymmetric E6 Grand Unified Theories is constructed. Its matter sector consists of 3 × 27 representations, while the Higgs sector is $27 + \overline{27} + 351' + \overline{351}' + 78$. An analytic solution for a Standard Model vacuum is found and the Yukawa sector analyzed. It is argued that if one considers the increased predictability due to only two symmetric Yukawa matrices in this model, it can be considered a minimal SUSY E6 model with this type of matter sector. This contribution is based on Ref. [1].
c(M) < 1 string theory as a constrained topological sigma model
LLATAS, PM; ROY, S
1995-01-01
It has been argued by Ishikawa and Kato that, by making use of a specific bosonization, c(M) < 1 string theory can be regarded as a constrained topological sigma model. We generalize their construction for any (p,q) minimal model coupled to two-dimensional (2d) gravity and show that the energy-momentum
The cost-constrained traveling salesman problem
Energy Technology Data Exchange (ETDEWEB)
Sokkappa, P.R.
1990-10-01
The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
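The selection-plus-sequencing flavor of the CCTSP can be illustrated with a simple cheapest-insertion heuristic that ranks candidate nodes by value gained per unit of added tour cost. This is only a hedged sketch with invented data, not the bounding or branch-and-bound algorithms developed in the report:

```python
def greedy_cctsp(dist, value, budget, depot=0):
    """Value-per-cost insertion heuristic for the cost-constrained TSP (illustrative).

    dist: symmetric distance matrix; value: per-node values; budget: max tour cost.
    Returns (closed tour starting/ending at depot, collected value, tour cost).
    """
    n = len(dist)
    tour, cost = [depot, depot], 0.0
    remaining = set(range(n)) - {depot}
    while remaining:
        best = None                      # (ratio, node, insert position, added cost)
        for v in remaining:
            for i in range(len(tour) - 1):
                a, b = tour[i], tour[i + 1]
                delta = dist[a][v] + dist[v][b] - dist[a][b]
                if cost + delta <= budget:
                    ratio = value[v] / (delta + 1e-12)
                    if best is None or ratio > best[0]:
                        best = (ratio, v, i + 1, delta)
        if best is None:
            break                        # no remaining node fits in the budget
        _, v, pos, delta = best
        tour.insert(pos, v)
        cost += delta
        remaining.discard(v)
    collected = sum(value[v] for v in set(tour) if v != depot)
    return tour, collected, cost

dist = [[0, 1, 2, 5],
        [1, 0, 1, 6],
        [2, 1, 0, 7],
        [5, 6, 7, 0]]
tour, val, cost = greedy_cctsp(dist, value=[0, 1, 1, 10], budget=6.0)
```

Note how the high-value node 3 is skipped here because visiting it alone would blow the budget, exactly the kind of trade-off the report's "neighborhood" discussion addresses.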
A TV-constrained decomposition method for spectral CT
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of decomposed coefficient images. On top of the general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is performed. Both numerical and physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
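The ADMM treatment of a TV term can be sketched on a 1-D denoising surrogate, min_x 0.5*||x - y||^2 + lam*||Dx||_1, where D is the finite-difference operator. This is a penalized rather than hard-constrained form, and a stand-in for the paper's full decomposition problem, but the x-update / shrinkage / dual-update structure is the same:

```python
import numpy as np

def tv_denoise_admm(y, lam=0.5, rho=1.0, iters=200):
    """1-D total-variation denoising via ADMM: split z = Dx, scaled dual u."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n finite-difference operator
    A = np.eye(n) + rho * D.T @ D         # fixed x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = y.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))       # quadratic x-update
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # shrinkage
        u = u + Dx - z                                        # dual ascent
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 2.0, -1.0], 40)   # piecewise-constant "coefficient" profile
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = tv_denoise_admm(noisy)
```

The shrinkage step is what suppresses the amplified noise while the data-fidelity term keeps edges, which is the behavior the SNR/SSIM results in the paper quantify in 2-D.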
Constraining anisotropic models of early Universe with WMAP9 data
Ramazanov, Sabir
2013-01-01
We constrain several models of the early Universe that predict statistical anisotropy of the CMB sky. We make use of WMAP9 maps deconvolved with beam asymmetries. As compared to previous releases of WMAP data, they do not exhibit the anomalously large quadrupole of the statistical anisotropy. This allows us to strengthen the limits on parameters of models established earlier in the literature. In particular, the amplitude of the special quadrupole, whose direction is aligned with the ecliptic poles, is now constrained as g_* = 0.002 \pm 0.041 at 95% CL (\pm 0.020 at 68% CL). An upper limit is obtained on the total number of e-folds in anisotropic inflation with the Maxwell term non-minimally coupled to the inflaton, namely N_{tot}
Maity, Debaprasad
2016-01-01
In this paper we propose two minimal Higgs inflation scenarios based on a simple modification of the Higgs potential, as opposed to the usual non-minimal Higgs-gravity coupling prescription. The modification is done in such a way that it creates a flat plateau for a huge range of field values at the inflationary energy scale $\mu \simeq (\lambda)^{1/4} \alpha$. Assuming a perturbative Higgs quartic coupling, $\lambda \simeq {\cal O}(1)$, for both models the inflation energy scale turns out to be $\mu \simeq (10^{14}, 10^{15})$ GeV, and the predictions for all the cosmologically relevant quantities, $(n_s, r, dn_s^k)$, fit extremely well with observations made by PLANCK. Considering the observed central value of the scalar spectral index, $n_s = 0.968$, our two models predict e-folding numbers $N = (52, 47)$. Within a wide range of viable parameter space, we find that the predicted tensor-to-scalar ratio $r (\leq 10^{-5})$ is far below the current experimental sensitivity to be observed in the near future. The ...
Minimal regular 2-graphs and applications
Institute of Scientific and Technical Information of China (English)
FAN; Hongbing; LIU; Guizhen; LIU; Jiping
2006-01-01
A 2-graph is a hypergraph with edge sizes of at most two. A regular 2-graph is said to be minimal if it does not contain a proper regular factor. Let f2(n) be the maximum value of degrees over all minimal regular 2-graphs of n vertices. In this paper, we provide a structure property of minimal regular 2-graphs, and consequently prove that f2(n) = (n + 3 - i)/3, where 1 ≤ i ≤ 6, i ≡ n (mod 6) and n ≥ 7, which solves a conjecture posed by Fan, Liu, Wu and Wong. As applications in graph theory, we are able to characterize unfactorable regular graphs and provide the best possible factor existence theorem on degree conditions. Moreover, f2(n) and the minimal 2-graphs can be used in universal switch box designs, which originally motivated this study.
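The closed form f2(n) = (n + 3 - i)/3 with i ≡ n (mod 6), 1 ≤ i ≤ 6, is straightforward to evaluate; a minimal transcription (note that n + 3 - i is always divisible by 3, since n ≡ i mod 6 forces n + 3 - i ≡ 3 mod 6):

```python
def f2(n):
    """Maximum degree over all minimal regular 2-graphs on n vertices (n >= 7)."""
    i = n % 6 or 6          # the unique i in 1..6 with i = n (mod 6)
    return (n + 3 - i) // 3
```

For example, f2(7) = (7 + 3 - 1)/3 = 3 and f2(13) = (13 + 3 - 1)/3 = 5.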
Biharmonic Maps and Laguerre Minimal Surfaces
Directory of Open Access Journals (Sweden)
Yusuf Abu Muhanna
2013-01-01
Full Text Available A Laguerre surface is known to be minimal if and only if its corresponding isotropic map is biharmonic. Every Laguerre surface Φ has an associated surface Ψ = (1 + u²)Φ, where u lies in the unit disk. In this paper, the projection of the surface Ψ associated to a Laguerre minimal surface is shown to be biharmonic. A complete characterization of Ψ is obtained under the assumption that the corresponding isotropic map of the Laguerre minimal surface is harmonic. A necessary and sufficient condition is also derived for Ψ to be a graph. Estimates of the Gaussian curvature of the Laguerre minimal surface are obtained, and several illustrative examples are given.
Gyrification from constrained cortical expansion
Tallinen, Tuomas; Biggins, John S; Mahadevan, L
2015-01-01
The exterior of the mammalian brain - the cerebral cortex - has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highl...
Constrained Allocation Flux Balance Analysis
Mori, Matteo; Hwa, Terence; Martin, Olivier C.
2016-01-01
New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis (CAFBA), in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing regulation and metabolism to be bridged in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield-maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions of the rate of acetate excretion and growth yield based on only three parameters determined by empirical growth laws. PMID:27355325
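The structure of CAFBA, an ordinary flux-balance LP plus one extra genome-wide proteome constraint, can be sketched with a three-flux toy model. All coefficients below are invented for illustration (not E. coli values), but the toy reproduces the respiration-to-fermentation crossover described above:

```python
from scipy.optimize import linprog

# Fluxes x = (u, r, f): substrate uptake, respiration, fermentation.
Y_R, Y_F = 1.0, 0.3              # growth yield per unit flux (respiration higher)
W_U, W_R, W_F = 0.1, 0.5, 0.05   # proteome cost per unit flux (respiration costly)
B, PHI = 0.3, 1.0                # growth-coupled proteome cost and total budget

def cafba_growth(u_max):
    """Maximize growth Y_R*r + Y_F*f s.t. carbon balance and proteome budget."""
    c = [0.0, -Y_R, -Y_F]                     # linprog minimizes, so negate growth
    A_eq = [[-1.0, 1.0, 1.0]]                 # r + f = u (carbon balance)
    b_eq = [0.0]
    # Proteome: W_U*u + (W_R + B*Y_R)*r + (W_F + B*Y_F)*f <= PHI
    A_ub = [[W_U, W_R + B * Y_R, W_F + B * Y_F]]
    b_ub = [PHI]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, u_max), (0.0, None), (0.0, None)])
    return -res.fun, res.x

slow_growth, slow_flux = cafba_growth(u_max=1.0)   # uptake-limited: respiration wins
fast_growth, fast_flux = cafba_growth(u_max=50.0)  # proteome-limited: fermentation wins
```

When uptake is scarce, the yield-efficient respiratory pathway is optimal; when uptake is abundant and the proteome budget binds, the proteome-cheap fermentative pathway takes over, mirroring carbon overflow.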
Constraining the Europa Neutral Torus
Smith, Howard T.; Mitchell, Donald; mauk, Barry; Johnson, Robert E.; clark, george
2016-10-01
"Neutral tori" consist of neutral particles that usually co-orbit along with their source, forming a toroidal (or partially toroidal) feature around the planet. The distribution and composition of these features can often provide important, if not unique, insight into magnetospheric particle sources, mechanisms and dynamics. However, these features can be difficult to detect directly. One innovative method for detecting neutral tori is by observing Energetic Neutral Atoms (ENAs), which are generally considered to be produced by charge exchange interactions between charged and neutral particles. Mauk et al. (2003) reported the detection of a Europa neutral particle torus using ENA observations. The presence of a Europa torus has extremely large implications for upcoming missions to Jupiter as well as for understanding possible activity at this moon, and provides critical insight into what lies beneath the surface of this icy ocean world. However, ENAs can also be produced by charge exchange interactions between two ionized particles, in which case they cannot be used to infer the presence of a neutral particle population. Thus, a detailed examination of all possible source interactions must be considered before one can confirm that the likely original source population of these ENA images is actually a Europa neutral particle torus. For this talk, we examine whether the Mauk et al. (2003) observations were actually generated from a neutral torus emanating from Europa, as opposed to charged particle interactions with plasma originating from Io. These results help constrain such a torus as well as Europa source processes.
Minimal Dilaton Model and the Diphoton Excess
Agarwal, Bakul; Mohan, Kirtimaan A
2016-01-01
In light of the recent 750 GeV diphoton excesses reported by the ATLAS and CMS collaborations, we investigate the possibility of explaining this excess using the Minimal Dilaton Model. We find that this model is able to explain the observed excess with the presence of additional top partner(s), with the same charge as the top quark, but with mass in the TeV region. First, we constrain model parameters using, in addition to the 750 GeV diphoton signal strength, precision electroweak tests, single top production measurements, and Higgs signal strength data collected in the earlier runs of the LHC. In addition, we discuss interesting phenomenology that could arise in this model, relevant for future runs of the LHC.
Energy Technology Data Exchange (ETDEWEB)
Ji-Zheng Chu; Shyan-Shu Shieh; Shi-Shang Jang; Chuan-I Chien; Hou-Peng Wan; Hsu-Hsun Ko [Beijing University of Chemical Technology, Beijing (China). Department of Automation
2003-04-01
Combustion in a boiler is too complex to be analytically described with mathematical models. To meet the needs of operation optimization, on-site experiments guided by statistical optimization methods are often necessary to achieve the optimum operating conditions. This study proposes a new constrained optimization procedure using artificial neural networks as models for target processes. Information analysis based on random search, fuzzy c-means clustering, and minimization of information free energy is performed iteratively in the procedure to suggest the location of future experiments, which can greatly reduce the number of experiments needed. The effectiveness of the proposed procedure in searching optima is demonstrated by three case studies: (1) a benchmark problem, namely minimization of the modified Himmelblau function under a circle constraint; (2) both minimization of NOx and CO emissions and maximization of thermal efficiency for a simulated combustion process of a boiler; (3) maximization of thermal efficiency within NOx and CO emission limits for the same combustion process. The simulated combustion process is based on the commercial software package CHEMKIN, where 78 chemical species and 467 chemical reactions related to the combustion mechanism are incorporated, and a plug-flow model and a load-correlated temperature distribution for the combustion tunnel of a boiler are used. 22 refs., 6 figs., 4 tabs.
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini
A mesoscale numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element method for calculating stresses. The sintering behavior of a sample constrained by a rigid substrate ...
Efficient caching for constrained skyline queries
DEFF Research Database (Denmark)
Mortensen, Michael Lind; Chester, Sean; Assent, Ira;
2015-01-01
Constrained skyline queries retrieve all points that optimize some user’s preferences subject to orthogonal range constraints, but at significant computational cost. This paper is the first to propose caching to improve constrained skyline query response time. Because arbitrary range constraints ...
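The query itself is easy to state in code, even though answering it efficiently (and caching prior results, as the paper proposes) is the hard part. A quadratic-time sketch on an invented hotel dataset, minimizing every dimension within a rectangular range constraint:

```python
def constrained_skyline(points, low, high):
    """Skyline (Pareto-optimal set, minimizing each dimension) within a range.

    points: list of numeric tuples; low/high: per-dimension range bounds.
    """
    inside = [p for p in points
              if all(lo <= x <= hi for x, lo, hi in zip(p, low, high))]
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in inside
            if not any(dominates(q, p) for q in inside)]

# Hotels as (price, distance); constrain price to the range [50, 120].
hotels = [(40, 1.0), (60, 2.0), (80, 1.5), (100, 0.5), (110, 3.0)]
result = constrained_skyline(hotels, low=(50, 0.0), high=(120, 5.0))
```

Note that the range constraint changes the answer: (40, 1.0) would dominate most points, but it is excluded by the price constraint, so previously dominated points re-enter the skyline, which is precisely why caching unconstrained results is non-trivial.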
Bayesian evaluation of inequality constrained hypotheses
Gu, X.; Mulder, J.; Deković, M.; Hoijtink, H.
2014-01-01
Bayesian evaluation of inequality constrained hypotheses enables researchers to investigate their expectations with respect to the structure among model parameters. This article proposes an approximate Bayes procedure that can be used for the selection of the best of a set of inequality constrained hypotheses.
Determination of optimal gains for constrained controllers
Energy Technology Data Exchange (ETDEWEB)
Kwan, C.M.; Mestha, L.K.
1993-08-01
In this report, we consider the determination of optimal gains, with respect to a certain performance index, for state feedback controllers where some elements in the gain matrix are constrained to be zero. Two iterative schemes for systematically finding the constrained gain matrix are presented. An example is included to demonstrate the procedures.
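One simple way to realize this idea is to optimize only the unconstrained entries of the gain matrix, evaluating the quadratic cost through a Lyapunov equation at each step. The sketch below uses a generic derivative-free optimizer on an invented second-order plant, not the report's specific iterative schemes:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.eye(2)
Q = np.eye(2)
R = 0.1 * np.eye(2)
MASK = np.array([[1.0, 0.0], [1.0, 1.0]])    # K[0, 1] is constrained to zero

def cost(free):
    """LQ cost trace(P) for the structured gain; large penalty if unstable."""
    K = np.zeros((2, 2))
    K[MASK == 1] = free                       # fill only the free entries
    Acl = A + B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= -1e-6:
        return 1e6                            # reject destabilizing gains
    # Solve Acl^T P + P Acl = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return float(np.trace(P).real)

free0 = np.array([-1.0, 0.0, -1.0])           # start at K = -I (stabilizing)
res = minimize(cost, free0, method="Nelder-Mead", options={"maxiter": 300})
```

The zero pattern is enforced exactly by construction (the masked entry is never touched), which is the defining feature of constrained-gain design as opposed to projecting a full LQR solution.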
Fabbrichesi, Marco
2015-01-01
We show how the Higgs boson mass is protected from the potentially large corrections due to the introduction of minimal dark matter if the new physics sector is made supersymmetric. The fermionic dark matter candidate (a 5-plet of $SU(2)_L$) is accompanied by a scalar state. The weak gauge sector is made supersymmetric and the Higgs boson is embedded in a supersymmetric multiplet. The remaining standard model states are non-supersymmetric. Non-vanishing corrections to the Higgs boson mass only appear at the three-loop level, and the model is natural for dark matter masses up to 15 TeV, a value larger than the one required by the cosmological relic density. The construction presented stands as an example of a general approach to naturalness that solves the little hierarchy problem which arises when new physics is added beyond the standard model at an energy scale around 10 TeV.
Minimal Hepatic Encephalopathy
Directory of Open Access Journals (Sweden)
Laura M Stinton
2013-01-01
Full Text Available Minimal hepatic encephalopathy (MHE) is the earliest form of hepatic encephalopathy and can affect up to 80% of cirrhotic patients. By definition, it has no obvious clinical manifestation and is characterized by neurocognitive impairment in attention, vigilance and integrative function. Although often not considered to be clinically relevant and, therefore, not diagnosed or treated, MHE has been shown to affect daily functioning, quality of life, driving and overall mortality. The diagnosis of MHE has traditionally been achieved through neuropsychological examination, psychometric tests or the newer critical flicker frequency test. A new smartphone application (EncephalApp Stroop Test) may serve as a screening tool for patients requiring further testing. In addition to physician reporting and driving restrictions, medical treatment for MHE includes non-absorbable disaccharides (eg, lactulose), probiotics or rifaximin. Liver transplantation may not result in reversal of the cognitive deficits associated with MHE.
Energy Technology Data Exchange (ETDEWEB)
Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica
2017-03-15
Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.
Resource Minimization Job Scheduling
Chuzhoy, Julia; Codenotti, Paolo
Given a set J of jobs, where each job j is associated with release date r_j, deadline d_j and processing time p_j, our goal is to schedule all jobs using the minimum possible number of machines. Scheduling a job j requires selecting an interval of length p_j between its release date and deadline, and assigning it to a machine, with the restriction that each machine executes at most one job at any given time. This is one of the basic settings in resource-minimization job scheduling, and the classical randomized rounding technique of Raghavan and Thompson provides an O(log n / log log n)-approximation for it. This result has recently been improved to an O(sqrt(log n))-approximation, and moreover an efficient algorithm for scheduling all jobs on O(OPT^2) machines has been shown. We build on this prior work to obtain a constant-factor approximation algorithm for the problem.
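Once an interval has been fixed for every job, the minimum number of machines needed is simply the maximum number of simultaneously running jobs, computable by an event sweep. The hard part of the problem above is choosing the intervals; the sketch below is only a baseline heuristic (start every job at its release date) on invented data, not the approximation algorithms discussed:

```python
def machines_needed(intervals):
    """Minimum machines for FIXED half-open intervals = max overlap at any time."""
    events = []
    for start, length in intervals:
        events.append((start, 1))
        events.append((start + length, -1))
    events.sort(key=lambda e: (e[0], e[1]))   # ends (-1) before starts (+1) at ties
    count = best = 0
    for _, delta in events:
        count += delta
        best = max(best, count)
    return best

def earliest_interval_heuristic(jobs):
    """Baseline: start each job at its release date (assumes r_j + p_j <= d_j)."""
    return machines_needed([(r, p) for r, d, p in jobs])

jobs = [(0, 3, 2), (0, 2, 2), (2, 4, 2)]      # (release, deadline, processing)
m = earliest_interval_heuristic(jobs)
```

Shifting intervals within their release-deadline windows can reduce the peak overlap, which is exactly the degree of freedom the randomized-rounding and constant-factor algorithms exploit.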
Directory of Open Access Journals (Sweden)
Angela F Gonzalez
2006-09-01
Full Text Available Leaves of rocket salad produced in the open field and under non-woven polypropylene cover were minimally processed and packed whole or chopped in expanded polystyrene trays covered with 14-micron PVC film. A completely randomized design in a 2x2x2 factorial scheme (growing environment, preparation form and storage temperature of 0°C or 10°C) was adopted, with four replicates per treatment, totalling 32 trays. The treatments were stored at 0°C and 10°C for 10 days, after which weight loss (%), pH, soluble solids, titratable acidity, colour and appearance were evaluated. Storage at 0°C reduced the weight loss of the minimally processed rocket salad. The use of whole or chopped leaves was significant for soluble solids, with the highest values found for whole leaves. For chopped leaves, significantly higher acidity values were observed for those produced in the open field. Regardless of the preparation form, rocket salad produced in the open field showed a lower pH. The colour and appearance of the rocket salad were not influenced by the treatments.
KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR
Directory of Open Access Journals (Sweden)
John A. Mercer
2005-06-01
Full Text Available It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played a critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated, with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from the analysis because their velocity ranges were not similar between conditions, leaving six subjects. The slope of the impact force vs. velocity relationship differed between conditions (PSL: 0.178 ± 0.16 BW/m·s⁻¹; SL2.5: -0.003 ± 0.14 BW/m·s⁻¹; p < 0.05). The slope of the impact attenuation vs. velocity relationship also differed between conditions (PSL: 5.12 ± 2.88 %/m·s⁻¹; SL2.5: 1.39 ± 1.51 %/m·s⁻¹; p < 0.05). Stride length was an important factor in determining impact characteristics.
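The slope-comparison analysis described above can be sketched in a few lines of Python. The subject data below are synthetic and purely illustrative (the study's raw data are not reproduced here): per subject and condition, impact force is regressed against velocity, and the resulting slopes are compared with a paired t-test.

```python
# Sketch of the analysis: fit impact-force-vs-velocity slopes per subject
# and condition, then compare slopes across conditions with a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fit_slope(velocity, impact_force):
    """Least-squares slope of impact force against running velocity."""
    return stats.linregress(velocity, impact_force).slope

# Synthetic data for 6 subjects (illustrative values, not the study's data).
slopes_psl, slopes_sl25 = [], []
for _ in range(6):
    v = np.linspace(2.5, 5.5, 8)                            # velocities (m/s)
    f_psl = 1.0 + 0.18 * v + rng.normal(0, 0.05, v.size)    # force rises with v
    f_sl25 = 1.5 + 0.00 * v + rng.normal(0, 0.05, v.size)   # flat at fixed SL
    slopes_psl.append(fit_slope(v, f_psl))
    slopes_sl25.append(fit_slope(v, f_sl25))

t_stat, p_value = stats.ttest_rel(slopes_psl, slopes_sl25)
print(f"mean PSL slope:   {np.mean(slopes_psl):.3f} BW per m/s")
print(f"mean SL2.5 slope: {np.mean(slopes_sl25):.3f} BW per m/s")
print(f"paired t-test p = {p_value:.4f}")
```

With the constrained stride length, force no longer grows with velocity in this toy model, so the paired test on slopes separates the two conditions, mirroring the study's finding.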
Multiple objectives application approach to waste minimization
Institute of Scientific and Technical Information of China (English)
张清宇
2002-01-01
Besides economics and controllability, waste minimization has now become an objective in designing chemical processes, and usually leads to high costs of investment and operation. An attempt was made to minimize waste discharged from chemical reaction processes during the design and modification process, while the operating conditions were also optimized to meet the requirements of technology and economics. Multi-objective decision nonlinear programming (NLP) was employed to optimize the operating conditions of a chemical reaction process and reduce waste. A modeling language package, SPEEDUP, was used to simulate the process. This paper presents a case study of the benzene production process. The flowsheet factors affecting the economics and waste generation were examined. Constraints were imposed to reduce the number of objectives and carry out the optimization calculations easily. After comparison of all possible solutions, a best-compromise approach was applied to meet the technological requirements and minimize waste.
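The best-compromise idea can be illustrated with a toy weighted-sum NLP. The objective models and numbers below are hypothetical stand-ins, not the benzene flowsheet, and SPEEDUP is not involved: an economic objective and a waste objective are folded into a single scalarized problem and solved with a standard NLP routine.

```python
# Toy best-compromise formulation: weight the economic and waste objectives
# into one constrained NLP over a single decision variable (a "conversion").
from scipy.optimize import minimize

def cost(x):
    # Illustrative operating-cost model: cheapest near full conversion.
    return (x[0] - 0.9) ** 2

def waste(x):
    # Illustrative waste model: waste grows with conversion.
    return x[0] ** 2

def compromise(x, w=0.5):
    # Weighted-sum scalarization of the two objectives.
    return w * cost(x) + (1 - w) * waste(x)

res = minimize(compromise, x0=[0.5], bounds=[(0.0, 1.0)])
print("best-compromise conversion:", round(res.x[0], 3))
```

With equal weights the analytic optimum of this toy problem is x = 0.45, the trade-off point between the two quadratic objectives; sweeping the weight w traces out the set of compromise solutions the abstract alludes to.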
Reger, Darren; Madanat, Samer; Horvath, Arpad
2015-11-01
Transportation agencies are being urged to reduce their greenhouse gas (GHG) emissions. One possible solution within their scope is to alter their pavement management system to include environmental impacts. Managing pavement assets is important because poor road conditions lead to increased fuel consumption of vehicles. Rehabilitation activities improve pavement condition, but require materials and construction equipment, which produce GHG emissions as well. The agency’s role is to decide when to rehabilitate the road segments in the network. In previous work, we sought to minimize total societal costs (user and agency costs combined) subject to an emissions constraint for a road network, and demonstrated that there exists a range of potentially optimal solutions (a Pareto frontier) with tradeoffs between costs and GHG emissions. However, we did not account for the case where the available financial budget to the agency is binding. This letter considers an agency whose main goal is to reduce its carbon footprint while operating under a constrained financial budget. A Lagrangian dual solution methodology is applied, which selects the optimal timing and optimal action from a set of alternatives for each segment. This formulation quantifies GHG emission savings per additional dollar of agency budget spent, which can be used in a cap-and-trade system or to make budget decisions. We discuss the importance of communication between agencies and their legislature that sets the financial budgets to implement sustainable policies. We show that for a case study of Californian roads, it is optimal to apply frequent, thin overlays as opposed to the less frequent, thick overlays recommended in the literature if the objective is to minimize GHG emissions. A promising new technology, warm-mix asphalt, will have a negligible effect on reducing GHG emissions for road resurfacing under constrained budgets.
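The Lagrangian dual idea described above can be sketched as follows. The segment data and action sets are illustrative, and bisection on the multiplier is one simple way to solve the dual; the letter's exact algorithm may differ. Each road segment independently picks the action minimizing emissions plus the multiplier times cost, and the multiplier is adjusted until the plan fits the budget.

```python
# Sketch of a Lagrangian dual for budget-constrained emission minimization:
# dualize the budget constraint and bisect on the multiplier.
from dataclasses import dataclass

@dataclass
class Action:
    cost: float       # agency cost ($)
    emissions: float  # GHG emissions (t CO2e)

def choose_actions(segments, lam):
    """For multiplier lam, each segment independently minimizes the
    dualized objective emissions + lam * cost."""
    return [min(actions, key=lambda a: a.emissions + lam * a.cost)
            for actions in segments]

def solve_dual(segments, budget, tol=1e-6):
    """Bisect on the multiplier until the chosen plan fits the budget."""
    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        chosen = choose_actions(segments, lam)
        if sum(a.cost for a in chosen) > budget:
            lo = lam   # too expensive: penalize cost more
        else:
            hi = lam
    return choose_actions(segments, hi)

# Two segments, each with a cheap/high-emission and a costly/low-emission option.
segments = [
    [Action(cost=10.0, emissions=5.0), Action(cost=40.0, emissions=1.0)],
    [Action(cost=15.0, emissions=6.0), Action(cost=50.0, emissions=2.0)],
]
plan = solve_dual(segments, budget=60.0)
print("total cost:", sum(a.cost for a in plan))
print("total emissions:", sum(a.emissions for a in plan))
```

The converged multiplier plays the role the letter describes: it quantifies the GHG savings obtainable per additional dollar of agency budget.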
DAE for Frictional Contact Modeling of Constrained Multi-Flexible Body Systems
Institute of Scientific and Technical Information of China (English)
Ray P.S.Han; S. G. Mao
2004-01-01
A general formulation for modeling frictional contact interactions in a constrained multi-flexible-body system is outlined in this paper. The governing differential-algebraic equations (DAE) for the constrained motion contain not only a frictional term but also the unknown contact conditions. These contact conditions are characterized by a set of nonlinear complementarity equations. To demonstrate the model, a falling, spinning beam impacting a rough elastic ground with damping is solved, and a comparison with Stewart and Trinkle's results is provided.
Dynamical spacetimes and gravitational radiation in a Fully Constrained Formulation
Cordero-Carrión, Isabel; Ibáñez, José María
2010-01-01
This contribution summarizes recent work carried out to analyze the behavior of the hyperbolic sector of the Fully Constrained Formulation (FCF) derived in Bonazzola et al. 2004. The numerical experiments presented here allow one to be confident in the performance of the upgraded version of the CoCoNuT code, in which the Conformally Flat Condition (CFC) approximation of the Einstein equations is replaced by the FCF.
Dynamical spacetimes and gravitational radiation in a Fully Constrained Formulation
Energy Technology Data Exchange (ETDEWEB)
Cordero-Carrión, Isabel; Ibáñez, José María [Departamento de Astronomía y Astrofísica, Universidad de Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Valencia (Spain); Cerdá-Durán, Pablo, E-mail: isabel.cordero@uv.e, E-mail: cerda@mpa-garching.mpg.d, E-mail: jose.m.ibanez@uv.e [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Strasse 1, D-85741 Garching (Germany)
2010-05-01
This contribution summarizes recent work carried out to analyze the behavior of the hyperbolic sector of the Fully Constrained Formulation (FCF) derived in Bonazzola et al. 2004. The numerical experiments presented here allow one to be confident in the performance of the upgraded version of the CoCoNuT code, in which the Conformally Flat Condition (CFC) approximation of the Einstein equations is replaced by the FCF.
Integrating factors and conservation theorems of constrained Birkhoffian systems
Institute of Scientific and Technical Information of China (English)
Qiao Yong-Fen; Zhao Shu-Hong; Li Ren-Jie
2006-01-01
In this paper, the conservation theorems of constrained Birkhoffian systems are studied using the method of integrating factors. The differential equations of motion of the system are written, and the definition of integrating factors is given for the system. The necessary conditions for the existence of a conserved quantity for the system are studied. The conservation theorem and its inverse for the system are established. Finally, an example is given to illustrate the application of the results.
Canonical quantization of constrained systems
Energy Technology Data Exchange (ETDEWEB)
Bouzas, A.; Epele, L.N.; Fanchiotti, H.; Canal, C.A.G. (Laboratorio de Fisica Teorica, Departamento de Fisica, Universidad Nacional de La Plata, Casilla de Correo No. 67, 1900 La Plata, Argentina (AR))
1990-07-01
The consideration of first-class constraints together with gauge conditions as a set of second-class constraints in a given system is shown to be incorrect when carrying out its canonical quantization.
[Bilateral dependency and the minimal group paradigm].
Jin, N; Yamagishi, T; Kiyonari, T
1996-06-01
Two experiments examined the effect of illusion of control on the in-group favoritism found in the minimal group situation (Tajfel, Billig, Bundy, & Flament, 1971). In the bilateral dependency condition, each member made allocation decisions for in-group as well as out-group participants. This was exactly the situation used in the original studies under the minimal group paradigm, and the subjects knew that their own reward allocation also depended on others' decisions. In contrast, in the unilateral dependency condition, the subjects made allocation decisions knowing that their own rewards did not depend on others' decisions. In Experiment 1, an in-group bias in reward distribution was found in the bilateral dependency condition, but not in the unilateral condition. In Experiment 2, it was found that only those who felt illusion of control exhibited such an in-group bias. The results of the experiments therefore confirmed that illusion of control explains in-group favoritism, as Karp, Jin, Yamagishi, and Shinotsuka (1993) originally hypothesized.
Minimal Mimicry: Mere Effector Matching Induces Preference
Sparenberg, Peggy; Topolinski, Sascha; Springer, Anne; Prinz, Wolfgang
2012-01-01
Both mimicking and being mimicked induces preference for a target. The present experiments investigate the minimal sufficient conditions for this mimicry-preference link to occur. We argue that mere effector matching between one's own and the other person's movement is sufficient to induce preference, independent of which movement is actually…
Design and Demonstration of Minimal Lunar Base
Boche-Sauvan, L.; Foing, B. H.; Exohab Team
2009-04-01
Introduction: We propose a conceptual analysis of a first minimal lunar base, focusing on system aspects and coordinating the different parts of an evolving architecture [1-3]. We justify the case for a scientific outpost allowing experiments and sample analysis in a laboratory (relevant to the origin and evolution of the Earth, geophysical and geochemical studies of the Moon, life sciences, and observation from the Moon). Research: Research activities will be conducted with this first settlement in: - science (of, from and on the Moon); - exploration (robotic mobility, rover, drilling); - technology (communication, command, organisation, automatism). Life sciences: The life sciences aspects are considered through life support for a crew of 4 (habitat) and laboratory activity with biological experiments of the kind performed on Earth or in LEO, but here without any magnetosphere protection and therefore with direct cosmic ray and solar particle effects. Moreover, the ability to study the lunar environment in the field will be a major asset before establishing a permanent base [3-5]. Lunar environment: The lunar environment adds constraints to instrument specifications (vacuum, extreme temperatures, regolith, seismic activity, micrometeorites). SMART-1 and other mission data will provide geometrical, chemical and physical details about the environment (soil material characteristics, surface conditions …). Test bench: To assess planetary technologies and operations in preparation for human exploration of Mars. Lunar outpost pre-design modular concept: To allow a human presence on the Moon and to carry out these experiments, we give a pre-design of a minimal human lunar base. Through a modular concept, this base can later evolve into a long-duration or permanent base. We analyse the possibilities of establishing such a minimal base by means of current and near-term propulsion technology, a full Ariane 5 ME carrying 1.7 t of gross payload to the surface of the Moon.
Smoothing neural network for constrained non-Lipschitz optimization with applications.
Bian, Wei; Chen, Xiaojun
2012-03-01
In this paper, a smoothing neural network (SNN) is proposed for a class of constrained non-Lipschitz optimization problems, where the objective function is the sum of a nonsmooth, nonconvex function and a non-Lipschitz function, and the feasible set is a closed convex subset of ℝⁿ. Using smoothing approximation techniques, the proposed neural network is modeled by a differential equation, which can be implemented easily. Under a level-boundedness condition on the objective function over the feasible set, we prove the global existence and uniform boundedness of the solutions of the SNN with any initial point in the feasible set. Uniqueness of the solution of the SNN is obtained under a Lipschitz property of the smoothing functions. We show that any accumulation point of the solutions of the SNN is a stationary point of the optimization problem. Numerical results including image restoration, blind source separation, variable selection, and condition number minimization are presented to illustrate the theoretical results and show the efficiency of the SNN. Comparisons with some existing algorithms show the advantages of the SNN.
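A minimal one-dimensional sketch of the smoothing idea (not the paper's exact network or problem class): the nonsmooth term |x| is replaced by the smooth approximation sqrt(x² + μ²), and the projected gradient flow of the smoothed objective is followed over a closed convex feasible set, here the interval [0, 2].

```python
# Smoothing approximation + projected gradient flow for the nonsmooth
# problem  min (x - 1)^2 + |x|  over the convex set [0, 2].
import math

def smoothed_grad(x, mu):
    # d/dx [ (x - 1)^2 + sqrt(x^2 + mu^2) ]
    return 2.0 * (x - 1.0) + x / math.sqrt(x * x + mu * mu)

def project(x, lo=0.0, hi=2.0):
    # Projection onto the feasible interval keeps iterates feasible.
    return min(max(x, lo), hi)

x, mu, dt = 2.0, 1e-3, 1e-2
for _ in range(5000):          # forward-Euler integration of the flow
    x = project(x - dt * smoothed_grad(x, mu))

# The true minimizer of (x - 1)^2 + |x| on [0, 2] is x = 0.5.
print(f"x ≈ {x:.4f}")
```

The ODE the paper studies plays the role of the Euler loop here; the smoothing parameter μ controls how closely sqrt(x² + μ²) tracks |x| away from the origin.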
Optimal performance of constrained control systems
Harvey, P. Scott, Jr.; Gavin, Henri P.; Scruggs, Jeffrey T.
2012-08-01
This paper presents a method to compute optimal open-loop trajectories for systems subject to state and control inequality constraints in which the cost function is quadratic and the state dynamics are linear. For the case in which inequality constraints are decentralized with respect to the controls, optimal Lagrange multipliers enforcing the inequality constraints may be found at any time through Pontryagin’s minimum principle. In so doing, the set of differential algebraic Euler-Lagrange equations is transformed into a nonlinear two-point boundary-value problem for states and costates whose solution meets the necessary conditions for optimality. The optimal performance of inequality constrained control systems is calculable, allowing for comparison to previous, sub-optimal solutions. The method is applied to the control of damping forces in a vibration isolation system subjected to constraints imposed by the physical implementation of a particular controllable damper. An outcome of this study is the best performance achievable given a particular objective, isolation system, and semi-active damper constraints.
Likelihood analysis of the minimal AMSB model
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ⁰₁, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces an upper limit on the neutralino mass m(χ⁰₁)
Giribet, Gaston
2014-01-01
Minimal Massive Gravity (MMG) is an extension of three-dimensional Topologically Massive Gravity that, when formulated about Anti-de Sitter space, manages to resolve the tension between bulk and boundary unitarity from which other models in three dimensions suffer. We study this theory at the chiral point, i.e. at the point of parameter space where one of the central charges of the dual conformal field theory vanishes. We investigate the non-linear regime of the theory, meaning that we study exact solutions of the MMG field equations that are not Einstein manifolds. We exhibit a large class of solutions of this type, which behave asymptotically in different manners. In particular, we find analytic solutions that represent two-parameter deformations of extremal Banados-Teitelboim-Zanelli (BTZ) black holes. These geometries behave asymptotically as solutions of so-called Log Gravity and, despite their weakened fall-off close to the boundary, they have finite mass and finite angular momentum, which w...
Directory of Open Access Journals (Sweden)
Oda Kin-ya
2013-05-01
Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but each with an excess of events beyond the SM expectation in the diphoton decay channel. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that mixes slightly with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.
Minimal distances between SCFTs
Energy Technology Data Exchange (ETDEWEB)
Buican, Matthew [Department of Physics and Astronomy, Rutgers University,Piscataway, NJ 08854 (United States)
2014-01-28
We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows that break N=2→N=1, it is possible to find IR fixed points that are parametrically close to the UV ones. In this case, we argue that if the UV SCFT possesses a single stress tensor, then such RG flows excite of order all the degrees of freedom of the UV theory. Furthermore, if the UV theory has some flavor symmetry, we argue that the UV central charges should not be too large relative to certain parameters in the theory.
Shantha Kumara, H M C; Cabot, J C; Hoffman, A; Luchtefeld, M; Kalady, M F; Hyman, N; Feingold, D; Baxter, R; Whelan, R L
2010-02-01
Plasma VEGF levels increase after minimally invasive colorectal resection (MICR) and remain elevated for 2-4 weeks. VEGF induces physiologic and pathologic angiogenesis by binding to endothelial cell (EC) bound VEGF-Receptor-1 (VEGFR1) and VEGFR2. Soluble forms of these receptors sequester plasma VEGF, decreasing the amount available to bind to EC-bound receptors. The ramifications of surgery-related plasma VEGF changes partially depend on plasma levels of sVEGFR1 and sVEGFR2. This study assessed perioperative sVEGFR1 and sVEGFR2 levels after MICR in patients with colorectal cancer. Forty-five patients were studied; blood samples were taken from all patients preoperatively (preop) and on postoperative days (POD) 1 and 3; in most patients a fourth sample was drawn between POD 7-30. Late samples were bundled into two time points: POD 7-13 and POD 14-30. sVEGFR1 and sVEGFR2 levels were measured via ELISA. sVEGFR2 data are reported as mean +/- SD and were assessed with the paired-samples t test. sVEGFR1 data were not normally distributed; they are reported as median and 95% confidence interval (CI) and were assessed with the Wilcoxon signed-rank test. Both soluble receptor levels change after MICR; sVEGFR2 changes dominate due to their much larger magnitude. The net result is less plasma VEGF bound by soluble receptors and more plasma VEGF available to bind to ECs early after surgery.
Diabatic constrained relativistic mean field approach
Lü, H.F.; Meng, J.
2005-01-01
A diabatic (configuration-fixed) constrained approach to calculating the potential energy surface (PES) of a nucleus is developed within the relativistic mean field model. The potential energy surfaces of $^{208}$Pb obtained from both the adiabatic and diabatic constrained approaches are investigated and compared. The diabatic constrained approach enables one to decompose the segmented PES obtained in the usual adiabatic approach into separate parts uniquely characterized by different configurations, to define the single-particle orbits in the very deformed region by their quantum numbers, and to obtain several well-defined deformed excited states which can hardly be expected from the adiabatic PESs.
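A toy illustration of the adiabatic-vs-diabatic distinction (not the relativistic mean field calculation itself): two "configurations" are modeled as parabolas in a deformation coordinate q. The adiabatic PES takes the pointwise minimum and is segmented (kinked) where the configurations cross, while the diabatic approach keeps each configuration's smooth curve separately, exposing the excited configuration's own minimum.

```python
# Two configuration energies as functions of a deformation coordinate q.
config_A = lambda q: (q - 0.0) ** 2                 # ground configuration
config_B = lambda q: 0.5 + 0.8 * (q - 1.5) ** 2     # excited configuration

qs = [i * 0.1 for i in range(-5, 31)]               # deformation grid
adiabatic = [min(config_A(q), config_B(q)) for q in qs]  # segmented surface
diabatic_A = [config_A(q) for q in qs]              # smooth, per configuration
diabatic_B = [config_B(q) for q in qs]

# The diabatic curve for B has its own well-defined minimum at q = 1.5;
# the adiabatic surface is kinked where the two configurations cross.
print("adiabatic minimum:", min(adiabatic))
print("diabatic B minimum:", min(diabatic_B))
```

Following each configuration separately is what lets the diabatic approach label deformed states by the quantum numbers of a fixed configuration, as the abstract describes.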
A Dynamic Programming Approach to Constrained Portfolios
DEFF Research Database (Denmark)
Kraft, Holger; Steffensen, Mogens
2013-01-01
This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, in contrast to the existing literature, which applies the martingale method. More precisely, we construct the non-separable value function by formalizing the optimal constrained terminal wealth as a (conjectured) contingent claim on the optimal non-constrained terminal wealth. This is relevant by itself, but also opens up the opportunity to derive new solutions…
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.
A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number…
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini
2014-01-01
A numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress, as well as the FEM calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of a sample constrained by a rigid substrate is simulated. The constrained sintering results in a larger number of pores near the substrate, as well as anisotropic sintering shrinkage…
Modeling the Microstructural Evolution During Constrained Sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini
2015-01-01
A numerical model able to simulate solid-state constrained sintering is presented. The model couples an existing kinetic Monte Carlo model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress, as well as the FEM calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of a sample constrained by a rigid substrate is simulated. The constrained sintering results in a larger number of pores near the substrate, as well as anisotropic sintering shrinkage…
Constrained adaptive lifting and the CAL4 metric for helicopter transmission diagnostics
Samuel, Paul D.; Pines, Darryll J.
2009-01-01
This paper presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting (CAL) algorithm, an adaptive manifestation of the lifting scheme. Lifting is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Adaptivity is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation synchronous signal-averaging algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have approximately the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local waveform changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. A diagnostic metric based on the lifting prediction error vector termed CAL4 is developed. The CAL diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the CAL4 metric is compared with the classic metric FM4.
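The lifting idea behind CAL can be illustrated with a single, non-adaptive lifting step (linear prediction only; CAL itself adapts and constrains the basis choice per analysis domain): split a signal into even and odd samples, predict the odds from the evens, and inspect the prediction error. A local waveform change, of the kind gear damage produces, shows up as a large detail coefficient.

```python
# One lifting step with a fixed linear predictor:
# detail_i = odd_i - mean(neighboring evens).
def lifting_detail(signal):
    evens, odds = signal[0::2], signal[1::2]
    details = []
    for i, odd in enumerate(odds):
        left = evens[i]
        right = evens[i + 1] if i + 1 < len(evens) else evens[i]  # boundary
        details.append(odd - 0.5 * (left + right))
    return details

healthy = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]   # smooth ramp: predicted well
damaged = [0.0, 1.0, 2.0, 9.0, 4.0, 5.0, 6.0, 7.0]   # local waveform change

print(lifting_detail(healthy))   # near-zero details (except the boundary)
print(lifting_detail(damaged))   # a large detail flags the anomaly
```

A diagnostic metric like CAL4 condenses such prediction-error vectors into a single number; the key property shown here is that a smooth, well-adapted signal yields small details while a localized change yields a spike.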
A multi-objective dynamic programming approach to constrained discrete-time optimal control
Energy Technology Data Exchange (ETDEWEB)
Driessen, B.J.; Kwok, K.S.
1997-09-01
This work presents a multi-objective differential dynamic programming approach to constrained discrete-time optimal control. In the backward sweep of the dynamic programming in the quadratic subproblem, the subproblem input at a stage or time step is solved for in terms of the subproblem state entering that stage, so as to minimize the summed immediate and future cost subject to minimizing the summed immediate and future constraint violations, for all such entering states. The method differs from previous dynamic programming methods, which used penalty methods, in that the constraints of the subproblem, which may include terminal constraints and path constraints, are solved exactly if they are solvable; otherwise, their total violation is minimized. Again, the resulting solution of the subproblem is an input history that minimizes the quadratic cost function subject to being a minimizer of the total constraint violation. The expected quadratic convergence of the proposed algorithm is demonstrated on a numerical example.
Gamma ray tests of Minimal Dark Matter
Energy Technology Data Exchange (ETDEWEB)
Cirelli, Marco [Institut de Physique Théorique, Université Paris Saclay, CNRS, CEA, Orme des Merisiers, F-91191 Gif-sur-Yvette (France); Hambye, Thomas [Service de Physique Theórique, Université Libre de Bruxelles, Boulevard du Triomphe, CP225, 1050 Brussels (Belgium); Panci, Paolo [Institut d’Astrophysique de Paris, UMR 7095 CNRS, Université Pierre et Marie Curie, 98 bis Boulevard Arago, Paris 75014 (France); Sala, Filippo; Taoso, Marco [Institut de Physique Théorique, Université Paris Saclay, CNRS, CEA, Orme des Merisiers, F-91191 Gif-sur-Yvette (France)
2015-10-12
We reconsider the model of Minimal Dark Matter (a fermionic, hypercharge-less quintuplet of the EW interactions) and compute its gamma ray signatures. We compare them with a number of gamma ray probes: the galactic halo diffuse measurements, the galactic center line searches and recent dwarf galaxy observations. We find that the original minimal model, whose mass is fixed at 9.4 TeV by the relic abundance requirement, is constrained by the line searches from the Galactic Center: it is ruled out if the Milky Way possesses a cuspy profile such as NFW, but it is still allowed if it has a cored one. Observations of dwarf spheroidal galaxies are also relevant (in particular searches for lines), and ongoing astrophysical progress on these systems has the potential to eventually rule out the model. We also explore a wider mass range, which applies to the case in which the relic abundance requirement is relaxed. Most of our results can be safely extended to the larger class of multi-TeV WIMP DM annihilating into massive gauge bosons.
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with the development of production technology, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from machines capable of performing more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used, with minimizing tardiness as the objective function. The algorithm has shown a 25.6% reduction in tardiness, equal to 43.5 hours.
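The indirect-chromosome recipe described above can be sketched in miniature: a chromosome is a job priority list, and a decoding procedure builds a schedule in which every operation must seize both a machine and a worker. This is a hedged sketch of the general approach, not the paper's algorithm; the job data, shop size, and the mutation-plus-elitism loop (in place of full crossover) are illustrative assumptions.

```python
import random

# Dual-resource constrained scheduling sketch: each job needs a free
# machine AND a free worker. Chromosome = job order; fitness = tardiness.

JOBS = {  # job id: (processing time, due date) -- illustrative data
    0: (4.0, 5.0), 1: (3.0, 4.0), 2: (6.0, 16.0),
    3: (2.0, 3.0), 4: (5.0, 11.0),
}
N_MACHINES, N_WORKERS = 2, 1

def decode(order):
    """Transform a chromosome (job order) into a schedule; return tardiness."""
    machines = [0.0] * N_MACHINES
    workers = [0.0] * N_WORKERS
    tardiness = 0.0
    for job in order:
        duration, due = JOBS[job]
        m = min(range(N_MACHINES), key=machines.__getitem__)
        w = min(range(N_WORKERS), key=workers.__getitem__)
        start = max(machines[m], workers[w])  # both resources must be free
        end = start + duration
        machines[m] = workers[w] = end
        tardiness += max(0.0, end - due)
    return tardiness

def evolve(generations=200, pop_size=20, seed=7):
    rng = random.Random(seed)
    pop = [rng.sample(list(JOBS), len(JOBS)) for _ in range(pop_size)]
    best = min(pop, key=decode)
    for _ in range(generations):
        children = [best[:]]                                # elitism
        while len(children) < pop_size:
            child = min(rng.sample(pop, 2), key=decode)[:]  # tournament
            i, j = rng.sample(range(len(child)), 2)         # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = children
        best = min(pop + [best], key=decode)
    return best, decode(best)

best_order, best_tardiness = evolve()
print(best_order, best_tardiness)
```

With a single worker the shop serializes, so the GA effectively rediscovers a due-date-driven sequence; adding workers makes the machine/worker interaction nontrivial.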
Doyle, Jessica M.; Gleeson, Tom; Manning, Andrew H.; Mayer, K. Ulrich
2015-10-01
Environmental tracers provide information on groundwater age, recharge conditions, and flow processes which can be helpful for evaluating groundwater sustainability and vulnerability. Dissolved noble gas data have proven particularly useful in mountainous terrain because they can be used to determine recharge elevation. However, tracer-derived recharge elevations have not been utilized as calibration targets for numerical groundwater flow models. Herein, we constrain and calibrate a regional groundwater flow model with noble-gas-derived recharge elevations for the first time. Tritium and noble gas tracer results improved the site conceptual model by identifying a previously uncertain contribution of mountain block recharge from the Coast Mountains to an alluvial coastal aquifer in humid southwestern British Columbia. The revised conceptual model was integrated into a three-dimensional numerical groundwater flow model and calibrated to hydraulic head data in addition to recharge elevations estimated from noble gas recharge temperatures. Recharge elevations proved to be imperative for constraining hydraulic conductivity, recharge location, and bedrock geometry, and thus minimizing model nonuniqueness. Results indicate that 45% of recharge to the aquifer is mountain block recharge. A similar match between measured and modeled heads was achieved in a second numerical model that excludes the mountain block (no mountain block recharge), demonstrating that hydraulic head data alone are incapable of quantifying mountain block recharge. This result has significant implications for understanding and managing source water protection in recharge areas, potential effects of climate change, the overall water budget, and ultimately ensuring groundwater sustainability.
Minimal Superstrings and Loop Gas Models
Gaiotto, D; Takayanagi, T; Gaiotto, Davide; Rastelli, Leonardo; Takayanagi, Tadashi
2005-01-01
We reformulate the matrix models of minimal superstrings as loop gas models on random surfaces. In the continuum limit, this leads to the identification of minimal superstrings with certain bosonic string theories, to all orders in the genus expansion. RR vertex operators arise as operators in a Z_2 twisted sector of the matter CFT. We show how the loop gas model implements the sum over spin structures expected from the continuum RNS formulation. Open string boundary conditions are also more transparent in this language.
Coding for Two Dimensional Constrained Fields
DEFF Research Database (Denmark)
Laursen, Torben Vaarbye
2006-01-01
The important concept of entropy is introduced. In general, the entropy of a constrained field is not readily computable, but we give a series of upper and lower bounds based on one dimensional techniques. We discuss the use of a Pickard probability model for constrained fields. The novelty lies in using it for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower bounds are very tight for the Run-Length Limited fields. Explicit bounds are given for the diamond constrained field as well.
Constrained crosstalk resistant adaptive noise canceller
Parsa, V.; Parker, P.
1994-08-01
The performance of an adaptive noise canceller (ANC) is sensitive to the presence of signal `crosstalk' in the reference channel. The authors propose a novel approach to crosstalk resistant adaptive noise cancellation, namely the constrained crosstalk resistant adaptive noise canceller (CCRANC). The theoretical analysis of the CCRANC along with the constrained algorithm is presented. The performance of the CCRANC in recovering somatosensory evoked potentials (SEPs) from myoelectric interference is then evaluated through simulations.
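As background to the abstract above, a standard (unconstrained) LMS adaptive noise canceller can be sketched as a baseline: the primary channel carries signal plus correlated interference, the reference channel carries the interference source, and the adaptive filter subtracts its noise estimate. This sketch assumes a crosstalk-free reference; the CCRANC of the abstract adds a constraint precisely to cope with signal crosstalk, which is not implemented here. All signals and parameters are illustrative.

```python
import math
import random

# Baseline LMS adaptive noise canceller (no crosstalk constraint).
random.seed(0)
N, TAPS, MU = 4000, 4, 0.01
noise = [random.gauss(0.0, 1.0) for _ in range(N)]
signal = [math.sin(0.05 * i) for i in range(N)]
# Primary channel: signal + a filtered version of the noise source.
primary = [signal[i] + 0.8 * noise[i] + 0.2 * (noise[i - 1] if i else 0.0)
           for i in range(N)]
reference = noise  # correlated with the interference, not with the signal

w = [0.0] * TAPS
output = []
for n in range(N):
    x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(TAPS)]
    y = sum(wk * xk for wk, xk in zip(w, x))   # adaptive noise estimate
    e = primary[n] - y                         # canceller output ~ signal
    output.append(e)
    for k in range(TAPS):                      # LMS weight update
        w[k] += 2.0 * MU * e * x[k]

def mse(a, b, start):
    """Mean squared error between two sequences past a burn-in index."""
    n = len(a) - start
    return sum((ai - bi) ** 2 for ai, bi in zip(a[start:], b[start:])) / n

print(mse(primary, signal, 2000), mse(output, signal, 2000))
```

After convergence the residual error of the canceller output is far below the raw interference power in the primary channel; if the signal leaked into `reference` (crosstalk), this unconstrained update would partially cancel the signal itself, which motivates the constrained variant.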
CANONICAL FORMULATION OF NONHOLONOMIC CONSTRAINED SYSTEMS
Institute of Scientific and Technical Information of China (English)
GUO YONG-XIN; YU YING; HUANG HAI-JUN
2001-01-01
Based on the Ehresmann connection theory and symplectic geometry, the canonical formulation of nonholonomic constrained mechanical systems is described. Following the Lagrangian formulation of the constrained system, the Hamiltonian formulation is given by Legendre transformation. The Poisson bracket defined by an anti-symmetric tensor does not satisfy the Jacobi identity for the nonintegrability of nonholonomic constraints. The constraint manifold can admit symplectic submanifold for some cases, in which the Lie algebraic structure exists.
On the origin of constrained superfields
Energy Technology Data Exchange (ETDEWEB)
Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)
2016-05-06
In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.
A second-generation constrained reaction volume shock tube.
Campbell, M F; Tulgestke, A M; Davidson, D F; Hanson, R K
2014-05-01
We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance.
A second-generation constrained reaction volume shock tube
Campbell, M. F.; Tulgestke, A. M.; Davidson, D. F.; Hanson, R. K.
2014-05-01
We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance.
Bayesian methods for the analysis of inequality constrained contingency tables.
Laudy, Olav; Hoijtink, Herbert
2007-04-01
A Bayesian methodology for the analysis of inequality constrained models for contingency tables is presented. The problem of interest lies in obtaining the estimates of functions of cell probabilities subject to inequality constraints, testing hypotheses and selection of the best model. Constraints on conditional cell probabilities and on local, global, continuation and cumulative odds ratios are discussed. A Gibbs sampler to obtain a discrete representation of the posterior distribution of the inequality constrained parameters is used. Using this discrete representation, the credibility regions of functions of cell probabilities can be constructed. Posterior model probabilities are used for model selection and hypotheses are tested using posterior predictive checks. The Bayesian methodology proposed is illustrated in two examples.
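The core idea above can be sketched with a simpler sampler than the paper's Gibbs scheme: draw from the unconstrained Dirichlet posterior of a 2x2 table and keep only draws satisfying an inequality constraint (here, odds ratio at least 1). The accepted fraction estimates the posterior probability of the constrained model, and the accepted draws form the discrete representation of the constrained posterior. The counts, the uniform prior, and the rejection approach are all illustrative assumptions.

```python
import random

# Rejection-sampling sketch for an inequality constrained 2x2 table.

def dirichlet(alphas, rng):
    """Sample a Dirichlet vector via normalized gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def constrained_posterior(counts, draws=20000, seed=1):
    rng = random.Random(seed)
    accepted = []
    for _ in range(draws):
        p = dirichlet([c + 1.0 for c in counts], rng)  # uniform prior
        if p[0] * p[3] >= p[1] * p[2]:                 # odds ratio >= 1
            accepted.append(p)
    post_prob = len(accepted) / draws                  # P(constraint | data)
    means = [sum(p[i] for p in accepted) / len(accepted) for i in range(4)]
    return post_prob, means

# Row-major 2x2 counts (illustrative data); sample odds ratio is ~3.1.
prob, means = constrained_posterior([12, 5, 7, 9])
print(prob, means)
```

Credibility regions for any function of the cell probabilities can be read off the accepted draws in the same way; a Gibbs sampler becomes preferable when the constraint makes naive rejection too wasteful.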
Application of constrained aza-valine analogs for Smac mimicry.
Chingle, Ramesh; Ratni, Sara; Claing, Audrey; Lubell, William D
2016-05-01
Constrained azapeptides were designed based on the Ala-Val-Pro-Ile sequence from the second mitochondria-derived activator of caspases (Smac) protein and tested for their ability to induce apoptosis in cancer cells. Diels-Alder cyclizations and Alder-ene reactions on azopeptides enabled construction of a set of constrained aza-valine dipeptide building blocks, which were introduced into mimics using effective coupling conditions to acylate bulky semicarbazide residues. Evaluation of azapeptides 7-11 in MCF-7 breast cancer cells indicated that aza-cyclohexanylglycine analog 11 induced cell death more efficiently than the parent tetrapeptide, likely via a caspase-9 mediated apoptotic pathway. © 2016 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 106: 235-244, 2016.
A Projection Neural Network for Constrained Quadratic Minimax Optimization.
Liu, Qingshan; Wang, Jun
2015-11-01
This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and it does not require any adjustable parameter. Moreover, the neural network has lower model complexity: the number of its state variables equals the dimension of the optimization problem. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
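A discrete-time analogue of such projection dynamics is projected gradient descent-ascent: descend in the minimizing variable, ascend in the maximizing variable, and project both onto their constraint sets after each step. The saddle function, box constraints, and step size below are illustrative assumptions, not taken from the paper.

```python
# Projected gradient descent-ascent sketch for a constrained
# convex-concave minimax problem:
#   min over x in [0, 1], max over y in [0, 3] of
#   f(x, y) = (x - 2)^2 - (y - 1)^2
# The saddle point is (1, 1): x is clamped at its upper bound,
# y sits at its unconstrained maximizer.

def clamp(v, lo, hi):
    """Projection onto the interval [lo, hi]."""
    return max(lo, min(hi, v))

def solve_minimax(steps=2000, alpha=0.05):
    x, y = 0.0, 3.0
    for _ in range(steps):
        gx = 2.0 * (x - 2.0)                  # df/dx
        gy = -2.0 * (y - 1.0)                 # df/dy
        x = clamp(x - alpha * gx, 0.0, 1.0)   # descend in x, project
        y = clamp(y + alpha * gy, 0.0, 3.0)   # ascend in y, project
    return x, y

x, y = solve_minimax()
print(x, y)  # converges to the saddle point (1, 1)
```

The continuous-time projection neural network of the abstract replaces these explicit iterations with an ODE whose equilibrium is the saddle point, which is what makes hardware realization attractive.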
A Trust Region Method with a Conic Model for Nonlinearly Constrained Optimization%解非线性优化问题的锥模型信赖域方法
Institute of Scientific and Technical Information of China (English)
王承竞
2006-01-01
Trust region methods are powerful and effective optimization methods. The conic model method is a new type of method with more information available at each iteration than standard quadratic-based methods. The advantages of the above two methods can be combined to form a more powerful method for constrained optimization. The trust region subproblem of our method is to minimize a conic function subject to the linearized constraints and trust region bound. At the same time, the new algorithm still possesses robust global properties. The global convergence of the new algorithm under standard conditions is established.
On stable compact minimal submanifolds
Torralbo, Francisco
2010-01-01
Stable compact minimal submanifolds of the product of a sphere and any Riemannian manifold are classified whenever the dimension of the sphere is at least three. The complete classification of the stable compact minimal submanifolds of the product of two spheres is obtained. Also, it is proved that the only stable compact minimal surfaces of the product of a 2-sphere and any Riemann surface are the complex ones.
Minimally invasive procedures on the lumbar spine
Skovrlj, Branko; Gilligan, Jeffrey; Cutler, Holt S; Qureshi, Sheeraz A
2015-01-01
Degenerative disease of the lumbar spine is a common and increasingly prevalent condition that is often implicated as the primary reason for chronic low back pain and the leading cause of disability in the western world. Surgical management of lumbar degenerative disease has historically been approached by way of open surgical procedures aimed at decompressing and/or stabilizing the lumbar spine. Advances in technology and surgical instrumentation have led to minimally invasive surgical techniques being developed and increasingly used in the treatment of lumbar degenerative disease. Compared to the traditional open spine surgery, minimally invasive techniques require smaller incisions and decrease approach-related morbidity by avoiding muscle crush injury by self-retaining retractors, preventing the disruption of tendon attachment sites of important muscles at the spinous processes, using known anatomic neurovascular and muscle planes, and minimizing collateral soft-tissue injury by limiting the width of the surgical corridor. The theoretical benefits of minimally invasive surgery over traditional open surgery include reduced blood loss, decreased postoperative pain and narcotics use, shorter hospital length of stay, faster recover and quicker return to work and normal activity. This paper describes the different minimally invasive techniques that are currently available for the treatment of degenerative disease of the lumbar spine. PMID:25610845
Global Analysis of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J
2010-01-01
Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ
Minimal surfaces for architectural constructions
Directory of Open Access Journals (Sweden)
Velimirović Ljubica S.
2008-01-01
Full Text Available Minimal surfaces are the surfaces of the smallest area spanned by a given boundary. An equivalent definition is that they are surfaces of vanishing mean curvature. Minimal surface theory has been developing rapidly in recent times. Many new examples have been constructed and old ones altered. The minimal area property makes these surfaces suitable for application in architecture. The main reason for application is that weight and the amount of material are reduced to a minimum. Famous architects like Frei Otto created this new trend in architecture. In recent years it has become possible to enlarge the family of minimal surfaces by constructing new examples.
On minimal artinian modules and minimal artinian linear groups
Directory of Open Access Journals (Sweden)
Leonid A. Kurdachenko
2001-01-01
minimal artinian linear groups. The authors prove that in such classes of groups as hypercentral groups (so also, nilpotent and abelian groups) and FC-groups, minimal artinian linear groups have precisely the same structure as the corresponding irreducible linear groups.
Directory of Open Access Journals (Sweden)
Mahdi Sohrabi-Haghighat
2014-06-01
Full Text Available In this paper, a new algorithm based on the SQP method is presented to solve the nonlinear inequality constrained optimization problem. Compared with other existing SQP methods, at each iteration the basic feasible descent direction is computed by solving at most two equality constrained quadratic programs. Furthermore, there is no need for any auxiliary problem to obtain the coefficients and update the parameters. Under some suitable conditions, global and superlinear convergence are shown. Keywords: Global convergence, Inequality constrained optimization, Nonlinear programming problem, SQP method, Superlinear convergence rate.
The combination of transformed and constrained Gibbs energies.
Blomberg, Peter B A; Koukkari, Pertti S
2009-08-01
Gibbs free energy is the thermodynamic potential representing the fundamental equation at constant temperature, pressure, and molar amounts. Transformed Gibbs energies are important for biochemical systems because the local concentrations within cell compartments cannot yet be determined accurately. The method of Constrained Gibbs Energies adds kinetic reaction extent limitations to the internal constraints of the system thus extending the range of applicability of equilibrium thermodynamics from predefined constraints to dynamic constraints, e.g., adding time-dependent constraints of irreversible chemical change. In this article, the implementation and use of Transformed Gibbs Energies in the Gibbs energy minimization framework is demonstrated with educational examples. The combined method has the advantage of being able to calculate transient thermodynamic properties during dynamic simulation.
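The method can be illustrated on the smallest possible case: an ideal isomerization A ⇌ B at fixed temperature and pressure, where a cap on the reaction extent plays the role of the time-dependent kinetic constraint described above. The chemical potentials, the unit value of RT, and the cap are illustrative assumptions; this is an educational sketch in the spirit of the article's examples, not its implementation.

```python
import math

# Constrained Gibbs energy minimization sketch for A <-> B, one mole total.
RT = 1.0
MU_A, MU_B = 0.0, -RT * math.log(4.0)  # equilibrium ratio x_B/x_A = 4

def gibbs(xi):
    """Dimensionless G at reaction extent xi in (0, 1): standard + mixing terms."""
    xa, xb = 1.0 - xi, xi
    return (xa * MU_A + xb * MU_B
            + RT * (xa * math.log(xa) + xb * math.log(xb)))

def minimize_gibbs(xi_max=1.0):
    # The kinetic constraint enters as an upper bound on the extent.
    lo, hi = 1e-9, min(xi_max, 1.0 - 1e-9)
    for _ in range(200):                    # ternary search (G is convex)
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if gibbs(m1) < gibbs(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

print(minimize_gibbs())            # unconstrained equilibrium: xi = 0.8
print(minimize_gibbs(xi_max=0.5))  # kinetically constrained state: xi = 0.5
```

Setting dG/dxi = 0 gives xi/(1 - xi) = exp((MU_A - MU_B)/RT) = 4, i.e. xi = 0.8; with the extent capped at 0.5, the minimizer sits on the constraint, which is exactly the "dynamic constraint" picture of the abstract.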
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose a new random pattern generator which copes with strict practical limitations imposed on patterns, with possibly minimal loss in randomness of sampling. The proposed generator is compared with existing sampling pattern generators using the introduced statistical methods.
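One simple way to generate such a constrained pattern is sketched below: choose k sampling points on a time grid subject to a minimum gap between consecutive points (standing in for an ADC's minimum conversion time). The compress-then-expand trick samples uniformly among all patterns satisfying the gap constraint. This is a generic illustration, not the generator proposed in the paper; all parameters are assumptions.

```python
import random

def constrained_pattern(n_grid, k, g_min, rng):
    """k sampling indices in [0, n_grid) with consecutive gaps >= g_min."""
    # Compressed grid: remove the (g_min - 1) mandatory slots per gap,
    # sample freely there, then re-insert the mandatory spacing.
    m = n_grid - (k - 1) * (g_min - 1)
    assert m >= k, "constraint infeasible for these parameters"
    compressed = sorted(rng.sample(range(m), k))
    return [p + i * (g_min - 1) for i, p in enumerate(compressed)]

rng = random.Random(42)
pattern = constrained_pattern(n_grid=100, k=10, g_min=5, rng=rng)
print(pattern)
```

Because the map between compressed and expanded patterns is a bijection, the randomness loss relative to unconstrained sampling is exactly the count of patterns excluded by the gap constraint, which is the kind of trade-off the abstract's statistical methods are meant to quantify.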
Constraining projections of summer Arctic sea ice
Directory of Open Access Journals (Sweden)
F. Massonnet
2012-11-01
Full Text Available We examine the recent (1979–2010 and future (2011–2100 characteristics of the summer Arctic sea ice cover as simulated by 29 Earth system and general circulation models from the Coupled Model Intercomparison Project, phase 5 (CMIP5. As was the case with CMIP3, a large intermodel spread persists in the simulated summer sea ice losses over the 21st century for a given forcing scenario. The 1979–2010 sea ice extent, thickness distribution and volume characteristics of each CMIP5 model are discussed as potential constraints on the September sea ice extent (SSIE projections. Our results suggest first that the future changes in SSIE with respect to the 1979–2010 model SSIE are related in a complicated manner to the initial 1979–2010 sea ice model characteristics, due to the large diversity of the CMIP5 population: at a given time, some models are in an ice-free state while others are still on the track of ice loss. However, in phase plane plots (which do not treat time as an independent variable, we show that the transition towards ice-free conditions actually occurs in a very similar manner for all models. We also find that the year at which SSIE drops below a certain threshold is likely to be constrained by the present-day sea ice properties. In a second step, using several adequate 1979–2010 sea ice metrics, we effectively reduce the uncertainty as to when the Arctic could become nearly ice-free in summertime, the interval [2041, 2060] being our best estimate for a high climate forcing scenario.
Minimal autocatalytic networks.
Steel, Mike; Hordijk, Wim; Smith, Joshua
2013-09-07
Self-sustaining autocatalytic chemical networks represent a necessary, though not sufficient condition for the emergence of early living systems. These networks have been formalised and investigated within the framework of RAF theory, which has led to a number of insights and results concerning the likelihood of such networks forming. In this paper, we extend this analysis by focussing on how small autocatalytic networks are likely to be when they first emerge. First we show that simulations are unlikely to settle this question, by establishing that the problem of finding a smallest RAF within a catalytic reaction system is NP-hard. However, irreducible RAFs (irrRAFs) can be constructed in polynomial time, and we show it is possible to determine in polynomial time whether a bounded size set of these irrRAFs contain the smallest RAFs within a system. Moreover, we derive rigorous bounds on the sizes of small RAFs and use simulations to sample irrRAFs under the binary polymer model. We then apply mathematical arguments to prove a new result suggested by those simulations: at the transition catalysis level at which RAFs first form in this model, small RAFs are unlikely to be present. We also investigate further the relationship between RAFs and another formal approach to self-sustaining and closed chemical networks, namely chemical organisation theory (COT).
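The maxRAF computation that underlies this kind of analysis can be sketched as a simple reduction loop, in the spirit of RAF theory: a reaction survives only if its reactants lie in the closure of the food set and at least one of its catalysts does too; since removing a reaction can shrink the closure, the loop iterates to a fixed point. The tiny reaction system below is an illustrative assumption.

```python
# maxRAF reduction sketch for a small catalytic reaction system.

def closure(food, reactions):
    """All molecules producible from the food set, ignoring catalysis."""
    produced = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions.values():
            if reactants <= produced and not products <= produced:
                produced |= products
                changed = True
    return produced

def max_raf(food, reactions):
    """Largest subset of reactions that is reflexively autocatalytic
    and food-generated (the maxRAF), possibly empty."""
    current = dict(reactions)
    while True:
        cl = closure(food, current)
        keep = {name: v for name, v in current.items()
                if v[0] <= cl and v[2] & cl}   # reactants and a catalyst in closure
        if keep.keys() == current.keys():
            return set(current)
        current = keep

FOOD = {"a", "b"}
REACTIONS = {  # name: (reactants, products, catalysts) -- illustrative
    "r1": ({"a", "b"}, {"c"}, {"c"}),   # autocatalytic
    "r2": ({"c"}, {"d"}, {"a"}),
    "r3": ({"e"}, {"f"}, {"a"}),        # reactant e is unreachable
}
print(max_raf(FOOD, REACTIONS))
```

This loop runs in polynomial time; the NP-hardness result of the abstract concerns finding a *smallest* RAF inside such a system, not the maxRAF itself.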
A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas
Directory of Open Access Journals (Sweden)
Gaofei Yin
2016-08-01
Full Text Available Increasing attention is being paid to leaf area index (LAI retrieval in mountainous areas. Mountainous areas present extreme topographic variability, and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS in support of LAI validation was presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH sampling strategy. A case study in Hailuogou, Sichuan province, China was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI, land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent, while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, and this makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than the LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
Finite-dimensional constrained fuzzy control for a class of nonlinear distributed process systems.
Wu, Huai-Ning; Li, Han-Xiong
2007-10-01
This correspondence studies the problem of finite-dimensional constrained fuzzy control for a class of systems described by nonlinear parabolic partial differential equations (PDEs). Initially, Galerkin's method is applied to the PDE system to derive a nonlinear ordinary differential equation (ODE) system that accurately describes the dynamics of the dominant (slow) modes of the PDE system. Subsequently, a systematic modeling procedure is given to construct exactly a Takagi-Sugeno (T-S) fuzzy model for the finite-dimensional ODE system under state constraints. Then, based on the T-S fuzzy model, a sufficient condition for the existence of a stabilizing fuzzy controller is derived, which guarantees that the state constraints are satisfied and provides an upper bound on the quadratic performance function for the finite-dimensional slow system. The resulting fuzzy controllers can also guarantee the exponential stability of the closed-loop PDE system. Moreover, a local optimization algorithm based on the linear matrix inequalities is proposed to compute the feedback gain matrices of a suboptimal fuzzy controller in the sense of minimizing the quadratic performance bound. Finally, the proposed design method is applied to the control of the temperature profile of a catalytic rod.
Directory of Open Access Journals (Sweden)
Pérez Laura V.
2016-03-01
Full Text Available In the optimization of power management of hybrid electric vehicles, the equivalent consumption factor is often used. This parameter represents a way of penalizing the use of power from the batteries, taking into account the fuel consumption that such use eventually hides. If the problem of determining the power split between the energy sources of the vehicle that minimizes fuel consumption is stated as a nonlinear constrained optimal control problem, and is solved using the Pontryagin Maximum Principle (PMP), the equivalent consumption factor may be computed from the adjoint state. Following this approach we compute the trajectory of the adjoint state in the case where state constraints are taken into account. The optimality conditions from PMP form a Boundary Value Problem (BVP), which is solved numerically using a code named PASVA4. Numerical examples are compared with dynamic programming solutions of the same problem. It is found that the adjoint state is continuous, and its trajectory is described. The approach may be generalized to similar optimal control problems.
Bergshoeff, Eric; Hohm, Olaf; Merbis, Wout; Routh, Alasdair J.; Townsend, Paul K.
2014-01-01
We present an alternative to topologically massive gravity (TMG) with the same 'minimal' bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new 'minimal massive gravity'
Uniqueness of PL Minimal Surfaces
Institute of Scientific and Technical Information of China (English)
Yi NI
2007-01-01
Using a standard fact in hyperbolic geometry, we give a simple proof of the uniqueness of PL minimal surfaces, thus filling in a gap in the original proof of Jaco and Rubinstein. Moreover, in order to clarify some ambiguity, we sharpen the definition of PL minimal surfaces, and prove a technical lemma on the Plateau problem in the hyperbolic space.
Guidelines for mixed waste minimization
Energy Technology Data Exchange (ETDEWEB)
Owens, C.
1992-02-01
Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment for commercial mixed waste are limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of these management limitations. This document provides a guide to mixed waste minimization.
Influenza SIRS with Minimal Pneumonitis.
Erramilli, Shruti; Mannam, Praveen; Manthous, Constantine A
2016-01-01
Although systemic inflammatory response syndrome (SIRS) is a known complication of severe influenza pneumonia, it has been reported very rarely in patients with minimal parenchymal lung disease. Here we report a case of severe SIRS, anasarca, and marked vascular phenomena with minimal or no pneumonitis. This case highlights that viruses, including influenza, may cause vascular dysregulation leading to SIRS, even without substantial visceral organ involvement.
Directory of Open Access Journals (Sweden)
Knol Dirk L
2006-08-01
Full Text Available Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to arrive at a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have focused merely on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
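The two distribution-based benchmarks mentioned above are simple to compute: half a standard deviation, and the standard error of measurement (from which a 95% minimal detectable change follows). The baseline SD and test-retest reliability values below are illustrative assumptions.

```python
import math

def distribution_benchmarks(sd, reliability):
    """Distribution-based change benchmarks for a health status scale.

    Returns (0.5 * SD, SEM, MDC95) where SEM = SD * sqrt(1 - reliability)
    and MDC95 = 1.96 * sqrt(2) * SEM is the smallest individual change
    detectable beyond measurement error with 95% confidence.
    """
    half_sd = 0.5 * sd
    sem = sd * math.sqrt(1.0 - reliability)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return half_sd, sem, mdc95

half_sd, sem, mdc95 = distribution_benchmarks(sd=10.0, reliability=0.91)
print(half_sd, sem, mdc95)  # 5.0, ~3.0, ~8.32
```

As the abstract stresses, these are detectability benchmarks, not importance benchmarks: an anchor-based MIC is still needed to decide whether a detectable change matters to patients.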
Outcome and complications of constrained acetabular components.
Yang, Cao; Goodman, Stuart B
2009-02-01
Constrained acetabular liners were developed for the surgical treatment of recurrent instability by holding the femoral head captive within the socket. This article summarizes the data describing constrained component designs, indications, outcome, and complications. Different designs accept head sizes of varying diameter and have differing amounts of rim elevation and offset, allowing slight variations in the range of movement allowed. Complications of constrained acetabular components can be divided into three categories. The first category is directly related to the constraining mechanism such as dislocation, head dissociation from the stem, liner dissociation from the acetabular device, and impingement with or without locking ring breakage. The second category is related to increased constraint such as aseptic component loosening and osteolysis and periprosthetic fracture. The third category includes those cases not associated with increased constraint such as infection, deep vein thrombosis, and periprosthetic fracture. This device is effective at achieving hip stability, but the complications related to the constraining mechanism and increased constraint are of concern. These devices should be used as a salvage measure for the treatment of severe instability.
Minimal Webs in Riemannian Manifolds
DEFF Research Database (Denmark)
Markvorsen, Steen
2008-01-01
We consider minimal isometric immersions of geometrized graphs $(G, g)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\em{minimal webs}}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular, we use appropriate versions of the divergence theorems together with the comparison techniques for distance functions in Riemannian geometry and obtain bounds for the first Dirichlet eigenvalues, the exit times and the capacities, as well as isoperimetric-type inequalities for so-called extrinsic $R$-webs of minimal webs in ambient Riemannian manifolds.
Waste minimization handbook, Volume 1
Energy Technology Data Exchange (ETDEWEB)
Boing, L.E.; Coffey, M.J.
1995-12-01
This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.
Theories of minimalism in architecture: When prologue becomes palimpsest
Directory of Open Access Journals (Sweden)
Stevanović Vladimir
2014-01-01
Full Text Available This paper examines the modus and conditions of constituting and establishing the architectural discourse on minimalism. Key topics in this discourse are the historical line of development and the analysis of theoretical influences, which comprise connections of recent minimalism with theorizations of various minimal, architectural and artistic, forms and concepts from the past. The paper particularly discusses those theoretical relations which, in a unitary way, link minimalism in architecture with its artistic nominal counterpart, minimal art. These are relations founded on interpretative models of self-referentiality, phenomenological experience and contextualism, which are, superficially observed, common to both the artistic and the architectural minimalist discourses. It seems that in this constellation certain relations on the historical line of minimalism in architecture are questionable, while some others are overlooked. Specifically, postmodern fundamentalism is the architectural direction: 1) in which these three interpretations also existed; 2) from which architectural theorists retroactively appropriated many architects, proclaiming them minimalists; 3) which established the same relations with modern and postmodern theoretical and socio-historical contexts that minimalism would later establish. In spite of this, the theoretical field of postmodern fundamentalism is surprisingly neglected in the discourse of minimalism in architecture. Instead of being understood as a kind of prologue to minimalism in architecture, postmodern fundamentalism becomes an erased palimpsest over which a different history of minimalism is rewritten, a history in which minimal art occupies the central place.
A Constrained CA Model for Planning Simulation Incorporating Institutional Constraints
Institute of Scientific and Technical Information of China (English)
2010-01-01
In recent years, it has become prevalent to simulate urban growth by means of cellular automata (CA) modeling, which is based on self-organizing theories and differs from system dynamics modeling. Since the urban system is decidedly complex, CA models applied in urban growth simulation should take into consideration not only the neighborhood influence, but also other factors influencing urban development. We bring forward the term complex constrained CA (CC-CA) model, which integrates the constrained conditions of neighborhood, macro socio-economy, space and institution. In particular, constrained construction zoning, as one institutional constraint, is considered in the CC-CA modeling. In this paper, the conceptual CC-CA model is introduced together with its transition rules. Based on the CC-CA model for Beijing, we discuss the complex constraints on the city's urban development, and we show how to set institutional constraints in a planning scenario to control the urban growth pattern of Beijing.
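The combination of a neighborhood rule with an institutional constraint can be sketched as a toy grid automaton (a hypothetical illustration, not the paper's CC-CA model): a cell urbanizes when enough of its neighbors are urban and it lies outside a restricted construction zone.

```python
import numpy as np

def step(urban, restricted, threshold=3):
    """One constrained-CA step: a cell urbanizes if at least `threshold`
    of its 8 neighbors are urban AND it is not in a restricted zone."""
    # Count urban neighbors using periodic shifts of the grid
    n = sum(np.roll(np.roll(urban, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    grow = (n >= threshold) & (restricted == 0)
    return urban | grow

rng = np.random.default_rng(0)
urban = (rng.random((20, 20)) < 0.2).astype(int)
restricted = np.zeros((20, 20), dtype=int)
restricted[5:10, 5:10] = 1          # institutional constraint: no-build zone
urban &= 1 - restricted             # start with no urban cells in the zone
for _ in range(5):
    urban = step(urban, restricted)
```

However the neighborhood rule is tuned, the zoning mask guarantees that the restricted block never urbanizes, which is the essence of an institutional constraint in this kind of model.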
Performance enhancement for GPS positioning using constrained Kalman filtering
Guo, Fei; Zhang, Xiaohong; Wang, Fuhong
2015-08-01
Over the past decades Kalman filtering (KF) algorithms have been extensively investigated and applied in the area of kinematic positioning. In the application of KF to kinematic precise point positioning (PPP), it is often the case that known functional or theoretical relations exist among the unknown state parameters, which can and should be exploited to enhance the performance of kinematic PPP, especially in urban and forest environments. The central task of this paper is to effectively blend the commonly used GNSS data and internal/external additional constraint information to generate an optimal PPP solution. This paper first investigates the basic algorithm of constrained Kalman filtering. Then two types of PPP model, with speed constraints and trajectory constraints respectively, are proposed. Further validation tests based on a variety of situations show that the positioning performance (positioning accuracy, reliability and continuity) of the constrained Kalman filter is significantly superior to that of the conventional Kalman filter, particularly under extremely poor observation conditions.
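The idea of blending filter estimates with known relations among state parameters can be illustrated with the classic projection approach to constrained Kalman filtering (a generic sketch, not the paper's PPP-specific speed or trajectory models): after the unconstrained update, the state is projected onto the constraint surface D x = d using the covariance as the weighting.

```python
import numpy as np

def project_state(x, P, D, d):
    """Project an unconstrained Kalman estimate (x, P) onto the linear
    equality constraint D x = d via the covariance-weighted projection
    x_c = x - P D^T (D P D^T)^{-1} (D x - d)."""
    K = P @ D.T @ np.linalg.inv(D @ P @ D.T)
    x_c = x - K @ (D @ x - d)
    P_c = P - K @ D @ P          # covariance of the projected estimate
    return x_c, P_c

# Example: a 2-D state estimate constrained to the line x0 + x1 = 1
x = np.array([0.8, 0.5])
P = np.diag([0.04, 0.01])
D = np.array([[1.0, 1.0]])
d = np.array([1.0])
x_c, P_c = project_state(x, P, D, d)
```

Because the projection is weighted by P, the better-determined component (here x1, with the smaller variance) is moved less than the poorly determined one, which is exactly why the constrained estimate outperforms simple truncation.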
Minimization and error estimates for a class of the nonlinear Schrodinger eigenvalue problems
Institute of Scientific and Technical Information of China (English)
Murong JIANG; Jiachang SUN
2000-01-01
It is shown that the nonlinear eigenvalue problem can be transformed into a constrained functional problem. The corresponding minimizing function is a weak solution of this nonlinear problem. In this paper, one type of energy functional for a class of nonlinear Schrödinger eigenvalue problems is proposed, the existence of the minimizing solution is proved, and the error estimate is given.
Location Based Throughput Maximization Routing in Energy Constrained Mobile Ad-hoc Network
Directory of Open Access Journals (Sweden)
V. Sumathy
2006-01-01
Full Text Available In wireless ad-hoc networks, power consumption becomes an important issue due to limited battery power. One of the reasons for energy expenditure in such networks is an irregularly distributed node pattern, which imposes a large interference range in certain areas. To maximize the lifetime of an ad-hoc mobile network, the power consumption rate of each node must be evenly distributed and the overall transmission range of each node must be minimized. Our protocol, location-based throughput maximization routing in energy constrained ad-hoc networks, finds routing paths which maximize the lifetime of individual nodes and minimize the total transmission energy consumption. The lifetime of the entire network, the network throughput, and the reliability of the paths are all increased. Location-based energy constrained routing finds the distance between the nodes; based on this distance, the required transmission power is calculated and the total transmission energy is dynamically reduced.
A Constrained-Gradient Method to Control Divergence Errors in Numerical MHD
Hopkins, Philip F
2015-01-01
In numerical magnetohydrodynamics (MHD), a major challenge is maintaining zero magnetic-field divergence (div-B). Constrained transport (CT) schemes can achieve this at high accuracy, but have generally been restricted to very specific methods. For more general (meshless, moving-mesh, or ALE) methods, 'divergence-cleaning' schemes reduce the div-B errors; however, the errors can still be significant, especially at discontinuities, and can lead to systematic deviations from correct solutions which converge away very slowly. Here we propose a new constrained-gradient (CG) scheme which augments these with a hybrid projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. We emphasize that, unlike 'locally divergence-free' methods, this actually minimizes the numerically unstable div-B terms, without affecting the convergence order of the method. We implement this in the mesh-free co...
Endoscopic Cystogastrostomy: Minimally Invasive Approach for Pancreatic Pseudocyst
Directory of Open Access Journals (Sweden)
Gull-Zareen Khan Sial
2014-12-01
Full Text Available Pancreatic pseudocysts in children are not uncommon. Non-resolving pseudocysts often require surgical intervention. Endoscopic cystogastrostomy is a minimally invasive procedure which is recommended for this condition. We report a large pancreatic pseudocyst in a 4-year-old child, which developed following therapy with PEG-Asparaginase for acute lymphoblastic leukemia. It was managed with a minimally invasive procedure.
Endoscopic cystogastrostomy: minimally invasive approach for pancreatic pseudocyst.
Sial, Gull-Zareen Khan; Qazi, Abid Quddus; Yusuf, Mohammed Aasim
2015-01-01
Pancreatic pseudocysts in children are not uncommon. Non-resolving pseudocysts often require surgical intervention. Endoscopic cystogastrostomy is a minimally invasive procedure which is recommended for this condition. We report a large pancreatic pseudocyst in a 4-year-old child, which developed following therapy with PEG-Asparaginase for acute lymphoblastic leukemia. It was managed with a minimally invasive procedure.
Minimal solution of general dual fuzzy linear systems
Energy Technology Data Exchange (ETDEWEB)
Abbasbandy, S. [Department of Mathematics, Science and Research Branch, Islamic Azad University, Tehran 14778 (Iran, Islamic Republic of); Department of Mathematics, Faculty of Science, Imam Khomeini International University, Qazvin 34194-288 (Iran, Islamic Republic of)], E-mail: abbasbandy@yahoo.com; Otadi, M.; Mosleh, M. [Department of Mathematics, Science and Research Branch, Islamic Azad University, Tehran 14778 (Iran, Islamic Republic of); Department of Mathematics, Islamic Azad University, Firuozkooh Branch, Firuozkooh (Iran, Islamic Republic of)
2008-08-15
Fuzzy linear systems of equations play a major role in several applications in various areas such as engineering, physics and economics. In this paper, we investigate the existence of a minimal solution of general dual fuzzy linear equation systems. Two necessary and sufficient conditions for the existence of a minimal solution are given. Also, some examples in engineering and economics are considered.
Towards weakly constrained double field theory
Directory of Open Access Journals (Sweden)
Kanghoon Lee
2016-08-01
Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and a gauge invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.
Continuation of Sets of Constrained Orbit Segments
DEFF Research Database (Denmark)
Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki
Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.
Towards weakly constrained double field theory
Lee, Kanghoon
2016-08-01
We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.
Continuation of Sets of Constrained Orbit Segments
DEFF Research Database (Denmark)
Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki;
Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.
Towards Weakly Constrained Double Field Theory
Lee, Kanghoon
2015-01-01
We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.
Bayesian evaluation of inequality constrained hypotheses.
Gu, Xin; Mulder, Joris; Deković, Maja; Hoijtink, Herbert
2014-12-01
Bayesian evaluation of inequality constrained hypotheses enables researchers to investigate their expectations with respect to the structure among model parameters. This article proposes an approximate Bayes procedure that can be used for the selection of the best of a set of inequality constrained hypotheses based on the Bayes factor in a very general class of statistical models. The software package BIG is provided so that psychologists can use the proposed approach for the analysis of their own data. To illustrate the approximate Bayes procedure and the use of BIG, we evaluate inequality constrained hypotheses in a path model and a logistic regression model. Two simulation studies on the performance of our approximate Bayes procedure show that it results in accurate Bayes factors.
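A common way to approximate a Bayes factor for an inequality constrained hypothesis is the encompassing-prior ratio: the posterior probability that the constraint holds divided by its prior probability, both estimated by Monte Carlo. The sketch below (hypothetical two-group normal data; not necessarily the exact procedure implemented in BIG) evaluates H1: mu1 > mu2 against the unconstrained model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data summary: two group means, H1: mu1 > mu2
n, ybar1, ybar2, sigma = 50, 0.6, 0.1, 1.0

# Vague normal prior N(0, tau^2) on each mean; conjugate normal posterior
tau = 10.0
post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
post1 = rng.normal(post_var * n * ybar1 / sigma**2, np.sqrt(post_var), 100_000)
post2 = rng.normal(post_var * n * ybar2 / sigma**2, np.sqrt(post_var), 100_000)
prior1 = rng.normal(0.0, tau, 100_000)
prior2 = rng.normal(0.0, tau, 100_000)

# Encompassing-prior Bayes factor: P(mu1 > mu2 | data) / P(mu1 > mu2)
bf = np.mean(post1 > post2) / np.mean(prior1 > prior2)
```

Since the symmetric prior puts probability 1/2 on the ordering, the Bayes factor for a single inequality is bounded above by 2; data strongly supporting mu1 > mu2 push it toward that bound.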
A TRUST-REGION ALGORITHM FOR NONLINEAR INEQUALITY CONSTRAINED OPTIMIZATION
Institute of Scientific and Technical Information of China (English)
Xiaojiao Tong; Shuzi Zhou
2003-01-01
This paper presents a new trust-region algorithm for n-dimensional nonlinear optimization subject to m nonlinear inequality constraints. Equivalent KKT conditions are derived, which form the basis for constructing the new algorithm. Global convergence of the algorithm to a first-order KKT point is established under mild conditions on the trial steps, and a local quadratic convergence theorem is proved for nondegenerate minimizer points. Numerical experiments are presented to show the effectiveness of our approach.
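For comparison, trust-region methods for inequality constrained problems are available off the shelf; using SciPy's `trust-constr` solver (not the algorithm of the paper above), minimizing the Rosenbrock function inside the unit disk looks like:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def rosen(x):
    """Rosenbrock test function; unconstrained minimum at (1, 1)."""
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Inequality constraint: stay inside the unit disk, x0^2 + x1^2 <= 1
disk = NonlinearConstraint(lambda x: x[0]**2 + x[1]**2, -np.inf, 1.0)

res = minimize(rosen, x0=np.array([0.0, 0.0]), method="trust-constr",
               constraints=[disk])
```

Because the unconstrained minimizer (1, 1) lies outside the disk, the constraint is active at the solution and the first-order KKT conditions, rather than a vanishing gradient, characterize the returned point.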
Superspace geometry and the minimal, non minimal, and new minimal supergravity multiplets
Energy Technology Data Exchange (ETDEWEB)
Girardi, G.; Grimm, R.; Mueller, M.; Wess, J.
1984-11-01
We analyse superspace constraints in a systematic way and define a set of natural constraints. We give a complete solution of the Bianchi identities subject to these constraints and obtain a reducible, but not fully reducible, multiplet. By additional constraints it can be reduced to either the minimal, the non-minimal, or the new minimal multiplet. We discuss the superspace actions for the various multiplets.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
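For a single observation vector the building block is ordinary non-negative least squares; a naive multi-vector version simply loops over columns. The sketch below uses SciPy's `nnls` and omits the combinatorial speedup of the paper, which comes from sharing factorizations across columns that have identical active/passive sets.

```python
import numpy as np
from scipy.optimize import nnls

def multi_nnls(A, B):
    """Solve min ||A X - B||_F subject to X >= 0, column by column.
    This is the straightforward baseline that the fast combinatorial
    algorithm accelerates for large numbers of observation vectors."""
    X = np.zeros((A.shape[1], B.shape[1]))
    for j in range(B.shape[1]):
        X[:, j], _ = nnls(A, B[:, j])
    return X

rng = np.random.default_rng(3)
A = rng.random((30, 5))
X_true = rng.random((5, 8))
B = A @ X_true            # consistent data with a non-negative solution
X = multi_nnls(A, B)
```

On consistent data the loop recovers the non-negative solution exactly; the combinatorial reorganization matters when B has thousands of columns, since many columns end up with the same passive set and can share one pseudoinverse.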
Constrained instanton and black hole creation
Institute of Scientific and Technical Information of China (English)
WU Zhongchao; XU Donghui
2004-01-01
A gravitational instanton is considered as the seed for the creation of a universe. However, there exist too few instantons. To include many interesting phenomena in the framework of quantum cosmology, the concept of constrained gravitational instanton is inevitable. In this paper we show how a primordial black hole is created from a constrained instanton. The quantum creation of a generic black hole in the closed or open background is completely resolved. The relation of the creation scenario with gravitational thermodynamics and topology is discussed.
Locally minimal topological groups 1
Chasco, María Jesús; Dikranjan, Dikran N.; Außenhofer, Lydia; Domínguez, Xabier
2015-01-01
The aim of this paper is to go deeper into the study of local minimality and its connection to some naturally related properties. A Hausdorff topological group $(G,\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\tau$. Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all mini...
Minimal flows and their extensions
Auslander, J
1988-01-01
This monograph presents developments in the abstract theory of topological dynamics, concentrating on the internal structure of minimal flows (actions of groups on compact Hausdorff spaces for which every orbit is dense) and their homomorphisms (continuous equivariant maps). Various classes of minimal flows (equicontinuous, distal, point distal) are intensively studied, and a general structure theorem is obtained. Another theme is the ``universal'' approach - entire classes of minimal flows are studied, rather than flows in isolation. This leads to the consideration of disjointness of flows, w
Robust stability in constrained predictive control through the Youla parameterisations
DEFF Research Database (Denmark)
Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad
2011-01-01
In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in face of norm-bounded uncertainties. Under special conditions guarantees are also given ... arguments on the loop consisting of the primary and dual Youla parameter. This is included in the MPC optimisation as a constraint on the induced gain of the optimisation variable. We illustrate the method with a numerical simulation example.
State Feedback with Memory for Constrained Switched Positive Linear Systems
Directory of Open Access Journals (Sweden)
Jinjin Liu
2015-04-01
Full Text Available In this paper, the stabilization problem for switched linear systems with time-varying delay under constrained state and control is investigated. The synthesis of bounded state-feedback controllers with memory ensures that the closed-loop system is positive and stable. Firstly, synthesis with sign-restricted (nonnegative and negative) controls is considered for general switched systems; then, the stabilization issue under bounded controls, including asymmetrically bounded controls and state constraints, is addressed. In addition, the results are extended to systems with interval and polytopic uncertainties. All the proposed conditions are solvable in terms of linear programming. Numerical examples illustrate the applicability of the results.
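The linear-programming flavor of such conditions can be illustrated with the textbook co-positive Lyapunov test for a single (unswitched) positive linear system, which is not the paper's synthesis procedure but shows why these problems reduce to LPs: a Metzler matrix A is Hurwitz iff there exists v > 0 with A^T v < 0, a plain feasibility LP.

```python
import numpy as np
from scipy.optimize import linprog

# Metzler system matrix (nonnegative off-diagonal entries)
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])

eps = 1e-6
# Feasibility LP: find v with A^T v <= -eps and v >= eps (i.e. v > 0)
res = linprog(c=np.zeros(2), A_ub=A.T, b_ub=-eps * np.ones(2),
              bounds=[(eps, None), (eps, None)])
stable = res.status == 0   # feasible => linear co-positive Lyapunov function V(x) = v^T x
```

The stabilization conditions in the paper generalize this pattern: controller gains and the Lyapunov vector enter the LP jointly, and feasibility certifies positivity and stability of the closed loop.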
Detection prospects for conformally constrained vector-portal dark matter
Sage, Frederick S; Dick, Rainer; Steele, T G; Mann, R B
2016-01-01
We work with a UV conformal U(1)' extension of the Standard Model, motivated by the hierarchy problem and recent collider anomalies. This model admits fermionic vector portal WIMP dark matter charged under the U(1)' gauge group. The asymptotically safe boundary conditions can be used to fix the coupling parameters, which allows the observed thermal relic abundance to constrain the mass of the dark matter particle. This highly restricts the parameter space, allowing strong predictions to be made. The parameter space of several UV conformal U(1)' scenarios will be explored, and both bounds and possible signals from direct and indirect detection observation methods will be discussed.
Minimal models of multidimensional computations.
Directory of Open Access Journals (Sweden)
Jeffrey D Fitzgerald
2011-03-01
Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
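The second-order logistic form described above can be written down directly (a generic sketch on synthetic data, not the retinal/thalamic fits of the paper): the response probability is a logistic function of first- and second-order stimulus terms, fit here by plain gradient descent on the Bernoulli negative log-likelihood.

```python
import numpy as np

def features(S):
    """First- and second-order features of 2-D stimuli:
    [s0, s1, s0^2, s1^2, s0*s1, 1]."""
    return np.column_stack([S[:, 0], S[:, 1], S[:, 0]**2, S[:, 1]**2,
                            S[:, 0] * S[:, 1], np.ones(len(S))])

rng = np.random.default_rng(7)
S = rng.standard_normal((2000, 2))
w_true = np.array([1.0, -0.5, 0.8, 0.0, -0.6, -0.3])   # hypothetical weights
p = 1.0 / (1.0 + np.exp(-features(S) @ w_true))
y = rng.random(2000) < p              # binary "spike" outputs

X, w = features(S), np.zeros(6)

def nll(w):
    """Mean Bernoulli negative log-likelihood of the logistic model."""
    z = X @ w
    return np.mean(np.log1p(np.exp(z)) - y * z)

loss0 = nll(w)
for _ in range(500):
    # Gradient of the mean log-likelihood is X^T (y - p_hat) / n
    w += 0.1 * X.T @ (y - 1.0 / (1.0 + np.exp(-X @ w))) / len(y)
loss1 = nll(w)
```

The quadratic features are what make the model "second order": setting them to zero recovers the ordinary logistic (first-order) model, mirroring the paper's finding that second-order terms are necessary and sufficient for their data.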
Application of constrained optimization to active control of aeroelastic response
Newsom, J. R.; Mukhopadhyay, V.
1981-01-01
Active control of aeroelastic response is a complex problem in which the designer usually tries to satisfy many criteria which are often conflicting. To further complicate the design problem, the state space equations describing this type of control problem are usually of high order, involving a large number of states to represent the flexible structure and unsteady aerodynamics. Control laws based on the standard Linear-Quadratic-Gaussian (LQG) method are of the same high order as the aeroelastic plant. To overcome this disadvantage of the LQG method, an approach developed for designing low-order optimal control laws, which uses a nonlinear programming algorithm to search for the values of the control law variables that minimize a composite performance index, was extended to the constrained optimization problem. The method involves searching for the values of the control law variables that minimize a basic performance index while satisfying several inequality constraints that describe the design criteria. The method is applied to gust load alleviation of a drone aircraft.
SAR image regularization with fast approximate discrete minimization.
Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc
2009-07-01
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the alpha-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.
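The edge-preserving effect of total-variation regularization can be sketched in one dimension with simple gradient descent on a smoothed TV objective (the paper's graph-cut machinery handles the much harder nonconvex, large-image case; this only illustrates the regularizer itself):

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, eps=1e-2, lr=0.05, iters=3000):
    """Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
    by gradient descent; eps smooths the absolute value so the
    objective is differentiable."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d|
        g = x - y                      # data-fidelity gradient
        g[:-1] -= lam * w              # each difference d_i couples x_i ...
        g[1:] += lam * w               # ... and x_{i+1}
        x -= lr * g
    return x

rng = np.random.default_rng(5)
clean = np.repeat([0.0, 1.0, 0.3], 50)          # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(150)
denoised = tv_denoise_1d(noisy)
```

On a piecewise-constant signal the TV penalty flattens the noise within each segment while leaving the jumps largely intact, which is exactly the behavior that makes it attractive for speckled SAR amplitudes.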
Minimizing driver's irritation at a roadblock
Vleugels, C J J; Anthonissen, M J H; Seidman, T I
2013-01-01
Urban traffic is a logistics issue with many societal implications, especially when, due to too high a density of cars, the network of streets of a city becomes blocked and, consequently, pedestrians, bicycles, and cars start sharing the same traffic conditions, potentially leading to high irritation (of people) and therefore to chaos. In this paper we focus our attention on a simple scenario: we model the driver's irritation induced by the presence of a roadblock. As a natural generalization, we extend the model for the two one-way crossroads traffic presented by M.E. Fouladvand and M. Nematollahi to that of a roadblock. Our discrete model defines and minimizes the total waiting time. The novelty lies in introducing the (total) driver's irritation and its minimization. Finally, we apply our model to a real-world situation: rush hour traffic in Hillegom, The Netherlands. We observe that minimizing the total waiting time and minimizing the total driver's irritation lead to different traffic light stra...
Minimally inconsistent reasoning in Semantic Web
Zhang, Xiaowang
2017-01-01
Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics, with the mere difference in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning. PMID:28750030
A trust-region and affine scaling algorithm for linearly constrained optimization
Institute of Scientific and Technical Information of China (English)
陈中文; 章祥荪
2002-01-01
A new trust-region and affine scaling algorithm for linearly constrained optimization is presented in this paper. Without any nondegeneracy assumption, we prove that any limit point of the sequence generated by the new algorithm satisfies the first-order necessary condition and that there exists at least one limit point of the sequence which satisfies the second-order necessary condition. Some preliminary numerical experiments are reported.
Sludge minimization technologies - an overview
Energy Technology Data Exchange (ETDEWEB)
Oedegaard, Hallvard
2003-07-01
The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. The cost of the sludge treatment amounts to more that the cost of the liquid in many cases. Therefore the focus on and interest in sludge minimization is steadily increasing. In the paper an overview is given for sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on stirred ball-mill, high-pressure homogenizer, ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)
Moldenhauer, Jacob
2009-01-01
We compare higher order gravity models to observational constraints from magnitude-redshift supernova data, distance to the last scattering surface of the CMB, and Baryon Acoustic Oscillations. We follow a recently proposed systematic approach to higher order gravity models based on minimal sets of curvature invariants, and select models that pass some physical acceptability conditions (free of ghost instabilities, real and positive propagation speeds, and free of separatrices). Models that satisfy these physical and observational constraints are found in this analysis and do provide fits to the data that are very close to those of the LCDM concordance model. However, we find that the limitation of the models considered here comes from the presence of superluminal mode propagations for the constrained parameter space of the models.
A new approach to nonlinear constrained Tikhonov regularization
Ito, Kazufumi
2011-09-16
We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
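A minimal sketch of the setting: Tikhonov regularization of a toy nonlinear forward operator under a box constraint, with the regularization parameter chosen by the discrepancy principle mentioned above. The operator `F`, the noise level, and the constraint set are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: constrained Tikhonov regularization for a toy nonlinear
# forward operator F, with alpha selected by the discrepancy principle.
rng = np.random.default_rng(0)

def F(x):
    # invented nonlinear forward operator
    return np.array([x[0] ** 2 + x[1], x[0] + np.exp(x[1])])

x_true = np.array([1.0, 0.5])
delta = 1e-2                                # assumed noise level
y = F(x_true) + delta * rng.standard_normal(2)

def tikhonov(alpha):
    # minimize ||F(x) - y||^2 + alpha * ||x||^2 over the constraint x >= 0
    obj = lambda x: np.sum((F(x) - y) ** 2) + alpha * np.sum(x ** 2)
    return minimize(obj, x0=np.array([0.5, 0.5]),
                    bounds=[(0, None), (0, None)]).x

# Discrepancy principle: shrink alpha until the residual matches the
# noise level (here, within a factor of 2).
alpha = 1.0
while np.linalg.norm(F(tikhonov(alpha)) - y) > 2 * delta and alpha > 1e-10:
    alpha /= 2
x_rec = tikhonov(alpha)
```

The loop realizes an a posteriori choice rule: the regularization strength is dictated by the data misfit rather than fixed in advance.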
A note on constrained M-estimation and its recursive analog in multivariate linear regression models
Institute of Scientific and Technical Information of China (English)
RAO, Calyampudi R.
2009-01-01
In this paper, the constrained M-estimation of the regression coefficients and scatter parameters in a general multivariate linear regression model is considered. Since the constrained M-estimation is not easy to compute, an updating recursion procedure is proposed to simplify the computation of the estimators when a new observation is obtained. We show that, under mild conditions, the recursive estimates are strongly consistent. In addition, the asymptotic normality of the recursive constrained M-estimators of the regression coefficients is established. A Monte Carlo simulation study of the recursive estimates is also provided, and the robustness and asymptotic behavior of constrained M-estimators are briefly discussed.
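The idea of updating a robust regression estimate recursively as observations arrive can be sketched as follows. The Huber weighting and the weighted recursive least-squares form are illustrative stand-ins, not the authors' exact recursion:

```python
import numpy as np

# Hedged sketch: an online robust regression update in the spirit of
# recursive M-estimation -- each new observation updates the coefficient
# estimate without refitting from scratch.
def huber_weight(r, c=1.345):
    # downweight observations with large residuals
    a = abs(r)
    return 1.0 if a <= c else c / a

rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0])
beta = np.zeros(2)
P = 1e3 * np.eye(2)                      # inverse weighted information matrix

for _ in range(500):
    x = rng.standard_normal(2)
    y = x @ beta_true + 0.1 * rng.standard_normal()
    r = y - x @ beta                     # prediction error
    w = huber_weight(r)
    # weighted recursive least-squares step (Sherman-Morrison update)
    Px = P @ x
    k = w * Px / (1.0 + w * (x @ Px))
    beta = beta + k * r
    P = P - np.outer(k, Px)
```

Each step costs O(p^2) for p coefficients, which is the computational saving the recursion is designed to deliver.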
Minimally invasive surgery. Future developments.
1994-01-01
The rapid development of minimally invasive surgery means that there will be fundamental changes in interventional treatment. Technological advances will allow new minimally invasive procedures to be developed. Application of robotics will allow some procedures to be done automatically, and coupling of slave robotic instruments with virtual reality images will allow surgeons to perform operations by remote control. Miniature motors and instruments designed by microengineering could be introdu...
Influenza SIRS with minimal pneumonitis
Directory of Open Access Journals (Sweden)
Shruti Erramilli
2016-08-01
Full Text Available While systemic inflammatory response syndrome (SIRS) is a known complication of severe influenza pneumonia, it has been reported very rarely in patients with minimal parenchymal lung disease. Here we report a case of severe SIRS, anasarca and marked vascular phenomena with minimal or no pneumonitis. This case highlights that viruses, including influenza, may cause vascular dysregulation leading to SIRS, even without substantial visceral organ involvement.
PRICING AND HEDGING OPTION UNDER PORTFOLIO CONSTRAINED
Institute of Scientific and Technical Information of China (English)
魏刚; 陈世平
2001-01-01
The authors employ convex analysis and a stochastic control approach to study the question of hedging contingent claims with portfolios constrained to take values in a given closed, convex subset of R^K, and extend the results of Gianmario Tessitore and Jerzy Zabczyk [6] on pricing options in the multiasset and multinomial model.
Neuroevolutionary Constrained Optimization for Content Creation
DEFF Research Database (Denmark)
Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian
2011-01-01
and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution...
Constrained tri-sphere kinematic positioning system
Viola, Robert J
2010-12-14
A scalable and adaptable, six-degree-of-freedom, kinematic positioning system is described. The system can position objects supported on top of, or suspended from, jacks comprising constrained joints. The system is compatible with extreme low temperature or high vacuum environments. When constant adjustment is not required a removable motor unit is available.
Bound constrained quadratic programming via piecewise
DEFF Research Database (Denmark)
Madsen, Kaj; Nielsen, Hans Bruun; Pinar, M. C.
1999-01-01
of a symmetric, positive definite matrix, and is solved by Newton iteration with line search. The paper describes the algorithm and its implementation, including estimation of lambda_1, how to get a good starting point for the iteration, and up- and downdating of the Cholesky factorization. Results of extensive testing and comparison with other methods for constrained QP are given.
Constrained target controllability of complex networks
Guo, Wei-Feng; Zhang, Shao-Wu; Wei, Ze-Gang; Zeng, Tao; Liu, Fei; Zhang, Jingsong; Wu, Fang-Xiang; Chen, Luonan
2017-06-01
It is of great theoretical interest and practical significance to study how to control a system by applying perturbations to only a few driver nodes. Recently, a hot topic in modern network research has been how to determine the driver nodes that allow control of an entire network. In practice, however, to control a complex network, especially a biological network, one may know not only the set of nodes that need to be controlled (i.e. target nodes), but also the restricted set of nodes to which control signals can be applied (i.e. constrained control nodes). Extending the general concept of controllability, we introduce the concept of constrained target controllability (CTC) of complex networks, which concerns the ability to drive any state of the target nodes to a desired state by applying control signals to driver nodes chosen from the set of constrained control nodes. To efficiently investigate the CTC of complex networks, we further design a novel graph-theoretic algorithm, called CTCA, to estimate the ability of a given network to control targets by choosing driver nodes from the set of constrained control nodes. We extensively evaluate the CTC of numerous real complex networks. The results indicate that biological networks with a higher average degree are easier to control than biological networks with a lower average degree, while electronic networks with a lower average degree are easier to control than web networks with a higher average degree. We also show that CTCA produces driver nodes for target-controlling these networks more efficiently than existing state-of-the-art methods. Moreover, we use CTCA to analyze two expert-curated bio-molecular networks and compare it to other state-of-the-art methods. The results illustrate that CTCA can efficiently identify proven drug targets as well as new potential targets, according to the constrained controllability of those biological networks.
Minimal inversion, command matching and disturbance decoupling in multivariable systems
Seraji, H.
1989-01-01
The present treatment of the related problems of minimal inversion and perfect output control in linear multivariable systems uses a simple analytical expression for the inverse of a square multivariable system's transfer-function matrix to construct a minimal-order inverse of the system. Because the poles of the minimal-order inverse are the transmission zeros of the system, necessary and sufficient conditions for the inverse system's stability are simply stated in terms of the zero polynomial of the original system. A necessary and sufficient condition for the existence of the required controllers is that the plant zero polynomial be neither identically zero nor unstable.
Pattern recognition constrains mantle properties, past and present
Atkins, S.; Rozel, A. B.; Valentine, A. P.; Tackley, P.; Trampert, J.
2015-12-01
Understanding and modelling mantle convection requires knowledge of many mantle properties, such as viscosity, chemical structure and thermal properties such as the radiogenic heating rate. However, many of these parameters are only poorly constrained. We demonstrate a new method for inverting present-day Earth observations for mantle properties. We use neural networks to represent the posterior probability density functions of many different mantle properties given the present structure of the mantle. We construct these probability density functions by sampling a wide range of possible mantle properties and running forward simulations, using the convection code StagYY. Our approach is particularly powerful because of its flexibility. Our samples are selected in the prior space, rather than being targeted towards a particular observation, as would normally be the case for probabilistic inversion. This means that the same suite of simulations can be used for inversions using a wide range of geophysical observations without the need to resample. Our method is probabilistic and non-linear and is therefore compatible with non-linear convection, avoiding some of the limitations associated with other methods for inverting mantle flow. This allows us to consider the entire history of the mantle. We also need relatively few samples for our inversion, making our approach computationally tractable when considering long periods of mantle history. Using the present thermal and density structure of the mantle, we can constrain rheological and compositional parameters such as viscosity and yield stress. We can also use the present-day mantle structure to make inferences about the initial conditions for convection 4.5 Gyr ago. We can constrain initial mantle conditions including the initial concentration of heat-producing elements in the mantle and the initial thickness of primordial material at the CMB. Currently we use density and temperature structure for our inversions, but we can
Optimal experiment design revisited: fair, precise and minimal tomography
Nunn, J; Puentes, G; Lundeen, J S; Walmsley, I A
2009-01-01
Given an experimental set-up and a fixed number of measurements, how should one take data in order to optimally reconstruct the state of a quantum system? The problem of optimal experiment design (OED) for quantum state tomography was first broached by Kosut et al. [arXiv:quant-ph/0411093v1]. Here we provide efficient numerical algorithms for finding the optimal design, and analytic results for the case of 'minimal tomography'. We also introduce the average OED, which is independent of the state to be reconstructed, and the optimal design for tomography (ODT), which minimizes tomographic bias. We find that these two designs are generally similar. Monte-Carlo simulations confirm the utility of our results for qubits. Finally, we adapt our approach to deal with constrained techniques such as maximum likelihood estimation. We find that these are less amenable to optimization than cruder reconstruction methods, such as linear inversion.
Absorbing angles, Steiner minimal trees, and antipodality
Martini, Horst; de Wet, P Oloff; 10.1007/s10957-009-9552-1
2011-01-01
We give a new proof that a star $\{op_i : i=1,\dots,k\}$ in a normed plane is a Steiner minimal tree of its vertices $\{o,p_1,\dots,p_k\}$ if and only if all angles formed by the edges at $o$ are absorbing [Swanepoel, Networks 36 (2000), 104--113]. The proof is more conceptual and simpler than the original one. We also find a new sufficient condition for higher-dimensional normed spaces to share this characterization. In particular, a star $\{op_i : i=1,\dots,k\}$ in any CL-space is a Steiner minimal tree of its vertices $\{o,p_1,\dots,p_k\}$ if and only if all angles are absorbing, which in turn holds if and only if all distances between the normalizations $\frac{1}{\|p_i\|}p_i$ equal 2. CL-spaces include the mixed $\ell_1$ and $\ell_\infty$ sums of finitely many copies of $\mathbb{R}$.
Towards synthesis of a minimal cell.
Forster, Anthony C; Church, George M
2006-01-01
Construction of a chemical system capable of replication and evolution, fed only by small molecule nutrients, is now conceivable. This could be achieved by stepwise integration of decades of work on the reconstitution of DNA, RNA and protein syntheses from pure components. Such a minimal cell project would initially define the components sufficient for each subsystem, allow detailed kinetic analyses and lead to improved in vitro methods for synthesis of biopolymers, therapeutics and biosensors. Completion would yield a functionally and structurally understood self-replicating biosystem. Safety concerns for synthetic life will be alleviated by extreme dependence on elaborate laboratory reagents and conditions for viability. Our proposed minimal genome is 113 kbp long and contains 151 genes. We detail building blocks already in place and major hurdles to overcome for completion.
Generation of Granulites Constrained by Thermal Modeling
Depine, G. V.; Andronicos, C. L.; Phipps-Morgan, J.
2006-12-01
The heat source needed to generate granulite-facies metamorphism is still an open problem in geology. There is a close spatial relationship between granulite terrains and extensive silicic plutonism, suggesting that heat advection by melts is critical to their formation. To investigate the role of heat advection by melt in the generation of granulites we use numerical 1-D models which include the movement of melt from the base of the crust to the middle crust. The model is in part constrained by petrological observations from the Coast Plutonic Complex (CPC) in British Columbia, Canada at ~54° N, where migmatite and granulite are widespread. The model takes into account time-dependent heat conduction and advection of melts generated at the base of the crust. The model starts with a crust of 55 km, consistent with petrologic and geochemical data from the CPC. The lower crust is assumed to be amphibolite in composition, consistent with seismologic and geochemical constraints for the CPC. An initial geothermal gradient estimated from metamorphic P-T-t paths in this region is ~37°C/km, hotter than normal geothermal gradients. The parameters used for the model are a thermal conductivity of 2.5 W/m°C, a crustal density of 2700 kg/m3 and a heat capacity of 1170 J/kg°C. Using the above starting conditions, a temperature of 1250°C is assumed for the mantle below 55 km, equivalent to placing asthenosphere in contact with the base of the crust to simulate delamination, basaltic underplating and/or asthenospheric exposure by a sudden steepening of the slab. This condition at 55 km results in melting the amphibolite in the lower crust. Once a melt fraction of 10% is reached the melt is allowed to migrate to a depth of 13 km, while material at 13 km is displaced downwards to replace the ascending melts. The steady-state profile has a very steep geothermal gradient of more than 50°C/km from the surface to 13 km, consistent with the generation of andalusite
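The conductive backbone of such a 1-D model can be sketched with an explicit finite-difference scheme using the parameter values quoted above (k = 2.5 W/m°C, density 2700 kg/m3, heat capacity 1170 J/kg°C, a 55 km crust, a 37°C/km initial gradient, and 1250°C at the base). Melt generation and migration are omitted, so this illustrates only the conduction step of the model:

```python
import numpy as np

# Sketch of 1-D transient heat conduction through a 55 km crust with the
# parameters quoted in the abstract. Melt advection is NOT modeled here.
k, rho, cp = 2.5, 2700.0, 1170.0       # W/m C, kg/m3, J/kg C
alpha = k / (rho * cp)                 # thermal diffusivity, ~7.9e-7 m^2/s

L = 55e3                               # crustal thickness (m)
n = 56
z = np.linspace(0.0, L, n)
dx = z[1] - z[0]                       # 1 km grid spacing
# initial geotherm of 37 C/km, capped at the basal temperature of 1250 C
T = np.minimum(37.0e-3 * z, 1250.0)

r = 0.4                                # alpha*dt/dx^2, under stability limit 0.5
dt = r * dx * dx / alpha               # ~16 kyr per step
for _ in range(2000):                  # ~30 Myr of conductive relaxation
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 0.0, 1250.0          # fixed surface and basal temperatures
```

With a stable time step the update is a convex combination of neighbouring temperatures, so the profile relaxes monotonically toward the conductive steady state; the steep >50°C/km shallow gradient the abstract reports requires the melt advection term that is deliberately left out here.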
Likelihood Analysis of the Minimal AMSB Model arXiv
Bagnaschi, E.; Sakurai, K.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; Costa, J.C.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Luo, F.; Martínez Santos, D.; Olive, K.A.; Richards, A.; Weiglein, G.
We perform a likelihood analysis of the minimal Anomaly-Mediated Supersymmetry Breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that a wino-like or a Higgsino-like neutralino LSP with mass $m_{\tilde\chi^0_1}$ may provide the cold dark matter (DM) with similar likelihood. The upper limit on the DM density from Planck and other experiments enforces $m_{\tilde\chi^0_1} \lesssim 3$ TeV after the inclusion of Sommerfeld enhancement in its annihilations. If most of the cold DM density is provided by the $\tilde\chi^0_1$, the measured value of the Higgs mass favours a limited range of $\tan\beta \sim 5$ (or, for $\mu > 0$, $\tan\beta \sim 45$), but the scalar mass $m_0$ is poorly constrained. In the wino-LSP case, $m_{3/2}$ is constrained to about 900 TeV and $m_{\tilde\chi^0_1}$ to $2.9 \pm 0.1$ TeV, whereas in the Higgsino-LSP case $m_{3/2}$ has just a lower limit $\gtrsim 650$ TeV ($\gtrsim 480$ TeV) and $m_{\tilde\chi^0_1}$ is constrained to $1.12~(1.13) \pm 0.02...
Minimally Invasive Video-Assisted versus Minimally Invasive Nonendoscopic Thyroidectomy
Directory of Open Access Journals (Sweden)
Zdeněk Fík
2014-01-01
Full Text Available Minimally invasive video-assisted thyroidectomy (MIVAT) and minimally invasive nonendoscopic thyroidectomy (MINET) represent well-accepted and reproducible techniques developed with the main goals of improving cosmetic outcome, accelerating healing, and increasing patient comfort following thyroid surgery. Between 2007 and 2011, a prospective nonrandomized study of patients undergoing minimally invasive thyroid surgery was performed to compare the advantages and disadvantages of the two techniques. There were no significant differences in the length of the incision needed to perform the surgical procedures. Mean duration of hemithyroidectomy was comparable in both groups, but total thyroidectomy was more time-consuming when performed by MIVAT. More patients underwent MIVAT procedures without active drainage in the postoperative course, and we also observed a trend toward less pain in the same group, paralleled by a statistically significant decrease in the administration of both opiate and nonopiate analgesics. We encountered two cases of recurrent laryngeal nerve palsy, both in the MIVAT group. MIVAT and MINET represent safe and feasible alternatives to conventional thyroid surgery in selected cases, and this prospective study has shown minimal differences between the two techniques.
Minimizing Costs Can Be Costly
Directory of Open Access Journals (Sweden)
Rasmus Rasmussen
2010-01-01
Full Text Available A quite common practice, even in the academic literature, is to simplify a decision problem and model it as a cost-minimization problem. In fact, some types of models have been standardized as minimization problems, like Quadratic Assignment Problems (QAPs), where a maximization formulation would be treated as a "generalized" QAP and would not be solvable by many of the specially designed software packages for the QAP. Ignoring revenues when modeling a decision problem works only if costs can be separated from the decisions influencing revenues. More often than we think this is not the case, and minimizing costs will not lead to maximized profit. This is demonstrated using spreadsheets to solve a small example. The example is also used to demonstrate other pitfalls in network models: the inability, in general, to balance the problem or allocate costs in advance, and the tendency to anticipate a specific type of solution and thereby make the constraints too limiting when formulating the problem.
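The article's point that minimizing cost need not maximize profit is easy to reproduce in a toy linear program. The two products, their costs and revenues, and the capacity/demand bounds below are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-product plan where cost minimization and profit maximization
# pick different solutions, because revenue depends on the same
# decision variables. Product A: cost 3, revenue 10; product B: cost 1,
# revenue 2. At most 100 units in total, at least 40 units in total.
cost = np.array([3.0, 1.0])
revenue = np.array([10.0, 2.0])

A_ub = np.array([[1.0, 1.0],    # x_A + x_B <= 100 (capacity)
                 [-1.0, -1.0]]) # x_A + x_B >= 40  (demand)
b_ub = np.array([100.0, -40.0])

min_cost = linprog(cost, A_ub=A_ub, b_ub=b_ub)            # minimize cost
max_prof = linprog(cost - revenue, A_ub=A_ub, b_ub=b_ub)  # maximize profit

# Cost minimization produces only the cheap product B (plan [0, 40]);
# profit maximization produces only the high-margin product A ([100, 0]).
```

Under the cost-minimal plan the profit is 40, versus 700 under the profit-maximal plan, even though the latter spends more: exactly the pitfall the abstract warns about.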
Minimal Marking: A Success Story
Directory of Open Access Journals (Sweden)
Anne McNeilly
2014-11-01
Full Text Available The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires dramatically more student engagement, resulted in more successful learning outcomes for surface-level knowledge acquisition than the more traditional "teacher-corrects-all" approach. Results suggest it would be effective not just for grammar, punctuation, and word usage, the objective here, but for any material that requires rote-memory learning, such as the Associated Press or Canadian Press style rules used by news publications across North America.
Cosmogenic photons strongly constrain UHECR source models
van Vliet, Arjen
2016-01-01
With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.
Cosmogenic photons strongly constrain UHECR source models
van Vliet, Arjen
2017-03-01
With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.
A constrained supersymmetric left-right model
Hirsch, Martin; Opferkuch, Toby; Porod, Werner; Staub, Florian
2016-01-01
We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and an SM-like Higgs is obtained. Additional CP-even scalars of similar mass, or even much lighter, are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model's capability to explain current anomalies observed at the LHC.
Cosmogenic photons strongly constrain UHECR source models
Directory of Open Access Journals (Sweden)
van Vliet Arjen
2017-01-01
Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.
Global marine primary production constrains fisheries catches.
Chassot, Emmanuel; Bonhommeau, Sylvain; Dulvy, Nicholas K; Mélin, Frédéric; Watson, Reg; Gascuel, Didier; Le Pape, Olivier
2010-04-01
Primary production must constrain the amount of fish and invertebrates available to expanding fisheries; however, the degree of limitation has only been demonstrated at regional scales to date. Here we show that phytoplanktonic primary production, estimated from an ocean-colour satellite (SeaWiFS), is related to global fisheries catches at the scale of Large Marine Ecosystems, while accounting for temperature and ecological factors such as ecosystem size and type, species richness, animal body size, and the degree and nature of fisheries exploitation. Indeed, we show that global fisheries catches since 1950 have been increasingly constrained by the amount of primary production. The primary production appropriated by current global fisheries is 17-112% higher than that appropriated by sustainable fisheries. Global primary production appears to be declining, in some part due to climate variability and change, with consequences for near-future fisheries catches.
CONSTRAINED SPECTRAL CLUSTERING FOR IMAGE SEGMENTATION
Sourati, Jamshid; Brooks, Dana H.; Dy, Jennifer G.; Erdogmus, Deniz
2013-01-01
Constrained spectral clustering with affinity propagation in its original form is not practical for large-scale problems like image segmentation. In this paper we employ a novelty-selection sub-sampling strategy, together with efficient numerical eigen-decomposition methods, to make this algorithm work efficiently for images. In addition, entropy-based active learning is employed to select the queries posed to the user more wisely in an interactive image segmentation framework. We evaluate the algorithm on general and medical images to show that the segmentation results improve using constrained clustering even when one works with a subset of pixels. Furthermore, this happens more efficiently when the pixels to be labeled are selected actively. PMID:24466500
Doubly Constrained Robust Blind Beamforming Algorithm
Directory of Open Access Journals (Sweden)
Xin Song
2013-01-01
Full Text Available We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector via the partial Taylor-series expansion and the Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of the proposed algorithm in terms of complexity, convergence, and SINR performance is presented. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields higher signal capture performance, achieves greater array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.
Constraining neutron star matter with Quantum Chromodynamics
Kurkela, Aleksi; Schaffner-Bielich, Jurgen; Vuorinen, Aleksi
2014-01-01
In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount --- or even presence --- of quark matter inside the stars.
Constraining neutron star matter with quantum chromodynamics
Energy Technology Data Exchange (ETDEWEB)
Kurkela, Aleksi [Physics Department, Theory Unit, CERN, CH-1211 Genève 23 (Switzerland); Fraga, Eduardo S.; Schaffner-Bielich, Jürgen [Institute for Theoretical Physics, Goethe University, D-60438 Frankfurt am Main (Germany); Vuorinen, Aleksi [Department of Physics and Helsinki Institute of Physics, P.O. Box 64, FI-00014 University of Helsinki (Finland)
2014-07-10
In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount—or even presence—of quark matter inside the stars.
Constraining Neutron Star Matter with Quantum Chromodynamics
Kurkela, Aleksi; Fraga, Eduardo S.; Schaffner-Bielich, Jürgen; Vuorinen, Aleksi
2014-07-01
In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount—or even presence—of quark matter inside the stars.
Optimal constrained layer damping with partial coverage
Marcelin, J.-L.; Trompette, Ph.; Smati, A.
1992-12-01
This paper deals with the optimal damping of beams constrained by viscoelastic layers when only one or several portions of the beam are covered. An efficient finite element model for dynamic analysis of such beams is used. The design variables are the dimensions and prescribed locations of the viscoelastic layers and the objective is the maximum viscoelastic damping factor. The method for nonlinear programming in structural optimization is the so-called method of moving asymptotes.
Capacity constrained assignment in spatial databases
DEFF Research Database (Denmark)
U, Leong Hou; Yiu, Man Lung; Mouratidis, Kyriakos;
2008-01-01
Given a point set P of customers (e.g., WiFi receivers) and a point set Q of service providers (e.g., wireless access points), where each q ∈ Q has a capacity q.k, the capacity constrained assignment (CCA) is a matching M ⊆ Q × P such that (i) each point q ∈ Q (p ∈ P) appears at most k times (at most...
CONSTRAINED SPECTRAL CLUSTERING FOR IMAGE SEGMENTATION
Sourati, Jamshid; Brooks, Dana H.; Dy, Jennifer G.; Erdogmus, Deniz
2012-01-01
Constrained spectral clustering with affinity propagation in its original form is not practical for large-scale problems like image segmentation. In this paper we employ a novelty-selection sub-sampling strategy, together with efficient numerical eigen-decomposition methods, to make this algorithm work efficiently for images. In addition, entropy-based active learning is employed to select the queries posed to the user more wisely in an interactive image segmentation framework. We evaluate ...
Constraining RRc candidates using SDSS colours
Bányai, E; Molnár, L; Dobos, L; Szabó, R
2016-01-01
The light variations of first-overtone RR Lyrae stars and contact eclipsing binaries can be difficult to distinguish. The Catalina Periodic Variable Star catalog contains several misclassified objects, despite the classification efforts by Drake et al. (2014). They used metallicity and surface gravity derived from spectroscopic data (from the SDSS database) to rule out binaries. Our aim is to further constrain the catalog using SDSS colours to estimate physical parameters for stars that did not have spectroscopic data.
Cardinality constrained portfolio selection via factor models
Monge, Juan Francisco
2017-01-01
In this paper we propose and discuss different 0-1 linear models for solving the cardinality constrained portfolio problem by using factor models. Factor models are used to build portfolios that track indexes, among other objectives, and they also require a smaller number of parameters to estimate than the classical Markowitz model. The addition of the cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities in the portfolio allows us to o...
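The effect of a cardinality constraint can be sketched with a brute-force search under a one-factor model: pick at most K securities whose equal-weighted factor exposure best matches the index. This toy enumeration stands in for the paper's 0-1 linear models; the betas and index exposure are invented.

```python
import itertools
import numpy as np

def best_tracking_subset(betas, index_beta, K):
    """Brute-force sketch of the cardinality constraint: choose at most K
    securities whose equal-weighted factor exposure best matches the index."""
    n = len(betas)
    best, best_err = None, float("inf")
    for k in range(1, K + 1):
        for subset in itertools.combinations(range(n), k):
            port_beta = np.mean([betas[i] for i in subset])
            err = abs(port_beta - index_beta)
            if err < best_err:
                best, best_err = subset, err
    return best, best_err

betas = [0.5, 0.9, 1.1, 1.6]
print(best_tracking_subset(betas, index_beta=1.0, K=2))
```

Enumeration is exponential in n, which is exactly why the paper formulates the cardinality constraint as 0-1 linear constraints for a MILP solver instead.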
Mass minimization of a discrete regenerative fuel cell (RFC) system for on-board energy storage
Li, Xiaojin; Xiao, Yu; Shao, Zhigang; Yi, Baolian
An RFC combined with a solar photovoltaic (PV) array is an advanced technological solution for on-board energy storage, e.g. in land, sky, stratosphere and aerospace applications, due to its potential to achieve high specific energy. This paper focuses on mass modeling and calculation for an RFC system consisting of discrete electrochemical cell stacks (fuel cell and electrolyzer), together with fuel storage, a PV array, and a radiator. A nonlinear constrained optimization procedure is used to minimize the entire system mass, as well as to study the effect of operating conditions (e.g. current densities of the fuel cell and electrolyzer) on the system mass. Based on the state-of-the-art specific power of both electrochemical stacks, an energy storage system has been designed for the conditions of stratosphere applications and a rated power output of 12 kW. The calculation results show that optimizing the current density of both stacks is of importance in designing a lightweight on-board energy system.
Constraining the Ensemble Kalman Filter for improved streamflow forecasting
Maxwell, Deborah; Jackson, Bethanna; McGregor, James
2016-04-01
Data assimilation techniques such as the Kalman Filter and its variants are often applied to hydrological models with minimal state volume/capacity constraints. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this presentation, we investigate the effect of constraining the Ensemble Kalman Filter (EnKF) on forecast performance. An EnKF implementation with no constraints is compared to model output with no assimilation, followed by a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then a more tightly constrained implementation where flux as well as mass constraints are imposed to limit the rate of water movement within a state. A three year period (2008-2010) with no significant data gaps and representative of the range of flows observed over the fuller 1976-2010 record was selected for analysis. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Overall, neither the unconstrained nor the "typically" mass-constrained forecasts were significantly better than the non-filtered forecasts; in fact several were significantly degraded. Flux constraints (in conjunction with mass constraints) significantly improved the forecast performance of six events relative to all other implementations, while the remaining two events showed no significant difference in performance. We conclude that placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state updating and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also experiment with the observation error, and find that this
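The mass and flux constraints described above can be sketched as a projection applied after the EnKF analysis step: bound the increment the filter is allowed to apply (flux constraint), then keep the resulting states within non-negativity and capacity limits (mass constraints). The states and thresholds below are invented, and this clipping is only a stand-in for the study's implementation.

```python
import numpy as np

def constrain_states(prior, analysis, capacity, max_flux):
    """Post-analysis projection sketch: enforce a flux constraint
    (bounded per-update increment) and mass constraints
    (non-negativity and storage capacity)."""
    # flux constraint: limit how far each state may move in one update
    increment = np.clip(analysis - prior, -max_flux, max_flux)
    constrained = prior + increment
    # mass constraints: keep states within physical bounds
    return np.clip(constrained, 0.0, capacity)

prior = np.array([10.0, 50.0, 90.0])
analysis = np.array([-5.0, 80.0, 130.0])   # unconstrained EnKF update
print(constrain_states(prior, analysis, capacity=100.0, max_flux=20.0))
```

In the first state the mass constraint still binds after the flux limit, illustrating why the study applies both kinds of constraint together.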
Constrained and joint inversion on unstructured meshes
Doetsch, J.; Jordi, C.; Rieckh, V.; Guenther, T.; Schmelzbach, C.
2015-12-01
Unstructured meshes allow for inclusion of arbitrary surface topography, complex acquisition geometry and undulating geological interfaces in the inversion of geophysical data. This flexibility opens new opportunities for coupling different geophysical and hydrological data sets in constrained and joint inversions. For example, incorporating geological interfaces that have been derived from high-resolution geophysical data (e.g., ground penetrating radar) can add geological constraints to inversions of electrical resistivity data. These constraints can be critical for a hydrogeological interpretation of the inversion results. For time-lapse inversions of geophysical data, constraints can be derived from hydrological point measurements in boreholes, but it is difficult to include these hard constraints in the inversion of electrical resistivity monitoring data. Especially mesh density and the regularization footprint around the hydrological point measurements are important for an improved inversion compared to the unconstrained case. With the help of synthetic and field examples, we analyze how regularization and coupling operators should be chosen for time-lapse inversions constrained by point measurements and for joint inversions of geophysical data in order to take full advantage of the flexibility of unstructured meshes. For the case of constraining to point measurements, it is important to choose a regularization operator that extends beyond the neighboring cells and the uncertainty in the point measurements needs to be accounted for. For joint inversion, the choice of the regularization depends on the expected subsurface heterogeneity and the cell size of the parameter mesh.
HCV management in resource-constrained countries.
Lim, Seng Gee
2017-02-21
With the arrival of all-oral directly acting antiviral (DAA) therapy with high cure rates, the promise of hepatitis C virus (HCV) eradication is within closer reach. The availability of generic DAAs has improved access in countries with constrained resources. However, therapy is only one component of the HCV care continuum, which is the framework for HCV management from identifying patients to cure. The large number of undiagnosed HCV cases is the biggest concern, and strategies to address this are needed: risk-factor screening is suboptimal, so HCV confirmation through either reflex HCV RNA screening or, ideally, a sensitive point-of-care test is needed. HCV notification (e.g., in Australia) may improve diagnosis (the proportion of HCV diagnosed is 75%) and may lead to benefits by increasing linkage to care, therapy and cure. Evaluations for cirrhosis using non-invasive markers are best done with a biological panel, but these are only moderately accurate. In resource-constrained settings, only generic HCV medications are available, and a combination of sofosbuvir, ribavirin, ledipasvir or daclatasvir provides sufficient efficacy for all genotypes, but this is likely to be replaced with pangenotypic regimens such as sofosbuvir/velpatasvir and glecaprevir/pibrentasvir. In conclusion, HCV management in resource-constrained settings is challenging on multiple fronts because of the lack of infrastructure, facilities, trained manpower and equipment. However, it is still possible to make a significant impact towards HCV eradication through a concerted effort by individuals and national organisations with domain expertise in this area.
Cosmicflows Constrained Local UniversE Simulations
Sorce, Jenny G; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M; Steinmetz, Matthias; Tully, R Brent; Pomarede, Daniel; Carlesi, Edoardo
2015-01-01
This paper combines observational datasets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighborhood. With measurements of radial peculiar velocities in the Local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 Mpc/h scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observatio...
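The cell-to-cell comparison statistic can be illustrated directly: grid two fields, difference them cell by cell, and take the one-sigma scatter. The synthetic fields below are invented stand-ins (a tightly constrained replica versus a random realization compared to a common reference), not CLUES data.

```python
import numpy as np

def cell_to_cell_scatter(field_a, field_b):
    """One-sigma cell-to-cell scatter between two gridded fields,
    a simple proxy for the 'cosmic variance' statistic in the abstract."""
    diff = np.asarray(field_a).ravel() - np.asarray(field_b).ravel()
    return np.std(diff)

rng = np.random.default_rng(0)
constrained = rng.normal(0.0, 0.1, size=(8, 8, 8))  # small deviations
random_sim = rng.normal(0.0, 0.3, size=(8, 8, 8))   # larger deviations
reference = np.zeros((8, 8, 8))
# constrained replicas should scatter less about the reference field
print(cell_to_cell_scatter(constrained, reference) <
      cell_to_cell_scatter(random_sim, reference))
```

The factor-of-2-to-3 reduction quoted in the abstract corresponds to this scatter being computed on a 5 Mpc/h smoothing scale in the inner, well-constrained part of the simulation box.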
An English language interface for constrained domains
Page, Brenda J.
1989-01-01
The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.
Constraining the mass of the Local Group
Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan
2017-03-01
The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 data base of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M_⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M_⊙, with a strong dependence on the v_tan values used.
Constraining Source Redshift Distributions with Gravitational Lensing
Wittman, D
2012-01-01
We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that ~40 massive ($\sigma_v=1200$ km/s) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to ~11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as $N_{lens}^{0.5}$, but the improvement slows as one is forced to use lenses further ...
The canonical equilibrium of constrained molecular models
Echenique, Pablo; García-Risueño, Pablo
2011-01-01
In order to increase the efficiency of the computer simulation of biological molecules, it is very common to impose holonomic constraints on the fastest degrees of freedom; normally bond lengths, but also possibly bond angles. However, as any other element that affects the physical model, the imposition of constraints must be assessed from the point of view of accuracy: both the dynamics and the equilibrium statistical mechanics are model-dependent, and they will be changed if constraints are used. In this review, we investigate the accuracy of constrained models at the level of the equilibrium statistical mechanics distributions produced by the different dynamics. We carefully derive the canonical equilibrium distributions of both the constrained and unconstrained dynamics, comparing the two of them by means of a "stiff" approximation to the latter. We do so both in the case of flexible and hard constraints, i.e., when the value of the constrained coordinates depends on the conformation and when it is a cons...
Restoration ecology: two-sex dynamics and cost minimization.
Directory of Open Access Journals (Sweden)
Ferenc Molnár
Full Text Available We model a spatially detailed, two-sex population dynamics, to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.
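The strong Allee effect central to the model above can be sketched with a single-density, 1-D reaction-diffusion simplification (the paper's model is two-sex; all parameters below are invented): an introduced population below the Allee threshold goes extinct, while a sufficiently large cluster persists and spreads.

```python
import numpy as np

def simulate(u0, D=1.0, r=1.0, allee=0.2, K=1.0, dx=0.5, dt=0.01, steps=5000):
    """Explicit finite-difference sketch of du/dt = D u_xx + r u (u - a)(1 - u/K),
    a bistable reaction-diffusion equation with strong Allee threshold a."""
    u = u0.copy()
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic 1-D Laplacian
        u = u + dt * (D * lap + r * u * (u - allee) * (1 - u / K))
        u = np.clip(u, 0.0, None)  # densities stay non-negative
    return u

x = np.arange(200) * 0.5
# introduced density below the Allee threshold: doomed to extinction
small = np.where(np.abs(x - 50) < 5.0, 0.15, 0.0)
# wide cluster at carrying capacity: persists and invades
large = np.where(np.abs(x - 50) < 10.0, 1.0, 0.0)
print(simulate(small).max(), simulate(large).max())
```

This is the mechanism behind the paper's cost minimization: the initial spatial distribution must exceed a critical size, so the question becomes which shape achieves persistence with the fewest introduced individuals.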
Directory of Open Access Journals (Sweden)
Zhenggang Du
2015-03-01
Full Text Available To improve models for accurate projections, data assimilation, an emerging statistical approach to combining models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predicting future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, capturing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also
Chaplygin Gas of Tachyon Nature Imposed by Symmetry and Constrained via H(z) Data
Collodel, Lucas Gardai
2015-01-01
An action of general form is proposed for a Universe containing matter, radiation and dark energy. The latter is interpreted as a tachyon field non-minimally coupled to the scalar curvature. The Palatini approach is used when varying the action so the connection is given by a more generic form. Both the self-interaction potential and the non-minimally coupling function are obtained by constraining the system to present invariability under global point transformation of the fields (Noether Symmetry). The only possible solution is shown to be that of minimal coupling and constant potential (Chaplygin gas). The behavior of the dynamical properties of the system is compared to recent observational data, which indicates that the tachyon field must indeed be dynamical.
Chaplygin gas of Tachyon Nature Imposed by Noether Symmetry and constrained via H(z) data
Gardai Collodel, Lucas; Medeiros Kremer, Gilberto
2016-04-01
An action of general form is proposed for a Universe containing matter, radiation and dark energy. The latter is interpreted as a tachyon field non-minimally coupled to the scalar curvature. The Palatini approach is used when varying the action so the connection is given by a more generic form. Both the self-interaction potential and the non-minimally coupling function are obtained by constraining the system to present invariability under global point transformation of the fields (Noether Symmetry). The only possible solution is shown to be that of minimal coupling and constant potential (Chaplygin gas). The behavior of the dynamical properties of the system is compared to recent observational data, which indicates that the tachyon field must indeed be dynamical.
Viability conditions for a compartmentalized protometabolic system: a semi-empirical approach.
Directory of Open Access Journals (Sweden)
Gabriel Piedrafita
Full Text Available In this work we attempt to find out the extent to which realistic prebiotic compartments, such as fatty acid vesicles, would constrain the chemical network dynamics that could have sustained a minimal form of metabolism. We combine experimental and simulation results to establish the conditions under which a reaction network with a catalytically closed organization (more specifically, an (M,R)-system would overcome the potential problem of self-suffocation that arises from the limited accessibility of nutrients to its internal reaction domain. The relationship between the permeability of the membrane, the lifetime of the key catalysts and their efficiency (reaction rate enhancement) turns out to be critical. In particular, we show how permeability values constrain the characteristic time scale of the bounded protometabolic processes. From this concrete and illustrative example we finally extend the discussion to a wider evolutionary context.
Non-minimal supersymmetric models. LHC phenomenology and model discrimination
Energy Technology Data Exchange (ETDEWEB)
Krauss, Manuel Ernst
2015-12-18
It is generally agreed that the Standard Model of particle physics can only be viewed as an effective theory that needs to be extended, as it leaves some essential questions unanswered. The exact realization of the necessary extension is subject to discussion. Supersymmetry is among the most promising approaches to physics beyond the Standard Model as it can simultaneously solve the hierarchy problem and provide an explanation for the dark matter abundance in the universe. Despite further virtues like gauge coupling unification and radiative electroweak symmetry breaking, minimal supersymmetric models cannot be the ultimate answer to the open questions of the Standard Model, as they still do not incorporate neutrino masses and are besides heavily constrained by LHC data. This does, however, not derogate the beauty of the concept of supersymmetry. It is therefore time to explore non-minimal supersymmetric models which are able to close these gaps, review their consistency, test them against experimental data and provide prospects for future experiments. The goal of this thesis is to contribute to this process by exploring an extraordinarily well motivated class of models based upon a left-right symmetric gauge group. While relaxing the tension with LHC data, these models automatically include the ingredients for neutrino masses. We start with a left-right supersymmetric model at the TeV scale in which scalar SU(2)_R triplets are responsible for the breaking of left-right symmetry as well as for the generation of neutrino masses. Although a tachyonic doubly-charged scalar is present at tree level in models of this kind, we show by performing the first complete one-loop evaluation that it gains a real mass at the loop level. The constraints on the predicted additional charged gauge bosons are then evaluated using LHC data, and we find that we can explain small excesses in the data of which the current LHC run will reveal if they are actual new
Minimally Invasive Approach of a Retrocaval Ureter
Pinheiro, Hugo; Ferronha, Frederico; Morales, Jorge; Campos Pinheiro, Luís
2016-01-01
The retrocaval ureter is a rare congenital entity, classically managed with open pyeloplasty techniques. The experience obtained with the laparoscopic approach of other more frequent causes of ureteropelvic junction (UPJ) obstruction has opened the method for the minimally invasive approach of the retrocaval ureter. In our paper, we describe a clinical case of a right retrocaval ureter managed successfully with laparoscopic dismembered pyeloplasty. The main standpoints of the procedure are described. Our results were similar to others published by other urologic centers, which demonstrates the safety and feasibility of the procedure for this condition. PMID:27635277
Resin composites in minimally invasive dentistry.
Jacobsen, Thomas
2004-01-01
The concept of minimally invasive dentistry will provide favorable conditions for the use of composite resin. However, a number of factors must be considered when placing composite resins in conservatively prepared cavities, including: aspects on the adaptation of the composite resin to the cavity walls; the use of adhesives; and techniques for obtaining adequate proximal contacts. The clinician must also adopt an equally conservative approach when treating failed restorations. The quality of the composite resin restoration will not only be affected by the outline form of the preparation but also by the clinician's technique and understanding of the materials.
Minimal Invasive Decompression for Lumbar Spinal Stenosis
Directory of Open Access Journals (Sweden)
Victor Popov
2012-01-01
Full Text Available Lumbar spinal stenosis is a common condition in elderly patients and may lead to progressive back and leg pain, muscular weakness, sensory disturbance, and/or problems with ambulation. Multiple studies suggest that surgical decompression is an effective therapy for patients with symptomatic lumbar stenosis. Although traditional lumbar decompression is a time-honored procedure, minimally invasive procedures are now available which can achieve the goals of decompression with less bleeding, smaller incisions, and quicker patient recovery. This paper will review the technique of performing ipsilateral and bilateral decompressions using a tubular retractor system and microscope.
Minimal hepatic encephalopathy matters in daily life
Institute of Scientific and Technical Information of China (English)
Jasmohan S Bajaj
2008-01-01
Minimal hepatic encephalopathy is a neuro-cognitive dysfunction which occurs in an epidemic proportion of cirrhotic patients, estimated as high as 80% of the population tested. It is characterized by a specific, complex cognitive dysfunction which is independent of sleep dysfunction or problems with overall intelligence. Although named "minimal", minimal hepatic encephalopathy (MHE) can have a far-reaching impact on quality of life, ability to function in daily life and progression to overt hepatic encephalopathy. Importantly, MHE has a profound negative impact on the ability to drive a car and may be a significant factor behind motor vehicle accidents. A crucial aspect of the clinical care of MHE patients is their driving history, which is often ignored in routine care and can add a vital dimension to the overall disease assessment. Driving history should be an integral part of care in patients with MHE. The lack of specific signs and symptoms, the preserved communication skills and lack of insight make MHE a difficult condition to diagnose. Diagnostic strategies for MHE abound, but are usually limited by financial, normative or time constraints. Recent studies into the inhibitory control and critical flicker frequency tests are encouraging since these tests can increase the rates of MHE diagnosis without requiring a psychologist. Although testing for MHE and subsequent therapy is not standard of care at this time, it is important to consider this in cirrhotics in order to improve their ability to live their life to the fullest.
Opportunity Loss Minimization and Newsvendor Behavior
Directory of Open Access Journals (Sweden)
Xinsheng Xu
2017-01-01
Full Text Available To study the decision bias in newsvendor behavior, this paper introduces an opportunity loss minimization criterion into the newsvendor model with backordering. We apply the Conditional Value-at-Risk (CVaR) measure to hedge against the potential risks from the newsvendor's order decision. We obtain the optimal order quantities for a newsvendor to minimize the expected opportunity loss and the CVaR of opportunity loss. It is proven that the newsvendor's optimal order quantity is related to the density function of market demand when the newsvendor exhibits risk-averse preference, which is inconsistent with the results in Schweitzer and Cachon (2000). The numerical example shows that the optimal order quantity that minimizes CVaR of opportunity loss is bigger than the expected profit maximization (EPM) order quantity for high-profit products and smaller than the EPM order quantity for low-profit products, which differs from the experimental results in Schweitzer and Cachon (2000). A sensitivity analysis of changing the operation parameters of the two optimal order quantities is discussed. Our results confirm that high return implies high risk, while low risk comes with low return. Based on the results, some managerial insights are suggested for the risk management of the newsvendor model with backordering.
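The CVaR-of-opportunity-loss criterion can be sketched with a Monte Carlo grid search. This simplified setup (uniform demand, no backordering, invented price and cost) is only illustrative of the criterion, not the paper's analytical model.

```python
import numpy as np

def cvar(losses, alpha=0.9):
    """CVaR_alpha: mean of the worst (1 - alpha) tail of the losses."""
    losses = np.sort(losses)
    return losses[int(np.ceil(alpha * len(losses))):].mean()

def opportunity_loss(q, demand, price=10.0, cost=6.0):
    """Profit forgone relative to the clairvoyant order q = demand."""
    best = (price - cost) * demand
    realized = price * np.minimum(q, demand) - cost * q
    return best - realized

rng = np.random.default_rng(1)
demand = rng.uniform(50, 150, size=20000)       # simulated market demand
grid = np.arange(50, 151)
q_star = grid[np.argmin([cvar(opportunity_loss(q, demand)) for q in grid])]
print(q_star)
```

With these numbers the overage cost per unit is 6 and the underage cost is 4, so the CVaR-minimizing order lands near the quantity that balances the two loss tails, close to the critical-fractile EPM order of 90.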
Minimal Flavor Constraints for Technicolor
DEFF Research Database (Denmark)
Sakuma, Hidenori; Sannino, Francesco
2010-01-01
We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...
Dubin's Minimal Linkage Construct Revisited.
Rogers, Donald P.
This paper contains a theoretical analysis and empirical study that support the major premise of Robert Dubin's minimal-linkage construct-that restricting communication links increases organizational stability. The theoretical analysis shows that fewer communication links are associated with less uncertainty, more redundancy, and greater…
Minimal Surfaces for Hitchin Representations
DEFF Research Database (Denmark)
Li, Qiongling; Dai, Song
2016-01-01
Given a reductive representation $\\rho: \\pi_1(S)\\rightarrow G$, there exists a $\\rho$-equivariant harmonic map $f$ from the universal cover of a fixed Riemann surface $\\Sigma$ to the symmetric space $G/K$ associated to $G$. If the Hopf differential of $f$ vanishes, the harmonic map is then minimal...
Acquiring minimally invasive surgical skills
Hiemstra, Ellen
2012-01-01
Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room.
Directory of Open Access Journals (Sweden)
Madjid Mirzavaziri
2007-01-01
norms ‖⋅‖₁ and ‖⋅‖₂ on ℂ^n such that N(A) = max{‖Ax‖₂ : ‖x‖₁ = 1, x ∈ ℂ^n} for all A ∈ ℳ_n. This may be regarded as an extension of a known result on characterization of minimal algebra norms.
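A quantity of the form N(A) = max{‖Ax‖₂ : ‖x‖₁ = 1} is easy to compute: the ℓ1 unit ball is the convex hull of the vectors ±e_j and x ↦ ‖Ax‖₂ is convex, so the maximum is attained at a coordinate vector, i.e. N(A) equals the largest column 2-norm. The matrix below is an invented example of this standard fact, not taken from the paper.

```python
import numpy as np

def induced_norm_1_to_2(A):
    """N(A) = max{ ||Ax||_2 : ||x||_1 = 1 }.
    Equals the largest column 2-norm, since the l1 unit ball is the
    convex hull of +/- e_j and the objective is convex."""
    return np.linalg.norm(A, axis=0).max()

A = np.array([[3.0, 0.0],
              [4.0, 1.0]])
print(induced_norm_1_to_2(A))   # columns have 2-norms 5 and 1
```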
Implications of minimally invasive therapy
Banta, H.D.; Schersten, T.; Jonsson, E.
1993-01-01
The field of minimally invasive therapy (MIT) raises many important issues for the future of health care. It seems inevitable that MIT will replace much conventional surgery. This trend is good for society and good for patients. The health care system, however, may find the change disruptive. The
Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization
Fornasier, Massimo
2009-01-01
This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
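The proximity map of the ℓ1 seminorm, the building block behind thresholding algorithms of this kind, is soft thresholding. A standard iterative soft-thresholding (ISTA) sketch for ℓ1-regularized least squares is shown below; this illustrates the proximity-map iteration, not the paper's subspace-correction or oblique-thresholding scheme itself.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximity map of t*||.||_1 (componentwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

A = np.eye(3)
b = np.array([2.0, 0.3, -1.0])
print(ista(A, b, lam=0.5))
```

For A = I the iteration converges in one step to soft_threshold(b, lam), which makes the shrinkage behavior of the proximity map easy to see.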
Technical Note: Methods for interval constrained atmospheric inversion of methane
Directory of Open Access Journals (Sweden)
J. Tang
2010-08-01
Full Text Available Three interval constrained methods, including the interval constrained Kalman smoother, the interval constrained maximum likelihood ensemble smoother and the interval constrained ensemble Kalman smoother, are developed to conduct inversions of the atmospheric trace gas methane (CH_{4}). The negative values of fluxes in an unconstrained inversion are avoided in the constrained inversion. In a multi-year inversion experiment using pseudo observations derived from a forward transport simulation with known fluxes, the interval constrained fixed-lag Kalman smoother presents the best results, followed by the interval constrained fixed-lag ensemble Kalman smoother and the interval constrained maximum likelihood ensemble Kalman smoother. Consistent uncertainties are obtained for the posterior fluxes with these three methods. This study provides alternatives to the variable transform method to deal with interval constraints in atmospheric inversions.
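Why such constraints are needed is visible already in a scalar Kalman update: nothing in the unconstrained analysis step prevents a flux estimate from going negative, and the crudest remedy, truncating the analysis to the interval, is exactly what the smoothers above improve upon. A toy illustration (ours, not the paper's method; all numbers are made up):

```python
def kalman_update(x, P, z, H, R):
    # scalar Kalman analysis step: prior mean x, variance P, observation z
    K = P * H / (H * P * H + R)                 # Kalman gain
    return x + K * (z - H * x), (1.0 - K * H) * P

x, P = kalman_update(x=-0.5, P=1.0, z=0.2, H=1.0, R=1.0)
print(x)             # ~ -0.15: an unconstrained analysis can go negative
x = max(x, 0.0)      # naive projection onto the interval [0, inf)
print(x)             # 0.0
```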
Antifungal susceptibility testing method for resource constrained laboratories
Directory of Open Access Journals (Sweden)
Khan S
2006-01-01
Full Text Available Purpose: In resource-constrained laboratories of developing countries, determination of antifungal susceptibility by the NCCLS/CLSI method is not always feasible. We describe herein a simple yet comparable method for antifungal susceptibility testing. Methods: Reference MICs of 72 fungal isolates, including two quality control strains, were determined by NCCLS/CLSI methods against fluconazole, itraconazole, voriconazole, amphotericin B and cancidas. Dermatophytes were also tested against terbinafine. Subsequently, on selection of optimum conditions, MIC was determined for all the fungal isolates by the semisolid antifungal agar susceptibility method in brain heart infusion broth supplemented with 0.5% agar (BHIA) without oil overlay, and results were compared with those obtained by the reference NCCLS/CLSI methods. Results: Comparable results were obtained by the NCCLS/CLSI and semisolid agar susceptibility (SAAS) methods against quality control strains. MICs for the 72 isolates did not differ by more than one dilution for all drugs by SAAS. Conclusions: SAAS using BHIA without oil overlay provides a simple and reproducible method for obtaining MICs against yeasts, filamentous fungi and dermatophytes in resource-constrained laboratories.
Security-constrained unit commitment with flexible operating modes
Lu, Bo
The electricity industry throughout the world, which has long been dominated by vertically integrated utilities, is facing enormous challenges. To enhance the competition in electricity industry, vertically integrated utilities are evolving into a distributed and competitive industry in which market forces drive the price of electricity and possibly reduce the net cost of supplying electrical loads through increased competition. To excel in the competition, generation companies (GENCOs) will acquire additional generating units with flexible operating capability which allows a timely response to the continuous changes in power system conditions. This dissertation considers the short-term scheduling of generating units with flexible modes of operation in security-constrained unit commitment (SCUC). Among the units considered in this study are combined cycle units, fuel switching/blending units, photovoltaic/battery system, pumped-storage units, and cascaded hydro units. The proposed security-constrained unit commitment solution will include a detailed model of transmission system which could impact the short-term scheduling of units with flexible operation modes.
Performance potential of mechanical ventilation systems with minimized pressure loss
DEFF Research Database (Denmark)
Terkildsen, Søren; Svendsen, Svend
2013-01-01
ventilation systems with minimal pressure loss and minimal energy use. This can provide comfort ventilation and avoid overheating through increased ventilation and night cooling. Based on this concept, a test system was designed for a fictive office building and its performance was documented using building simulations that quantify fan power consumption, heating demand and indoor environmental conditions. The system was designed with minimal pressure loss in the duct system and heat exchanger. Also, it uses state-of-the-art components such as electrostatic precipitators, diffuse ceiling inlets and demand-control ventilation with static pressure set-point reset. All the equipment has been designed to minimize pressure losses and thereby the fan power needed to operate the system. The total pressure loss is 30-75 Pa depending on the operating conditions. The annual average specific fan power is 330 J/m3 of airflow rate...
Optimization for entransy dissipation minimization in heat exchanger
Institute of Scientific and Technical Information of China (English)
XIA ShaoJun; CHEN LinGen; SUN FengRui
2009-01-01
A common two-fluid flow heat exchanger, in which the heat transfer between the high- and low-temperature sides obeys Newton's law [q ∝ ΔT], is studied in this paper. By taking entransy dissipation minimization as the optimization objective, the optimum parameter distributions in the heat exchanger are derived by using optimal control theory under the condition of fixed heat load. The condition corresponding to the minimum entransy dissipation is that of a constant heat flux density. Three kinds of heat exchangers, including parallel flow, condensing flow and counter-flow, are considered, and the results show that only the counter-flow heat exchanger can realize entransy dissipation minimization in the heat transfer process. The obtained results for entransy dissipation minimization are also compared with those obtained for entropy generation minimization by numerical examples.
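The "constant heat flux density" optimality condition can be sketched directly under the stated Newton-law assumption q = kΔT (our notation; the paper's derivation uses optimal control theory). The entransy dissipation rate over the exchange area A is

```latex
\dot{G}_{\mathrm{diss}} \;=\; \int_A q\,\Delta T \,\mathrm{d}A
\;=\; \frac{1}{k}\int_A q^{2}\,\mathrm{d}A ,
\qquad \text{subject to } \dot{Q} = \int_A q\,\mathrm{d}A \ \text{fixed},
```

and by the Cauchy-Schwarz inequality, \(\int_A q^2\,\mathrm{d}A \ge \dot{Q}^2/A\) with equality if and only if q is constant over A, reproducing the condition quoted above.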
21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.
2010-04-01
Title 21 (Food and Drugs), revised as of 2010-04-01, Section 888.3780, FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...
21 CFR 888.3230 - Finger joint polymer constrained prosthesis.
2010-04-01
Title 21 (Food and Drugs), revised as of 2010-04-01, Section 888.3230... constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device intended... This generic type of device includes prostheses that consist of a single flexible across-the-joint...
Cascading Constrained 2-D Arrays using Periodic Merging Arrays
DEFF Research Database (Denmark)
Forchhammer, Søren; Laursen, Torben Vaarby
2003-01-01
We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes. Numerical results for the capacities are presented.
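For context, the 1-D analogue of the capacity being bounded here has a closed form: it is the base-2 log of the spectral radius of the constraint's transfer matrix. For the (1,∞)-RLL constraint ("no two adjacent 1s") the matrix is [[1,1],[1,0]] and the capacity is log2 of the golden ratio, about 0.6942. A power-iteration sketch (the 2-D case treated in the paper admits no such closed form, hence the constructive bounds):

```python
import math

def capacity_1d(T, iters=200):
    # capacity = log2(largest eigenvalue of transfer matrix T),
    # computed by power iteration with max-normalization
    v = [1.0] * len(T)
    lam = 1.0
    for _ in range(iters):
        w = [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        lam = max(w)
        v = [x / lam for x in w]
    return math.log2(lam)

print(capacity_1d([[1, 1], [1, 0]]))  # ~0.6942, i.e. log2(golden ratio)
```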
Late de novo minimal change disease in a renal allograft
Madhan Krishan; Temple-Camp Cynric
2009-01-01
Among the causes of the nephrotic syndrome in renal allografts, minimal change disease is a rarity with only a few cases described in the medical literature. Most cases described have occurred early in the post-transplant course. There is no established treatment for the condition but prognosis is favorable. We describe a case of minimal change disease that developed 8 years after a successful transplantation of a renal allograft in a middle-aged woman. The nephrotic syndrome was accompanied by...
A minimally invasive approach for a compromised treatment plan.
Maibaum, Wayne W
2016-01-01
A primary goal in dentistry is the execution of appropriate treatment plans that are minimally invasive and maintainable. However, it is sometimes necessary to repair existing dental restorations or revise treatment plans to accommodate changes in a patient's condition. In the present case, a patient who was satisfied with a removable partial overdenture lost a critical abutment tooth. A creative, minimally invasive approach enabled the patient to keep his existing partial prosthesis and avoid the need for a full reconstruction or complete denture.
Harm minimization among teenage drinkers
DEFF Research Database (Denmark)
Jørgensen, Morten Hulvej; Curtis, Tine; Christensen, Pia Haudrup
2007-01-01
AIM: To examine strategies of harm minimization employed by teenage drinkers. DESIGN, SETTING AND PARTICIPANTS: Two periods of ethnographic fieldwork were conducted in a rural Danish community of approximately 2000 inhabitants. The fieldwork included 50 days of participant observation among 13-16-year-olds (n = 93) as well as 26 semistructured interviews with small self-selected friendship groups of 15-16-year-olds (n = 32). FINDINGS: The teenagers participating in the present study were more concerned about social than health risks. The informants monitored their own level of intoxication. In regulating the social context of drinking they relied on their personal experiences more than on formalized knowledge about alcohol and harm, which they had learned from prevention campaigns and educational programmes. CONCLUSIONS: In this study we found that teenagers may help each other to minimize alcohol...
On the Hopcroft's minimization algorithm
Paun, Andrei
2007-01-01
We show that the absolute worst case time complexity for Hopcroft's minimization algorithm applied to unary languages is reached only for de Bruijn words. A previous paper by Berstel and Carton gave the example of de Bruijn words as a language that requires O(n log n) steps by carefully choosing the splitting sets and processing these sets in a FIFO mode. We refine the previous result by showing that the Berstel/Carton example is actually the absolute worst case time complexity in the case of unary languages. We also show that a LIFO implementation will not achieve the same worst time complexity for the case of unary languages. Lastly, we show that the same result also holds for cover automata and a modification of Hopcroft's algorithm used in the minimization of cover automata.
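Hopcroft's algorithm refines a partition of the DFA states using a worklist of splitter blocks; the FIFO/LIFO distinction discussed above is precisely the order in which that worklist is processed. A compact sketch of the refinement step (arbitrary worklist order via `set.pop`, so it illustrates neither extreme; the DFA below is a made-up example):

```python
def hopcroft_minimize(states, alphabet, delta, finals):
    # delta: dict mapping (state, symbol) -> state; returns the set of
    # equivalence classes (frozensets) of the minimal DFA
    finals = frozenset(finals)
    others = frozenset(states) - finals
    partition = {b for b in (finals, others) if b}
    worklist = set(partition)
    while worklist:
        splitter = worklist.pop()          # processing order (FIFO vs LIFO)
        for c in alphabet:                 # affects the bound, not correctness
            preimage = {s for s in states if delta[(s, c)] in splitter}
            for block in list(partition):
                inside, outside = block & preimage, block - preimage
                if inside and outside:     # the splitter splits this block
                    partition.remove(block)
                    partition |= {inside, outside}
                    if block in worklist:
                        worklist.remove(block)
                        worklist |= {inside, outside}
                    else:                  # Hopcroft's trick: enqueue smaller half
                        worklist.add(min(inside, outside, key=len))
    return partition

# 4-state unary DFA for "even number of a's", built with redundant states
delta = {(i, "a"): (i + 1) % 4 for i in range(4)}
classes = hopcroft_minimize(range(4), "a", delta, {0, 2})
print(classes)  # two classes: {0, 2} and {1, 3}
```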
A Minimally Symmetric Higgs Boson
Low, Ian
2014-01-01
Models addressing the naturalness of a light Higgs boson typically employ symmetries, either bosonic or fermionic, to stabilize the Higgs mass. We consider a setup with the minimal amount of symmetries: four shift symmetries acting on the four components of the Higgs doublet, subject to the constraints of linearly realized SU(2)×U(1) electroweak symmetry. Up to terms that explicitly violate the shift symmetries, the effective Lagrangian can be derived, irrespective of the spontaneously broken group G in the ultraviolet, and is universal in all models where the Higgs arises as a pseudo-Nambu-Goldstone boson (PNGB). Very high energy scatterings of vector bosons could provide smoking gun signals of a minimally symmetric Higgs boson.
Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry
2011-01-01
A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift free navigation is achieved with respect to the environment.
Can Neutron stars constrain Dark Matter?
DEFF Research Database (Denmark)
Kouvaris, Christoforos; Tinyakov, Peter
2010-01-01
We argue that observations of old neutron stars can impose constraints on dark matter candidates even with very small elastic or inelastic cross section, and self-annihilation cross section. We find that old neutron stars close to the galactic center or in globular clusters can maintain a surface temperature that could in principle be detected. Due to their compactness, neutron stars can accrete WIMPs efficiently even if the WIMP-to-nucleon cross section obeys the current limits from direct dark matter searches, and therefore they could constrain a wide range of dark matter candidates.
Energetic Materials Optimization via Constrained Search
2015-06-01
LCAP and VP-DFT interpolate continuously between the Hamiltonians of various chemical species. Furthermore, recently an investigation into... Computational Chemistry Protocol: all quantum-mechanical computations were performed using Gaussian 09; all geometries were preoptimized with B3LYP/3-21G. ... The constrained optimization introduces nonnegative Lagrange multipliers λ ∈ ℝ³₊ for the 3 constraints into the augmented Lagrangian function L(x, λ) := P(x) − λC(x), posed as a constrained min...
Constrained inflaton due to a complex scalar
Energy Technology Data Exchange (ETDEWEB)
Budhi, Romy H. S. [Physics Department, Gadjah Mada University,Yogyakarta 55281 (Indonesia); Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Kashiwase, Shoichi; Suematsu, Daijiro [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan)
2015-09-14
We reexamine inflation due to a constrained inflaton in the model of a complex scalar. The inflaton evolves along a spiral-like valley of a special scalar potential in the scalar field space, just like single field inflation. A sub-Planckian inflaton can induce sufficient e-foldings because of a long slow-roll path. In a special limit, the scalar spectral index and the tensor-to-scalar ratio have expressions equivalent to those of inflation with a monomial potential φ^n. Favorable values for them can be obtained by varying parameters in the potential. This model can be embedded in a certain radiative neutrino mass model.
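For reference, the standard slow-roll expressions for a monomial potential V ∝ φ^n evaluated N e-folds before the end of inflation (the limit in which the model above reduces to monomial inflation) are

```latex
n_s \simeq 1 - \frac{n+2}{2N}, \qquad r \simeq \frac{4n}{N},
```

so, for example, n = 2 and N = 60 give n_s ≈ 0.967 and r ≈ 0.13. These are the textbook leading-order formulas, not the paper's full expressions.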
Quantization of soluble classical constrained systems
Energy Technology Data Exchange (ETDEWEB)
Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)
2014-12-15
The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac's formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way.
Neuroevolutionary Constrained Optimization for Content Creation
DEFF Research Database (Denmark)
Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian
2011-01-01
and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution; Compositional Pattern-Producing Networks (CPPNs) which represent the spaceship's design are trained via a constraint-based evolutionary algorithm. Results obtained in a number of evolutionary runs using a set of constraints and objectives show that the generated spaceships perform well in movement, combat...
Charged particles constrained to a curved surface
Müller, Thomas
2012-01-01
We study the motion of charged particles constrained to arbitrary two-dimensional curved surfaces but interacting in three-dimensional space via the Coulomb potential. To speed up the interaction calculations, we use the parallel compute capability of the Compute Unified Device Architecture (CUDA) of today's graphics boards. The particles and the curved surfaces are shown using the Open Graphics Library (OpenGL). The paper is intended to give graduate students, who have basic experience with electrostatics and differential geometry, a deeper understanding of charged particle interactions and a short introduction to handling a many-particle system using parallel computing on a single home computer.
Constraining Milky Way mass with Hypervelocity Stars
Fragione, Giacomo
2016-01-01
We show that hypervelocity stars (HVSs) ejected from the center of the Milky Way galaxy can be used to constrain the mass of its halo. The asymmetry in the radial velocity distribution of halo stars due to escaping HVSs depends on the halo potential (escape speed) as long as the round trip orbital time is shorter than the stellar lifetime. Adopting a characteristic HVS travel time of $300$ Myr, which corresponds to the average mass of main sequence HVSs ($3.2$ M$_{\\odot}$), we find that current data favors a mass for the Milky Way in the range $(1.2$-$1.7)\\times 10^{12} \\mathrm{M}_\\odot$.
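The argument ties the halo mass to the escape speed felt by HVSs on their round trips. In the crudest point-mass approximation (a real analysis would use an extended halo profile such as NFW), v_esc = sqrt(2GM/r). A sketch with a mass in the quoted range and an illustrative radius of our choosing:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

def escape_speed(mass_kg, radius_m):
    # point-mass approximation: v_esc = sqrt(2 G M / r)
    return math.sqrt(2.0 * G * mass_kg / radius_m)

v = escape_speed(1.2e12 * M_SUN, 50.0 * KPC)
print(v / 1000.0)    # roughly 450 km/s at 50 kpc for the lower quoted mass
```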
Lifespan theorem for constrained surface diffusion flows
McCoy, James; Williams, Graham; 10.1007/s00209-010-0720-7
2012-01-01
We consider closed immersed hypersurfaces in $\\R^{3}$ and $\\R^4$ evolving by a class of constrained surface diffusion flows. Our result, similar to earlier results for the Willmore flow, gives both a positive lower bound on the time for which a smooth solution exists, and a small upper bound on a power of the total curvature during this time. By phrasing the theorem in terms of the concentration of curvature in the initial surface, our result holds for very general initial data and has applications to further development in asymptotic analysis for these flows.
Integrating job scheduling and constrained network routing
DEFF Research Database (Denmark)
Gamst, Mette
2010-01-01
This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number of geographically distributed resources connected through an optical network work together for solving large problems. A number of heuristics are proposed along with an exact solution approach based on Dantzig-Wolfe decomposition. The latter has some performance difficulties while the heuristics solve all instances...
Weight-Constrained Minimum Spanning Tree Problem
Henn, Sebastian Tobias
2007-01-01
In an undirected graph G we associate costs and weights to each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and minimum total costs under this restriction. In this thesis a literature overview on this NP-hard problem, theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some inclusion and exclusion tests for this problem. We apply a ranking algorithm and the me...
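The Lagrangian relaxation mentioned above dualizes the weight budget: for a multiplier λ ≥ 0 one solves an ordinary MST under the combined edge key cost + λ·weight, then searches over λ. A heuristic sketch (bisection on λ, Kruskal with path-halving union-find, toy data of our invention; the thesis's exact scheme may differ):

```python
def kruskal(n, edges, key):
    # edges: list of (u, v, cost, weight); returns a spanning forest
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for e in sorted(edges, key=key):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def weight_constrained_mst(n, edges, W, iters=60):
    # Lagrangian heuristic: bisect the multiplier lam; a larger lam
    # penalizes weight more, trading cost for feasibility
    lo, hi, best = 0.0, 1e6, None
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        tree = kruskal(n, edges, key=lambda e: e[2] + lam * e[3])
        if sum(e[3] for e in tree) <= W:
            cost = sum(e[2] for e in tree)
            if best is None or cost < best[0]:
                best = (cost, tree)
            hi = lam          # feasible: try a smaller penalty
        else:
            lo = lam          # infeasible: penalize weight harder
    return best

# triangle where the cheapest tree (cost 2, weight 10) busts the budget W = 6
edges = [(0, 1, 1, 5), (1, 2, 1, 5), (0, 2, 3, 1)]
best = weight_constrained_mst(3, edges, W=6)
print(best[0])  # 4: one cheap heavy edge plus the light expensive one
```

The relaxation only yields a bound in general; on instances with a duality gap the returned tree can be suboptimal, which is why the thesis combines it with ranking and in/exclusion tests.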
Principle of minimal work fluctuations.
Xiao, Gaoyang; Gong, Jiangbin
2015-08-01
Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality ⟨e^{−βW}⟩ = e^{−βΔF}, a change in the fluctuations of e^{−βW} may impact how rapidly the statistical average of e^{−βW} converges towards the theoretical value e^{−βΔF}, where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e^{−βW}. In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e^{−βW}, where W is the quantum work defined by two energy measurements at the beginning and at the end of the process. In the classical domain where the classical work protocol is realizable by an adiabatic process, the classical adiabatic process also yields the minimal fluctuations in e^{−βW}. Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].
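The role of fluctuations in e^{−βW} is easy to probe numerically: for a toy Gaussian work distribution W ~ N(μ, σ²), the Jarzynski average has the closed form ΔF = μ − βσ²/2, and the sample estimator converges more slowly as σ grows. A Monte Carlo sketch (toy distribution of our choosing, not the paper's Landau-Zener protocol):

```python
import math
import random

random.seed(0)
beta, mu, sigma = 1.0, 2.0, 0.5
n = 200_000

# sample a Gaussian "work" distribution and form the Jarzynski estimator
works = [random.gauss(mu, sigma) for _ in range(n)]
jarzynski_avg = sum(math.exp(-beta * w) for w in works) / n
dF_estimate = -math.log(jarzynski_avg) / beta

dF_exact = mu - beta * sigma**2 / 2.0   # closed form for Gaussian work
print(dF_estimate, dF_exact)            # both close to 1.875
```

Note that rare low-W trajectories dominate the exponential average, which is exactly why suppressing fluctuations in e^{−βW} speeds up convergence.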
Risk minimization and portfolio diversification
Farzad Pourbabaee; Minsuk Kwak; Traian A. Pirvu
2014-01-01
We consider the problem of minimizing capital at risk in the Black-Scholes setting. The portfolio problem is studied given the possibility that a correlation constraint between the portfolio and a financial index is imposed. The optimal portfolio is obtained in closed form. The effects of the correlation constraint are explored; it turns out that this portfolio constraint leads to a more diversified portfolio.
Outcomes After Minimally Invasive Esophagectomy
Luketich, James D.; Pennathur, Arjun; Awais, Omar; Levy, Ryan M.; Keeley, Samuel; Shende, Manisha; Christie, Neil A.; Weksler, Benny; Landreneau, Rodney J.; Abbas, Ghulam; Schuchert, Matthew J.; Nason, Katie S.
2014-01-01
Background Esophagectomy is a complex operation and is associated with significant morbidity and mortality. In an attempt to lower morbidity, we have adopted a minimally invasive approach to esophagectomy. Objectives Our primary objective was to evaluate the outcomes of minimally invasive esophagectomy (MIE) in a large group of patients. Our secondary objective was to compare the modified McKeown minimally invasive approach (videothoracoscopic surgery, laparoscopy, neck anastomosis [MIE-neck]) with our current approach, a modified Ivor Lewis approach (laparoscopy, videothoracoscopic surgery, chest anastomosis [MIE-chest]). Methods We reviewed 1033 consecutive patients undergoing MIE. Elective operation was performed on 1011 patients; 22 patients with nonelective operations were excluded. Patients were stratified by surgical approach and perioperative outcomes analyzed. The primary endpoint studied was 30-day mortality. Results The MIE-neck was performed in 481 (48%) and MIE-Ivor Lewis in 530 (52%). Patients undergoing MIE-Ivor Lewis were operated on in the current era. The median number of lymph nodes resected was 21. The operative mortality was 1.68%. Median length of stay (8 days) and ICU stay (2 days) were similar between the 2 approaches. Mortality rate was 0.9%, and recurrent nerve injury was less frequent in the Ivor Lewis MIE group (P < 0.001). Conclusions MIE in our center resulted in acceptable lymph node resection, postoperative outcomes, and low mortality using either an MIE-neck or an MIE-chest approach. The MIE Ivor Lewis approach was associated with reduced recurrent laryngeal nerve injury and mortality of 0.9% and is now our preferred approach. Minimally invasive esophagectomy can be performed safely, with good results in an experienced center. PMID:22668811
Minimal Length, Measurability and Gravity
Directory of Open Access Journals (Sweden)
Alexander Shalyt-Margolin
2016-03-01
Full Text Available The present work is a continuation of the previous papers written by the author on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, first the consideration is given to a quantum theory in the momentum representation. The same terms are used to consider the Markov gravity model that here illustrates the general approach to studies of gravity in terms of measurable quantities.
Optimizing Processes to Minimize Risk
Loyd, David
2017-01-01
NASA, like other hazardous industries, has suffered very catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you can't eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.
BDD Minimization for Approximate Computing
Soeken, Mathias; Grosse, Daniel; Chandrasekharan, Arun; Drechsler, Rolf
2016-01-01
We present Approximate BDD Minimization (ABM) as a problem that has application in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation but meets a threshold w.r.t. an error metric. We present operators to derive approximated functions and present algorithms to exactly compute the error metrics directly on the BDD representation. An experimental evaluation demonstrates the app...
A NEW SMOOTHING APPROXIMATION METHOD FOR SOLVING BOX CONSTRAINED VARIATIONAL INEQUALITIES
Institute of Scientific and Technical Information of China (English)
Chang-feng Ma; Guo-ping Liang; Shao-peng Liu
2002-01-01
In this paper, we first give a smoothing approximation function of a nonsmooth system based on box constrained variational inequalities and then present a new smoothing approximation algorithm. Under suitable conditions, we show that the method is globally and superlinearly convergent. A few numerical results are also reported in the paper.
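A standard ingredient in such smoothing methods (not necessarily the authors' exact function) is the Chen-Harker-Kanzow-Smale smoothing of the plus function max(0, x), which replaces the nonsmooth projection underlying box constrained variational inequalities with a smooth approximation controlled by a parameter μ:

```python
import math

def smooth_plus(x, mu):
    # CHKS smoothing: phi_mu(x) = (x + sqrt(x^2 + 4 mu^2)) / 2
    # tends to max(0, x) as mu -> 0, but is differentiable for any mu > 0
    return (x + math.sqrt(x * x + 4.0 * mu * mu)) / 2.0

for x in (-2.0, 0.0, 2.0):
    print(x, smooth_plus(x, 1e-8))   # approaches max(0, x)
```

Driving μ to zero along the iteration is what lets Newton-type steps on the smoothed system converge superlinearly to a solution of the original nonsmooth one.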
Construction of the solution of variational equations for constrained Birkhoffian systems
Institute of Scientific and Technical Information of China (English)
张毅
2002-01-01
In this paper we present the variational equations of constrained Birkhoffian systems and study their solution. It is proven that, under some conditions, a particular solution of the variational equations can be obtained by using a first integral. At the end of the paper, an example is given to illustrate the application of the results.
Institute of Scientific and Technical Information of China (English)
Xiang-li Li; Hong-wei Liu; Chang-he Liu
2011-01-01
In this paper, by analyzing the properties of solutions of convex quadratic programming with nonnegative constraints, we propose a feasible decomposition method for constrained equations. Under mild conditions, global convergence can be obtained. The method is applied to complementarity problems. Numerical results are also given to show the efficiency of the proposed method.
Dark matter candidates in the constrained exceptional supersymmetric standard model
Athron, P.; Thomas, A. W.; Underwood, S. J.; White, M. J.
2017-02-01
The exceptional supersymmetric standard model is a low energy alternative to the minimal supersymmetric standard model (MSSM) with an extra U(1) gauge symmetry and three generations of matter filling complete 27-plet representations of E6. This provides both new D and F term contributions that raise the Higgs mass at tree level, and a compelling solution to the μ-problem of the MSSM by forbidding such a term with the extra U(1) symmetry. Instead, an effective μ-term is generated from the vacuum expectation value of an SM singlet which breaks the extra U(1) symmetry at low energies, giving rise to a massive Z'. We explore the phenomenology of the constrained version of this model in substantially more detail than has been carried out previously, performing a ten-dimensional scan that reveals a large volume of viable parameter space. We classify the different mechanisms for generating the measured relic density of dark matter found in the scan, including the identification of a new mechanism involving mixed bino/inert-Higgsino dark matter. We show which mechanisms can evade the latest direct detection limits from the LUX 2016 experiment. Finally we present benchmarks consistent with all the experimental constraints and which could be discovered with the XENON1T experiment.
Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model
Directory of Open Access Journals (Sweden)
Xiaoyang Zhou
2016-01-01
Full Text Available Hospital outpatient departments operate by selling fixed period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full time and part time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department optimally. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of the hired full time and part time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the decision of the appointment scheduling to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. Then we use a chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single level programming, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.
Grape Composition under Abiotic Constrains: Water Stress and Salinity.
Mirás-Avalos, José M; Intrigliolo, Diego S
2017-01-01
Water stress and increasing soil salt concentration represent the most common abiotic constraints that exert a negative impact on Mediterranean vineyards performance. However, several studies have proven that deficit irrigation strategies are able to improve grape composition. In contrast, irrigation with saline waters negatively affected yield and grape composition, although the magnitude of these effects depended on the cultivar, rootstock, phenological stage when water was applied, as well as on the salt concentration in the irrigation water. In this context, agronomic practices that minimize these effects on berry composition and, consequently, on wine quality must be achieved. In this paper, we briefly reviewed the main findings obtained regarding the effects of deficit irrigation strategies, as well as irrigation with saline water, on the berry composition of both red and white cultivars, as well as on the final wine. A meta-analysis was performed using published data for red and white varieties; a general linear model accounting for the effects of cultivar, rootstock, and midday stem water potential was able to explain up to 90% of the variability in the dataset, depending on the selected variable. In both red and white cultivars, berry weight, must titratable acidity and pH were fairly well simulated, whereas the goodness-of-fit for wine attributes was better for white cultivars.
Grape Composition under Abiotic Constrains: Water Stress and Salinity
Directory of Open Access Journals (Sweden)
José M. Mirás-Avalos
2017-05-01
Full Text Available Water stress and increasing soil salt concentration represent the most common abiotic constraints that exert a negative impact on the performance of Mediterranean vineyards. However, several studies have proven that deficit irrigation strategies are able to improve grape composition. In contrast, irrigation with saline waters negatively affects yield and grape composition, although the magnitude of these effects depends on the cultivar, the rootstock, the phenological stage when water was applied, and the salt concentration in the irrigation water. In this context, agronomic practices that minimize these effects on berry composition and, consequently, on wine quality must be devised. In this paper, we briefly review the main findings on the effects of deficit irrigation strategies, as well as irrigation with saline water, on the berry composition of both red and white cultivars and on the final wine. A meta-analysis was performed using published data for red and white varieties; a general linear model accounting for the effects of cultivar, rootstock, and midday stem water potential was able to explain up to 90% of the variability in the dataset, depending on the selected variable. In both red and white cultivars, berry weight, must titratable acidity and pH were fairly well simulated, whereas the goodness-of-fit for wine attributes was better for white cultivars.
Reservoir Operation to Minimize Sedimentation
Directory of Open Access Journals (Sweden)
Dyah Ari Wulandari
2013-10-01
Full Text Available The capacity of the Wonogiri Reservoir is decreasing rapidly because of serious sedimentation problems. In 2007, JICA proposed a sediment storage reservoir with a new spillway for sediment flushing/sluicing from the Keduang River. Because of the change in reservoir storage and in the reservoir system, a sustainable reservoir operation technique is required. This technique aims to minimize the deviation between the input and output of sediments. The main objective of this study is to explore optimal Wonogiri reservoir operation by minimizing the sediment trap. The CSUDP incremental dynamic programming procedure is used for the model optimization. The new operating rules are also simulated over a five-year operation period to show the effect of the implemented techniques. The results of the study show that the newly developed reservoir operation system has many advantages over the actual operation system; its disadvantage is that it is mainly designed for a wet hydrologic year, since its performance for water supply is lower than that of the actual reservoir operations. Doi: 10.12777/ijse.6.1.16-23 [How to cite this article: Wulandari, D.A., Legono, D., and Darsono, S., 2014. Reservoir Operation to Minimize Sedimentation. International Journal of Science and Engineering, 5(2), 61-65. Doi: 10.12777/ijse.6.1.16-23]
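The dynamic-programming step can be sketched with a toy discrete DP. This is not the CSUDP code itself: the integer storage states, the candidate releases, and the `s/(s+1)` trap-efficiency rule are illustrative assumptions only, standing in for the real reservoir routing and sediment model.

```python
def reservoir_dp(inflow, sed_load, s_max, releases, s0=0):
    """Toy dynamic program: choose a release each period to minimise trapped
    sediment, assuming trap efficiency grows with storage (longer residence
    time traps more of the incoming load). Integer storage states."""
    best = {s0: (0.0, [])}                 # state -> (min cost, release plan)
    for q, load in zip(inflow, sed_load):
        nxt = {}
        for s, (cost, plan) in best.items():
            for r in releases:
                s_new = s + q - r          # mass balance in volume steps
                if 0 <= s_new <= s_max:
                    trapped = load * s_new / (s_new + 1.0)  # toy trap efficiency
                    c = cost + trapped
                    if s_new not in nxt or c < nxt[s_new][0]:
                        nxt[s_new] = (c, plan + [r])
        best = nxt
    return min(best.values())
```

With inflow [2, 2] and full release allowed, the DP correctly discovers that passing all inflow through (plan [2, 2]) traps no sediment.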
Minimally invasive paediatric cardiac surgery.
Bacha, Emile; Kalfa, David
2014-01-01
The concept of minimally invasive surgery for congenital heart disease in paediatric patients is broad, and has the aim of reducing the trauma of the operation at each stage of management. Firstly, in the operating room using minimally invasive incisions, video-assisted thoracoscopic and robotically assisted surgery, hybrid procedures, image-guided intracardiac surgery, and minimally invasive cardiopulmonary bypass strategies. Secondly, in the intensive-care unit with neuroprotection and 'fast-tracking' strategies that involve early extubation, early hospital discharge, and less exposure to transfused blood products. Thirdly, during postoperative mid-term and long-term follow-up by providing the children and their families with adequate support after hospital discharge. Improvement of these strategies relies on the development of new devices, real-time multimodality imaging, aids to instrument navigation, miniaturized and specialized instrumentation, robotic technology, and computer-assisted modelling of flow dynamics and tissue mechanics. In addition, dedicated multidisciplinary co-ordinated teams involving congenital cardiac surgeons, perfusionists, intensivists, anaesthesiologists, cardiologists, nurses, psychologists, and counsellors are needed before, during, and after surgery to go beyond apparent technological and medical limitations with the goal to 'treat more while hurting less'.
DEFF Research Database (Denmark)
Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian
2013-01-01
Novelty search is a recent algorithm geared to explore search spaces without regard to objectives; minimal criteria novelty search is a variant of this algorithm for constrained search spaces. For large search spaces with multiple constraints, however, it is hard to find a set of feasible...... individuals that is both large and diverse. In this paper, we present two new methods of novelty search for constrained spaces, Feasible-Infeasible Novelty Search and Feasible-Infeasible Dual Novelty Search. Both algorithms keep separate populations of feasible and infeasible individuals, inspired by the FI-2...... diverse sets of feasible strategy game maps than existing algorithms. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. It is also shown that the proposed enhancement of offspring boosting increases performance in all cases....
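The two-population idea can be sketched in a few lines. This is a heavily simplified stand-in, assuming scalar genomes and absolute-difference distances; the actual genome representation, distance metric, selection, and the FI-2Pop-style interaction between populations that the abstract alludes to are richer than shown.

```python
import random

def novelty(ind, others, k=3):
    """Novelty score: mean distance to the k nearest neighbours."""
    dists = sorted(abs(ind - o) for o in others if o is not ind)
    return sum(dists[:k]) / max(1, min(k, len(dists)))

def fi_novelty_step(population, feasible, mutate):
    """One generation of two-population novelty search (sketch): feasible and
    infeasible individuals are evolved separately, each for novelty."""
    feas = [g for g in population if feasible(g)]
    infeas = [g for g in population if not feasible(g)]
    offspring = []
    for pool in (feas, infeas):            # evolve the two populations apart
        if not pool:
            continue
        scored = sorted(pool, key=lambda g: novelty(g, pool), reverse=True)
        parents = scored[:max(1, len(scored) // 2)]  # keep the most novel
        offspring += [mutate(random.choice(parents)) for _ in pool]
    return offspring
```

Keeping infeasible individuals alive (rather than discarding them) is what lets the search cross infeasible regions toward distant feasible areas.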
Constrained reaction volume approach for studying chemical kinetics behind reflected shock waves
Hanson, Ronald K.
2013-09-01
We report a constrained-reaction-volume strategy for conducting kinetics experiments behind reflected shock waves, achieved in the present work by staged filling in a shock tube. Using hydrogen-oxygen ignition experiments as an example, we demonstrate that this strategy eliminates the possibility of non-localized (remote) ignition in shock tubes. Furthermore, we show that this same strategy can also effectively eliminate or minimize pressure changes due to combustion heat release, thereby enabling quantitative modeling of the kinetics throughout the combustion event using a simple assumption of specified pressure and enthalpy. We measure temperature and OH radical time-histories during ethylene-oxygen combustion behind reflected shock waves in a constrained reaction volume and verify that the results can be accurately modeled using a detailed mechanism and a specified pressure and enthalpy constraint. © 2013 The Combustion Institute.
Box-constrained Total-variation Image Restoration with Automatic Parameter Estimation
Institute of Scientific and Technical Information of China (English)
HE Chuan; HU Chang-Hua; ZHANG Wei; SHI Biao
2014-01-01
The box constraints in image restoration have been attracting great attention, since the pixels of a digital image can attain only a finite number of values in a given dynamic range. This paper studies the box-constrained total-variation (TV) image restoration problem with automatic regularization parameter estimation. By adopting the variable splitting technique and introducing some auxiliary variables, the box-constrained TV minimization problem is decomposed into a sequence of subproblems which are easier to solve. The alternating direction method (ADM) is then adopted to solve the related subproblems. By means of Morozov's discrepancy principle, the regularization parameter can be updated adaptively in closed form in each iteration. Image restoration experiments indicate that with our strategies, more accurate solutions are achieved, especially for images with a high percentage of pixel values lying on the boundary of the given dynamic range.
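The role of the box constraint can be seen in a deliberately simplified 1-D sketch. The paper solves the problem with variable splitting and ADM plus Morozov's principle; the code below instead uses plain projected gradient descent on a smoothed TV with a fixed regularization parameter, just to show the interplay between the TV term and the box projection:

```python
import numpy as np

def box_tv_denoise(y, lam=0.5, lo=0.0, hi=1.0, step=0.01, iters=2000, eps=1e-3):
    """Projected-gradient sketch of box-constrained TV denoising (1-D).
    Minimises 0.5*||x - y||^2 + lam*TV_eps(x) subject to lo <= x <= hi,
    where TV_eps is a smoothed total variation sum(sqrt(dx^2 + eps))."""
    x = np.clip(y, lo, hi)
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)       # gradient of the smoothed |d|
        g = x - y                          # gradient of the data term
        g[:-1] -= lam * w                  # accumulate the TV gradient
        g[1:] += lam * w
        x = np.clip(x - step * g, lo, hi)  # gradient step, then box projection
    return x
```

Every iterate stays inside the admissible dynamic range because the projection (`np.clip`) is applied after each step.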
A gradient-constrained morphological filtering algorithm for airborne LiDAR
Li, Yong; Wu, Huayi; Xu, Hanwei; An, Ru; Xu, Jia; He, Qisheng
2013-12-01
This paper presents a novel gradient-constrained morphological filtering algorithm for bare-earth extraction from light detection and ranging (LiDAR) data. Based on the gradient feature points determined by morphological half-gradients, the potential object points are located prior to filtering. Innovative gradient-constrained morphological operations are created, which are executed only for the potential object points. Compared with the traditional morphological operations, the new operations reduce many meaningless operations for object removal and consequently decrease the possibility of losing terrain to a great extent. The applicability and reliability of this algorithm are demonstrated by evaluating the filtering performance for fifteen sample datasets in various complex scenes. The proposed algorithm is found to achieve a high level of accuracy compared with eight other filtering algorithms tested by the International Society for Photogrammetry and Remote Sensing. Moreover, the proposed algorithm has minimal error oscillation for different landscapes, which is important for quality control of digital terrain model generation.
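A toy 1-D version of morphological bare-earth filtering, with a fixed elevation tolerance standing in for the gradient constraint (the actual algorithm works on 2-D point clouds and applies its gradient-constrained operations only at half-gradient feature points):

```python
import numpy as np

def grey_erode(z, w):
    """Grey-scale erosion of 1-D elevations z with a flat window of width w."""
    pad = np.pad(z, w // 2, mode='edge')
    return np.min(np.lib.stride_tricks.sliding_window_view(pad, w), axis=1)

def morphological_ground(z, w=5, slope_tol=0.5):
    """Toy bare-earth filter: points close to the morphological opening are
    labelled ground; 'slope_tol' plays the role of the gradient constraint."""
    opened = -grey_erode(-grey_erode(z, w), w)   # opening = erosion, dilation
    return np.abs(z - opened) <= slope_tol
```

Objects narrower than the window (a building, a tree crown) are removed by the opening and therefore classified as non-ground, while terrain within the tolerance survives.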
A constrained backpropagation approach for the adaptive solution of partial differential equations.
Rudd, Keith; Di Muro, Gianluca; Ferrari, Silvia
2014-03-01
This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.
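The direct-elimination idea (removing the equality constraints by reparametrising onto the feasible set before the unconstrained solve) can be sketched on a linear least-squares analogue of the training problem; the linear setting is illustrative only, as CPROP applies this to neural network weights with PDE boundary and initial conditions:

```python
import numpy as np

def eliminate_and_solve(A, b, C, d):
    """Solve min ||A w - b||^2 subject to C w = d by direct elimination:
    parametrise the feasible set as w = w0 + N z (N spans the null space
    of C), then solve the reduced unconstrained problem for z."""
    w0, *_ = np.linalg.lstsq(C, d, rcond=None)        # particular solution
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                                    # null-space basis of C
    z, *_ = np.linalg.lstsq(A @ N, b - A @ w0, rcond=None)
    return w0 + N @ z                                  # constraints hold exactly
```

Because every candidate `w0 + N z` satisfies `C w = d` by construction, the constraints hold exactly at every iteration, and the optimization runs in a space of reduced dimension, which is precisely the benefit the abstract describes.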
Constrained regulator problem for linear uncertain systems: control of a pH process
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available The regulator problem for linear uncertain continuous-time systems having control constraints is considered. Necessary and sufficient conditions of positive invariance of polyhedral domains are extended to the case of continuous-time uncertain systems. Robust constrained regulators are then derived. An application to the control of pH in a stirred tank is then presented. First, the uncertainty in the pH process is evaluated from first-principle models, then the design of a robust constrained regulator is presented. Simulation results show that this control law is easy to implement and that robust asymptotic stability and control admissibility are guaranteed.
Constraining dark matter through 21-cm observations
Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.
2007-05-01
Beyond the reionization epoch, cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and by future facilities such as the Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.
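For reference, the differential brightness temperature that such observations probe is commonly written in the 21-cm literature (prefactors vary slightly between authors, and cosmology-dependent factors are omitted here) as

```latex
\delta T_b \simeq 27\, x_{\rm HI}\,(1+\delta)
\left(1 - \frac{T_{\rm CMB}}{T_S}\right)
\left(\frac{1+z}{10}\right)^{1/2} \, {\rm mK},
```

where $x_{\rm HI}$ is the neutral fraction, $\delta$ the gas overdensity and $T_S$ the spin temperature; DM energy injection enters by altering $T_S$ and $x_{\rm HI}$, which is how it imprints the ΔδTb variations quoted above.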
Constrained Metric Learning by Permutation Inducing Isometries.
Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle
2016-01-01
The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.
Constraining the halo mass function with observations
Castro, Tiago; Marra, Valerio; Quartin, Miguel
2016-12-01
The abundances of dark matter haloes in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behaviour through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper, we ask how well observations can constrain directly the HMF. The observables we consider are galaxy cluster number counts, galaxy cluster power spectrum and lensing of Type Ia supernovae. Our results show that Dark Energy Survey is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS (Javalambre-Physics of the Accelerated Universe Astrophysical Survey) can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the HMF. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.
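For reference, the HMF is usually written in the near-universal form

```latex
\frac{dn}{dM} \;=\; f(\sigma)\, \frac{\bar{\rho}_m}{M}\, \frac{d\ln \sigma^{-1}}{dM},
```

where $\sigma(M, z)$ is the rms of the linear density field smoothed on mass scale $M$ and $\bar{\rho}_m$ is the mean matter density; "near-universality" means the multiplicity function $f(\sigma)$ is approximately the same across cosmologies, which is the cosmology-independence the abstract refers to.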
Changes in epistemic frameworks: Random or constrained?
Directory of Open Access Journals (Sweden)
Ananka Loubser
2012-11-01
Full Text Available Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science three main approaches to framework-change were detected in the humanist tradition:
1. In both the pre-theoretical and theoretical domains changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton).
2. Changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard).
3. Between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos).
Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, as well as providing transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern, neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.
Constraining the braneworld with gravitational wave observations
McWilliams, Sean T
2009-01-01
Braneworld models containing large extra dimensions may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model, the asymptotic AdS radius of curvature of the extra dimension supports a single bound state of the massless graviton on the brane, thereby avoiding gross violations of Newton's law. However, one possible consequence of this model is an enormous increase in the amount of Hawking radiation emitted by black holes. This consequence has been employed by other authors to attempt to constrain the AdS radius of curvature through the observation of black holes. I present two novel methods for constraining the AdS curvature. The first method results from the effect of this enhanced mass loss on the event rate for extreme mass ratio inspirals (EMRIs) detected by the space-based LISA interferometer. The second method results from the observation of an individually resolvable galactic black hole binary with LISA. I show that the ...
Constraining the Braking Indices of Magnetars
Gao, Z F; Wang, N; Yuan, J P; Peng, Q H; Du, Y J
2015-01-01
Due to the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index $n$ of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of $n$ of nine magnetars with SNRs, and find that they cluster in a range of $1\sim41$. Six magnetars have smaller braking indices of $1<n<3$, while the larger braking indices of $n>3$ for the other three magnetars are attributed to the decay of the external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with $1<n<3$ within the updated magneto-thermal evolution models. We point out that there could be some connections between a magnetar's anti-glitch event and its braking index, and the magnitude of $n$ should be taken into account when explaining the event. Although the constrained range of the magnetars' braking indices is tentative, our method provides an effective way to constrain the magnetars' braking indices if th...
Constraining the mass of the Local Group
Carlesi, Edoardo; Sorce, Jenny G; Gottlöber, Stefan
2016-01-01
The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter, which cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the LCDM model that is used to set up the simulations, and an LG model, which encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted onto the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different $v_{tan}$ choices affect the peak mass values up to a factor of 2, and change mass ratios of $M_{M31}$ to $M_{M...
Nonstationary sparsity-constrained seismic deconvolution
Sun, Xue-Kai; Sam, Zandong Sun; Xie, Hui-Wen
2014-12-01
The Robinson convolution model is mainly restricted by three inappropriate assumptions, i.e., statistically white reflectivity, minimum-phase wavelet, and stationarity. Modern reflectivity inversion methods (e.g., sparsity-constrained deconvolution) generally attempt to suppress the problems associated with the first two assumptions but often ignore that seismic traces are nonstationary signals, which undermines the basic assumption of an unchanging wavelet in reflectivity inversion. Through tests on reflectivity series, we confirm the effects of nonstationarity on reflectivity estimation and the loss of significant information, especially in deep layers. To overcome the problems caused by nonstationarity, we propose a nonstationary convolutional model, and then use the attenuation curve in log spectra to detect and correct the influences of nonstationarity. We use Gabor deconvolution to handle nonstationarity and sparsity-constrained deconvolution to separate the reflectivity and the wavelet. The combination of the two deconvolution methods effectively handles nonstationarity and greatly reduces the problems associated with the unreasonable assumptions regarding the reflectivity and the wavelet. Using marine seismic data, we show that correcting for nonstationarity helps recover subtle reflectivity information and enhances the characterization of details with respect to the geological record.
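Sparsity-constrained deconvolution of the stationary kind mentioned above can be sketched with ISTA on an explicit convolution matrix. The wavelet, regularization weight, and iteration count below are illustrative only, and the nonstationary Gabor step of the paper is not shown:

```python
import numpy as np

def ista_deconvolve(trace, wavelet, lam=0.1, iters=200):
    """ISTA sketch of sparsity-constrained deconvolution:
    min 0.5*||W r - trace||^2 + lam*||r||_1, W = convolution by 'wavelet'."""
    n = len(trace)
    W = np.zeros((n, n))
    for i in range(n):                       # build the convolution matrix
        for j in range(len(wavelet)):
            if i + j < n:
                W[i + j, i] = wavelet[j]
    step = 1.0 / np.linalg.norm(W, 2) ** 2   # 1/L for the data-fit term
    r = np.zeros(n)
    for _ in range(iters):
        g = r - step * (W.T @ (W @ r - trace))
        r = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return r
```

The l1 penalty drives most reflectivity samples exactly to zero, recovering a sparse spike train rather than the smeared least-squares solution.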
Constraining MHD Disk-Winds with X-ray Absorbers
Fukumura, Keigo; Tombesi, F.; Shrader, C. R.; Kazanas, D.; Contopoulos, J.; Behar, E.
2014-01-01
From state-of-the-art spectroscopic observations of active galactic nuclei (AGNs), the robust features of absorption lines (most notably by H/He-like ions), called warm absorbers (WAs), have often been detected in soft X-rays, along with ultra-fast outflows (UFOs) whose physical condition is much more extreme compared with the WAs. Motivated by these recent X-ray data, we show that the magnetically-driven accretion-disk wind model is a plausible scenario to explain the characteristic properties of these X-ray absorbers. As a preliminary case study we demonstrate that the wind model parameters (e.g. viewing angle and wind density) can be constrained by data from PG 1211+143 at a statistically significant level with chi-squared spectral analysis. Our wind models can thus be implemented into the standard analysis package, XSPEC, as a table spectrum model for general analysis of X-ray absorbers.
Constraining the Milky Way potential using the dynamical kinematic substructures
Directory of Open Access Journals (Sweden)
Antoja T.
2012-02-01
Full Text Available We present a method to constrain the potential of the non-axisymmetric components of the Galaxy using the kinematics of stars in the solar neighborhood. The basic premise is that dynamical substructures in phase-space (i.e. due to the bar and/or spiral arms) are associated with families of periodic or irregular orbits, which may be easily identified in orbital frequency space. We use the “observed” positions and velocities of stars as initial conditions for orbital integrations in a variety of gravitational potentials. We then compute their characteristic frequencies, and study the structure present in the frequency maps. We find that the distribution of dynamical substructures in velocity- and frequency-space is best preserved when the integrations are performed in the “true” gravitational potential.
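The frequency-map construction rests on extracting characteristic frequencies from integrated orbits. A minimal sketch uses a plain FFT peak, whereas production codes typically use refined frequency-analysis methods (e.g. NAFF-style algorithms):

```python
import numpy as np

def dominant_frequency(x, dt):
    """Characteristic frequency of a coordinate time series via the FFT peak.
    'x' is one coordinate of an integrated orbit sampled every 'dt'."""
    spec = np.abs(np.fft.rfft(x - x.mean()))   # remove the mean, take spectrum
    freqs = np.fft.rfftfreq(len(x), dt)
    return freqs[np.argmax(spec)]
```

Computing such frequencies for each star's orbit, in each trial potential, and plotting frequency ratios is what builds the frequency maps in which orbit families appear as distinct clumps.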
Correspondence between constrained transport and vector potential methods for magnetohydrodynamics
Mocz, Philip
2017-01-01
We show that one can formulate second-order field- and flux-interpolated constrained transport/central difference (CT/CD) type methods as cell-centered magnetic vector potential schemes. We introduce four vector potential CTA/CDA schemes - three of which correspond to CT/CD methods of Tóth (2000) [1] and one of which is a new simple flux-CT-like scheme - where the centroidal vector potential is the primal update variable. These algorithms conserve a discretization of the ∇ · B = 0 condition to machine precision and may be combined with shock-capturing Godunov type base schemes for magnetohydrodynamics. Recasting CT in terms of a centroidal vector potential allows for some simple generalizations of divergence-preserving methods to unstructured meshes, and potentially new directions to generalize CT schemes to higher-order.
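The key property, a discretization of ∇ · B = 0 conserved to machine precision, follows from differencing a vector potential: a minimal 2-D sketch (the corner-centred layout here is illustrative, not the paper's exact staggering) shows the discrete divergence telescoping to zero for any A:

```python
import numpy as np

rng = np.random.default_rng(0)
nx = ny = 16
Az = rng.standard_normal((nx + 1, ny + 1))   # vector potential at cell corners

# Face-centred fields from discrete curls of Az (unit cell spacing):
Bx = Az[:, 1:] - Az[:, :-1]      # Bx on x-faces: d(Az)/dy, shape (nx+1, ny)
By = -(Az[1:, :] - Az[:-1, :])   # By on y-faces: -d(Az)/dx, shape (nx, ny+1)

# Discrete divergence per cell: the four corner values cancel in pairs,
# so it vanishes to machine precision regardless of Az.
divB = (Bx[1:, :] - Bx[:-1, :]) + (By[:, 1:] - By[:, :-1])
```

Because B is always derived from A this way, any update applied to A (rather than to B directly) automatically preserves the divergence-free condition, which is the essence of vector-potential CT.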
A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics
Mocz, Philip; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars
2016-01-01
We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine-precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code Arepo. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this co...
Predictive Terminal Guidance With Tuning of Prediction Horizon & Constrained Control.
Directory of Open Access Journals (Sweden)
S. E. Talole
2000-07-01
Full Text Available A continuous time-predictive control approach is employed to formulate an output-tracking, nonlinear, optimal terminal guidance law for re-entry vehicles. The notable features of this formulation are that the system equations are not linearised and that the evaluation of the guidance equations does not need the information of vehicle parameters, such as drag and mass. The formulation allows one to impose physical constraints on the control inputs, i.e. on the demanded lateral accelerations, through a saturation mapping, and the controls are obtained using a fixed-point iteration algorithm which typically converges in a few iterations. Further, a simple method of tuning the prediction horizon needed in the guidance equations is presented. Numerical simulations show that the guidance law achieves almost zero terminal errors in all states despite large errors in initial conditions.
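The saturation mapping plus fixed-point iteration can be sketched as follows; the scalar implicit law `g(u)` used here is a made-up stand-in for the actual guidance equations:

```python
def sat(u, umax):
    """Saturation mapping onto the admissible control set [-umax, umax]."""
    return max(-umax, min(umax, u))

def constrained_control(g, umax, u0=0.0, tol=1e-10, max_iter=50):
    """Fixed-point iteration u <- sat(g(u)) for an implicitly defined control
    law; converges in a few iterations when g is a contraction."""
    u = u0
    for k in range(max_iter):
        u_new = sat(g(u), umax)
        if abs(u_new - u) < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter

# Hypothetical implicit demanded acceleration: g(u) = 3.0 - 0.4 * u.
# Its unconstrained fixed point (~2.14) exceeds umax, so the iteration
# settles on the saturation limit instead.
u, iters = constrained_control(lambda u: 3.0 - 0.4 * u, umax=2.0)
```

Applying the saturation inside the iteration, rather than clipping afterwards, is what lets the converged control respect the lateral-acceleration limit while remaining consistent with the implicit guidance relation.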
Stabilizing model predictive control for constrained nonlinear distributed delay systems.
Mahboobi Esfanjani, R; Nikravesh, S K Y
2011-04-01
In this paper, a model predictive control scheme with guaranteed closed-loop asymptotic stability is proposed for a class of constrained nonlinear time-delay systems with discrete and distributed delays. A suitable terminal cost functional and an appropriate terminal region are utilized to achieve asymptotic stability. To determine the terminal cost, a locally asymptotically stabilizing controller is designed and an appropriate Lyapunov-Krasovskii functional of the locally stabilized system is employed as the terminal cost. Furthermore, an invariant set for the locally stabilized system, established by using the Razumikhin Theorem, is used as the terminal region. Simple conditions are derived to obtain the terminal cost and terminal region in terms of Bilinear Matrix Inequalities. The method is illustrated by a numerical example.