WorldWideScience

Sample records for articulatorily constrained maximum

  1. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
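
    Since the record turns on what an HMM computes, a minimal sketch of HMM likelihood evaluation (the forward algorithm) may help; this is a generic illustration in our notation, not code from the report:

      import numpy as np

      def forward_log_likelihood(log_A, log_B, log_pi, obs):
          """Log-likelihood of a discrete observation sequence under an HMM.

          log_A:  (S, S) log transition matrix, log_A[i, j] = log P(state j | state i)
          log_B:  (S, V) log emission matrix,   log_B[i, k] = log P(symbol k | state i)
          log_pi: (S,)   log initial state distribution
          obs:    sequence of observation symbol indices
          """
          alpha = log_pi + log_B[:, obs[0]]      # forward variable at t = 0
          for o in obs[1:]:
              # marginalize over the previous state, then emit the next symbol
              alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
          return np.logaddexp.reduce(alpha)      # sum over the final state

    Recognition then amounts to scoring competing word models this way and picking the highest; the author's point is that this state-jump structure has little in common with the smooth motion of real articulators.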

  2. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
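
    In rough symbols (our paraphrase of the continuity-mapping idea, not an equation from the report), Malcom learns a map \Theta from acoustic codes to positions in a low-dimensional continuity space by solving

      \hat{\Theta} = \arg\max_{\Theta}\ \max_{x(\cdot)\ \mathrm{smooth}}\ \prod_t P\bigl(c_t \mid x(t); \Theta\bigr),

    i.e. it chooses the code-to-position mapping that makes the observed code sequences c_t most probable under the constraint that the latent (pseudo-articulatory) path x(t) moves smoothly; as the abstract emphasizes, only acoustic data enter the training.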

  3. Regions of constrained maximum likelihood parameter identifiability

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1975-01-01

    This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  4. Resource-constrained maximum network throughput on space networks

    Institute of Scientific and Technical Information of China (English)

    Yanling Xing; Ning Ge; Youzheng Wang

    2015-01-01

    This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear program (MILP) is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, and can be used to obtain the optimal network throughput.
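
    As an illustrative sketch only (the notation below is ours; the record does not reproduce its formulation), a time-expanded throughput maximization of this kind typically has the shape

      \max \sum_k d_k \quad \text{s.t.} \quad
      \sum_{e \in \delta^-(v,t)} x_e^k + b_{v,t-1}^k = \sum_{e \in \delta^+(v,t)} x_e^k + b_{v,t}^k, \qquad
      \sum_k x_e^k \le c_e, \qquad \sum_k b_{v,t}^k \le S_v,

    where x_e^k is the data of task k sent over contact e, b_{v,t}^k is the data buffered at node v in time slot t, c_e is the contact capacity, S_v the storage limit, and each delivered amount d_k must arrive within its delay bound; integrality of selected variables makes the model a MILP.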

  5. Exploring the Constrained Maximum Edge-weight Connected Graph Problem

    Institute of Scientific and Technical Information of China (English)

    Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen

    2009-01-01

    Given an edge-weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and maximal weight sum. Here we study a special case, the Constrained Maximum Edge-Weight Connected Graph problem (CMECG), an MECG whose candidate subgraphs must include a given set of k edges, also called the k-CMECG. We formulate the k-CMECG as an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose both an exact algorithm and a heuristic algorithm. We also propose a heuristic algorithm for the general k-CMECG problem. Simulations have been carried out to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem also leads to a solution of the general MECG problem.
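
    The record does not reproduce the model, so the following schematic ILP (our notation) only indicates its shape: binary variables x_e select edges, the given set F with |F| = k is forced in, and connectivity is imposed through auxiliary flow variables:

      \max \sum_{e \in E} w_e x_e \quad \text{s.t.} \quad \sum_{e \in E} x_e = m, \qquad x_e = 1 \ \ \forall e \in F,
      \text{(single-commodity flow constraints forcing the selected edges to form one connected subgraph)}, \qquad x_e \in \{0,1\}.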

  6. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.

  7. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays.

    Science.gov (United States)

    Trucco, Andrea; Traverso, Federico; Crocco, Marco

    2015-01-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.

  8. A MAXIMUM ENTROPY METHOD FOR CONSTRAINED SEMI-INFINITE PROGRAMMING PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    ZHOU Guanglu; WANG Changyu; SHI Zhenjun; SUN Qingying

    1999-01-01

    This paper presents a new method, called the maximum entropy method, for solving semi-infinite programming problems, in which the semi-infinite programming problem is approximated by one with a single constraint. The convergence properties of this method are discussed. Numerical examples are given to show the high efficiency of the algorithm.
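
    For orientation, the single-constraint approximation used by maximum entropy (aggregate function) methods for semi-infinite programs is standard, though the paper's exact variant may differ in details: the family of constraints g(x,t) \le 0 for all t \in T is replaced by the smooth surrogate

      G_p(x) = \frac{1}{p} \ln \int_T \exp\bigl( p\, g(x,t) \bigr)\, \mathrm{d}t \ \le\ 0,

    which converges to \max_{t \in T} g(x,t) as p \to \infty, so solving with a large p approximates the original semi-infinite problem.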

  9. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    Science.gov (United States)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped; therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method, called ML-MM, that enables us to establish a modal model satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars; this type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  10. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that, in the special case introduced by Wagner, PME does not contradict JUP but elegantly generalizes it and offers a more integrated approach to probability updating.
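
    For readers who want the two principles side by side (standard statements, not quotations from the article): PME selects

      p^* = \arg\max_p \Bigl( -\sum_i p_i \log p_i \Bigr) \quad \text{subject to the given constraints},

    while Jeffrey's updating principle redistributes belief over a partition \{E_i\} whose new probabilities are q_i:

      P_{\mathrm{new}}(A) = \sum_i P(A \mid E_i)\, q_i.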

  11. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    Science.gov (United States)

    Nguyen, Q. H.; Choi, S. B.

    2012-01-01

    This research focuses on the optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-type, and T-shaped brakes are considered. The optimization problem is to find the optimal values of the significant geometric dimensions of the MRB that produce the maximum braking torque, with the MRB constrained in a cylindrical volume of specified radius and length. After a brief description of the configuration of the MRB types, the braking torques of the MRBs are derived based on the Herschel-Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of the maximum braking torque to the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions for the MRBs. Optimal solutions for MRBs constrained in different volumes are obtained with the proposed optimization procedure. From the results, the optimal selection of MRB types for different constrained volumes is discussed.
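
    For background, the Herschel-Bulkley law and the torque integral of, e.g., a disc-type MRB take the standard forms (the symbols are ours; the paper's derivations are per brake type):

      \tau = \tau_y(H) + K \dot{\gamma}^{\,n}, \qquad T \approx 2\pi \int_{r_i}^{r_o} \tau(r)\, r^2\, \mathrm{d}r,

    with field-dependent yield stress \tau_y(H), consistency K, flow index n, and an active annulus between radii r_i and r_o; the optimization searches the geometric dimensions entering such integrals subject to the cylindrical volume constraint.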

  12. The maximum glitch observed in a pulsar systematically constrains its mass

    CERN Document Server

    Pizzochero, Pierre; Haskell, Brynmor; Seveso, Stefano

    2016-01-01

    Pulsar glitches, sudden jumps in frequency in otherwise steadily spinning-down radio pulsars, offer a unique glimpse into the superfluid interior of neutron stars. The exact trigger of these events remains, however, elusive, and this has hampered attempts to use glitch observations to constrain fundamental physics. In this paper we propose a new method to measure the mass of glitching pulsars, using observations of the maximum glitch recorded in a star together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. Studying systematically all the presently observed large glitchers, we find an inverse correlation between the size of the maximum glitch and the pulsar mass. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the equation of state of dense matter in neutron star interiors.

  13. Maximum entropy production: can it be used to constrain conceptual hydrological models?

    Directory of Open Access Journals (Sweden)

    M. C. Westhoff

    2013-08-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in the literature, generally little guidance has been given on how to apply the principle. The aim of this paper is to use the maximum power principle, which is closely related to MEP, to constrain the parameters of a simple conceptual (bucket) model. Although we had to conclude that conceptual bucket models could not be constrained with respect to maximum power, this study sheds more light on how to use, and how not to use, the principle. Several of these points have been applied correctly in other studies, but have not been explained or discussed as such. While other studies were based on resistance formulations, where the quantity to be optimized is a linear function of the resistance to be identified, our study shows that the approach also works for formulations that are only linear in the log-transformed space. Moreover, we showed that parameters describing process thresholds or influencing boundary conditions cannot be constrained. We furthermore conclude that, in order to apply the principle correctly, (1) the model should be physically based, i.e. fluxes should be defined as a gradient divided by a resistance; (2) the optimized flux should have a feedback on the gradient, i.e. the influence of boundary conditions on gradients should be minimal; (3) the temporal scale of the model should be chosen in such a way that the parameter being optimized is constant over the modelling period; (4) only when the correct feedbacks are implemented can the fluxes be correctly optimized; and (5) there should be a trade-off between two or more fluxes. Although our application of the maximum power principle did
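
    Condition (1) above can be made concrete with a minimal sketch (our notation): a flux driven by a gradient through a resistance,

      F = \frac{\Delta\psi}{R}, \qquad P = F\,\Delta\psi,

    where maximum power selects the resistance R at which the power P is largest once the feedback of the flux F on the gradient \Delta\psi is accounted for; without that feedback (condition (2)), P changes monotonically with R and no interior optimum exists.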

  14. Regularization of constrained maximum likelihood iterative algorithms by means of statistical stopping rule

    CERN Document Server

    Benvenuto, Federico

    2012-01-01

    In this paper we propose a new statistical stopping rule for constrained maximum likelihood iterative algorithms applied to ill-posed inverse problems. To this aim we extend the definition of Tikhonov regularization to a statistical framework and prove that the application of the proposed stopping rule to the Iterative Space Reconstruction Algorithm (ISRA) in the Gaussian case, and to Expectation Maximization (EM) in the Poisson case, leads to well-defined regularization methods according to the given definition. We also prove that, if an inverse problem is genuinely ill-posed in the sense of Tikhonov, the same definition is not satisfied when ISRA and EM are stopped by classical rules such as Morozov's discrepancy principle, Pearson's test, and the Poisson discrepancy principle. The stopping rule is illustrated in the case of image reconstruction from data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). First, by using a simulated image consisting of structures analogous to those ...

  15. Modeling words with subword units in an articulatorily constrained speech recognition algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1997-11-20

    The goal of speech recognition is to find the most probable word given the acoustic evidence, i.e. a string of VQ codes or acoustic features. Speech recognition algorithms typically take advantage of the fact that the probability of a word, given a sequence of VQ codes, can be calculated.
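
    In symbols (the standard Bayes decision rule, which the record alludes to rather than states):

      \hat{w} = \arg\max_w P(w \mid c_1^T) = \arg\max_w P(c_1^T \mid w)\, P(w),

    so modeling a word with subword units amounts to supplying P(c_1^T \mid w), the probability of a VQ code string given the word, built from the probabilities of its subword units.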

  16. Regions of constrained maximum likelihood parameter identifiability. [of discrete-time nonlinear dynamic systems with white measurement errors

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1976-01-01

    This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.

  17. Improving prediction of hydraulic conductivity by constraining capillary bundle models to a maximum pore size

    Science.gov (United States)

    Iden, Sascha C.; Peters, Andre; Durner, Wolfgang

    2015-11-01

    The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e. continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g. of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
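
    Schematically (our rendering of the general idea, not the authors' closed-form expressions), the Mualem prediction

      K_r(S_e) = S_e^{\,l} \left[ \int_0^{S_e} \frac{\mathrm{d}x}{h(x)} \Big/ \int_0^{1} \frac{\mathrm{d}x}{h(x)} \right]^2

    is modified by admitting capillaries only up to a maximum radius r_{max}, i.e. cutting the pore-bundle integrals off at the corresponding minimum suction given by the Young-Laplace relation h_{min} = 2\sigma\cos\beta/(\rho g\, r_{max}), while the retention curve h(x) itself keeps its original, continuously differentiable form.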

  18. Reconstructing the Last Glacial Maximum ice sheet in the Weddell Sea embayment, Antarctica, using numerical modelling constrained by field evidence

    Science.gov (United States)

    Le Brocq, A. M.; Bentley, M. J.; Hubbard, A.; Fogwill, C. J.; Sugden, D. E.; Whitehouse, P. L.

    2011-09-01

    The Weddell Sea Embayment (WSE) sector of the Antarctic ice sheet has been suggested as a potential source for a period of rapid sea-level rise, Meltwater Pulse 1a, a 20 m rise in ~500 years. Previous modelling attempts have predicted an extensive grounding line advance in the WSE, to the continental shelf break, leading to a large equivalent sea-level contribution for the sector. A range of recent field evidence suggests that the ice sheet elevation change in the WSE at the Last Glacial Maximum (LGM) was smaller than previously thought. This paper describes and discusses a reconstruction of the LGM ice sheet in the WSE derived from ice flow modelling and constrained by the recent field evidence. The ice flow model reconstructions suggest that an ice sheet consistent with the field evidence does not support grounding line advance to the continental shelf break. A range of modelled ice sheet surfaces are instead produced, with different grounding line locations derived from a novel grounding line advance scheme. The ice sheet reconstructions that best fit the field constraints lead to a range of equivalent eustatic sea-level estimates between approximately 1.4 and 3 m for this sector. This paper describes the modelling procedure in detail and considers the assumptions and limitations associated with the modelling approach, and how the uncertainty may affect the eustatic sea-level equivalent results for the WSE.

  19. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources.

    Science.gov (United States)

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J

    2016-03-01

    Information from various public and private data sources of extremely large sample sizes are now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
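
    In schematic form (our notation), the proposed estimator maximizes the internal-study likelihood subject to constraints linking the full model parameters \beta to the external reduced-model parameters \theta^*:

      \hat{\beta} = \arg\max_{\beta} \sum_{i=1}^{n} \log f(y_i \mid x_i; \beta) \quad \text{s.t.} \quad \mathbb{E}\bigl[ u(x, y; \theta^*, \beta) \bigr] = 0,

    where u is, for example, the score of the reduced model evaluated at the reported \theta^*, and the expectation is taken over a covariate distribution estimated from the internal sample or an external reference sample, as the abstract describes.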

  20. Reconstructing the contribution of the Weddell Sea sector, Antarctica, to sea level rise since the last glacial maximum, using numerical modelling constrained by field evidence.

    Science.gov (United States)

    Le Brocq, A.; Bentley, M.; Hubbard, A.; Fogwill, C.; Sugden, D.

    2008-12-01

    A numerical ice sheet model constrained by recent field evidence is employed to reconstruct the Last Glacial Maximum (LGM) ice sheet in the Weddell Sea Embayment (WSE). Previous modelling attempts have predicted an extensive grounding line advance (to the continental shelf break) in the WSE, leading to a large equivalent sea level contribution for the sector. The sector has therefore been considered as a potential source for a period of rapid sea level rise (MWP1a, 20 m rise in ~500 years). Recent field evidence suggests that the elevation change in the Ellsworth mountains at the LGM is lower than previously thought (~400 m). The numerical model applied in this paper suggests that a 400 m thicker ice sheet at the LGM does not support such an extensive grounding line advance. A range of ice sheet surfaces, resulting from different grounding line locations, lead to an equivalent sea level estimate of 1 - 3 m for this sector. It is therefore unlikely that the sector made a significant contribution to sea level rise since the LGM, and in particular to MWP1a. The reduced ice sheet size also has implications for the correction of GRACE data, from which Antarctic mass balance calculations have been derived.

  1. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario.

    Science.gov (United States)

    Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai

    2014-07-07

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.

  2. Maximum Fidelity

    CERN Document Server

    Kinkhabwala, Ali

    2013-01-01

    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: how concordant is this distribution with the observed data? (3) Uncertainty: how concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

  3. Technical Note: Methods for interval constrained atmospheric inversion of methane

    Directory of Open Access Journals (Sweden)

    J. Tang

    2010-08-01

    Three interval-constrained methods, namely the interval-constrained Kalman smoother, the interval-constrained maximum likelihood ensemble smoother, and the interval-constrained ensemble Kalman smoother, are developed to conduct inversions of the atmospheric trace gas methane (CH4). The negative flux values of an unconstrained inversion are avoided in the constrained inversions. In a multi-year inversion experiment using pseudo-observations derived from a forward transport simulation with known fluxes, the interval-constrained fixed-lag Kalman smoother gives the best results, followed by the interval-constrained fixed-lag ensemble Kalman smoother and the interval-constrained maximum likelihood ensemble smoother. Consistent uncertainties are obtained for the posterior fluxes with these three methods. This study provides alternatives to the variable transform method for dealing with interval constraints in atmospheric inversions.
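
    One standard way to realize such interval constraints in Kalman-type smoothers, consistent with the methods listed above (a sketch in our notation, not necessarily the paper's exact scheme), is to project the unconstrained estimate \hat{x} with covariance P onto the feasible box:

      \hat{x}_c = \arg\min_x\, (x - \hat{x})^{\mathsf{T}} P^{-1} (x - \hat{x}) \quad \text{s.t.} \quad l \le x \le u,

    a small quadratic program that, in this application, prevents the estimated CH4 fluxes from going negative.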

  4. The inverse maximum dynamic flow problem

    Institute of Scientific and Technical Information of China (English)

    Bagherian, Mehri

    2010-01-01

    We consider the inverse maximum dynamic flow (IMDF) problem, which can be described as follows: how should the capacity vector of a dynamic network be changed as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow? After discussing some characteristics of this problem, it is converted into a constrained minimum dynamic cut problem. An efficient algorithm that uses two maximum dynamic flow algorithms is then proposed to solve the problem.

  5. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete, and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining much popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  6. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  7. Constrained optimization using CODEQ

    Energy Technology Data Exchange (ETDEWEB)

    Omran, Mahamed G.H. [Department of Computer Science, Gulf University for Science and Technology, P.O. Box 7207, Hawally 32093 (Kuwait)], E-mail: omran.m@gust.edu.kw; Salman, Ayed [Computer Engineering Department, Kuwait University, P.O. Box 5969, Safat 13060 (Kuwait)], E-mail: ayed@eng.kuniv.edu.kw

    2009-10-30

    Many real-world optimization problems are constrained problems that involve equality and inequality constraints. CODEQ is a new, parameter-free meta-heuristic algorithm that is a hybrid of concepts from chaotic search, opposition-based learning, differential evolution and quantum mechanics. The performance of the proposed approach when applied to five constrained benchmark problems is investigated and compared with other approaches proposed in the literature. The experiments conducted show that CODEQ provides excellent results with the added advantage of no parameter tuning.

  8. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called "de Sitter" supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  9. Maximum Autocorrelation Factorial Kriging

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.

    2000-01-01

    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...

  10. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest;

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes......, the results are compatible with the data and, at the same time, favor sharp transitions. The focusing strategy can also be used to constrain the 1D solutions laterally, guaranteeing that lateral sharp transitions are retrieved without losing resolution. By means of real and synthetic datasets, sharp...
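
    For reference, the minimum gradient support stabilizer referred to above is usually written in the following standard form from the focusing-inversion literature (not quoted from this abstract):

      \phi_{\mathrm{MGS}}(m) = \int \frac{\nabla m \cdot \nabla m}{\nabla m \cdot \nabla m + \beta^2}\, \mathrm{d}v,

    which penalizes the volume where the model gradient is nonzero rather than the gradient magnitude itself, so that for a small focusing parameter \beta the inversion favors piecewise-constant (sharp) models.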

  11. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q...... R-tree based algorithms for computing RCJ, by exploiting the characteristics of the geometric constraint. We evaluate experimentally the efficiency of our methods on synthetic and real spatial datasets. The results show that our proposed algorithms scale well with the data size and have robust...
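
    Since the smallest circle enclosing two points p and q is the circle with the segment pq as diameter, the RCJ condition reduces to an empty-circle test. A naive reference predicate (ours, for illustration only; the paper's R-tree algorithms exist precisely to avoid this scan per pair):

      def rcj_pair_ok(p, q, points):
          """True if the circle with diameter pq (the smallest circle
          enclosing p and q) contains no other point strictly inside;
          `points` should be the union of the sets P and Q."""
          cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0     # centre of pq
          r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4.0  # squared radius
          for s in points:
              if s == p or s == q:
                  continue
              if (s[0] - cx) ** 2 + (s[1] - cy) ** 2 < r2:      # strictly inside
                  return False
          return True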

  12. Constraining entropic cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Koivisto, Tomi S. [Institute for Theoretical Physics and the Spinoza Institute, Utrecht University, Leuvenlaan 4, Postbus 80.195, 3508 TD Utrecht (Netherlands); Mota, David F. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zumalacárregui, Miguel, E-mail: t.s.koivisto@uu.nl, E-mail: d.f.mota@astro.uio.no, E-mail: miguelzuma@icc.ub.edu [Institute of Cosmos Sciences (ICC-IEEC), University of Barcelona, Marti i Franques 1, E-08028 Barcelona (Spain)

    2011-02-01

    It has been recently proposed that the interpretation of gravity as an emergent, entropic phenomenon might have nontrivial implications to cosmology. Here several such approaches are investigated and the underlying assumptions that must be made in order to constrain them by the BBN, SneIa, BAO and CMB data are clarified. Present models of inflation or dark energy are ruled out by the data. Constraints are derived on phenomenological parameterizations of modified Friedmann equations and some features of entropic scenarios regarding the growth of perturbations, the no-go theorem for entropic inflation and the possible violation of the Bekenstein bound for the entropy of the Universe are discussed and clarified.

  13. Lectures on Constrained Systems

    CERN Document Server

    Date, Ghanashyam

    2010-01-01

    These lecture notes were prepared as a basic introduction to the theory of constrained systems, which is how the fundamental forces of nature appear in their Hamiltonian formulation. Only a working knowledge of the Lagrangian and Hamiltonian formulation of mechanics is assumed. The notes are based on a set of eight lectures given at the Refresher Course for College Teachers held at IMSc during May-June 2005. They are submitted to the arXiv for easy access by a wider body of students.

  14. Symmetrically Constrained Compositions

    CERN Document Server

    Beck, Matthias; Lee, Sunyoung; Savage, Carla D

    2009-01-01

    Given integers $a_1, a_2, \dots, a_n$, with $a_1 + a_2 + \cdots + a_n \geq 1$, a symmetrically constrained composition $\lambda_1 + \lambda_2 + \cdots + \lambda_n = M$ of $M$ into $n$ nonnegative parts is one that satisfies each of the $n!$ constraints $\{\sum_{i=1}^n a_i \lambda_{\pi(i)} \geq 0 : \pi \in S_n\}$. We show how to compute the generating function of these compositions, combining methods from partition theory, permutation statistics, and lattice-point enumeration.

  15. Space Constrained Dynamic Covering

    CERN Document Server

    Antonellis, Ioannis; Dughmi, Shaddin

    2009-01-01

    In this paper, we identify a fundamental algorithmic problem that we term space-constrained dynamic covering (SCDC), arising in many modern-day web applications, including ad-serving and online recommendation systems in eBay and Netflix. Roughly speaking, SCDC applies two restrictions to the well-studied Max-Coverage problem: Given an integer k, X={1,2,...,n} and I={S_1, ..., S_m}, S_i a subset of X, find a subset J of I, such that |J| <= k and the union of S in J is as large as possible. The two restrictions applied by SCDC are: (1) Dynamic: At query-time, we are given a query Q, a subset of X, and our goal is to find J such that the intersection of Q with the union of S in J is as large as possible; (2) Space-constrained: We don't have enough space to store (and process) the entire input; specifically, we have o(mn), sometimes even as little as O((m+n)polylog(mn)) space. The goal of SCDC is to maintain a small data structure so as to answer most dynamic queries with high accuracy. We present algorithms a...

  16. Maximum Autocorrelation Factorial Kriging

    OpenAIRE

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete

    2000-01-01

    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...

  17. Density constrained TDHF

    CERN Document Server

    Oberacker, V E

    2015-01-01

    In this manuscript we provide an outline of the numerical methods used in implementing the density constrained time-dependent Hartree-Fock (DC-TDHF) method and provide a few examples of its application to nuclear fusion. In this approach, dynamic microscopic calculations are carried out on a three-dimensional lattice and there are no adjustable parameters, the only input is the Skyrme effective NN interaction. After a review of the DC-TDHF theory and the numerical methods, we present results for heavy-ion potentials $V(R)$, coordinate-dependent mass parameters $M(R)$, and precompound excitation energies $E^{*}(R)$ for a variety of heavy-ion reactions. Using fusion barrier penetrabilities, we calculate total fusion cross sections $\\sigma(E_\\mathrm{c.m.})$ for reactions between both stable and neutron-rich nuclei. We also determine capture cross sections for hot fusion reactions leading to the formation of superheavy elements.
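
    The final step mentioned, from barrier penetrabilities to cross sections, typically uses the standard partial-wave sum (a textbook formula, not specific to DC-TDHF):

      \sigma(E_{\mathrm{c.m.}}) = \frac{\pi \hbar^2}{2 \mu E_{\mathrm{c.m.}}} \sum_{l=0}^{\infty} (2l+1)\, T_l(E_{\mathrm{c.m.}}),

    where \mu is the reduced mass and T_l the penetrability of the heavy-ion potential V(R) for partial wave l, here evaluated with the coordinate-dependent mass M(R).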

  18. Constrained Sparse Galerkin Regression

    CERN Document Server

    Loiseau, Jean-Christophe

    2016-01-01

    In this work, we demonstrate the use of sparse regression techniques from machine learning to identify nonlinear low-order models of a fluid system purely from measurement data. In particular, we extend the sparse identification of nonlinear dynamics (SINDy) algorithm to enforce physical constraints in the regression, leading to energy conservation. The resulting models are closely related to Galerkin projection models, but the present method does not require the use of a full-order or high-fidelity Navier-Stokes solver to project onto basis modes. Instead, the most parsimonious nonlinear model is determined that is consistent with observed measurement data and satisfies necessary constraints. The constrained Galerkin regression algorithm is implemented on the fluid flow past a circular cylinder, demonstrating the ability to accurately construct models from data.
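
    Schematically (our notation), the constrained sparse regression described here solves

      \min_{\Xi}\ \lVert \dot{X} - \Theta(X)\,\Xi \rVert_2^2 + \lambda \lVert \Xi \rVert_0 \quad \text{s.t.} \quad C\,\mathrm{vec}(\Xi) = d,

    where \Theta(X) is a library of candidate nonlinear terms evaluated on the measurements X, \Xi holds the model coefficients, and the linear constraints C\,\mathrm{vec}(\Xi) = d encode energy conservation; the sparsity penalty keeps the identified model parsimonious.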

  19. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  20. Early Cosmology Constrained

    CERN Document Server

    Verde, Licia; Pigozzo, Cassio; Heavens, Alan F; Jimenez, Raul

    2016-01-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the $\\Lambda$CDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95\\% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter $\\Omega_{\\rm MR} < 0.006$ and extra radiation parameterised as extra effective neutrino species $2.3 < N_{\\rm eff} < 3.2$ when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond $\\Lambda$CDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way ...

  1. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  2. Maximum information photoelectron metrology

    CERN Document Server

    Hockett, P; Wollenhaupt, M; Baumert, T

    2015-01-01

    Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

  3. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  4. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
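
    The coding-entropy link that powers the Code Length Game is the classical identity (general information theory, not specific to this paper): for an idealized code with lengths \ell_q(i) = \log(1/q_i) used against a source distributed as p,

      \mathbb{E}_p[\ell_q] = \sum_i p_i \log \frac{1}{q_i} = H(p) + D(p \| q) \ \ge\ H(p),

    with equality exactly when q = p; this is why optimal strategies in the Code Length Game correspond to maximum entropy distributions.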

  5. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds...
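
    For context, the capacity referred to here is the standard one for two-dimensional constrained arrays (a textbook definition, not specific to this paper):

      C = \lim_{m,n \to \infty} \frac{\log_2 N(m,n)}{mn},

    where N(m,n) counts the admissible m x n arrays; for weak (probabilistic) constraints, the maximum entropy of the constrained field plays the corresponding role.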

  6. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
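
    Schematically (our notation, a sketch rather than the paper's exact objective), the regularized MCC learning problem has the form

      \max_{w} \ \sum_{i=1}^{n} \exp\!\left( -\frac{(y_i - f(x_i; w))^2}{2\sigma^2} \right) - \lambda \lVert w \rVert^2,

    where the Gaussian kernel of width \sigma saturates for large residuals, so outlying or mislabeled samples contribute little to the fit, and \lambda controls the complexity of the predictor.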

  7. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector is named equalized near maximum likelihood detector which combines nonlinear equalizer and near maximum likelihood detector. Simulation results show that the performance of equalized near maximum likelihood detector is better than the performance of nonlinear equalizer but worse than near maximum likelihood detector.

  8. Generalized Maximum Entropy

    Science.gov (United States)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g. a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].

  9. Constraining the Anisotropic Expansion of Universe

    CERN Document Server

    Cai, Rong-Gen; Tang, Bo; Tuo, Zhong-Liang

    2013-01-01

    We study the possible anisotropy in the accelerating expansion of the Universe with the Union2 Type Ia supernovae data and gamma-ray burst data. We construct a direction-dependent dark energy model and constrain the anisotropy direction and the strength of the modulation. We find that the maximum anisotropic deviation direction is $(l,\,b)=(126^{\circ},\,13^{\circ})$ (or equivalently $(l,\,b)=(306^{\circ},\,-13^{\circ})$), and the anisotropy level is $g_0=0.030^{+0.010}_{-0.030}$ (obtained using Union2 data, at the $1\sigma$ confidence level). Our results do not show strong evidence for the anisotropic dark energy model. We also discuss potential methods that may distinguish the peculiar velocity field from the anisotropic dark energy model.

  10. Constrained Optimization of Discontinuous Systems

    OpenAIRE

    Y.M. Ermoliev; V.I. Norkin

    1996-01-01

    In this paper we extend the results of Ermoliev, Norkin and Wets [8] and Ermoliev and Norkin [7] to the case of constrained discontinuous optimization problems. In contrast to [7], attention is concentrated on the proof of general optimality conditions for problems with nonconvex feasible sets. An easily implementable random search technique is proposed.

  11. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...

  12. Maximum Entropy Production vs. Kolmogorov-Sinai Entropy in a Constrained ASEP Model

    Directory of Open Access Journals (Sweden)

    Martin Mihelich

    2014-02-01

    The asymmetric simple exclusion process (ASEP) has become a paradigmatic toy model of a non-equilibrium system, and much effort has been made in the past decades to compute its statistics exactly for given dynamical rules. Here, a different approach is developed; analogously to the equilibrium situation, we consider that the dynamical rules are not exactly known. Allowing the transition rate to vary, we show that the dynamical rules that maximize the entropy production and those that maximize the rate of variation of the dynamical entropy, known as the Kolmogorov-Sinai entropy, coincide with good accuracy. We study the dependence of this agreement on the size of the system and the couplings with the reservoirs, for the original ASEP and for a variant with Langmuir kinetics.

  13. Enablers and constrainers to participation

    DEFF Research Database (Denmark)

    Desjardins, Richard; Milana, Marcella

    2007-01-01

    This paper briefly reviews some of the evidence on participation patterns in Nordic countries and some of the defining parameters that may explain the observations. This is done in a comparative perspective by contrasting results from the 2003 Eurobarometer data between Nordic countries and a handful...... of non-Nordic countries. An emphasis is placed on the constraining and enabling elements to participation and how these may explain why certain groups participate more or less than others. A central question of interest to this paper is to what extent government intervention does (or can) interact...... with constraining and enabling elements so as to raise participation among otherwise disadvantaged groups. To begin addressing this question, consideration is given to different types of constraints and different types of policies. These are brought together within a broad demand and supply framework, so...

  14. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  15. Constrained Multiobjective Biogeography Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hongwei Mo

    2014-01-01

    Full Text Available Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA.

  16. Constrained multiobjective biogeography optimization algorithm.

    Science.gov (United States)

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA.

  17. Impulsive differential inclusions with constraints

    Directory of Open Access Journals (Sweden)

    Tzanko Donchev

    2006-05-01

    Full Text Available In this paper, we study weak invariance of differential inclusions with non-fixed time impulses under compactness type assumptions. When the right-hand side is one-sided Lipschitz, an extension of the well known relaxation theorem is proved. In this case, necessary and sufficient conditions for strong invariance of upper semicontinuous systems are also obtained. Some properties of the solution set of the impulsive system (without constraints) in an appropriate topology are investigated.

  18. Constrained traffic regulation in variable-length packet networks

    Science.gov (United States)

    Karumanchi, Ashok; Varadarajan, Sridhar; Rao, Kalyan; Talabattula, Srinivas

    2004-02-01

    The availability of high bandwidth in optical networks, coupled with the evolution of applications such as video on demand and telemedicine, creates a clear need for providing quality-of-service (QoS) guarantees in optical networks. Proliferation of the IP-over-WDM model in these networks requires the network to provide QoS guarantees for variable-length packets. In this context, we address the problem of constrained traffic regulation--traffic regulation with buffer and delay constraints--in variable-length packet networks. We use filtering theory under the max-plus (max, +) algebra to address this problem. For a constrained traffic-regulation problem with maximum tolerable delay and maximum buffer size, the traffic regulator that generates g-regular output traffic while minimizing the number of discarded packets is a concatenation of the f clipper and the minimal g regulator, where f is a function of g, the maximum delay, and the maximum buffer size. The f clipper is a bufferless device, which drops packets as necessary so that its output is f regular. The minimal g regulator is a buffered device that delays packets as necessary so that its output is g regular. The g regulator is a linear shift-invariant filter with impulse response g under the (max, +) algebra.
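
    A concrete special case of the clipper/regulator concatenation, assuming a (sigma, rho) token-bucket curve for g and FIFO order; the paper's construction is the general (max, +) filter, of which this is only the simplest instance.

```python
# Clipper + minimal-regulator sketch for variable-length packets, using a
# (sigma, rho) token-bucket arrival curve as g. All numbers are hypothetical.
def regulate(packets, sigma, rho, d_max):
    """packets: list of (arrival_time, length). Returns (released, dropped).

    The minimal regulator releases each packet at the earliest time its full
    length fits under the curve sigma + rho * t; the clipper drops packets
    whose resulting delay would exceed d_max (a bufferless drop decision).
    """
    released, dropped = [], []
    sent = 0.0            # bytes released so far
    last = 0.0            # time of the previous release (FIFO order)
    for t, length in packets:
        # Earliest release time keeping the cumulative output g-regular.
        ready = max(t, last, (sent + length - sigma) / rho)
        if ready - t > d_max:
            dropped.append((t, length))     # clipper: drop, don't queue
        else:
            released.append((ready, length))
            sent += length
            last = ready
    return released, dropped

rel, drp = regulate([(0, 1500), (0, 1500), (0.001, 9000)],
                    sigma=3000, rho=1e6, d_max=0.004)
print(rel, drp)
```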

  19. On Constrained Facility Location Problems

    Institute of Scientific and Technical Information of China (English)

    Wei-Lin Li; Peng Zhang; Da-Ming Zhu

    2008-01-01

    Given m facilities each with an opening cost, n demands, and the distance between every demand and facility, the Facility Location problem finds a solution which opens some facilities and connects every demand to an opened facility such that the total cost of the solution is minimized. The k-Facility Location problem further requires that the number of opened facilities is at most k, where k is a parameter given in the instance of the problem. We consider Facility Location problems satisfying the property that, for every demand, the ratio of the longest distance to facilities and the shortest distance to facilities is at most ω, where ω is a predefined constant. Using the local search approach with a scaling technique and an error control technique, for any arbitrarily small constant ε > 0, we give a polynomial-time approximation algorithm for the ω-constrained Facility Location problem with approximation ratio 1 + √(ω + 1) + ε, which significantly improves the previous best known ratio (ω + 1)/α for some 1 ≤ α ≤ 2, and a polynomial-time approximation algorithm for the ω-constrained k-Facility Location problem with approximation ratio ω + 1 + ε. On the side of approximation hardness, we prove that unless NP ⊆ DTIME(n^O(log log n)), the ω-constrained Facility Location problem cannot be approximated within 1 + √(ω − 1), which slightly improves the previous best known hardness result of 1.243 + 0.316 ln(ω − 1). Experimental results on the standard test instances of the Facility Location problem show that our algorithm also performs well in practice.
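
    The core of the local search approach is an open/close move set evaluated against total cost; the sketch below shows that skeleton on random data, without the paper's scaling and error control refinements.

```python
# Minimal local-search sketch for (uncapacitated) Facility Location: open or
# close single facilities while the total cost improves. Data is hypothetical.
import numpy as np

def total_cost(open_set, opening, dist):
    idx = sorted(open_set)
    return sum(opening[i] for i in idx) + dist[idx].min(axis=0).sum()

def local_search(opening, dist):
    m = len(opening)
    current = {int(np.argmin(opening))}           # start from one cheap facility
    cost = total_cost(current, opening, dist)
    improved = True
    while improved:
        improved = False
        for i in range(m):                        # try opening/closing facility i
            cand = current ^ {i}
            if cand:
                c = total_cost(cand, opening, dist)
                if c < cost - 1e-12:
                    current, cost, improved = cand, c, True
    return current, cost

rng = np.random.default_rng(2)
opening = rng.uniform(1, 3, size=5)               # opening costs, 5 facilities
dist = rng.uniform(0, 4, size=(5, 12))            # distances to 12 demands
print(local_search(opening, dist))
```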

  20. Constrained and regularized system identification

    Directory of Open Access Journals (Sweden)

    Tor A. Johansen

    1998-04-01

    Full Text Available Prior knowledge can be introduced into system identification problems in terms of constraints on the parameter space, or regularizing penalty functions in a prediction error criterion. The contribution of this work is mainly an extension of the well known FPE (Final Prediction Error) statistic to the case when the system identification problem is constrained and contains a regularization penalty. The FPECR statistic (Final Prediction Error with Constraints and Regularization) is of potential interest as a criterion for selection of both regularization parameters and structural parameters such as order.
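
    For orientation, the unconstrained, unregularized special case: Akaike's FPE = V(1 + d/N)/(1 - d/N) applied to least-squares AR model order selection (the FPECR statistic generalizes this to constrained and regularized problems).

```python
# Order selection with the classical FPE on a least-squares AR model.
import numpy as np

def fit_ar_fpe(y, max_order):
    N = len(y)
    results = []
    for d in range(1, max_order + 1):
        # Regression matrix of lagged outputs: column k holds y[t-k-1].
        X = np.column_stack([y[d - k - 1:N - k - 1] for k in range(d)])
        theta, *_ = np.linalg.lstsq(X, y[d:], rcond=None)
        V = np.mean((y[d:] - X @ theta) ** 2)        # prediction-error loss
        fpe = V * (1 + d / N) / (1 - d / N)
        results.append((fpe, d, theta))
    return min(results)                               # smallest FPE wins

rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(2, 500):                               # true system is AR(2)
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + rng.normal(0, 0.5)
fpe, order, _ = fit_ar_fpe(y, 6)
print("selected order:", order, "FPE:", round(fpe, 4))
```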

  1. Method of constrained global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Altschuler, E.L.; Williams, T.J.; Ratner, E.R.; Dowla, F.; Wooten, F. (Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551 (United States) Department of Applied Physics, Stanford University, Stanford, California 94305 (United States) Department of Applied Science, University of California, Davis/Livermore, P.O. Box 808, Livermore, California 94551 (United States))

    1994-04-25

    We present a new method for optimization: constrained global optimization (CGO). CGO iteratively uses a Glauber spin flip probability and the Metropolis algorithm. The spin flip probability allows changing only the values of variables contributing excessively to the function to be minimized. We illustrate CGO with two problems---Thomson's problem of finding the minimum-energy configuration of unit charges on a spherical surface, and a problem of assigning offices---for which CGO finds better minima than other methods. We think CGO will apply to a wide class of optimization problems.
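
    A heuristic sketch of the CGO recipe on a toy Ising-type energy (not the Thomson or office-assignment problems from the paper): flip attempts are biased by a Glauber-style probability toward variables contributing excessively to the objective, and flips are accepted with the Metropolis rule.

```python
# CGO-flavoured sketch: Glauber-weighted variable selection + Metropolis
# acceptance, on a random symmetric Ising-type energy (hypothetical instance).
import numpy as np

rng = np.random.default_rng(4)
n = 30
J = rng.normal(0, 1, (n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)

def local_energy(s):
    return s * (J @ s)          # each spin's contribution to s.J.s (twice over)

def cgo(T=1.0, iters=20000):
    s = rng.choice([-1, 1], n)
    E = s @ J @ s
    for _ in range(iters):
        w = np.exp(local_energy(s) / T)     # Glauber-style selection weights:
        i = rng.choice(n, p=w / w.sum())    # high-contribution spins flip first
        dE = -4 * s[i] * (J[i] @ s)         # energy change from flipping spin i
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
            E += dE
    return E, s

print("final energy:", cgo()[0])
```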

  2. Constraining continuous rainfall simulations for derived design flood estimation

    Science.gov (United States)

    Woldemeskel, F. M.; Sharma, A.; Mehrotra, R.; Westra, S.

    2016-11-01

    Stochastic rainfall generation is important for a range of hydrologic and water resources applications. Stochastic rainfall can be generated using a number of models; however, preserving relevant attributes of the observed rainfall--including rainfall occurrence, variability and the magnitude of extremes--continues to be difficult. This paper develops an approach to constrain stochastically generated rainfall with the aim of preserving the intensity-duration-frequency (IFD) relationships of the observed data. Two main steps are involved. First, the generated annual maximum rainfall is corrected recursively by matching the generated intensity-frequency relationships to the target (observed) relationships. Second, the remaining (non-annual maximum) rainfall is rescaled such that the mass balance of the generated rain before and after scaling is maintained. The recursive correction is performed at selected storm durations to minimise the dependence between annual maximum values of higher and lower durations for the same year. This ensures that the resulting sequences remain true to the observed rainfall as well as represent the design extremes that may have been developed separately and are needed for compliance reasons. The method is tested on simulated 6 min rainfall series across five Australian stations with different climatic characteristics. The results suggest that the annual maximum and the IFD relationships are well reproduced after constraining the simulated rainfall. While our presentation focusses on the representation of design rainfall attributes (IFDs), the proposed approach can also be easily extended to constrain other attributes of the generated rainfall, providing an effective platform for post-processing of stochastic rainfall generators.
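
    A much-simplified sketch of the two steps, assuming one storm duration and quantile matching of annual maxima (the paper corrects recursively across several durations): correct each year's maximum toward the target intensity-frequency relationship, then rescale the remaining values to preserve the annual mass balance.

```python
# Simplified constraining sketch: (1) quantile-map simulated annual maxima
# onto observed maxima, (2) rescale the rest of each year to keep the annual
# total unchanged. All data below is synthetic and hypothetical.
import numpy as np

def constrain(sim, obs_annual_max):
    """sim: (years, steps) simulated rainfall; obs_annual_max: target maxima."""
    out = sim.copy()
    years, _ = sim.shape
    sim_max = sim.max(axis=1)
    ranks = sim_max.argsort().argsort()              # rank of each year's max
    target = np.sort(obs_annual_max)[
        np.round(ranks / (years - 1) * (len(obs_annual_max) - 1)).astype(int)]
    for y in range(years):
        imax = sim[y].argmax()
        rest = sim[y].sum() - sim[y, imax]
        out[y, imax] = target[y]
        if rest > 0:
            # Rescale non-maximum values so the annual total is preserved
            # (clamped at zero; a crude stand-in for the paper's scheme).
            scale = (sim[y].sum() - target[y]) / rest
            mask = np.ones(sim.shape[1], bool); mask[imax] = False
            out[y, mask] = sim[y, mask] * max(scale, 0.0)
    return out

rng = np.random.default_rng(5)
sim = rng.gamma(0.3, 2.0, size=(30, 365))
obs = rng.gamma(2.0, 4.0, size=50)
con = constrain(sim, obs)
print(con.max(axis=1)[:5].round(2), con.sum() / sim.sum())
```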

  3. A constrained model for MSMA

    Energy Technology Data Exchange (ETDEWEB)

    Capella, Antonio [Instituto de Matematicas, Universidad Nacional Autonoma de Mexico (Mexico); Mueller, Stefan [Hausdorff Center for Mathematics and Institute for Applied Mathematics, Universitaet Bonn (Germany); Otto, Felix [Max Planck Institute for Mathematics in the Sciences, Leipzig (Germany)

    2012-08-15

    A mathematical description of transformation processes in magnetic shape memory alloys (MSMA) under applied stresses and external magnetic fields needs a combination of micromagnetics and continuum elasticity theory. In this note, we discuss the so-called constrained theories, i.e., models where the state described by the pair (linear strain, magnetization) is at every point of the sample constrained to assume one of only finitely many values (that reflect the material symmetries). Furthermore, we focus on large body limits, i.e., models that are formulated in terms of (local) averages of a microstructured state, such as the one proposed by DeSimone and James. We argue that the effect of an interfacial energy associated with the twin boundaries survives on the level of the large body limit in the form of a (local) rigidity of twins. This leads to an alternative (i.e., with respect to reference 1) large body limit. The new model has the advantage of qualitatively explaining the occurrence of a microstructure with charged magnetic walls, as observed in SPP experiments in reference 2. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  4. Constraining Cosmic Evolution of Type Ia Supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ~0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ~3% in the optical, growing toward the ultraviolet. The difference between the maximum-light low- and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  5. Constraining CO emission estimates using atmospheric observations

    Science.gov (United States)

    Hooghiemstra, P. B.

    2012-06-01

    (mainly CO from oxidation of NMVOCs) that are 185 Tg CO/yr higher compared to the stations-only inversion. Second, MOPITT-only derived biomass burning emissions are reduced with respect to the prior which is in contrast to previous (inverse) modeling studies. Finally, MOPITT derived total emissions are significantly higher for South America and Africa compared to the stations-only inversion. This is likely due to a positive bias in the MOPITT V4 product. This bias is also apparent from validation with surface stations and ground-truth FTIR columns. In the final study we present the first inverse modeling study to estimate CO emissions constrained by both surface (NOAA) and satellite (MOPITT) observations using a bias correction scheme. This approach leads to the identification of a positive bias of maximum 5 ppb in MOPITT column-averaged CO mixing ratios in the remote Southern Hemisphere (SH). The 4D-Var system is used to estimate CO emissions over South America in the period 2006-2010 and to analyze the interannual variability (IAV) of these emissions. We infer robust, high spatial resolution CO emission estimates that show slightly smaller IAV due to fires compared to the Global Fire Emissions Database (GFED3) prior emissions. Moreover, CO emissions probably associated with pre-harvest burning of sugar cane plantations are underestimated in current inventories by 50-100%.

  6. Maximum-entropy probability distributions under Lp-norm constraints

    Science.gov (United States)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
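
    The unconstrained continuous case has the closed form alluded to above: the maximizing density is the generalized Gaussian f(x) = exp(-|x/a|^p) / (2aΓ(1 + 1/p)), whose entropy is log(2aΓ(1 + 1/p)) + 1/p, linear in the logarithm of the L_p norm. A quick numerical check:

```python
# Verify the generalized-Gaussian MaxEnt solution under an L_p constraint:
# E|X|^p = a^p / p and h = log(2 a Gamma(1+1/p)) + 1/p. Values are arbitrary.
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

p, a = 4.0, 1.3
Z = 2 * a * gamma(1 + 1 / p)
f = lambda x: np.exp(-abs(x / a) ** p) / Z

moment, _ = quad(lambda x: abs(x) ** p * f(x), -np.inf, np.inf)
# Entropy written without log(f) to avoid log(0) at the tails:
# -f log f = f * (|x/a|^p + log Z).
entropy, _ = quad(lambda x: f(x) * (abs(x / a) ** p + np.log(Z)),
                  -np.inf, np.inf)

print("E|X|^p:", moment, "vs a^p/p:", a ** p / p)
print("h:", entropy, "vs closed form:", np.log(Z) + 1 / p)
```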

  7. iBGP and Constrained Connectivity

    CERN Document Server

    Dinitz, Michael

    2011-01-01

    We initiate the theoretical study of the problem of minimizing the size of an iBGP overlay in an Autonomous System (AS) in the Internet subject to a natural notion of correctness derived from the standard "hot-potato" routing rules. For both natural versions of the problem (where we measure the size of an overlay by either the number of edges or the maximum degree) we prove that it is NP-hard to approximate to a factor better than $\\Omega(\\log n)$ and provide approximation algorithms with ratio $\\tilde{O}(\\sqrt{n})$. In addition, we give a slightly worse $\\tilde{O}(n^{2/3})$-approximation based on primal-dual techniques that has the virtue of being both fast and good in practice, which we show via simulations on the actual topologies of five large Autonomous Systems. The main technique we use is a reduction to a new connectivity-based network design problem that we call Constrained Connectivity. In this problem we are given a graph $G=(V,E)$, and for every pair of vertices $u,v \\in V$ we are given a set $S(u,...

  8. Constrained Allocation Flux Balance Analysis

    CERN Document Server

    Mori, Matteo; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferr...

  9. Exploring constrained quantum control landscapes

    Science.gov (United States)

    Moore, Katharine W.; Rabitz, Herschel

    2012-10-01

    The broad success of optimally controlling quantum systems with external fields has been attributed to the favorable topology of the underlying control landscape, where the landscape is the physical observable as a function of the controls. The control landscape can be shown to contain no suboptimal trapping extrema upon satisfaction of reasonable physical assumptions, but this topological analysis does not hold when significant constraints are placed on the control resources. This work employs simulations to explore the topology and features of the control landscape for pure-state population transfer with a constrained class of control fields. The fields are parameterized in terms of a set of uniformly spaced spectral frequencies, with the associated phases acting as the controls. This restricted family of fields provides a simple illustration for assessing the impact of constraints upon seeking optimal control. Optimization results reveal that the minimum number of phase controls necessary to assure a high yield in the target state has a special dependence on the number of accessible energy levels in the quantum system, revealed from an analysis of the first- and second-order variation of the yield with respect to the controls. When an insufficient number of controls and/or a weak control fluence are employed, trapping extrema and saddle points are observed on the landscape. When the control resources are sufficiently flexible, solutions producing the globally maximal yield are found to form connected "level sets" of continuously variable control fields that preserve the yield. These optimal yield level sets are found to shrink to isolated points on the top of the landscape as the control field fluence is decreased, and further reduction of the fluence turns these points into suboptimal trapping extrema on the landscape. Although constrained control fields can come in many forms beyond the cases explored here, the behavior found in this paper is illustrative of

  10. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
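
    The polynomial-time regular-language case in result (1) amounts to running Dijkstra on the product of the labeled graph with a DFA for the language; a sketch with a hypothetical two-mode (road/rail) example:

```python
# Regular-language-constrained shortest path via Dijkstra on (graph x DFA).
import heapq

def constrained_shortest_path(edges, dfa, start, goal, q0, accepting):
    """edges: {u: [(v, label, weight)]}; dfa: {(state, label): state}."""
    dist = {(start, q0): 0.0}
    pq = [(0.0, start, q0)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if d > dist.get((u, q), float("inf")):
            continue                          # stale heap entry
        if u == goal and q in accepting:
            return d                          # first accepting pop is optimal
        for v, lab, w in edges.get(u, []):
            q2 = dfa.get((q, lab))
            if q2 is not None and d + w < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = d + w
                heapq.heappush(pq, (d + w, v, q2))
    return None

# Mode-choice language: road* rail road*  (at most one rail leg).
dfa = {(0, "road"): 0, (0, "rail"): 1, (1, "road"): 1}
edges = {"a": [("b", "road", 1), ("c", "rail", 1)],
         "b": [("d", "rail", 1)],
         "c": [("d", "rail", 5), ("d", "road", 2)]}
print(constrained_shortest_path(edges, dfa, "a", "d", 0, {0, 1}))  # -> 2.0
```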

  11. The evolution of maximum body size of terrestrial mammals.

    Science.gov (United States)

    Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D

    2010-11-26

    The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.

  12. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  13. Maximum margin Bayesian network classifiers.

    Science.gov (United States)

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian

    2012-03-01

    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  14. Stress-constrained topology optimization for compliant mechanism design

    DEFF Research Database (Denmark)

    de Leon, Daniel M.; Alexandersen, Joe; Jun, Jun S.;

    2015-01-01

    This article presents an application of stress-constrained topology optimization to compliant mechanism design. An output displacement maximization formulation is used, together with the SIMP approach and a projection method to ensure convergence to nearly discrete designs. The maximum stress...... is approximated using a normalized version of the commonly-used p-norm of the effective von Mises stresses. The usual problems associated with topology optimization for compliant mechanism design: one-node and/or intermediate density hinges are alleviated by the stress constraint. However, it is also shown...
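
    The stress aggregate referred to above can be sketched as follows, assuming uniform element weighting (the article's exact normalization scheme may differ): the plain p-norm overestimates the maximum von Mises stress, while the normalized variant underestimates it, and both tighten as p grows.

```python
# p-norm aggregation of element von Mises stresses, a smooth stand-in for the
# non-differentiable max. Stress values below are hypothetical.
import numpy as np

def p_norm_stress(sigma_vm, p=8):
    """Normalized p-norm: (mean(sigma^p))^(1/p), a smooth lower proxy for max."""
    s = np.asarray(sigma_vm, float)
    return np.mean(s ** p) ** (1.0 / p)

stresses = np.array([120.0, 180.0, 240.0, 150.0])
for p in (4, 8, 16, 64):
    plain = np.sum(stresses ** p) ** (1.0 / p)        # >= max(sigma)
    print(f"p={p:>2}: plain={plain:7.1f}  normalized={p_norm_stress(stresses, p):7.1f}"
          f"  max={stresses.max():.1f}")
```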

  15. Positive Scattering Cross Sections using Constrained Least Squares

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-09-27

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme is presented.
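
    A small sketch of the constrained least-squares repair, with hypothetical moment values: moments of order two and greater are nudged minimally so the discrete-ordinates scattering kernel Σ_l (2l+1)/2 m_l P_l(μ) is nonnegative at the quadrature angles, while m_0 and m_1 stay fixed.

```python
# Constrained least-squares positivity repair of truncated Legendre moments.
import numpy as np
from numpy.polynomial.legendre import legval, leggauss
from scipy.optimize import minimize

f = 0.9 ** np.arange(5)                     # hypothetical truncated moments f_l
mu, _ = leggauss(8)                         # discrete-ordinate angles

def kernel(m):
    coef = (2 * np.arange(len(m)) + 1) / 2 * m
    return legval(mu, coef)                 # sum_l (2l+1)/2 m_l P_l(mu)

free = slice(2, len(f))                     # only moments l >= 2 may move

res = minimize(lambda x: np.sum((x - f[free]) ** 2),   # minimal modification
               f[free], method="SLSQP",
               constraints=[{"type": "ineq",           # kernel >= 0 pointwise
                             "fun": lambda x: kernel(np.r_[f[:2], x])}])
m = np.r_[f[:2], res.x]                     # zeroth and first moments unchanged
print("kernel min before:", kernel(f).min().round(4),
      "after:", kernel(m).min().round(4))
```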

  16. Abolishing the maximum tension principle

    Directory of Open Access Journals (Sweden)

    Mariusz P. Da̧browski

    2015-09-01

    Full Text Available We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/(4G), represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.

  17. Constrained Allocation Flux Balance Analysis

    Science.gov (United States)

    Mori, Matteo; Hwa, Terence; Martin, Olivier C.

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325
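
    A toy, three-flux caricature of the CAFBA setup (nothing here is taken from the paper's genome-scale model; all numbers are hypothetical): growth is maximized subject to carbon mass balance, a transport limit, and a single proteome-allocation constraint. Scanning the carbon cap reproduces the qualitative respiration-to-overflow crossover.

```python
# Toy CAFBA-flavoured linear program with one proteome-allocation constraint.
from scipy.optimize import linprog

def solve(uptake_cap, phi=1.0):
    # variables x = [uptake, v_resp, v_ferm], all >= 0 (linprog default bounds)
    c = [0.0, -1.0, -0.4]                         # maximize growth (yields 1, 0.4)
    A_eq = [[1.0, -1.0, -1.0]]; b_eq = [0.0]      # carbon mass balance
    A_ub = [[1.0, 0.0, 0.0],                      # transport limit on uptake
            [0.05, 0.30, 0.08]]                   # proteome costs <= phi
    b_ub = [uptake_cap, phi]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).x

for cap in (2, 4, 6, 10):
    u, vr, vf = solve(cap)
    print(f"cap={cap:>2}  respiration={vr:.2f}  fermentation={vf:.2f}")
# As carbon availability grows, the optimum moves from purely respiratory to
# overflow-like fermentative states -- the qualitative CAFBA picture.
```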

  18. Constraining QGP properties with CHIMERA

    Science.gov (United States)

    Garishvili, Irakli; Abelev, Betty; Cheng, Michael; Glenn, Andrew; Soltz, Ron

    2011-10-01

    Understanding essential properties of strongly interacting matter is arguably the most important goal of the relativistic heavy-ion programs both at RHIC and the LHC. In particular, constraining observables such as the ratio of shear viscosity to entropy density, η/s, the initial temperature, T_init, and the energy density is of critical importance. For this purpose we have developed CHIMERA, the Comprehensive Heavy Ion Model Reporting and Evaluation Algorithm. CHIMERA is designed to facilitate global statistical comparison of results from our multi-stage hydrodynamics/hadron cascade model of heavy ion collisions to the key soft observables (HBT, elliptic flow, spectra) measured at RHIC and the LHC. Within this framework, data representing multiple different measurements from different experiments are compiled into a single format. One of the unique features of CHIMERA is that, in addition to taking into account statistical errors, it also treats different types of systematic uncertainties. The hydrodynamics/hadron cascade model used in the framework incorporates different initial state conditions, pre-equilibrium flow, the UVH2+1 viscous hydro model, Cooper-Frye freezeout, and the UrQMD hadronic cascade model. The sensitivity of the observables to the equation of state (EoS) is explored using several EoS's in the hydrodynamic evolution. The latest results from CHIMERA, including data from the LHC, will be presented.

  19. Gyrification from constrained cortical expansion

    CERN Document Server

    Tallinen, Tuomas; Biggins, John S; Mahadevan, L

    2015-01-01

    The exterior of the mammalian brain - the cerebral cortex - has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highl...

  20. Constraining the Europa Neutral Torus

    Science.gov (United States)

    Smith, Howard T.; Mitchell, Donald; mauk, Barry; Johnson, Robert E.; clark, george

    2016-10-01

    "Neutral tori" consist of neutral particles that usually co-orbit along with their source forming a toroidal (or partial toroidal) feature around the planet. The distribution and composition of these features can often provide important, if not unique, insight into magnetospheric particles sources, mechanisms and dynamics. However, these features can often be difficult to directly detect. One innovative method for detecting neutral tori is by observing Energetic Neutral Atoms (ENAs) that are generally considered produced as a result of charge exchange interactions between charged and neutral particles.Mauk et al. (2003) reported the detection of a Europa neutral particle torus using ENA observations. The presence of a Europa torus has extremely large implications for upcoming missions to Jupiter as well as understanding possible activity at this moon and providing critical insight into what lies beneath the surface of this icy ocean world. However, ENAs can also be produced as a result of charge exchange interactions between two ionized particles and in that case cannot be used to infer the presence of neutral particle population. Thus, a detailed examination of all possible source interactions must be considered before one can confirm that likely original source population of these ENA images is actually a Europa neutral particle torus. For this talk, we examine the viability that the Mauk et al. (2003) observations were actually generated from a neutral torus emanating from Europa as opposed to charge particle interactions with plasma originating from Io. These results help constrain such a torus as well as Europa source processes.

  1. The cost-constrained traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
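
    For illustration only, a value-density insertion heuristic for the CCTSP (the thesis itself develops knapsack-based bounds, branch-and-bound, and neighborhood-aware repetitions): nodes are inserted at their cheapest position while the cost-constraint allows.

```python
# Greedy value-per-insertion-cost heuristic for the CCTSP (hypothetical data).
import numpy as np

def cctsp_greedy(value, dist, budget, depot=0):
    n = len(value)
    tour = [depot, depot]                        # start and end at the depot
    cost, in_tour = 0.0, {depot}
    while True:
        best = None
        for v in range(n):
            if v in in_tour:
                continue
            # Cheapest insertion position for node v.
            inc, pos = min(
                (dist[tour[i], v] + dist[v, tour[i + 1]]
                 - dist[tour[i], tour[i + 1]], i)
                for i in range(len(tour) - 1))
            if cost + inc <= budget:
                score = value[v] / (inc + 1e-9)  # value per unit added cost
                if best is None or score > best[0]:
                    best = (score, v, pos, inc)
        if best is None:
            break                                # nothing fits the budget
        _, v, pos, inc = best
        tour.insert(pos + 1, v); in_tour.add(v); cost += inc
    return tour, cost, sum(value[v] for v in in_tour)

rng = np.random.default_rng(7)
pts = rng.uniform(0, 10, (8, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
value = rng.integers(1, 10, 8); value[0] = 0     # depot carries no value
print(cctsp_greedy(value, dist, budget=25.0))
```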

  2. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini

    A mesoscale numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element method for calculating stresses. The sintering behavior of a sample constrained by a rigid substrate...

  3. Determination of optimal gains for constrained controllers

    Energy Technology Data Exchange (ETDEWEB)

    Kwan, C.M.; Mestha, L.K.

    1993-08-01

    In this report, we consider the determination of optimal gains, with respect to a certain performance index, for state feedback controllers where some elements in the gain matrix are constrained to be zero. Two iterative schemes for systematically finding the constrained gain matrix are presented. An example is included to demonstrate the procedures.

  4. Efficient caching for constrained skyline queries

    DEFF Research Database (Denmark)

    Mortensen, Michael Lind; Chester, Sean; Assent, Ira;

    2015-01-01

    Constrained skyline queries retrieve all points that optimize some user’s preferences subject to orthogonal range constraints, but at significant computational cost. This paper is the first to propose caching to improve constrained skyline query response time. Because arbitrary range constraints ...

  5. Maximum Genus of Strong Embeddings

    Institute of Scientific and Technical Information of China (English)

    Er-ling Wei; Yan-pei Liu; Han Ren

    2003-01-01

    The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.

  6. D(Maximum)=P(Argmaximum)

    CERN Document Server

    Remizov, Ivan D

    2009-01-01

    In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
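
    In symbols, writing N(f) = max_{x∈K} f(x) for the maximum functional and P(K) for the probability measures on the compact set K, the characterization reads:

```latex
\[
  \partial N(f) \;=\; \bigl\{\, \mu \in \mathcal{P}(K) \;:\;
      \mu\bigl(\arg\max\nolimits_{x \in K} f(x)\bigr) = 1 \,\bigr\},
  \qquad N(f) \;=\; \max_{x \in K} f(x).
\]
```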

  7. The Testability of Maximum Magnitude

    Science.gov (United States)

    Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.

    2012-12-01

    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
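
    The extreme-value setup can be made concrete with a toy calculation (all parameter values hypothetical), assuming a doubly truncated Gutenberg-Richter law: the probability that the largest of N future magnitudes stays below a threshold is F(m)^N, and it barely moves as the assumed maximum magnitude shifts, which is the crux of the testability problem.

```python
# Why M_max estimates are hard to test: the distribution of the largest of N
# future magnitudes changes little as M_max moves from 8.5 to 9.5.
import numpy as np

def cdf_truncated_gr(m, m0=5.0, mmax=9.0, b=1.0):
    """CDF of a doubly truncated Gutenberg-Richter law on [m0, mmax]."""
    beta = b * np.log(10)
    num = 1 - np.exp(-beta * (m - m0))
    den = 1 - np.exp(-beta * (mmax - m0))
    return np.clip(num / den, 0, 1)

N = 500                                   # events with M >= 5 in the test window
for mmax in (8.5, 9.0, 9.5):
    p = cdf_truncated_gr(8.0, mmax=mmax) ** N   # P(max magnitude <= 8.0)
    print(f"M_max={mmax}: P(observed max <= 8.0) = {p:.3f}")
```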

  8. Alternative Multiview Maximum Entropy Discrimination.

    Science.gov (United States)

    Chao, Guoqing; Sun, Shiliang

    2016-07-01

    Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of the AMVMED, and comparisons with MVMED are also reported.

  9. Constraining Cosmological Models with Different Observations

    Science.gov (United States)

    Wei, J. J.

    2016-07-01

    With the observations of Type Ia supernovae (SNe Ia), scientists discovered that the Universe is experiencing an accelerated expansion, which revealed the existence of dark energy in 1998. Since that discovery, cosmology has become a hot topic in physics research. Cosmology is a subject that strongly depends on astronomical observations. Therefore, constraining different cosmological models with all kinds of observations is one of the most important research tasks in modern cosmology. The goal of this thesis is to investigate cosmology using the latest observations. The observations include SNe Ia, Type Ic Super Luminous supernovae (SLSN Ic), Gamma-ray bursts (GRBs), angular diameter distances of galaxy clusters, strong gravitational lensing, and age measurements of old passive galaxies, etc. In Chapter 1, we briefly review the research background of cosmology, and introduce some cosmological models. Then we summarize the progress on cosmology from all kinds of observations in more detail. In Chapter 2, we present the results of our studies on supernova cosmology. The main difficulty with the use of SNe Ia as standard candles is that one must optimize three or four nuisance parameters characterizing SN luminosities simultaneously with the parameters of an expansion model of the Universe. We have confirmed that one should optimize all of the parameters by carrying out the method of maximum likelihood estimation in any situation where the parameters include an unknown intrinsic dispersion. The commonly used method, which estimates the dispersion by requiring the reduced χ^{2} to equal unity, does not take into account all possible variances among the parameters. We carry out such a comparison of the standard ΛCDM cosmology and the R_{h}=ct Universe using the SN Legacy Survey sample of 252 SN events, and show that each model fits its individually reduced data very well. Moreover, it is quite evident that SLSNe Ic may be useful

  10. Cacti with maximum Kirchhoff index

    OpenAIRE

    Wang, Wen-Rui; Pan, Xiang-Feng

    2015-01-01

    The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\\leq t \\leq \\lfloor\\frac{n-1}{2}\\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...

  11. Diabatic constrained relativistic mean field approach

    CERN Document Server

    L"u, H F; Meng, J

    2005-01-01

    A diabatic (configuration-fixed) constrained approach to calculating the potential energy surface (PES) of a nucleus is developed in the relativistic mean field model. The potential energy surfaces of $^{208}$Pb obtained from both adiabatic and diabatic constrained approaches are investigated and compared. The diabatic constrained approach enables one to decompose the segmented PES obtained in usual adiabatic approaches into separate parts uniquely characterized by different configurations, to define the single particle orbits in the very deformed region by their quantum numbers, and to obtain several well defined deformed excited states which can hardly be expected from the adiabatic PESs.

  12. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    The important concept of entropy is introduced. In general, the entropy of a constrained field is not readily computable, but we give a series of upper and lower bounds based on one dimensional techniques. We discuss the use of a Pickard probability model for constrained fields. The novelty lies in using... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme thus obtaining lower bounds on the entropy of the fields considered. These lower bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  13. Constrained crosstalk resistant adaptive noise canceller

    Science.gov (United States)

    Parsa, V.; Parker, P.

    1994-08-01

    The performance of an adaptive noise canceller (ANC) is sensitive to the presence of signal 'crosstalk' in the reference channel. The authors propose a novel approach to crosstalk-resistant adaptive noise cancellation, namely the constrained crosstalk-resistant adaptive noise canceller (CCRANC). The theoretical analysis of the CCRANC, along with the constrained algorithm, is presented. The performance of the CCRANC in recovering somatosensory evoked potentials (SEPs) from myoelectric interference is then evaluated through simulations.

  14. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...... to constrained problems. As a second contribution, we thus derive new results for non-strict constraints on the shortfall of intermediate wealth and/or consumption....

  15. CANONICAL FORMULATION OF NONHOLONOMIC CONSTRAINED SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    GUO YONG-XIN; YU YING; HUANG HAI-JUN

    2001-01-01

    Based on the Ehresmann connection theory and symplectic geometry, the canonical formulation of nonholonomic constrained mechanical systems is described. Following the Lagrangian formulation of the constrained system, the Hamiltonian formulation is given by Legendre transformation. The Poisson bracket defined by an anti-symmetric tensor does not satisfy the Jacobi identity, owing to the nonintegrability of nonholonomic constraints. The constraint manifold can admit a symplectic submanifold in some cases, in which the Lie algebraic structure exists.

  16. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
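
    The regularizer itself is easy to sketch: an empirical estimate of the mutual information between the (soft) classification response and the true label, built from a soft joint-count table; the hypothetical helper below shows only this term, not the full learning objective.

```python
# Empirical mutual information between soft classifier responses and labels.
import numpy as np

def mutual_information(probs, labels, n_classes):
    """probs: (n, k) soft classification responses; labels: (n,) true classes."""
    joint = np.zeros((n_classes, n_classes))
    for p, y in zip(probs, labels):
        joint[y] += p                        # soft co-occurrence counts
    joint /= joint.sum()
    py = joint.sum(axis=1, keepdims=True)    # true-label marginal
    pr = joint.sum(axis=0, keepdims=True)    # response marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log(joint / (py * pr))
    return np.nansum(terms)                  # 0 * log 0 treated as 0

# A confident, accurate classifier carries more information about the label:
labels = np.array([0, 0, 1, 1])
sharp = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
vague = np.full((4, 2), 0.5)
print(mutual_information(sharp, labels, 2),   # > 0
      mutual_information(vague, labels, 2))   # = 0
```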

  17. Constraining pion interactions at very high energies by cosmic ray data

    CERN Document Server

    Ostapchenko, Sergey

    2016-01-01

    We demonstrate that a substantial part of the present uncertainties in model predictions for the average maximum depth of cosmic ray-induced extensive air showers is related to very high energy pion-air collisions. Our analysis shows that the position of the maximum of the muon production profile in air showers is strongly sensitive to the properties of such interactions. Therefore, the measurements of the maximal muon production depth by cosmic ray experiments provide a unique opportunity to constrain the treatment of pion-air interactions at very high energies and to reduce thereby model-related uncertainties for the shower maximum depth.

  18. The strong maximum principle revisited

    Science.gov (United States)

    Pucci, Patrizia; Serrin, James

    In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.

  19. Constraining gravity with hadron physics: neutron stars, modified gravity and gravitational waves

    Science.gov (United States)

    Llanes-Estrada, Felipe J.

    2017-03-01

    The finding of Gravitational Waves (GW) by the aLIGO scientific and VIRGO collaborations opens opportunities to better test and understand strong interactions, both nuclear-hadronic and gravitational. Assuming General Relativity holds, one can constrain hadron physics at a neutron star. But precise knowledge of the Equation of State and transport properties in hadron matter can also be used to constrain the theory of gravity itself. I review a couple of these opportunities in the context of modified f(R) gravity, the maximum mass of neutron stars, and progress in the Equation of State of neutron matter from the chiral effective field theory of QCD.

  20. Constraining gravity with hadron physics: neutron stars, modified gravity and gravitational waves

    CERN Document Server

    Llanes-Estrada, Felipe J

    2016-01-01

    The finding of Gravitational Waves by the aLIGO scientific and VIRGO collaborations opens opportunities to better test and understand strong interactions, both nuclear-hadronic and gravitational. Assuming General Relativity holds, one can constrain hadron physics at a neutron star. But precise knowledge of the Equation of State and transport properties in hadron matter can also be used to constrain the theory of gravity itself. I review a couple of these opportunities in the context of modified f(R) gravity, the maximum mass of neutron stars, and progress in the Equation of State of neutron matter from the chiral effective field theory of QCD.

  1. Maximum Matchings via Glauber Dynamics

    CERN Document Server

    Jindal, Anant; Pal, Manjish

    2011-01-01

    In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
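
    The Markov chain itself is only named in this record; as a sketch of the kind of update involved (not the authors' algorithm), here is Glauber dynamics on matchings, the monomer-dimer analogue of the hard-core model, whose stationary distribution weights a matching $M$ by $\lambda^{|M|}$ so that large fugacity $\lambda$ favors near-maximum matchings:

        import random

        def glauber_matching_step(M, matched, edges, lam):
            # Heat-bath update: pick a uniform edge; include it with probability
            # lam / (1 + lam) if both endpoints are free, else leave/remove it.
            e = random.choice(edges)          # edges stored as tuples (u, v)
            u, v = e
            if e in M:
                if random.random() < 1.0 / (1.0 + lam):
                    M.remove(e); matched.discard(u); matched.discard(v)
            elif u not in matched and v not in matched:
                if random.random() < lam / (1.0 + lam):
                    M.add(e); matched.add(u); matched.add(v)

    Running many such steps and keeping the largest matching seen gives a simple randomized heuristic; the paper's $O(m \log^2 n)$ bound of course requires a careful analysis beyond this sketch.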

  2. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    F W Giacobbe

    2003-03-01

    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.
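
    The quoted solar-mass equivalent is consistent with the standard solar mass $M_\odot \approx 1.989 \times 10^{30}$ kg:

        $M_{\text{core}} / M_\odot = (2.69 \times 10^{30}\,\text{kg}) / (1.989 \times 10^{30}\,\text{kg}) \approx 1.35.$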

  3. Maximum entropy production in daisyworld

    Science.gov (United States)

    Maunu, Haley A.; Knuth, Kevin H.

    2012-05-01

    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

  4. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino;

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite ... no 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy of the Markov random field defined by the 2-D constraint is estimated to be (upper bounded by) 0.8570 bits/symbol using the iterative technique of Belief Propagation on 2 × 2 finite lattices. Based on combinatorial bounding techniques the maximum entropy for the constraint was determined to be 0.848.
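
    Entropy values of this kind can be approximated with a standard transfer-matrix computation on strips of finite width; the sketch below (not from the paper) estimates the entropy in bits/symbol of the "no uniform 2 × 2 square" constraint on a width-W strip as $\log_2 \lambda_{\max} / W$, which tends to the 2-D value as $W$ grows:

        import numpy as np
        from itertools import product

        def strip_entropy(width):
            # Transfer matrix over binary columns of the given height; a
            # transition between adjacent columns a, b is allowed iff no 2x2
            # square (rows k, k+1 of a and b) is uniformly 0 or uniformly 1.
            cols = list(product((0, 1), repeat=width))
            T = np.array([[float(all(not (a[k] == a[k + 1] == b[k] == b[k + 1])
                                     for k in range(width - 1)))
                           for b in cols] for a in cols])
            lam_max = max(abs(np.linalg.eigvals(T)))
            return np.log2(lam_max) / width   # bits per symbol on the strip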

  5. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.

  6. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational...

  7. Vibration control of cylindrical shells using active constrained layer damping

    Science.gov (United States)

    Ray, Manas C.; Chen, Tung-Huei; Baz, Amr M.

    1997-05-01

    The fundamentals of controlling the structural vibration of cylindrical shells treated with active constrained layer damping (ACLD) treatments are presented. The effectiveness of the ACLD treatments in enhancing the damping characteristics of thin cylindrical shells is demonstrated theoretically and experimentally. A finite element model (FEM) is developed to describe the dynamic interaction between the shells and the ACLD treatments. The FEM is used to predict the natural frequencies and the modal loss factors of shells which are partially treated with patches of the ACLD treatments. The predictions of the FEM are validated experimentally using stainless steel cylinders which are 20.32 cm in diameter, 30.4 cm in length and 0.05 cm in thickness. The cylinders are treated with ACLD patches of different configurations in order to target single or multiple modes of lobar vibrations. The ACLD patches used are made of a DYAD 606 visco-elastic layer which is sandwiched between two layers of PVDF piezo-electric films. Vibration attenuations of 85% are obtained with a maximum control voltage of 40 volts. Such attenuations are attributed to the effectiveness of the ACLD treatment in increasing the modal damping ratios by about a factor of four over those of conventional passive constrained layer damping (PCLD) treatments. The obtained results suggest the potential of the ACLD treatments in controlling the vibration of cylindrical shells which constitute the major building block of many critical structures such as cabins of aircraft, hulls of submarines and bodies of rockets and missiles.

  8. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  9. Towards weakly constrained double field theory

    Science.gov (United States)

    Lee, Kanghoon

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  10. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki;

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.

  11. Towards Weakly Constrained Double Field Theory

    CERN Document Server

    Lee, Kanghoon

    2015-01-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  12. The Distance Field Model and Distance Constrained MAP Adaptation Algorithm

    Institute of Scientific and Technical Information of China (English)

    YUPeng; WANGZuoying

    2003-01-01

    Spatial structure information, i.e., the relative position information of phonetic states in the feature space, has yet to be carefully researched. In this paper, a new model named "Distance Field" is proposed to describe the spatial structure information. Based on this model, a modified MAP adaptation algorithm named distance constrained maximum a posteriori (DCMAP) is introduced. The distance field model gives a large penalty when the spatial structure is destroyed. As a result, the DCMAP preserves the spatial structure information in the adaptation process. Experiments show the Distance Field Model improves the performance of MAP adaptation. Further results show DCMAP has strong cross-state estimation ability, which is used to train a well-performed speaker-dependent model by data from only part of pho-

  13. Constrained instanton and black hole creation

    Institute of Scientific and Technical Information of China (English)

    WU Zhongchao; XU Donghui

    2004-01-01

    A gravitational instanton is considered as the seed for the creation of a universe. However, there exist too few instantons. To include many interesting phenomena in the framework of quantum cosmology, the concept of constrained gravitational instanton is inevitable. In this paper we show how a primordial black hole is created from a constrained instanton. The quantum creation of a generic black hole in the closed or open background is completely resolved. The relation of the creation scenario with gravitational thermodynamics and topology is discussed.

  14. An approximate, maximum terminal velocity descent to a point

    Energy Technology Data Exchange (ETDEWEB)

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.

  15. Modeling the Microstructural Evolution During Constrained Sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini

    2015-01-01

    A numerical model able to simulate solid-state constrained sintering is presented. The model couples an existing kinetic Monte Carlo model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress...

  16. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.;

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response...

  17. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini

    2014-01-01

    A numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress...

  18. CONSTRAINED RATIONAL CUBIC SPLINE AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Qi Duan; Huan-ling Zhang; Xiang Lai; Nan Xie; Fu-hua (Frank) Cheng

    2001-01-01

    In this paper, a kind of rational cubic interpolation function with linear denominator is constructed. The constrained interpolation, with constraints on the shape of the interpolating curves and on the second-order derivative of the interpolating function, is studied using this interpolation, and as a consequence the convex interpolation conditions have been derived.

  19. PRICING AND HEDGING OPTION UNDER PORTFOLIO CONSTRAINED

    Institute of Scientific and Technical Information of China (English)

    魏刚; 陈世平

    2001-01-01

    The authors employ convex analysis and a stochastic control approach to study the question of hedging contingent claims with a portfolio constrained to take values in a given closed, convex subset of R^K, and extend the results of Gianmario Tessitore and Jerzy Zabczyk [6] on pricing options in the multiasset and multinomial model.

  20. Neuroevolutionary Constrained Optimization for Content Creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    ... and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution...

  1. Conjugate variables in continuous maximum-entropy inference.

    Science.gov (United States)

    Davis, Sergio; Gutiérrez, Gonzalo

    2012-11-01

    For a continuous maximum-entropy distribution (obtained from an arbitrary number of simultaneous constraints), we derive a general relation connecting the Lagrange multipliers and the expectation values of certain particularly constructed functions of the states of the system. From this relation, an estimator for a given Lagrange multiplier can be constructed from derivatives of the corresponding constraining function. These estimators sometimes lead to the determination of the Lagrange multipliers by way of solving a linear system, and, in general, they provide another tool to widen the applicability of Jaynes's formalism. This general relation, especially well suited for computer simulation techniques, also provides some insight into the interpretation of the hypervirial relations known in statistical mechanics and the recently derived microcanonical dynamical temperature. We illustrate the usefulness of these new relations with several applications in statistics.
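
    For context, the textbook relation that such estimators generalize: a continuous maximum-entropy distribution with constrained expectations $\langle f_k \rangle$ has the form

        $p(x) = \frac{1}{Z(\lambda)}\, e^{-\sum_k \lambda_k f_k(x)}, \qquad \langle f_k \rangle = -\frac{\partial \ln Z}{\partial \lambda_k},$

    and the relations in the paper provide estimators for the $\lambda_k$ in situations where $Z(\lambda)$ is not available in closed form (this framing is a gloss, not the paper's notation).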

  2. Double-sided fuzzy chance-constrained linear fractional programming approach for water resources management

    Science.gov (United States)

    Cui, Liang; Li, Yongping; Huang, Guohe

    2016-06-01

    A double-sided fuzzy chance-constrained fractional programming (DFCFP) method is developed for planning water resources management under uncertainty. In DFCFP the system marginal benefit per unit of input under uncertainty can also be balanced. The DFCFP is applied to a real case of water resources management in the Zhangweinan River Basin, China. The results show that the amounts of water allocated to the two cities (Anyang and Handan) would be different under the minimum and maximum reliability degrees. It was found that the marginal benefit of the system solved by DFCFP is larger than the system benefit under the minimum and maximum reliability degrees, which not only improves economic efficiency overall but also remedies water deficiency. Compared with the traditional double-sided fuzzy chance-constrained programming (DFCP) method, the solutions obtained from DFCFP are significantly higher, and the DFCFP has advantages in water conservation.

  3. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
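
    A minimal sketch of the Levinson recursion invoked here, in the usual linear-prediction conventions (how the correlation functions are formed from the seismograms is omitted, and the names are illustrative): given autocorrelations r[0..order], it solves the Toeplitz normal equations for the prediction-error filter, and the reflection coefficients it produces are the quantities the abstract notes stay below 1 in magnitude:

        import numpy as np

        def levinson_durbin(r, order):
            # Solve the Toeplitz normal equations of linear prediction for the
            # prediction-error filter a (with a[0] = 1); also return the
            # reflection coefficients and the final prediction-error power.
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = float(r[0])
            refl = np.zeros(order)
            for m in range(1, order + 1):
                acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
                k = -acc / err
                refl[m - 1] = k
                a_prev = a.copy()
                a[1:m + 1] += k * a_prev[:m][::-1]
                err *= 1.0 - k * k
            return a, refl, err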

  4. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
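
    The differentiation step is the standard maximum-power condition. Writing the panel power as $P(V) = V\, I(V)$,

        $\frac{dP}{dV} = I(V) + V\, \frac{dI}{dV} = 0 \quad \Rightarrow \quad I(V_{mp}) = -V_{mp} \left. \frac{dI}{dV} \right|_{V_{mp}},$

    which is solved for the voltage of maximum power $V_{mp}$ (and hence the current and power) at each time of day.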

  5. CONSTRAINED SPECTRAL CLUSTERING FOR IMAGE SEGMENTATION

    Science.gov (United States)

    Sourati, Jamshid; Brooks, Dana H.; Dy, Jennifer G.; Erdogmus, Deniz

    2013-01-01

    Constrained spectral clustering with affinity propagation in its original form is not practical for large scale problems like image segmentation. In this paper we employ novelty selection sub-sampling strategy, besides using efficient numerical eigen-decomposition methods to make this algorithm work efficiently for images. In addition, entropy-based active learning is also employed to select the queries posed to the user more wisely in an interactive image segmentation framework. We evaluate the algorithm on general and medical images to show that the segmentation results will improve using constrained clustering even if one works with a subset of pixels. Furthermore, this happens more efficiently when pixels to be labeled are selected actively. PMID:24466500

  6. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  7. A constrained supersymmetric left-right model

    CERN Document Server

    Hirsch, Martin; Opferkuch, Toby; Porod, Werner; Staub, Florian

    2016-01-01

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model's capability to explain current anomalies observed at the LHC.

  8. Global marine primary production constrains fisheries catches.

    Science.gov (United States)

    Chassot, Emmanuel; Bonhommeau, Sylvain; Dulvy, Nicholas K; Mélin, Frédéric; Watson, Reg; Gascuel, Didier; Le Pape, Olivier

    2010-04-01

    Primary production must constrain the amount of fish and invertebrates available to expanding fisheries; however, the degree of limitation has only been demonstrated at regional scales to date. Here we show that phytoplanktonic primary production, estimated from an ocean-colour satellite (SeaWiFS), is related to global fisheries catches at the scale of Large Marine Ecosystems, while accounting for temperature and ecological factors such as ecosystem size and type, species richness, animal body size, and the degree and nature of fisheries exploitation. Indeed we show that global fisheries catches since 1950 have been increasingly constrained by the amount of primary production. The primary production appropriated by current global fisheries is 17-112% higher than that appropriated by sustainable fisheries. Global primary production appears to be declining, in part due to climate variability and change, with consequences for near-future fisheries catches.

  9. Cosmogenic photons strongly constrain UHECR source models

    CERN Document Server

    van Vliet, Arjen

    2016-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  10. Cosmogenic photons strongly constrain UHECR source models

    Science.gov (United States)

    van Vliet, Arjen

    2017-03-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  11. Doubly Constrained Robust Blind Beamforming Algorithm

    Directory of Open Access Journals (Sweden)

    Xin Song

    2013-01-01

    We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, which is based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields higher signal capture performance, achieves greater array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm on beampattern control and output SINR enhancement.
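
    For orientation, the core iteration of the plain (non-robust) least-squares constant modulus algorithm that the proposal builds on can be sketched as follows; the Bayesian and worst-case robustness terms of the paper are not reproduced, and the variable names are illustrative:

        import numpy as np

        def lscma(X, w0, iters=50, eps=1e-12):
            # X: (n_sensors, n_snapshots) complex snapshots; w0: initial weights.
            # Alternate between hard-limiting the array output to unit modulus
            # and solving least squares for weights that reproduce that output.
            w = w0.astype(complex)
            Rinv = np.linalg.pinv(X @ X.conj().T)     # inverse sample covariance
            for _ in range(iters):
                y = w.conj().T @ X                    # array output per snapshot
                d = y / np.maximum(np.abs(y), eps)    # project onto unit modulus
                w = Rinv @ (X @ d.conj())             # least-squares weight update
            return w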

  12. How peer-review constrains cognition

    DEFF Research Database (Denmark)

    Cowley, Stephen

    2015-01-01

    Peer-review is neither reliable, fair, nor a valid basis for predicting ‘impact’: as quality control, peer-review is not fit for purpose. Endorsing the consensus, I offer a reframing: while a normative social process, peer-review also shapes the writing of a scientific paper. In so far as ‘cognition’ describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as ‘symbolizations’, replicable patterns that use technologically enabled activity. On this bio-cognitive view, peer-review constrains knowledge-making by writers, editors, reviewers. Authors are prompted to recursively re-aggregate symbolizations to present what are deemed acceptable knowledge claims. How, then, can recursive re-embodiment be explored? In illustration, I sketch how the paper’s own content...

  13. Capacity constrained assignment in spatial databases

    DEFF Research Database (Denmark)

    U, Leong Hou; Yiu, Man Lung; Mouratidis, Kyriakos;

    2008-01-01

    Given a point set P of customers (e.g., WiFi receivers) and a point set Q of service providers (e.g., wireless access points), where each q ∈ Q has a capacity q.k, the capacity constrained assignment (CCA) is a matching M ⊆ Q × P such that (i) each point q ∈ Q (p ∈ P) appears at most k times (at most...

  14. CONSTRAINED SPECTRAL CLUSTERING FOR IMAGE SEGMENTATION

    OpenAIRE

    Sourati, Jamshid; Brooks, Dana H.; Dy, Jennifer G.; Erdogmus, Deniz

    2012-01-01

    Constrained spectral clustering with affinity propagation in its original form is not practical for large scale problems like image segmentation. In this paper we employ novelty selection sub-sampling strategy, besides using efficient numerical eigen-decomposition methods to make this algorithm work efficiently for images. In addition, entropy-based active learning is also employed to select the queries posed to the user more wisely in an interactive image segmentation framework. We evaluate ...

  15. Constrained simulation of the Bullet Cluster

    Energy Technology Data Exchange (ETDEWEB)

    Lage, Craig; Farrar, Glennys, E-mail: csl336@nyu.edu [Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003 (United States)

    2014-06-01

    In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full two-dimensional (2D) observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with Navarro-Frenk-White dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocities. The study gives insight into the astrophysical processes at play during a galaxy cluster merger, and constrains the strength and coherence length of the magnetic fields. The techniques developed here to create realistic, stable, triaxial clusters, and to utilize the totality of the 2D image data, will be applicable to future simulation studies of other merging clusters. This approach of constrained simulation, when applied to well-measured systems, should be a powerful complement to present tools for understanding X-ray clusters and their magnetic fields, and the processes governing their formation.
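
    Schematically, the figure of merit described is a pixel-wise chi-squared over the full 2D maps,

        $\chi^2(\theta) = \sum_i \big(D_i - M_i(\theta)\big)^2 / \sigma_i^2,$

    where $D_i$ are the observed map pixels (lensing reconstruction and X-ray flux), $M_i(\theta)$ the corresponding simulated maps for initial conditions $\theta$, and $\sigma_i$ the measurement uncertainties; the exact weighting used by the authors is not reproduced here.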

  16. Constraining neutron star matter with Quantum Chromodynamics

    CERN Document Server

    Kurkela, Aleksi; Schaffner-Bielich, Jurgen; Vuorinen, Aleksi

    2014-01-01

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount --- or even presence --- of quark matter inside the stars.

  17. Constraining neutron star matter with quantum chromodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kurkela, Aleksi [Physics Department, Theory Unit, CERN, CH-1211 Genève 23 (Switzerland); Fraga, Eduardo S.; Schaffner-Bielich, Jürgen [Institute for Theoretical Physics, Goethe University, D-60438 Frankfurt am Main (Germany); Vuorinen, Aleksi [Department of Physics and Helsinki Institute of Physics, P.O. Box 64, FI-00014 University of Helsinki (Finland)

    2014-07-10

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount—or even presence—of quark matter inside the stars.

  18. Constraining Neutron Star Matter with Quantum Chromodynamics

    Science.gov (United States)

    Kurkela, Aleksi; Fraga, Eduardo S.; Schaffner-Bielich, Jürgen; Vuorinen, Aleksi

    2014-07-01

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount—or even presence—of quark matter inside the stars.

  19. Synthesis of constrained analogues of tryptophan

    Directory of Open Access Journals (Sweden)

    Elisabetta Rossi

    2015-10-01

    A Lewis acid-catalysed diastereoselective [4 + 2] cycloaddition of vinylindoles and methyl 2-acetamidoacrylate, leading to methyl 3-acetamido-1,2,3,4-tetrahydrocarbazole-3-carboxylate derivatives, is described. Treatment of the obtained cycloadducts under hydrolytic conditions results in the preparation of a small library of compounds bearing the free amino acid function at C-3 and pertaining to the class of constrained tryptophan analogues.

  20. Constraining RRc candidates using SDSS colours

    CERN Document Server

    Bányai, E; Molnár, L; Dobos, L; Szabó, R

    2016-01-01

    The light variations of first-overtone RR Lyrae stars and contact eclipsing binaries can be difficult to distinguish. The Catalina Periodic Variable Star catalog contains several misclassified objects, despite the classification efforts by Drake et al. (2014). They used metallicity and surface gravity derived from spectroscopic data (from the SDSS database) to rule out binaries. Our aim is to further constrain the catalog using SDSS colours to estimate physical parameters for stars that did not have spectroscopic data.

  1. Constraining Source Redshift Distributions with Gravitational Lensing

    CERN Document Server

    Wittman, D

    2012-01-01

    We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that ~40 massive ($\sigma_v = 1200$ km/s) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to ~11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as $N_{\mathrm{lens}}^{0.5}$, but the improvement slows as one is forced to use lenses further ...

  2. Cosmicflows Constrained Local UniversE Simulations

    CERN Document Server

    Sorce, Jenny G; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M; Steinmetz, Matthias; Tully, R Brent; Pomarede, Daniel; Carlesi, Edoardo

    2015-01-01

    This paper combines observational datasets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighborhood. With measurements of radial peculiar velocities in the Local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box, where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 Mpc/h scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observatio...

  3. HCV management in resource-constrained countries.

    Science.gov (United States)

    Lim, Seng Gee

    2017-02-21

    With the arrival of all-oral directly acting antiviral (DAA) therapy with high cure rates, the promise of hepatitis C virus (HCV) eradication is within closer reach. The availability of generic DAAs has improved access in countries with constrained resources. However, therapy is only one component of the HCV care continuum, which is the framework for HCV management from identifying patients to cure. The large number of undiagnosed HCV cases is the biggest concern, and strategies to address this are needed: risk-factor screening is suboptimal, and HCV confirmation through either reflex HCV RNA screening or, ideally, a sensitive point-of-care test is needed. HCV notification (e.g., in Australia, where the proportion of HCV diagnosed is 75%) may improve diagnosis and may lead to benefits by increasing linkage to care, therapy and cure. Evaluations for cirrhosis using non-invasive markers are best done with a biological panel, but they are only moderately accurate. In resource-constrained settings, only generic HCV medications are available, and a combination of sofosbuvir, ribavirin, ledipasvir or daclatasvir provides sufficient efficacy for all genotypes, but this is likely to be replaced with pangenotypic regimens such as sofosbuvir/velpatasvir and glecaprevir/pibrentasvir. In conclusion, HCV management in resource-constrained settings is challenging on multiple fronts because of the lack of infrastructure, facilities, trained manpower and equipment. However, it is still possible to make a significant impact towards HCV eradication through a concerted effort by individuals and national organisations with domain expertise in this area.

  4. An English language interface for constrained domains

    Science.gov (United States)

    Page, Brenda J.

    1989-01-01

    The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.

  5. Constrained and joint inversion on unstructured meshes

    Science.gov (United States)

    Doetsch, J.; Jordi, C.; Rieckh, V.; Guenther, T.; Schmelzbach, C.

    2015-12-01

    Unstructured meshes allow for inclusion of arbitrary surface topography, complex acquisition geometry and undulating geological interfaces in the inversion of geophysical data. This flexibility opens new opportunities for coupling different geophysical and hydrological data sets in constrained and joint inversions. For example, incorporating geological interfaces that have been derived from high-resolution geophysical data (e.g., ground penetrating radar) can add geological constraints to inversions of electrical resistivity data. These constraints can be critical for a hydrogeological interpretation of the inversion results. For time-lapse inversions of geophysical data, constraints can be derived from hydrological point measurements in boreholes, but it is difficult to include these hard constraints in the inversion of electrical resistivity monitoring data. Especially mesh density and the regularization footprint around the hydrological point measurements are important for an improved inversion compared to the unconstrained case. With the help of synthetic and field examples, we analyze how regularization and coupling operators should be chosen for time-lapse inversions constrained by point measurements and for joint inversions of geophysical data in order to take full advantage of the flexibility of unstructured meshes. For the case of constraining to point measurements, it is important to choose a regularization operator that extends beyond the neighboring cells and the uncertainty in the point measurements needs to be accounted for. For joint inversion, the choice of the regularization depends on the expected subsurface heterogeneity and the cell size of the parameter mesh.

  6. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CCs used in the design of an SFCL can be determined.

  7. Comparison of selection schemes for evolutionary constrained optimization

    NARCIS (Netherlands)

    Kemenade, C.H.M. van

    1996-01-01

    Evolutionary algorithms simulate the process of evolution in order to evolve solutions to optimization problems. An interesting domain of application is to solve numerical constrained optimization problems. We introduce a simple constrained optimization problem with scalable dimension, adjustable co

  8. The recursion operator for a constrained CKP hierarchy

    CERN Document Server

    Li, Chuanzhong; He, Jingsong; Cheng, Yi

    2010-01-01

    This paper gives a recursion operator for a 1-constrained CKP hierarchy, and by the recursion operator it proves that the 1-constrained CKP hierarchy can be reduced to the mKdV hierarchy under the condition $q=r$.

  9. The maximum rotation of a galactic disc

    NARCIS (Netherlands)

    Bottema, R

    1997-01-01

    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously

  10. 20 CFR 229.48 - Family maximum.

    Science.gov (United States)

    2010-04-01

    ... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...

  11. Generalised maximum entropy and heterogeneous technologies

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.

    1999-01-01

    Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam

  12. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint polymer constrained prosthesis. 888.3780 Section 888.3780 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  13. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint polymer constrained prosthesis. 888... constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device intended... generic type of device includes prostheses that consist of a single flexible across-the-joint...

  14. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes. Numerical results for the capacities are presented.

  15. Duality of Maximum Entropy and Minimum Divergence

    Directory of Open Access Journals (Sweden)

    Shinto Eguchi

    2014-06-01

    We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for arbitrarily giving a statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
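
    In the familiar Boltzmann-Gibbs-Shannon case, the separation into cross and diagonal entropy is the identity

        $D_{\mathrm{KL}}(p \,\|\, q) = H(p, q) - H(p), \qquad H(p, q) = -\int p \ln q, \quad H(p) = -\int p \ln p,$

    so that minimizing the divergence over a model family in $q$ recovers maximum likelihood, while the diagonal entropy term is what the maximum-entropy model extremizes; the generator-function class in the paper generalizes this decomposition.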

  16. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigating and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift free navigation is achieved with respect to the environment.

  17. Utility Constrained Energy Minimization In Aloha Networks

    CERN Document Server

    Khodaian, Amir Mahdi; Talebi, Mohammad S

    2010-01-01

    In this paper we consider the issue of energy efficiency in random access networks and show that optimizing the transmission probabilities of nodes can enhance network performance in terms of energy consumption and fairness. First, we propose a heuristic power control method that improves throughput; we then model the Utility Constrained Energy Minimization (UCEM) problem, in which the utility constraint takes into account single- and multi-node performance. UCEM is modeled as a convex optimization problem, and Sequential Quadratic Programming (SQP) is used to find the optimal transmission probabilities. Numerical results show that our method can achieve fairness, reduce energy consumption and enhance the lifetime of such networks.
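    As a concrete illustration of the UCEM pattern, the sketch below minimizes total transmission energy subject to an aggregate throughput constraint using scipy's SLSQP (an SQP-type solver). The slotted-Aloha throughput model, cost vector and threshold are assumptions for demonstration, not the paper's exact formulation.

```python
# Toy utility-constrained energy minimization for slotted Aloha, solved with
# an SQP-type method (scipy's SLSQP). All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

n = 4                                 # number of nodes
e = np.array([1.0, 1.2, 0.8, 1.0])    # per-transmission energy cost (assumed)
U_min = 0.30                          # required total throughput (assumed)

def throughput(p):
    # node i succeeds iff it transmits and all other nodes stay silent
    others = np.array([np.prod(np.delete(1.0 - p, i)) for i in range(n)])
    return p * others

res = minimize(
    lambda p: float(e @ p),           # total energy, proportional to sum p_i
    x0=np.full(n, 0.2), method="SLSQP",
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "ineq", "fun": lambda p: throughput(p).sum() - U_min}],
)
print(res.x, throughput(res.x).sum())
```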

  18. Lifespan theorem for constrained surface diffusion flows

    CERN Document Server

    McCoy, James; Williams, Graham; 10.1007/s00209-010-0720-7

    2012-01-01

    We consider closed immersed hypersurfaces in $\\R^{3}$ and $\\R^4$ evolving by a class of constrained surface diffusion flows. Our result, similar to earlier results for the Willmore flow, gives both a positive lower bound on the time for which a smooth solution exists, and a small upper bound on a power of the total curvature during this time. By phrasing the theorem in terms of the concentration of curvature in the initial surface, our result holds for very general initial data and has applications to further development in asymptotic analysis for these flows.

  19. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number of geographically distributed resources connected through an optical network work together for solving large problems. A number of heuristics are proposed along with an exact solution approach based on Dantzig-Wolfe decomposition. The latter has some performance difficulties while the heuristics solve all instances...
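    For intuition, a minimal greedy heuristic in this spirit might look as follows. This is illustrative only: the job data, the edge-capacity model and the fixed routing paths are assumptions, and the paper's heuristics and Dantzig-Wolfe approach are more elaborate.

```python
# Greedy sketch: accept jobs by decreasing profit if their bandwidth demand
# fits on every edge of the (pre-chosen) path to their resource.
def greedy_schedule(jobs, capacity):
    """jobs: list of (profit, demand, path); capacity: dict edge -> free bw."""
    total = 0
    for profit, demand, path in sorted(jobs, reverse=True):
        if all(capacity[e] >= demand for e in path):
            for e in path:
                capacity[e] -= demand   # reserve bandwidth along the path
            total += profit
    return total

jobs = [(10, 2, ["a-b"]), (7, 3, ["a-b", "b-c"]), (5, 1, ["b-c"])]
print(greedy_schedule(jobs, {"a-b": 4, "b-c": 3}))  # accepts jobs 1 and 3 -> 15
```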

  20. Can Neutron stars constrain Dark Matter?

    DEFF Research Database (Denmark)

    Kouvaris, Christoforos; Tinyakov, Peter

    2010-01-01

    We argue that observations of old neutron stars can impose constraints on dark matter candidates even with very small elastic or inelastic cross section, and self-annihilation cross section. We find that old neutron stars close to the galactic center or in globular clusters can maintain a surface temperature that could in principle be detected. Due to their compactness, neutron stars can accrete WIMPs efficiently even if the WIMP-to-nucleon cross section obeys the current limits from direct dark matter searches, and therefore they could constrain a wide range of dark matter candidates.

  1. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac's formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  2. Charged particles constrained to a curved surface

    CERN Document Server

    Müller, Thomas

    2012-01-01

    We study the motion of charged particles constrained to arbitrary two-dimensional curved surfaces but interacting in three-dimensional space via the Coulomb potential. To speed up the interaction calculations, we use the parallel compute capability of the Compute Unified Device Architecture (CUDA) of today's graphics boards. The particles and the curved surfaces are shown using the Open Graphics Library (OpenGL). The paper is intended to give graduate students, who have basic experience with electrostatics and differential geometry, a deeper understanding of charged-particle interactions and a short introduction to handling a many-particle system using parallel computing on a single home computer.
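    The core computation being parallelized is the O(N^2) pairwise Coulomb sum. A serial NumPy stand-in for the CUDA kernel, for particles constrained to a unit sphere with all physical constants set to 1, could look like this (an illustrative sketch, not the paper's code):

```python
# O(N^2) pairwise Coulomb energy for particles constrained to a unit sphere.
import numpy as np

rng = np.random.default_rng(0)
N = 500
v = rng.normal(size=(N, 3))
pos = v / np.linalg.norm(v, axis=1, keepdims=True)   # points on the sphere

diff = pos[:, None, :] - pos[None, :, :]             # pairwise displacements
r = np.linalg.norm(diff, axis=-1)
np.fill_diagonal(r, np.inf)                          # exclude self-interaction
energy = 0.5 * np.sum(1.0 / r)                       # Coulomb energy, q = k = 1
print(f"total Coulomb energy: {energy:.2f}")
```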

  3. Multiple Clustering Views via Constrained Projections

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Assent, Ira; Bailey, James

    2012-01-01

    ... In high dimensional data, it is common to see that the data can be grouped into different yet meaningful ways. This gives rise to the recently emerging research area of discovering alternative clusterings. In this preliminary work, we propose a novel framework to generate multiple clustering views. ... The framework relies on a constrained data projection approach by which we ensure that a novel alternative clustering being found is not only qualitatively strong but also distinctively different from a reference clustering solution. We demonstrate the potential of the proposed framework using both synthetic...

  4. Constraining Milky Way mass with Hypervelocity Stars

    CERN Document Server

    Fragione, Giacomo

    2016-01-01

    We show that hypervelocity stars (HVSs) ejected from the center of the Milky Way galaxy can be used to constrain the mass of its halo. The asymmetry in the radial velocity distribution of halo stars due to escaping HVSs depends on the halo potential (escape speed) as long as the round trip orbital time is shorter than the stellar lifetime. Adopting a characteristic HVS travel time of $300$ Myr, which corresponds to the average mass of main sequence HVSs ($3.2$ M$_{\\odot}$), we find that current data favors a mass for the Milky Way in the range $(1.2$-$1.7)\\times 10^{12} \\mathrm{M}_\\odot$.

  5. Energetic Materials Optimization via Constrained Search

    Science.gov (United States)

    2015-06-01

    ... space. LCAP and VP-DFT interpolate continuously between the Hamiltonians of various chemical species. Furthermore, recently an investigation into ... Computational Chemistry Protocol. All quantum-mechanical computations were performed using Gaussian 09. All geometries were preoptimized with B3LYP/3-21G under ... via nonnegative Lagrange multipliers $\lambda \in \mathbb{R}^{3}_{+}$ for the 3 constraints to the augmented Lagrangian function $L(x, \lambda) := P(x) - \lambda^{\mathsf{T}} C(x)$ as a constrained min...

  6. Constrained inflaton due to a complex scalar

    Energy Technology Data Exchange (ETDEWEB)

    Budhi, Romy H. S. [Physics Department, Gadjah Mada University,Yogyakarta 55281 (Indonesia); Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Kashiwase, Shoichi; Suematsu, Daijiro [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan)

    2015-09-14

    We reexamine inflation due to a constrained inflaton in the model of a complex scalar. The inflaton evolves along a spiral-like valley of a special scalar potential in the scalar field space, just like single field inflation. A sub-Planckian inflaton can induce sufficient e-foldings because of a long slow-roll path. In a special limit, the scalar spectral index and the tensor-to-scalar ratio have expressions equivalent to those of inflation with a monomial potential φ^n. Favorable values for them can be obtained by varying parameters in the potential. This model could be embedded in a certain radiative neutrino mass model.

  7. QCD strings as constrained grassmannian sigma model

    CERN Document Server

    Viswanathan, K S; Viswanathan, K S; Parthasarathy, R

    1995-01-01

    We present calculations for the effective action of the string world sheet in R^3 and R^4, utilizing its correspondence with the constrained Grassmannian sigma model. Minimal surfaces describe the dynamics of open strings while harmonic surfaces describe that of closed strings. The one-loop effective action for these is calculated with instanton and anti-instanton backgrounds, representing N-string interactions at the tree level. The effective action is found to be the partition function of a classical modified Coulomb gas in the confining phase, with a dynamically generated mass gap.

  8. Weight-Constrained Minimum Spanning Tree Problem

    OpenAIRE

    Henn, Sebastian Tobias

    2007-01-01

    In an undirected graph G we associate costs and weights to each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and of minimum total cost under this restriction. In this thesis a literature overview of this NP-hard problem, theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some inclusion and exclusion tests for this problem. We apply a ranking algorithm and the me...

  9. The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms

    Institute of Scientific and Technical Information of China (English)

    HE Zhong-qiu; LI Dao-ben

    2003-01-01

    This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1~3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are deduced subject to a new constraint condition. Both are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms, and simulations compare their performance.
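    The generic constrained-descent skeleton shared by such algorithms can be sketched as follows. The quadratic cost, the unit-norm constraint and all parameters here are stand-ins, since the paper's cost function and constraint condition are not reproduced in the abstract.

```python
# Constrained steepest-descent pattern (illustrative): minimize a quadratic
# cost f(w) = w^T R w - 2 p^T w subject to ||w|| = 1 by projecting each
# gradient step back onto the constraint set.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
R = A @ A.T + 8 * np.eye(8)      # SPD "correlation" matrix (assumed)
p = rng.normal(size=8)

w = np.ones(8) / np.sqrt(8)      # feasible starting point
mu = 0.01                        # step size
for _ in range(2000):
    grad = 2 * (R @ w - p)       # gradient of the quadratic cost
    w = w - mu * grad
    w = w / np.linalg.norm(w)    # project back onto the unit sphere
print(w)
```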

  10. Constraining the Braking Indices of Magnetars

    CERN Document Server

    Gao, Z F; Wang, N; Yuan, J P; Peng, Q H; Du, Y J

    2015-01-01

    Due to the lack of long term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index $n$ of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of $n$ of nine magnetars with SNRs, and find that they cluster in a range of $1\sim41$. Six magnetars have smaller braking indices of $1<n<3$, and the larger braking indices of $n>3$ for the other three magnetars are attributed to the decay of external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with $1<n<3$ within the updated magneto-thermal evolution models. We point out that there could be some connections between the magnetar's anti-glitch event and its braking index, and the magnitude of $n$ should be taken into account when explaining the event. Although the constrained range of the magnetars' braking indices is tentative, our method provides an effective way to constrain the magnetars' braking indices if th...

  11. Nonstationary sparsity-constrained seismic deconvolution

    Science.gov (United States)

    Sun, Xue-Kai; Sam, Zandong Sun; Xie, Hui-Wen

    2014-12-01

    The Robinson convolution model is mainly restricted by three inappropriate assumptions, i.e., statistically white reflectivity, minimum-phase wavelet, and stationarity. Modern reflectivity inversion methods (e.g., sparsity-constrained deconvolution) generally attempt to suppress the problems associated with the first two assumptions but often ignore that seismic traces are nonstationary signals, which undermines the basic assumption of an unchanging wavelet in reflectivity inversion. Through tests on reflectivity series, we confirm the effects of nonstationarity on reflectivity estimation and the loss of significant information, especially in deep layers. To overcome the problems caused by nonstationarity, we propose a nonstationary convolutional model, and then use the attenuation curve in log spectra to detect and correct the influences of nonstationarity. We use Gabor deconvolution to handle nonstationarity and sparsity-constrained deconvolution to separate reflectivity and wavelet. The combination of the two deconvolution methods effectively handles nonstationarity and greatly reduces the problems associated with the unreasonable assumptions regarding reflectivity and wavelet. Using marine seismic data, we show that correcting nonstationarity helps recover subtle reflectivity information and enhances the characterization of details with respect to the geological record.
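    A minimal stationary building block of this pipeline, sparsity-constrained deconvolution by iterative soft thresholding (ISTA), can be sketched as below. The Ricker wavelet, noise level and threshold are synthetic assumptions, and the paper's nonstationary Gabor step is not included.

```python
# Sparse-spike deconvolution via ISTA: minimize 0.5*||d - W r||^2 + lam*||r||_1.
import numpy as np

rng = np.random.default_rng(2)
n = 200
r_true = np.zeros(n)
r_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)  # sparse reflectivity

t = np.arange(-20, 21)
wav = (1 - 2 * (np.pi * 0.1 * t) ** 2) * np.exp(-(np.pi * 0.1 * t) ** 2)  # Ricker
W = np.array([np.convolve(np.eye(n)[k], wav, mode="same") for k in range(n)]).T
d = W @ r_true + 0.02 * rng.normal(size=n)                    # synthetic trace

lam = 0.05
L = np.linalg.norm(W, 2) ** 2          # Lipschitz constant of the gradient
r = np.zeros(n)
for _ in range(500):                   # ISTA iterations
    g = r + (W.T @ (d - W @ r)) / L    # gradient step on the data misfit
    r = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
print("recovered spikes:", np.flatnonzero(np.abs(r) > 0.1))
```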

  12. Constraining the mass of the Local Group

    CERN Document Server

    Carlesi, Edoardo; Sorce, Jenny G; Gottlöber, Stefan

    2016-01-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter, which cannot be directly observed. To this end the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the LCDM model that is used to set up the simulations, and an LG model, which encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted onto the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different $v_{tan}$ choices affect the peak mass values by up to a factor of 2, and change mass ratios of $M_{M31}$ to $M_{M...

  13. Constraining the halo mass function with observations

    Science.gov (United States)

    Castro, Tiago; Marra, Valerio; Quartin, Miguel

    2016-12-01

    The abundances of dark matter haloes in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behaviour through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper, we ask how well observations can directly constrain the HMF. The observables we consider are galaxy cluster number counts, the galaxy cluster power spectrum and lensing of Type Ia supernovae. Our results show that the Dark Energy Survey is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS (Javalambre-Physics of the Accelerated Universe Astrophysical Survey) can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the HMF. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.

  14. Constrained Metric Learning by Permutation Inducing Isometries.

    Science.gov (United States)

    Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle

    2016-01-01

    The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.

  15. Constrained Simulation of the Bullet Cluster

    CERN Document Server

    Lage, Craig

    2013-01-01

    In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full 2D observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with NFW dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocitie...

  16. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science three main approaches to framework-change were detected in the humanist tradition: 1. In both the pre-theoretical and theoretical domains changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton). 2. Changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard). 3. Between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, as well as provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern, neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.

  17. Constraining dark matter through 21-cm observations

    Science.gov (United States)

    Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.

    2007-05-01

    Beyond the reionization epoch cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case of up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and by future facilities such as the Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.

  18. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.

  19. Maximum Throughput in Multiple-Antenna Systems

    CERN Document Server

    Zamani, Mahdi

    2012-01-01

    The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...

  20. A dual method for maximum entropy restoration

    Science.gov (United States)

    Smith, C. B.

    1979-01-01

    A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
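    A minimal version of such a dual iteration, for generic linear data constraints A f = d, might look as follows (synthetic operator and data; the paper's Fourier-synthesis specifics are not reproduced):

```python
# Dual ascent for maximum-entropy restoration (illustrative sketch): the image
# is parametrized by the dual variables as f = exp(-1 - A^T lam), so only one
# dual parameter per data constraint A f = d is iterated.
import numpy as np

rng = np.random.default_rng(3)
npix, ndata = 64, 16
A = rng.uniform(size=(ndata, npix))      # measurement operator (assumed)
f_true = rng.uniform(0.1, 1.0, size=npix)
d = A @ f_true                           # noiseless data for simplicity

lam = np.zeros(ndata)                    # dual variables, one per datum
for _ in range(5000):
    f = np.exp(-1.0 - A.T @ lam)         # primal image implied by the duals
    lam -= 1e-3 * (d - A @ f)            # gradient step on the dual function
print("constraint residual:", np.linalg.norm(A @ f - d))
```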

  1. The maximum entropy technique. System's statistical description

    CERN Document Server

    Belashev, B Z

    2002-01-01

    The maximum entropy technique (MENT) is applied to searching for the distribution functions of physical values. MENT naturally takes into consideration the demand of maximum entropy, the characteristics of the system and the connection conditions. This allows MENT to be applied to the statistical description of closed and open systems. Examples in which MENT has been used for the description of equilibrium and nonequilibrium states, and of states far from thermodynamical equilibrium, are considered.

  2. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  3. SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH

    Directory of Open Access Journals (Sweden)

    Pandya A M

    2011-04-01

    Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
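    The demarking-point arithmetic can be reproduced in a few lines. Since the abstract quotes means but not standard deviations, the SDs below are assumed for demonstration only (chosen to roughly reproduce the quoted right-side cut-offs):

```python
# Demarking-point (D.P.) style calculation: the "definitely male" cut-off is
# the female mean + 3 SD, and the "definitely female" cut-off the male mean
# - 3 SD, so each region lies beyond the opposite sex's practical range.
mean_m, sd_m = 451.81, 23.0   # right male femur length, mm (SD assumed)
mean_f, sd_f = 417.48, 19.0   # right female femur length, mm (SD assumed)

dp_definitely_male = mean_f + 3 * sd_f     # beyond the female range
dp_definitely_female = mean_m - 3 * sd_m   # below the male range
print(f"> {dp_definitely_male:.2f} mm: male; < {dp_definitely_female:.2f} mm: female")
```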

  4. A constrained-transport magnetohydrodynamics algorithm with near-spectral resolution

    CERN Document Server

    Maron, Jason; Oishi, Jeffrey

    2007-01-01

    Numerical simulations including magnetic fields have become important in many fields of astrophysics. Evolution of magnetic fields by the constrained transport algorithm preserves magnetic divergence to machine precision, and thus represents one preferred method for the inclusion of magnetic fields in simulations. We show that constrained transport can be implemented with volume-centered fields and hyperresistivity on a high-order finite difference stencil. Additionally, the finite-difference coefficients can be tuned to enhance high-wavenumber resolution. Similar techniques can be used for the interpolations required for dealiasing corrections at high wavenumber. Together, these measures yield an algorithm with a wavenumber resolution that approaches the theoretical maximum achieved by spectral algorithms. Because this algorithm uses finite differences instead of fast Fourier transforms, it runs faster and isn't restricted to periodic boundary conditions. Also, since the finite differences are spatially loca...

  5. Constraining the dark side with observations

    Energy Technology Data Exchange (ETDEWEB)

    Diez-Tejedor, Alberto [Dpto. de Fisica Teorica, Universidad del PaIs Vasco, Apdo. 644, 48080, Bilbao (Spain)

    2007-05-15

    The main purpose of this talk is to use the observational evidence pointing to the existence of a dark side in the universe in order to infer some of the properties of the unseen material. We will work within the Unified Dark Matter models, in which both Dark Matter and Dark Energy appear as the result of one unknown component. By modeling this component effectively with a classical scalar field minimally coupled to gravity, we will use the observations to constrain the form of the dark action. Using the flat rotation curves of spiral galaxies we will see that we are restricted to the use of purely kinetic actions, previously studied in cosmology by Scherrer. Finally we arrive at a simple action which fits both cosmological and astrophysical observations.

  6. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial ... reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals ... pertaining to the communication model when the resources that can be reordered have binary values. The capacity result is valid under an arbitrary error model in which errors in each resource (packet) occur independently. Inspired by the information-theoretic analysis, we have shown how to design practical ...

  7. Lagrange versus symplectic algorithm for constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Rothe, Heinz J; Rothe, Klaus D [Institut fuer Theoretische Physik - Universitaet Heidelberg, Philosophenweg 16, D-69120 Heidelberg (Germany)

    2003-02-14

    The systematization of the purely Lagrangian approach to constrained systems in the form of an algorithm involves the iterative construction of a generalized Hessian matrix W taking a rectangular form. This Hessian will exhibit as many left zero modes as there are Lagrangian constraints in the theory. We apply this approach to a general Lagrangian in the first-order formulation and show how the seemingly overdetermined set of equations is solved for the velocities by suitably extending W to a rectangular matrix. As a byproduct we thereby demonstrate the equivalence of the Lagrangian approach to the traditional Dirac approach. By making use of this equivalence we show that a recently proposed symplectic algorithm does not necessarily reproduce the full constraint structure of the traditional Dirac algorithm.
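    For orientation, the standard starting point such an algorithm iterates on can be written as follows. This is a generic sketch of the Euler-Lagrange system and its zero-mode constraints, not the paper's extended rectangular construction:

```latex
% Euler-Lagrange equations in normal form (generic sketch):
W_{ij}(q,\dot q)\,\ddot q^{\,j} = \alpha_i(q,\dot q), \qquad
W_{ij} = \frac{\partial^2 L}{\partial \dot q^{\,i}\,\partial \dot q^{\,j}}, \qquad
\alpha_i = \frac{\partial L}{\partial q^{\,i}}
         - \dot q^{\,j}\,\frac{\partial^2 L}{\partial q^{\,j}\,\partial \dot q^{\,i}} .
% Every left zero mode v of W (v^T W = 0) produces a Lagrangian constraint
v^i(q,\dot q)\,\alpha_i(q,\dot q) = 0,
% and iterating on such constraints is what extends W to the rectangular
% generalized Hessian described above.
```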

  8. Constraining cosmology with pairwise velocity estimator

    CERN Document Server

    Ma, Yin-Zhe; He, Ping

    2015-01-01

    In this paper, we develop a full statistical method for the pairwise velocity estimator previously proposed, and apply the Cosmicflows-2 catalogue to this method to constrain cosmology. We first calculate the covariance matrix of line-of-sight velocities for a given catalogue, then simulate mock full-sky surveys from it, and then calculate the variance of the pairwise velocity field. By applying the $8315$ independent galaxy samples and the compressed $5224$ group samples from the Cosmicflows-2 catalogue to this statistical method, we find that the joint constraint on $\Omega^{0.6}_{\rm m}h$ and $\sigma_{8}$ is completely consistent with the WMAP 9-year and Planck 2015 best-fitting cosmologies. Currently, there is no evidence for modified gravity models or any dynamic dark energy models from this analysis, and the error bars need to be reduced in order to provide any concrete evidence against or in support of $\Lambda$CDM cosmology.

  9. Constrained Delaunay Triangulation for Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    D. Satyanarayana

    2008-01-01

    Geometric spanners can be used for efficient routing in wireless ad hoc networks. Computation of existing spanners for ad hoc networks has primarily focused on geometric properties without considering network requirements. In this paper, we propose a new spanner called the constrained Delaunay triangulation (CDT), which considers both geometric properties and network requirements. The CDT is formed by introducing a small set of constraint edges into the local Delaunay triangulation (LDel) to reduce the number of hops between nodes in the network graph. We have simulated the CDT using the network simulator ns-2.28 and compared it with the Gabriel graph (GG), the relative neighborhood graph (RNG), the local Delaunay triangulation (LDel), and the planarized local Delaunay triangulation (PLDel). The simulation results show that the minimum number of hops from source to destination is smaller than for the other spanners. We also observed decreases in delay and jitter, and an improvement in throughput.

  10. Constraining the Porosities of Interstellar Dust Grains

    CERN Document Server

    Heng, Kevin

    2009-01-01

    We present theoretical calculations of the X-ray scattering properties of porous grain aggregates with olivine monomers. The small and large angle scattering properties of these aggregates are governed by the global structure and substructure of the grain, respectively. We construct two diagnostics, R_X and T_X, based on the optical and X-ray properties of the aggregates, and apply them to a Chandra measurement of the dust halo around the Galactic binary GX13+1. Grain aggregates with porosities higher than about 0.55 are ruled out. Future high-precision observations of X-ray dust haloes together with detailed modeling of the X-ray scattering properties of porous grain mixtures will further constrain the presence of porous grain aggregates in a given dust population.

  11. Constrained sampling method for analytic continuation

    Science.gov (United States)

    Sandvik, Anders W.

    2016-12-01

    A method for analytic continuation of imaginary-time correlation functions (here obtained in quantum Monte Carlo simulations) to real-frequency spectral functions is proposed. By stochastically sampling a spectrum parametrized by a large number of δ functions, treated as a statistical-mechanics problem, the method avoids distortions caused by (as demonstrated here) configurational entropy in previous sampling methods. The key development is the suppression of entropy by constraining the spectral weight to within identifiable optimal bounds and imposing a set number of peaks. As a test case, the dynamic structure factor of the S = 1/2 Heisenberg chain is computed. Very good agreement is found with Bethe ansatz results in the ground state (including a sharp edge) and with exact diagonalization of small systems at elevated temperatures.
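    A bare-bones sketch of this kind of constrained stochastic sampling is given below; the kernel, mock data, single-peak spectrum and unit sampling temperature are simplifying assumptions, not the paper's setup:

```python
# Stochastic analytic continuation sketch: equal-weight delta functions whose
# frequencies are Metropolis-sampled against chi^2, with hard bounds
# [w_lo, w_hi] playing the role of the constraining window.
import numpy as np

rng = np.random.default_rng(4)
tau = np.linspace(0.05, 2.0, 40)
G = np.exp(-tau * 1.0) + 1e-4 * rng.normal(size=tau.size)  # mock data, peak at w = 1
sigma = 1e-4

n_delta, w_lo, w_hi = 50, 0.0, 3.0       # constrained frequency window (assumed)
omegas = rng.uniform(w_lo, w_hi, n_delta)

def chi2(w):
    model = np.exp(-np.outer(tau, w)).mean(axis=1)   # equal-weight deltas
    return np.sum((model - G) ** 2) / sigma**2

c = chi2(omegas)
for _ in range(20000):
    i = rng.integers(n_delta)
    trial = omegas.copy()
    trial[i] = np.clip(trial[i] + 0.1 * rng.normal(), w_lo, w_hi)
    c_new = chi2(trial)
    if c_new < c or rng.random() < np.exp(c - c_new):  # Metropolis, temperature 1
        omegas, c = trial, c_new
print("sampled spectral centroid:", omegas.mean())     # should sit near w = 1
```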

  12. Topological impact of constrained fracture growth

    Directory of Open Access Journals (Sweden)

    Sigmund Mongstad Hope

    2015-09-01

    The topology of two discrete fracture network models is compared to investigate the impact of constrained fracture growth. In the Poissonian discrete fracture network model the fractures are assigned length, position and orientation independent of all other fractures, while in the mechanical discrete fracture network model the fractures grow and the growth can be limited by the presence of other fractures. The topology is found to be impacted both by the choice of model and by the choice of rules for the mechanical model. A significant difference is the degree mixing. In two dimensions the Poissonian model results in assortative networks, while the mechanical model results in disassortative networks. In three dimensions both models produce disassortative networks, but the disassortative mixing is strongest for the mechanical model.

  13. Scheduling of resource-constrained projects

    CERN Document Server

    Klein, Robert

    2000-01-01

    Project management has become a widespread instrument enabling organizations to efficiently master the challenges of steadily shortening product life cycles, global markets and decreasing profit margins. With projects increasing in size and complexity, their planning and control represents one of the most crucial management tasks. This is especially true for scheduling, which is concerned with establishing execution dates for the sub-activities to be performed in order to complete the project. The ability to manage projects where resources must be allocated between concurrent projects or even sub-activities of a single project requires the use of commercial project management software packages. However, the results yielded by the solution procedures included are often rather unsatisfactory. Scheduling of Resource-Constrained Projects develops more efficient procedures, which can easily be integrated into software packages by incorporated programming languages, and thus should be of great interest for practiti...

  14. Constraining the Inflationary Equation of State

    CERN Document Server

    Ackerman, Lotty; Kundu, Sandipan; Sivanandam, Navin

    2010-01-01

    We explore possible constraints on the inflationary equation of state: p = w\rho. While w must be close to -1 for those modes that contribute to the observed power spectrum, for those modes currently out of experimental reach the constraints on w are much weaker, with only w < -1/3 as an a priori requirement. We find, however, that limits on the reheat temperature and the inflationary energy scale constrain w further, though there is still ample parameter space for a vastly different (accelerating) equation of state between the end of quasi-de Sitter inflation and the beginning of the radiation-dominated era. In the event that such an epoch of acceleration could be observed, we review the consequences for the primordial power spectrum.

  15. Constraining the Cratering Chronology of Vesta

    CERN Document Server

    O'Brien, David P; Morbidelli, Alessandro; Bottke, William F; Schenk, Paul M; Russell, Christopher T; Raymond, Carol A

    2014-01-01

    Vesta has a complex cratering history, with ancient terrains as well as recent large impacts that have led to regional resurfacing. Crater counts can help constrain the relative ages of different units on Vesta's surface, but converting those crater counts to absolute ages requires a chronology function. We present a cratering chronology based on the best current models for the dynamical evolution of the asteroid belt, and calibrate it to Vesta using the record of large craters on its surface. While uncertainties remain, our chronology function is broadly consistent with an ancient surface of Vesta as well as other constraints such as the bombardment history of the rest of the inner Solar System and the Ar-Ar age distribution of howardite, eucrite and diogenite (HED) meteorites from Vesta.

  16. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc.

  17. Remote gaming on resource-constrained devices

    Science.gov (United States)

    Reza, Waazim; Kalva, Hari; Kaufman, Richard

    2010-08-01

    Games have become important applications on mobile devices. A mobile gaming approach known as remote gaming is being developed to support games on low cost mobile devices. In the remote gaming approach, the responsibility of rendering a game and advancing the game play is put on remote servers instead of the resource constrained mobile devices. The games rendered on the servers are encoded as video and streamed to mobile devices. Mobile devices gather user input and stream the commands back to the servers to advance game play. With this solution, mobile devices with video playback and network connectivity can become game consoles. In this paper we present the design and development of such a system and evaluate the performance and design considerations to maximize the end user gaming experience.

  18. Fluctuation theorem for constrained equilibrium systems

    Science.gov (United States)

    Gilbert, Thomas; Dorfman, J. Robert

    2006-02-01

    We discuss the fluctuation properties of equilibrium chaotic systems with constraints such as isokinetic and Nosé-Hoover thermostats. Although the dynamics of these systems does not typically preserve phase-space volumes, the average phase-space contraction rate vanishes, so that the stationary states are smooth. Nevertheless, finite-time averages of the phase-space contraction rate have nontrivial fluctuations which we show satisfy a simple version of the Gallavotti-Cohen fluctuation theorem, complementary to the usual fluctuation theorem for nonequilibrium stationary states and appropriate to constrained equilibrium states. Moreover, we show that these fluctuations are distributed according to a Gaussian curve for long enough times. Three different systems are considered here: namely, (i) a fluid composed of particles interacting with Lennard-Jones potentials, (ii) a harmonic oscillator with Nosé-Hoover thermostatting, and (iii) a simple hyperbolic two-dimensional map.

  19. A Path Algorithm for Constrained Estimation.

    Science.gov (United States)

    Zhou, Hua; Lange, Kenneth

    2013-01-01

    Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online.
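    In symbols, for a quadratic objective f(β) with affine constraints, the exact penalty described above replaces the constrained program by the following (a sketch assembled from the abstract's ingredients, with hypothetical constraint data a_j, b_j, c_k, d_k):

```latex
% Exact (absolute-value) penalty for equality constraints a_j^T beta = b_j
% and inequality constraints c_k^T beta <= d_k:
\mathcal{E}_\rho(\beta) = f(\beta)
  + \rho \sum_j \bigl| a_j^{\mathsf{T}}\beta - b_j \bigr|
  + \rho \sum_k \bigl( c_k^{\mathsf{T}}\beta - d_k \bigr)_{+} .
```

    Unlike the classical squared penalty, the constrained solution is attained at a finite ρ, and the path-following algorithm tracks the minimizer of this objective as ρ grows from 0.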

  1. A maximum power point tracking algorithm for photovoltaic applications

    Science.gov (United States)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage and current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact solution in closed form for the MPP has not been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. The MPP changes with the incident irradiation and temperature, and hence an algorithm that attempts to maintain it should be adaptive, with fast convergence and the least misadjustment. There are two parts to the implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with the PSIM software.
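    A numerical stand-in for the tangency argument locates the MPP of an ideal single-diode model by direct scan, sketched below in Python rather than the paper's MATLAB. The parameters are rough MSX-60-like guesses, not manufacturer data.

```python
# Locate the maximum power point of an ideal single-diode PV model by
# scanning the I-V curve and maximizing P = V * I.
import numpy as np

Iph, I0, n, Ns = 3.8, 3e-8, 1.2, 36   # photocurrent, sat. current, ideality, cells
Vt = 0.0257                           # thermal voltage kT/q at about 25 C

V = np.linspace(0.0, 22.0, 2000)
I = Iph - I0 * (np.exp(V / (n * Ns * Vt)) - 1.0)   # single-diode law
I = np.clip(I, 0.0, None)
P = V * I

k = np.argmax(P)
print(f"MPP at V = {V[k]:.2f} V, I = {I[k]:.2f} A, P = {P[k]:.1f} W")
```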

  2. Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.

    Science.gov (United States)

    Chor, Benny; Snir, Sagi

    2004-12-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system, therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.

  3. Maximum magnitude earthquakes induced by fluid injection

    Science.gov (United States)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
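    The stated bound is easy to evaluate: with G the modulus of rigidity and ΔV the total injected volume, M0_max = G ΔV. A quick Python check with a typical crustal rigidity and an arbitrary example volume:

```python
# Upper bound on induced seismic moment: M0_max = G * dV, converted to
# moment magnitude via the standard Mw = (2/3) * (log10 M0 - 9.1) relation.
import math

G = 3.0e10    # modulus of rigidity of crustal rock, Pa (typical value)
dV = 1.0e5    # total injected volume, m^3 (example figure only)

M0_max = G * dV                                      # maximum seismic moment, N*m
Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)    # moment magnitude
print(f"M0_max = {M0_max:.2e} N*m  ->  Mw_max = {Mw_max:.1f}")   # about 4.3
```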

  4. Computing Rooted and Unrooted Maximum Consistent Supertrees

    CERN Document Server

    van Iersel, Leo

    2009-01-01

    A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.

  5. Maximum permissible voltage of YBCO coated conductors

    Science.gov (United States)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.

    2014-06-01

    Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  6. The maximum rotation of a galactic disc

    CERN Document Server

    Bottema, R

    1997-01-01

    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...

  7. Maximum likelihood based classification of electron tomographic data.

    Science.gov (United States)

    Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan

    2011-01-01

    Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.

  8. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, such as the Galactic Centre region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  9. Maximum Multiflow in Wireless Network Coding

    CERN Document Server

    Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao

    2012-01-01

    In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
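    The LP formulation itself is not reproduced in the record, but its shape can be sketched. The toy program below, using scipy.optimize.linprog, maximizes the sum of two flow rates under link-capacity constraints plus one interference constraint coupling links that share the medium; the topology and all numbers are invented for illustration:

```python
# Toy sketch of the maximum-multiflow LP idea: maximize total throughput of
# two source-sink flows subject to link capacities and one interference
# (conflict) constraint coupling links that share the wireless medium.
# Relaxing the conflict row mimics how coding-modified conflict relations
# could raise the optimum. The two-flow topology is invented.
from scipy.optimize import linprog

# Variables: x = [f1, f2], the end-to-end rates of the two flows.
c = [-1.0, -1.0]                  # maximize f1 + f2  ->  minimize -(f1 + f2)
A_ub = [
    [1.0, 0.0],                   # link capacity on flow 1's path: f1 <= 5
    [0.0, 1.0],                   # link capacity on flow 2's path: f2 <= 4
    [1.0, 1.0],                   # conflicting links share airtime: f1 + f2 <= 6
]
b_ub = [5.0, 4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("max throughput:", -res.fun, "rates:", res.x)   # -> 6.0
```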

  10. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
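    The two algorithms named in this snippet are easy to sketch. A minimal first-fit routine run on the same item list in increasing and decreasing order (items and unit capacity invented) shows how the processing order changes the number of bins used:

```python
# Sketch of the two off-line algorithms named above: first-fit placement
# after sorting items in increasing vs. decreasing order. Counting the bins
# each one opens shows how item order changes resource usage. The item list
# and the unit bin capacity are invented for illustration.

def first_fit(items, capacity=1.0):
    """Place each item into the first open bin it fits; open a new bin otherwise."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

items = [0.55, 0.3, 0.6, 0.45, 0.25, 0.4]
ffi = first_fit(sorted(items))                  # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))    # First-Fit-Decreasing
print("FFI bins:", len(ffi), "FFD bins:", len(ffd))
```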

  11. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...

  12. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  13. Maximum confidence measurements via probabilistic quantum cloning

    Institute of Scientific and Technical Information of China (English)

    Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu

    2013-01-01

    Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.

  14. The Wiener maximum quadratic assignment problem

    CERN Document Server

    Cela, Eranda; Woeginger, Gerhard J

    2011-01-01

    We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.

  15. Eulerian Formulation of Spatially Constrained Elastic Rods

    Science.gov (United States)

    Huynen, Alexandre

    Slender elastic rods are ubiquitous in nature and technology. For a vast majority of applications, the rod deflection is restricted by an external constraint and a significant part of the elastic body is in contact with a stiff constraining surface. The research work presented in this doctoral dissertation formulates a computational model for the solution of elastic rods constrained inside or around frictionless tube-like surfaces. The segmentation strategy adopted to cope with this complex class of problems consists in sequencing the global problem into, comparatively simpler, elementary problems either in continuous contact with the constraint or contact-free between their extremities. Within the conventional Lagrangian formulation of elastic rods, this approach is however associated with two major drawbacks. First, the boundary conditions specifying the locations of the rod centerline at both extremities of each elementary problem lead to the establishment of isoperimetric constraints, i.e., integral constraints on the unknown length of the rod. Second, the assessment of the unilateral contact condition requires, in principle, the comparison of two curves parametrized by distinct curvilinear coordinates, viz. the rod centerline and the constraint axis. Both conspire to burden the computations associated with the method. To streamline the solution along the elementary problems and rationalize the assessment of the unilateral contact condition, the rod governing equations are reformulated within the Eulerian framework of the constraint. The methodical exploration of both types of elementary problems leads to specific formulations of the rod governing equations that stress the profound connection between the mechanics of the rod and the geometry of the constraint surface. The proposed Eulerian reformulation, which restates the rod local equilibrium in terms of the curvilinear coordinate associated with the constraint axis, describes the rod deformed configuration

  16. MR constrained simultaneous reconstruction of activity and attenuation maps in brain TOF-PET/MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mehranian, Abolfazl; Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland)

    2014-07-29

    The maximum likelihood estimation of attenuation and activity (MLAA) algorithm has been proposed to jointly estimate activity and attenuation from emission data only. Salomon et al employed the MLAA to estimate activity and attenuation from time-of-flight PET data with spatial MR prior information on attenuation. Recently, we proposed a novel algorithm to impose both spatial and statistical constraints on attenuation estimation within the MLAA algorithm using Dixon MR images and a constrained Gaussian mixture model (GMM). In this study, we compare the proposed algorithm with MLAA and MLAA-Salomon in brain TOF-PET/MR imaging.

  17. A new algorithm for degree-constrained minimum spanning tree based on the reduction technique

    Institute of Scientific and Technical Information of China (English)

    Aibing Ning; Liang Ma; Xiaohua Xiong

    2008-01-01

    The degree-constrained minimum spanning tree (DCMST) is an NP-hard problem in graph theory. It consists of finding a spanning tree whose vertex degrees do not exceed some given maximum values and whose total edge length is minimal. In this paper, novel mathematical properties of the DCMST are identified, which lead to a new reduction algorithm that can significantly reduce the size of the problem. An algorithm is also presented for solving the smaller DCMST instance produced by the reduction algorithm.
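    As an illustration of the problem being reduced (not of the paper's reduction algorithm), here is a Kruskal-style greedy heuristic that respects a degree cap; since DCMST is NP-hard, the greedy tree need not be optimal. The example graph and the cap of 2 are invented:

```python
# Sketch of the DCMST problem itself (not the paper's reduction algorithm):
# a Kruskal-style greedy that accepts the cheapest edge which neither closes
# a cycle nor pushes a vertex past the degree cap. This is only a heuristic.

def dcmst_greedy(n, edges, max_degree):
    parent = list(range(n))
    def find(v):                          # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    degree = [0] * n
    tree, total = [], 0.0
    for w, u, v in sorted(edges):         # edges as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv and degree[u] < max_degree and degree[v] < max_degree:
            parent[ru] = rv
            degree[u] += 1
            degree[v] += 1
            tree.append((u, v, w))
            total += w
    return tree, total

edges = [(1, 0, 1), (2, 0, 2), (3, 0, 3), (2, 1, 2), (4, 1, 3), (1, 2, 3)]
tree, cost = dcmst_greedy(4, edges, max_degree=2)
print(tree, cost)   # a spanning tree iff it has n - 1 = 3 edges
```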

  18. Sequential unconstrained minimization algorithms for constrained optimization

    Science.gov (United States)

    Byrne, Charles

    2008-02-01

    The problem of minimizing a function $f(x):\mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x) = f(x) + g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x): D \subseteq \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f:\mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \overline{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0 \leq g_k(x) \leq G_{k-1}(x) - G_{k-1}(x^{k-1})$ for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \rightarrow \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal...
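    A one-dimensional instance of the SUMMA pattern is easy to run. The sketch below uses the classical log-barrier as the auxiliary term, minimizing f(x) = x² over C = [1, ∞); the barrier schedule is an arbitrary illustrative choice, and each subproblem is solved in closed form:

```python
# Minimal numeric sketch of the SUMMA pattern with the classical log-barrier
# as the auxiliary term: minimize f(x) = x^2 over the closed set C = [1, inf),
# taking D = (1, inf) and G_k(x) = f(x) - (1/t_k) * log(x - 1). For this toy
# problem each unconstrained subproblem has a closed form (set G_k'(x) = 0).
# The schedule t_k = 2^k is an arbitrary illustrative choice.
import math

for k in range(1, 16):
    t = 2.0 ** k
    # G_k'(x) = 2x - 1/(t(x-1)) = 0  =>  2x(x-1) = 1/t  =>  x = (1+sqrt(1+2/t))/2
    x = (1.0 + math.sqrt(1.0 + 2.0 / t)) / 2.0
    print(f"k={k:2d}  x^k={x:.6f}  f(x^k)={x*x:.6f}")
# f(x^k) decreases monotonically toward f(x_hat) = 1, attained at x_hat = 1.
```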

  19. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.

  20. Instance Optimality of the Adaptive Maximum Strategy

    NARCIS (Netherlands)

    L. Diening; C. Kreuzer; R. Stevenson

    2016-01-01

    In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson's equation...

  1. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  2. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  3. Maximum phonation time: variability and reliability.

    Science.gov (United States)

    Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W

    2010-05-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of subjects performing five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
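    The trial- and day-aggregation figures quoted above behave like the Spearman-Brown prophecy formula, which the abstract does not name; a quick check under that assumption:

```python
# Quick check: the abstract's aggregated reliability figures behave like the
# Spearman-Brown prophecy formula R_k = k*r / (1 + (k-1)*r). The formula
# itself is an assumption here -- the abstract does not name it.

def spearman_brown(r, k):
    return k * r / (1 + (k - 1) * r)

print(round(spearman_brown(0.939, 5), 3))   # 0.987 -> matches "five trials"
print(round(spearman_brown(0.836, 2), 3))   # 0.911 -> matches "2 days"
print(round(spearman_brown(0.836, 3), 3))   # 0.939 -> close to the reported 0.935
```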

  4. Maximum Phonation Time: Variability and Reliability

    NARCIS (Netherlands)

    R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings

    2010-01-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender...

  5. Analysis of Photovoltaic Maximum Power Point Trackers

    Science.gov (United States)

    Veerachary, Mummadi

    The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and the selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
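    The load/duty-ratio condition mentioned above can be illustrated with the standard idealized continuous-conduction analysis of a buck-boost stage, in which the converter presents R_in = R_load·((1−D)/D)² to the array; the panel and load numbers below are invented:

```python
# Sketch of the impedance-matching view of MPP operation for an ideal
# buck-boost converter in continuous conduction: the converter presents
# R_in = R_load * ((1 - D) / D)^2 to the array, so choosing D to make
# R_in equal the array's optimal resistance V_mpp / I_mpp places the
# operating point at the MPP. Panel and load values are hypothetical.
import math

V_MPP, I_MPP = 17.2, 3.5      # hypothetical panel maximum power point
R_LOAD = 10.0                 # hypothetical load resistance (ohms)

r_mpp = V_MPP / I_MPP                       # optimal source resistance
# Solve R_LOAD * ((1 - D)/D)**2 = r_mpp  =>  (1 - D)/D = sqrt(r_mpp/R_LOAD)
d = 1.0 / (1.0 + math.sqrt(r_mpp / R_LOAD))
print(f"R_mpp = {r_mpp:.2f} ohm, duty ratio D = {d:.3f}")
```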

  6. Weak Scale From the Maximum Entropy Principle

    CERN Document Server

    Hamada, Yuta; Kawana, Kiyoharu

    2015-01-01

    The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.

  7. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\,\mathrm{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl} y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which the Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
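    The quoted scaling is easy to check numerically. The sketch below uses standard input values chosen by us (T_BBN ≈ 1 MeV, the full Planck mass, and y_e built from the electron mass and v = 246 GeV), so it is an order-of-magnitude check, not the paper's computation:

```python
# Order-of-magnitude check of the quoted scaling v_h ~ T_BBN^2 / (M_pl * y_e^5).
# Input values are standard but chosen by us: T_BBN ~ 1 MeV, the full (not
# reduced) Planck mass, and y_e = sqrt(2) * m_e / v with m_e = 0.511 MeV and
# v = 246 GeV.
import math

T_BBN = 1e-3                      # GeV
M_PL = 1.22e19                    # GeV
y_e = math.sqrt(2) * 0.511e-3 / 246.0

v_h = T_BBN**2 / (M_PL * y_e**5)
print(f"y_e = {y_e:.3e}, v_h ~ {v_h:.0f} GeV")   # a few hundred GeV, i.e. O(300 GeV)
```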

  8. Hard graphs for the maximum clique problem

    NARCIS (Netherlands)

    Hoede, Cornelis

    1988-01-01

    The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra

  9. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  10. Constraining the Properties of Cold Interstellar Clouds

    Science.gov (United States)

    Spraggs, Mary Elizabeth; Gibson, Steven J.

    2016-01-01

    Since the interstellar medium (ISM) plays an integral role in star formation and galactic structure, it is important to understand the evolution of clouds over time, including the processes of cooling and condensation that lead to the formation of new stars. This work aims to constrain and better understand the physical properties of the cold ISM by utilizing large surveys of neutral atomic hydrogen (HI) 21 cm spectral line emission and absorption, carbon monoxide (CO) 2.6 mm line emission, and multi-band infrared dust thermal continuum emission. We identify areas where the gas may be cooling and forming molecules using HI self-absorption (HISA), in which cold foreground HI absorbs radiation from warmer background HI emission. We are developing an algorithm that uses total gas column densities inferred from Planck and other FIR/sub-mm data in parallel with CO and HISA spectral line data to determine the gas temperature, density, molecular abundance, and other properties as functions of position. We can then map these properties to study their variation throughout an individual cloud as well as any dependencies on location or environment within the Galaxy. Funding for this work was provided by the National Science Foundation, the NASA Kentucky Space Grant Consortium, the WKU Ogden College of Science and Engineering, and the Carol Martin Gatton Academy for Mathematics and Science in Kentucky.

  11. Constrained Subjective Assessment of Student Learning

    Science.gov (United States)

    Saliu, Sokol

    2005-09-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a past course in microcomputer system design (MSD). CQA criteria were articulated in fuzzy terms and sets, and the assessment procedure was cast as a fuzzy inference rule base. An interactive graphic interface provided for transparent assessment, student "backwash," and support to the teacher when compiling the tests. Grade intervals, obtained from a departmental poll, were used to compile a fuzzy "grade" set. Assessment results were compared to those of a former standard method and to those of a modified version of it (but with fewer criteria). The three methods yielded similar results, supporting the application of CQA. The method improved assessment reliability by means of the consensus embedded in the fuzzy grade set, and improved assessment validity by integrating fuzzy criteria into the assessment procedure.

  12. Constraining the oblateness of Kepler planets

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Wei [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Huang, Chelsea X. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Zhou, George [Research School of Astronomy and Astrophysics, Australian National University, Cotter Road, Weston Creek, ACT 2611 (Australia); Lin, D. N. C., E-mail: weizhu@astronomy.ohio-state.edu [UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States)

    2014-11-20

    We use Kepler short-cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside the solar system, measuring an oblateness of $0.22^{+0.11}_{-0.11}$ for the 18 $M_J$ mass brown dwarf Kepler 39b (KOI 423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI 686.01, and KOI 197.01 to be <0.067, <0.251, and <0.186, respectively. Using the Q' values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI 686.01, and KOI 197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI 423.01 (Kepler 39b) suggests that the Q' value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants to avoid being tidally spun down.

  13. Optimal performance of constrained control systems

    Science.gov (United States)

    Harvey, P. Scott, Jr.; Gavin, Henri P.; Scruggs, Jeffrey T.

    2012-08-01

    This paper presents a method to compute optimal open-loop trajectories for systems subject to state and control inequality constraints in which the cost function is quadratic and the state dynamics are linear. For the case in which inequality constraints are decentralized with respect to the controls, optimal Lagrange multipliers enforcing the inequality constraints may be found at any time through Pontryagin’s minimum principle. In so doing, the set of differential algebraic Euler-Lagrange equations is transformed into a nonlinear two-point boundary-value problem for states and costates whose solution meets the necessary conditions for optimality. The optimal performance of inequality constrained control systems is calculable, allowing for comparison to previous, sub-optimal solutions. The method is applied to the control of damping forces in a vibration isolation system subjected to constraints imposed by the physical implementation of a particular controllable damper. An outcome of this study is the best performance achievable given a particular objective, isolation system, and semi-active damper constraints.

  14. Constraining groundwater modeling with magnetic resonance soundings.

    Science.gov (United States)

    Boucher, Marie; Favreau, Guillaume; Nazoumou, Yahaya; Cappelaere, Bernard; Massuel, Sylvain; Legchenko, Anatoly

    2012-01-01

    Magnetic resonance sounding (MRS) is a noninvasive geophysical method that allows estimating the free water content and transmissivity of aquifers. In this article, the ability of MRS to improve the reliability of a numerical groundwater model is assessed. Thirty-five sites were investigated by MRS over a ~5000 km² domain of the sedimentary Continental Terminal aquifer in SW Niger. Time domain electromagnetic soundings were jointly carried out to estimate the aquifer thickness. A groundwater model was previously built for this section of the aquifer and forced by the outputs from a distributed surface hydrology model, to simulate the observed long-term (1992 to 2003) rise in the water table. Uncertainty analysis had shown that independent estimates of the free water content and transmissivity values of the aquifer would facilitate cross-evaluation of the surface-water and groundwater models. MRS results indicate ranges for permeability (K = 1 × 10⁻⁵ to 3 × 10⁻⁴ m/s) and for free water content (w = 5% to 23% m³/m³) narrowed by two orders of magnitude (K) and by ~50% (w), respectively, compared to the ranges of permeability and specific yield values previously considered. These shorter parameter ranges result in a reduction in the model's equifinality (whereby multiple combinations of the model's parameters are able to represent the same observed piezometric levels), allowing a better constrained estimate to be derived for net aquifer recharge (~22 mm/year).

  15. Pressure compensated transducer system with constrained diaphragm

    Science.gov (United States)

    Percy, Joseph L.

    1992-08-01

    An acoustic source apparatus has an acoustic transducer that is enclosed in a substantially rigid and watertight enclosure to resist the pressure of water on the transducer and to seal the transducer from the water. The enclosure has an opening through which acoustic signals pass and over which is placed a resilient, expandable and substantially water-impermeable diaphragm. A net stiffens and strengthens the diaphragm as well as constrains the diaphragm from overexpansion or from migrating due to buoyancy forces. Pressurized gas, regulated at slightly above ambient pressure, is supplied to the enclosure and the diaphragm to compensate for underwater ambient pressures. Gas pressure regulated at above ambient pressure is used to selectively tune the pressure levels within the enclosure and diaphragm so that diaphragm resonance can be achieved. Controls are used to selectively fill, as well as vent the enclosure and diaphragm during system descent and ascent, respectively. A signal link is used to activate these controls and to provide the driving force for the acoustic transducer.

  16. Constraining the halo mass function with observations

    CERN Document Server

    Castro, Tiago; Quartin, Miguel

    2016-01-01

    The abundances of matter halos in the universe are described by the so-called halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behavior through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can constrain directly the HMF. The observables we consider are galaxy cluster number counts, galaxy cluster power spectrum and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints, while both Euclid and J-PAS can give constraints on the HMF parameters which are comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even mo...

  17. Constraining the Oblateness of Kepler Planets

    CERN Document Server

    Zhu, Wei; Zhou, George; Lin, D N C

    2014-01-01

    We use Kepler short cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside of the solar system, measuring an oblateness of $0.22 \\pm 0.11$ for the 18 $M_J$ mass brown dwarf Kepler 39b (KOI-423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI-686.01, and KOI-197.01 to be < 0.067, < 0.251, and < 0.186, respectively. Using the Q'-values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI-686.01 and KOI-197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI-423.01 (Kepler 39b) suggests that the Q'-value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants ...

  18. Constrained length minimum inductance gradient coil design.

    Science.gov (United States)

    Chronik, B A; Rutt, B K

    1998-02-01

    A gradient coil design algorithm capable of controlling the position of the homogeneous region of interest (ROI) with respect to the current-carrying wires is required for many advanced imaging and spectroscopy applications. A modified minimum inductance target field method that allows the placement of a set of constraints on the final current density is presented. This constrained current minimum inductance method is derived in the context of previous target field methods. Complete details are shown and all equations required for implementation of the algorithm are given. The method has been implemented on computer and applied to the design of both a 1:1 aspect ratio (length:diameter) central ROI and a 2:1 aspect ratio edge ROI gradient coil. The 1:1 design demonstrates that a general analytic method can be used to easily obtain very short gradient coil designs for use with specialized magnet systems. The edge gradient design demonstrates that designs that allow imaging of the neck region with a head sized gradient coil can be obtained, as well as other applications requiring edge-of-cylinder regions of uniformity.

  19. Constrained spheroids for prolonged hepatocyte culture.

    Science.gov (United States)

    Tong, Wen Hao; Fang, Yu; Yan, Jie; Hong, Xin; Hari Singh, Nisha; Wang, Shu Rui; Nugraha, Bramasta; Xia, Lei; Fong, Eliza Li Shan; Iliescu, Ciprian; Yu, Hanry

    2016-02-01

    Liver-specific functions in primary hepatocytes can be maintained over extended duration in vitro using spheroid culture. However, the undesired loss of cells over time is still a major unaddressed problem, which consequently generates large variations in downstream assays such as drug screening. In static culture, the turbulence generated by medium change can cause spheroids to detach from the culture substrate. Under perfusion, the momentum generated by Stokes force similarly results in spheroid detachment. To overcome this problem, we developed a Constrained Spheroids (CS) culture system that immobilizes spheroids between a glass coverslip and an ultra-thin porous Parylene C membrane, both surface-modified with poly(ethylene glycol) and galactose ligands for optimum spheroid formation and maintenance. In this configuration, cell loss was minimized even when perfusion was introduced. When compared to the standard collagen sandwich model, hepatocytes cultured as CS under perfusion exhibited significantly enhanced hepatocyte functions such as urea secretion, and CYP1A1 and CYP3A2 metabolic activity. We propose the use of the CS culture as an improved culture platform to current hepatocyte spheroid-based culture systems.

  20. Constraining cosmic isotropy with type Ia supernovae

    CERN Document Server

    Bengaly, C. A. P.; Alcaniz, J. S.

    2016-01-01

    We investigate the validity of the Cosmological Principle by constraining the cosmological parameters $H_0$ and $q_0$ across the celestial sphere. Our analyses are performed in a low-redshift regime in order to follow a model-independent approach, using both the Union2.1 and JLA Type Ia Supernovae (SNe) compilations. We find that the preferred direction of the $H_0$ parameter in the sky is consistent with the bulk flow motion of our local Universe in the Union2.1 case, while the $q_0$ directional analysis seems to be anti-correlated with $H_0$ for both data sets. Furthermore, we test the consistency of these results with Monte Carlo (MC) realisations, finding that the anisotropy in both parameters is significant within the $2-3\sigma$ confidence level, albeit we find a significant correlation between the $H_0$ and $q_0$ mapping with the angular distribution of SNe from the JLA compilation. Therefore, we conclude that the detected anisotropies are either of local origin, or induced by the non-uniform celestial co...

  1. String Theory Origin of Constrained Multiplets

    CERN Document Server

    Kallosh, Renata; Wrase, Timm

    2016-01-01

    We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector $A_\\mu$, three complex scalars $\\phi^i$ and four 4d fermions $\\lambda^0$, $\\lambda^i$. These transform, in addition to the more familiar N=4 linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear N=1 multiplets: four chiral multiplets $S$, $Y^i$ that satisfy $S^2=SY^i=0$ and contain the worldvolume fermions $\\lambda^0$ and $\\lambda^i$; and four chiral multiplets $W_\\alpha$, $H^i$ that satisfy $S W_\\alpha=0$ and $S \\bar D_{\\dot \\alpha} \\bar H^{\\bar \\imath}=0$ and contain the vector $A_\\mu$ and the scalars $\\phi^i$. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet $\\Phi$ that satisfies $S(\\Phi-\\bar \\Phi)=0$, which is particularly interesting for in...

  2. Constraining New Physics with D meson decays

    Energy Technology Data Exchange (ETDEWEB)

    Barranco, J.; Delepine, D.; Gonzalez Macias, V. [Departamento de Física, División de Ciencias e Ingeniería, Universidad de Guanajuato, Campus León, León 37150 (Mexico); Lopez-Lozano, L. [Departamento de Física, División de Ciencias e Ingeniería, Universidad de Guanajuato, Campus León, León 37150 (Mexico); Área Académica de Matemáticas y Física, Universidad Autónoma del Estado de Hidalgo, Carr. Pachuca-Tulancingo Km. 4.5, C.P. 42184, Pachuca, HGO (Mexico)

    2014-04-04

    The latest lattice results on D form factor evaluation from first principles show that the Standard Model (SM) branching ratio predictions for the leptonic $D_s \to \ell\nu_\ell$ decays and the semileptonic SM branching ratios of the $D^0$ and $D^+$ meson decays are in good agreement with the world average experimental measurements. It is possible to disprove New Physics hypotheses or find bounds on several models beyond the SM. Using the observed leptonic and semileptonic branching ratios for the D meson decays, we performed a combined analysis to constrain non-standard interactions which mediate the $c\bar{s} \to \ell\bar{\nu}$ transition. This is done either in a model-independent way through the corresponding Wilson coefficients or in a model-dependent way by finding the respective bounds on the relevant parameters for some models beyond the Standard Model. In particular, we obtain bounds for the Two Higgs Doublet Model Type-II and Type-III, the Left-Right model, the Minimal Supersymmetric Standard Model with explicit R-parity violation, and leptoquarks. Finally, we estimate the transverse polarization of the lepton in the $D^0$ decay and find that it can be as high as $P_T = 0.23$.

  3. Constraining Sterile Neutrinos Using Reactor Neutrino Experiments

    CERN Document Server

    Girardi, Ivan; Ohlsson, Tommy; Zhang, He; Zhou, Shun

    2014-01-01

    Models of neutrino mixing involving one or more sterile neutrinos have resurrected their importance in the light of recent cosmological data. In this case, reactor antineutrino experiments offer an ideal place to look for signatures of sterile neutrinos due to their impact on neutrino flavor transitions. In this work, we show that the high-precision data of the Daya Bay experiment constrain the 3+1 neutrino scenario, imposing upper bounds on the relevant active-sterile mixing angle $\sin^2 2\theta_{14} \lesssim 0.06$ at 3$\sigma$ confidence level for the mass-squared difference $\Delta m^2_{41}$ in the range $(10^{-3},10^{-1}) \, {\rm eV^2}$. The latter bound can be improved by six years of running of the JUNO experiment, $\sin^2 2\theta_{14} \lesssim 0.016$, although in the smaller mass range $\Delta m^2_{41} \in (10^{-4},10^{-3}) \, {\rm eV}^2$. We have also investigated the impact of sterile neutrinos on precision measurements of the standard neutrino oscillation parameters $\theta_{13}$ and $\Delta m^2...$

  4. Joint Chance-Constrained Dynamic Programming

    Science.gov (United States)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
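    The dualization step described above can be miniaturized to a single-stage toy problem: fold λ·(failure indicator) into the cost, optimize the primal choice for fixed λ, and tune λ by root finding. The action set, costs, risks, and the use of plain bisection are all invented for illustration:

```python
# Toy sketch of the dualization idea described above: move the chance
# constraint P(failure) <= Delta into the objective with a multiplier
# lambda, optimize the primal choice for fixed lambda, and tune lambda by
# bisection. (The paper runs an exponentially converging root-finder around
# a full dynamic program; this single-stage example is invented.)

ACTIONS = {            # action: (expected cost, failure probability)
    "aggressive": (1.0, 0.30),
    "nominal":    (2.0, 0.08),
    "cautious":   (4.0, 0.01),
}
DELTA = 0.10           # allowed risk of failure

def best_action(lam):
    """Primal minimizer of the Lagrangian: cost + lam * p_fail."""
    return min(ACTIONS, key=lambda a: ACTIONS[a][0] + lam * ACTIONS[a][1])

lo, hi = 0.0, 100.0
for _ in range(50):                       # bisection on the dual variable
    mid = 0.5 * (lo + hi)
    if ACTIONS[best_action(mid)][1] > DELTA:
        lo = mid                          # too risky: raise the price of risk
    else:
        hi = mid
a = best_action(hi)
print(f"lambda ~ {hi:.2f}, action = {a}, risk = {ACTIONS[a][1]:.2f}")
```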

  5. Electricity in a Climate-Constrained World

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    After experiencing a historic drop in 2009, electricity generation reached a record high in 2010, confirming the close linkage between economic growth and electricity usage. Unfortunately, CO2 emissions from electricity have also resumed their growth: Electricity remains the single-largest source of CO2 emissions from energy, with 11.7 billion tonnes of CO2 released in 2010. The imperative to 'decarbonise' electricity and improve end-use efficiency remains essential to the global fight against climate change. The IEA’s Electricity in a Climate-Constrained World provides an authoritative resource on progress to date in this area, including statistics related to CO2 and the electricity sector across ten regions of the world (supply, end-use and capacity additions). It also presents topical analyses on the challenge of rapidly curbing CO2 emissions from electricity. Looking at policy instruments, it focuses on emissions trading in China, using energy efficiency to manage electricity supply crises and combining policy instruments for effective CO2 reductions. On regulatory issues, it asks whether deregulation can deliver decarbonisation and assesses the role of state-owned enterprises in emerging economies. And from technology perspectives, it explores the rise of new end-uses, the role of electricity storage, biomass use in Brazil, and the potential of carbon capture and storage for ‘negative emissions’ electricity supply.

  6. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu

    2010-05-07

    This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip spectrum corner wave numbers (kc) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of kc by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency domain inversion method. These tests reveal that due to smoothing constraints used to stabilize the inversion process, kc tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between kc and the peak ground acceleration (PGA), using a k−2 kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that kc follows a lognormal distribution, with similar standard deviations for both methods.

  7. Should we still believe in constrained supersymmetry?

    CERN Document Server

    Balázs, Csaba; Carter, Daniel; Farmer, Benjamin; White, Martin

    2012-01-01

    We calculate Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with $M_0$ and $M_{1/2}$ below 2 TeV), reducing its posterior odds by a factor of approximately ...

  8. Wave speed in excitable random networks with spatially constrained connections.

    Directory of Open Access Journals (Sweden)

    Nikita Vladimirov

    Very fast oscillations (VFO) in neocortex are widely observed before epileptic seizures, and there is growing evidence that they are caused by networks of pyramidal neurons connected by gap junctions between their axons. We are motivated by the spatio-temporal waves of activity recorded using electrocorticography (ECoG), and study the speed of activity propagation through a network of neurons axonally coupled by gap junctions. We simulate wave propagation by excitable cellular automata (CA) on random (Erdös-Rényi) networks of special type, with spatially constrained connections. From the cellular automaton model, we derive a mean field theory to predict wave propagation. The governing equation resolved by the Fisher-Kolmogorov PDE fails to describe the wave speed. A new (hyperbolic) PDE is suggested, which provides an adequate wave speed v(⟨k⟩) that saturates with network degree ⟨k⟩, in agreement with intuitive expectations and CA simulations. We further show that the maximum length of connection is a much better predictor of the wave speed than the mean length. When tested in networks with various degree distributions, wave speeds are found to strongly depend on the ratio of network moments ⟨k²⟩/⟨k⟩ rather than on the mean degree ⟨k⟩, which is explained by general network theory. The wave speeds are strikingly similar in a diverse set of networks, including regular, Poisson, exponential and power law distributions, supporting our theory for various network topologies. Our results suggest practical predictions for networks of electrically coupled neurons, and our mean field method can be readily applied to a wide class of similar problems, such as the spread of epidemics through spatial networks.
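    A compact version of the described setup is straightforward to simulate. The sketch below builds a random network whose links are spatially constrained (|i − j| ≤ R with probability p), runs an excitable three-state automaton, and measures the front's arrival times; all parameter values are invented:

```python
# Compact sketch of the setup described above: an excitable cellular
# automaton on a random network with spatially constrained connections
# (nodes i, j may be linked only if |i - j| <= R, each such pair with
# probability p). We measure arrival times of the front started at node 0.
# All parameter values are invented for illustration.
import random

random.seed(1)
N, R, P, REFRACTORY = 400, 8, 0.35, 5

neighbors = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, min(N, i + R + 1)):
        if random.random() < P:
            neighbors[i].append(j)
            neighbors[j].append(i)

state = [0] * N          # 0 = resting, 1 = excited, >1 = refractory countdown
state[0] = 1
arrival = {0: 0}
for t in range(1, 2000):
    new = state[:]
    for i in range(N):
        if state[i] == 0 and any(state[j] == 1 for j in neighbors[i]):
            new[i] = 1                    # resting node with an excited neighbor fires
            arrival.setdefault(i, t)
        elif state[i] >= 1:
            new[i] = state[i] + 1 if state[i] < 1 + REFRACTORY else 0
    state = new
    if len(arrival) == N:
        break

far = max(arrival)                        # farthest node the front reached
if far:
    print(f"reached {len(arrival)}/{N} nodes; "
          f"front speed ~ {far / arrival[far]:.2f} nodes/step")
```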

  9. Key Update Assistant for Resource-Constrained Networks

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2012-01-01

    Key update is a challenging task in resource-constrained networks where limitations in terms of computation, memory, and energy restrict the proper use of security mechanisms. We present an automated tool that computes the optimal key update strategy for any given resource-constrained network. We...

  10. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    21 CFR 888.3720 (revised as of 2010-04-01), Food and Drug Administration, Department of Health and Human Services. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  11. Solving constrained minimax problem via nonsmooth equations method

    Institute of Scientific and Technical Information of China (English)

    GUO Xiu-xia(郭修霞)

    2004-01-01

    A new nonsmooth equations model of constrained minimax problem was derived. The generalized Newton method was applied for solving this system of nonsmooth equations system. A new algorithm for solving constrained minimax problem was established. The local superlinear and quadratic convergences of the algorithm were discussed.

  12. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected $O(\frac{N}{B} \log_{M/B} \frac{N}{B})$ I/Os for triangulating N points in the plane, where M...

  13. The Pendulum: From Constrained Fall to the Concept of Potential

    Science.gov (United States)

    Bevilacqua, Fabio; Falomo, Lidia; Fregonese, Lucio; Giannetto, Enrico; Giudice, Franco; Mascheretti, Paolo

    2006-01-01

    Kuhn underlined the relevance of Galileo's gestalt switch in the interpretation of a swinging body from constrained fall to time metre. But the new interpretation did not eliminate the older one. The constrained fall, both in the motion of pendulums and along inclined planes, led Galileo to the law of free fall. Experimenting with physical…

  14. Logical consistency and sum-constrained linear models

    NARCIS (Netherlands)

    van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.

    2006-01-01

    A topic that has received quite some attention in the seventies and eighties is logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a

  15. Nonparametric Maximum Entropy Estimation on Information Diagrams

    CERN Document Server

    Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn

    2016-01-01

    Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...

  16. Zipf's law, power laws, and maximum entropy

    CERN Document Server

    Visser, Matt

    2012-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.

  17. Zipf's law, power laws and maximum entropy

    Science.gov (United States)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
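    The single-constraint construction described in both versions of this abstract can be made explicit. The short variational computation below is our rendering of the standard argument, not text from the paper:

```latex
% Sketch of the single-constraint maximum entropy argument (our rendering).
% Maximize the Shannon entropy subject to normalization and a fixed mean
% of the logarithm of the observable x:
\[
  \mathcal{L} \;=\; -\sum_{x} p(x)\ln p(x)
  \;-\; \alpha\Big(\sum_{x} p(x) - 1\Big)
  \;-\; \lambda\Big(\sum_{x} p(x)\ln x - \chi\Big).
\]
% Stationarity in p(x) gives
\[
  \frac{\partial \mathcal{L}}{\partial p(x)}
  = -\ln p(x) - 1 - \alpha - \lambda \ln x = 0
  \quad\Longrightarrow\quad
  p(x) = e^{-(1+\alpha)}\, x^{-\lambda} \;\propto\; x^{-\lambda},
\]
% i.e. a pure power law (Zipf's law when \lambda is close to 1).
```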

  18. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.

  19. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  20. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  1. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  2. Maximum privacy without coherence, zero-error

    Science.gov (United States)

    Leung, Debbie; Yu, Nengkun

    2016-09-01

    We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.

  3. Maximum Estrada Index of Bicyclic Graphs

    CERN Document Server

    Wang, Long; Wang, Yi

    2012-01-01

    Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
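
    The definition translates directly into code; a minimal sketch, assuming a dense 0/1 adjacency matrix (the 4-vertex, 5-edge example graph below is bicyclic and made up for illustration):

      import numpy as np

      def estrada_index(A):
          """EE(G) = sum_i exp(lambda_i(G)) over adjacency eigenvalues."""
          return float(np.exp(np.linalg.eigvalsh(A)).sum())

      A = np.array([[0, 1, 1, 1],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
      print(estrada_index(A))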

  4. Tissue radiation response with maximum Tsallis entropy.

    Science.gov (United States)

    Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.

  5. A stochastic maximum principle via Malliavin calculus

    OpenAIRE

    Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo

    2008-01-01

    This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.

  6. Maximum-biomass prediction of homofermentative Lactobacillus.

    Science.gov (United States)

    Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei

    2016-07-01

    Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus differed from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02) · Y_X/P · C.
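
    As a worked illustration of the fitted relation (every number below is made up, not taken from the paper):

      # X_max - X_0 = (0.59 +/- 0.02) * Y_X/P * C, with hypothetical inputs
      Y_XP = 0.12   # assumed biomass yield per unit lactate, g/g
      C = 80.0      # assumed MIC of lactate at pH 7.0, g/L
      X0 = 0.5      # assumed inoculum biomass, g/L
      print(X0 + 0.59 * Y_XP * C)   # predicted maximum biomass, ~6.2 g/L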

  7. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  8. The maximum rate of mammal evolution

    Science.gov (United States)

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-03-01

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were about half as large (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.

  10. Minimal Length, Friedmann Equations and Maximum Density

    CERN Document Server

    Awad, Adel

    2014-01-01

    Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...

  11. Automatic maximum entropy spectral reconstruction in NMR.

    Science.gov (United States)

    Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C

    2007-10-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.

  12. Maximum entropy analysis of cosmic ray composition

    CERN Document Server

    Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana

    2016-01-01

    We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...

  13. Maximum saliency bias in binocular fusion

    Science.gov (United States)

    Lu, Yuhao; Stafford, Tom; Fox, Charles

    2016-07-01

    Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.

  14. Constraining projections of summer Arctic sea ice

    Directory of Open Access Journals (Sweden)

    F. Massonnet

    2012-11-01

    Full Text Available We examine the recent (1979–2010 and future (2011–2100 characteristics of the summer Arctic sea ice cover as simulated by 29 Earth system and general circulation models from the Coupled Model Intercomparison Project, phase 5 (CMIP5. As was the case with CMIP3, a large intermodel spread persists in the simulated summer sea ice losses over the 21st century for a given forcing scenario. The 1979–2010 sea ice extent, thickness distribution and volume characteristics of each CMIP5 model are discussed as potential constraints on the September sea ice extent (SSIE projections. Our results suggest first that the future changes in SSIE with respect to the 1979–2010 model SSIE are related in a complicated manner to the initial 1979–2010 sea ice model characteristics, due to the large diversity of the CMIP5 population: at a given time, some models are in an ice-free state while others are still on the track of ice loss. However, in phase plane plots (that do not consider the time as an independent variable, we show that the transition towards ice-free conditions is actually occurring in a very similar manner for all models. We also find that the year at which SSIE drops below a certain threshold is likely to be constrained by the present-day sea ice properties. In a second step, using several adequate 1979–2010 sea ice metrics, we effectively reduce the uncertainty as to when the Arctic could become nearly ice-free in summertime, the interval [2041, 2060] being our best estimate for a high climate forcing scenario.

  15. Constrained Inversion of Enceladus Interaction Observations

    Science.gov (United States)

    Herbert, Floyd; Khurana, K. K.

    2007-10-01

    Many detailed and sophisticated ab initio calculations of the electrodynamic interaction of Enceladus' plume with Saturn's corotating magnetospheric plasma flow have been computed. So far, however, all such calculations have been forward models that assume the properties of the plume and compute perturbations to the magnetic (and in some cases, flow velocity) field. As a complement to the forward calculations, the work reported here explores the inverse approach of using simplified physical models of the interaction for computationally inverting the observed magnetic field perturbations of the interaction, in order to determine the cross-B-field conductivity distribution near Enceladus, and from that, the neutral gas distribution. Direct inversion of magnetic field observations to current systems is, of course, impossible, but adding the additional constraint of the interaction physics greatly reduces the non-uniqueness of the computed result. This approach was successfully used by Herbert (JGR 90:8241, 1985) to constrain the atmospheric distribution on Io and the Io torus mass density at the time of the Voyager encounter. Work so far has derived the expected result that there is a cone-shaped region of enhanced cross-field conductivity south of Enceladus, through which currents are driven by the motional electric field. That is, near Enceladus' south pole the cross-field currents are localized, but more widely spread at greater distance. This cross-field conductivity is presumably both pickup and collisional (Pedersen and Hall). Due to enforcement of current conservation, Alfven-wing-like currents north of the main part of the interaction region seem to close partly around Enceladus (assumed insulating) and also to continue northward with attenuated intensity, as though there were a tenuous global exosphere on Enceladus providing additional cross-field conductivity. FH thanks the NASA Outer Planets Research, Planetary Atmospheres, and Geospace Science Programs for

  16. Efficient solvers for soft-constrained MPC

    DEFF Research Database (Denmark)

    Frison, Gianluca; Jørgensen, John Bagterp

    2015-01-01

    In this work, integrated design and control of reactive distillation processes is presented. Simple graphical design methods that are similar in concept to non-reactive distillation processes are used, such as the reactive McCabe-Thiele method and the driving force approach. The methods are based on the element concept, which is used to translate a system of compounds into elements. The operation of the reactive distillation column at the highest driving force and other candidate points is analyzed through analytical solution as well as rigorous open-loop and closed-loop simulations. By application of this approach, it is shown that designing the reactive distillation process at the maximum driving force results in an optimal design in terms of controllability and operability. It is verified that the reactive distillation design option is less sensitive to the disturbances in the feed at the highest driving...

  17. POST-MAXIMUM NEAR-INFRARED SPECTRA OF SN 2014J

    DEFF Research Database (Denmark)

    Sand, D. J.; Hsiao, E. Y.; Banerjee, D. P. K.;

    2016-01-01

    We present near-infrared (NIR) spectroscopic and photometric observations of the nearby Type Ia SN 2014J. The 17 NIR spectra span epochs from +15.3 to +92.5 days after $B$-band maximum light, while the $JHK_s$ photometry include epochs from $-$10 to +71 days. These data are used to constrain... in our post-maximum spectra, with a rough hydrogen mass limit of $\lesssim$0.1 $M_{\odot}$, which is consistent with previous limits in SN 2014J from late-time optical spectra of the H$\alpha$ line. Nonetheless, the growing data set of high-quality NIR spectra holds the promise of very...

  18. Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean

    Science.gov (United States)

    Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.

    2016-04-01

    Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty of this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with monthly peaks up to ~17 Gg/month released into the Southern Oceans during the Last Glacial Maximum (LGM).

  19. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  20. Dynamical maximum entropy approach to flocking

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  1. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  2. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is a part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  3. Kernel-based Maximum Entropy Clustering

    Institute of Scientific and Technical Information of China (English)

    JIANG Wei; QU Jiao; LI Benxi

    2007-01-01

    With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance for non-hyperspherical and complex data structures.

  4. Maximum entropy signal restoration with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Mastin, G.A.; Hanson, R.J.

    1988-05-01

    Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
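
    A minimal sketch of the piecewise-linear idea using scipy's linprog: the concave entropy term -x log x is replaced by the lower envelope of a family of tangent lines, turning entropy maximization into an LP. This is a tangent (outer) approximation chosen for brevity; the paper's bounded-variable simplex construction and relaxed equality constraints are not reproduced.

      import numpy as np
      from scipy.optimize import linprog

      def maxent_lp(A, b, n, n_seg=20):
          """Maximize sum_i -x_i log x_i s.t. A x = b, x >= 0, with the
          entropy replaced by a min over tangent lines (variables x, t)."""
          ps = np.linspace(1e-3, 1.0, n_seg)         # tangency points
          slopes = -np.log(ps) - 1.0                 # (-x log x)' at p
          c = np.concatenate([np.zeros(n), -np.ones(n)])   # maximize sum t
          rows, ub = [], []
          for i in range(n):
              for s, p in zip(slopes, ps):           # t_i <= s*x_i + p
                  r = np.zeros(2 * n)
                  r[n + i], r[i] = 1.0, -s
                  rows.append(r)
                  ub.append(p)
          A_eq = np.hstack([A, np.zeros((A.shape[0], n))])
          res = linprog(c, A_ub=np.array(rows), b_ub=np.array(ub),
                        A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
          return res.x[:n]

      n = 5
      A = np.vstack([np.ones(n), np.arange(1, n + 1)])   # normalization, mean
      print(maxent_lp(A, np.array([1.0, 2.0]), n))       # maxent pmf with mean 2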

  5. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  6. COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT

    Directory of Open Access Journals (Sweden)

    PETRU SERGIU SERBAN

    2016-06-01

    Full Text Available Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
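
    For orientation, two of the formulas under comparison can be sketched in the forms commonly quoted in the squat literature. Coefficient conventions differ between sources and editions, so both forms and the illustrative ship parameters below are assumptions to be checked against the paper.

      import math

      def barrass_max_squat(Cb, S, Vk):
          """Barrass' empirical maximum squat (m), as commonly quoted:
          Cb block coefficient, S blockage factor, Vk speed in knots."""
          return Cb * S**0.81 * Vk**2.08 / 20.0

      def icorels_squat(disp_vol, Lpp, V, h, g=9.81):
          """ICORELS bow squat (m): displacement volume (m^3), length
          between perpendiculars (m), speed (m/s), water depth (m)."""
          Fnh = V / math.sqrt(g * h)                 # depth Froude number
          return 2.4 * (disp_vol / Lpp**2) * Fnh**2 / math.sqrt(1.0 - Fnh**2)

      print(barrass_max_squat(Cb=0.75, S=0.18, Vk=8.0))               # ~0.7 m
      print(icorels_squat(disp_vol=3.0e4, Lpp=150.0, V=4.1, h=12.0))  # ~0.5 m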

  7. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
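
    A single-channel simplification of the idea can be sketched as a grid search: for each candidate fundamental frequency, fit a harmonic sinusoidal model by least squares and keep the candidate that explains the most energy. The multi-channel estimator in the paper combines such terms across channels with per-channel amplitudes, phases and noise levels; the sketch below is a one-channel illustration only.

      import numpy as np

      def grid_pitch(x, fs, f0_grid, n_harm=5):
          """Approximate NLS pitch: maximize the energy captured by a basis
          of n_harm harmonic cosine/sine pairs over a grid of f0 candidates."""
          t = np.arange(len(x)) / fs
          best_f0, best_e = None, -np.inf
          for f0 in f0_grid:
              Z = np.column_stack([f(2 * np.pi * f0 * h * t)
                                   for h in range(1, n_harm + 1)
                                   for f in (np.cos, np.sin)])
              amps, *_ = np.linalg.lstsq(Z, x, rcond=None)
              e = np.sum((Z @ amps) ** 2)            # explained energy
              if e > best_e:
                  best_f0, best_e = f0, e
          return best_f0

      fs = 8000
      t = np.arange(1024) / fs
      x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
      print(grid_pitch(x, fs, np.arange(100.0, 400.0, 2.0)))   # ~220 Hz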

  8. Zipf's law and maximum sustainable growth

    CERN Document Server

    Malevergne, Y; Sornette, D

    2010-01-01

    Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.

  9. Generation of Granulites Constrained by Thermal Modeling

    Science.gov (United States)

    Depine, G. V.; Andronicos, C. L.; Phipps-Morgan, J.

    2006-12-01

    The heat source needed to generate granulite-facies metamorphism is still an unsolved problem in geology. There is a close spatial relationship between granulite terrains and extensive silicic plutonism, suggesting heat advection by melts is critical to their formation. To investigate the role of heat advection by melt in the generation of granulites we use numerical 1-D models which include the movement of melt from the base of the crust to the middle crust. The model is in part constrained by petrological observations from the Coast Plutonic Complex (CPC) in British Columbia, Canada at ~ 54° N, where migmatite and granulite are widespread. The model takes into account time-dependent heat conduction and advection of melts generated at the base of the crust. The model starts with a crust of 55 km, consistent with petrologic and geochemical data from the CPC. The lower crust is assumed to be amphibolite in composition, consistent with seismologic and geochemical constraints for the CPC. An initial geothermal gradient estimated from metamorphic P-T-t paths in this region is ~37°C/km, hotter than normal geothermal gradients. The parameters used for the model are a coefficient of thermal conductivity of 2.5 W/(m·°C), a density for the crust of 2700 kg/m3 and a heat capacity of 1170 J/(kg·°C). Using the above starting conditions, a temperature of 1250°C is assumed for the mantle below 55 km, equivalent to placing asthenosphere in contact with the base of the crust to simulate delamination, basaltic underplating and/or asthenospheric exposure by a sudden steepening of the slab. This condition at 55 km results in melting the amphibolite in the lower crust. Once a melt fraction of 10% is reached, the melt is allowed to migrate to a depth of 13 km, while material at 13 km is displaced downwards to replace the ascending melts. The steady-state profile has a very steep geothermal gradient of more than 50°C/km from the surface to 13 km, consistent with the generation of andalusite
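
    The conductive backbone of such a 1-D model is straightforward to sketch with an explicit finite-difference scheme. The version below omits the melt extraction and the 13 km emplacement step, and simply relaxes a 37°C/km-style initial profile toward the 1250°C basal condition; grid size, time span and diffusivity are assumed illustrative values.

      import numpy as np

      def crustal_geotherm(T_base=1250.0, T_surf=0.0, depth_km=55.0,
                           kappa=1e-6, nx=111, years=5e7):
          """Explicit 1-D heat conduction with a fixed hot base."""
          z = np.linspace(0.0, depth_km, nx)
          dz = depth_km * 1e3 / (nx - 1)
          dt = 0.4 * dz**2 / kappa                   # stable explicit step (s)
          T = np.minimum(37.0 * z, T_base)           # ~37 C/km initial geotherm
          for _ in range(int(years * 3.156e7 / dt)):
              T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
              T[0], T[-1] = T_surf, T_base           # fixed boundaries
          return z, T

      z, T = crustal_geotherm()
      print(T[::10])                                 # temperature every ~5 km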

  10. Sundance: High-Level Software for PDE-Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Kevin Long

    2012-01-01

    Full Text Available Sundance is a package in the Trilinos suite designed to provide high-level components for the development of high-performance PDE simulators with built-in capabilities for PDE-constrained optimization. We review the implications of PDE-constrained optimization on simulator design requirements, then survey the architecture of the Sundance problem specification components. These components allow immediate extension of a forward simulator for use in an optimization context. We show examples of the use of these components to develop full-space and reduced-space codes for linear and nonlinear PDE-constrained inverse problems.

  11. Processing Constrained K Closest Pairs Query in Spatial Databases

    Institute of Scientific and Technical Information of China (English)

    LIU Xiaofeng; LIU Yunsheng; XIAO Yingyuan

    2006-01-01

    In this paper, the constrained K closest pairs query is introduced, which retrieves the K closest pairs satisfying a given spatial constraint from two datasets. For datasets indexed by R-trees in spatial databases, three algorithms are presented for answering this kind of query. Among them, the two-phase Range+Join and Join+Range algorithms adopt the strategy of changing the execution order of the range and closest-pairs queries, while the constrained heap-based algorithm utilizes extended distance functions to prune the search space and minimize the pruning distance. Experimental results show that the constrained heap-based algorithm has better applicability and performance than the two-phase algorithms.
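
    The query semantics can be pinned down with a naive quadratic-time baseline. Here the spatial constraint is taken, as an assumption for illustration, to be a rectangle that must contain the midpoint of each reported pair; this is only a reference implementation of the semantics, not the R-tree algorithms of the paper.

      import heapq

      def constrained_k_closest_pairs(P, Q, k, region):
          """Return the k closest (p, q) pairs whose midpoint lies in
          region = (xmin, ymin, xmax, ymax), via a bounded max-heap."""
          xmin, ymin, xmax, ymax = region
          heap = []                                  # stores (-dist2, p, q)
          for p in P:
              for q in Q:
                  mx, my = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
                  if not (xmin <= mx <= xmax and ymin <= my <= ymax):
                      continue                       # violates the constraint
                  d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                  if len(heap) < k:
                      heapq.heappush(heap, (-d2, p, q))
                  elif -heap[0][0] > d2:
                      heapq.heapreplace(heap, (-d2, p, q))
          return sorted((-d, p, q) for d, p, q in heap)

      print(constrained_k_closest_pairs([(0, 0), (2, 2)], [(1, 1), (5, 5)],
                                        k=2, region=(0, 0, 3, 3)))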

  12. Solving constrained traveling salesman problems by genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    WU Chunguo; LIANG Yanchun; LEE Heowpueh; LU Chun; LIN Wuzhong

    2004-01-01

    Three kinds of constrained traveling salesman problems (TSPs) arising from application problems, namely the open-route TSP, the end-fixed TSP, and the path-constrained TSP, are proposed. The corresponding approaches based on modified genetic algorithms (GAs) for solving these constrained TSPs are presented. Numerical experiments demonstrate that the algorithm for the open-route TSP shows its advantages when an open route is required, the algorithm for the end-fixed TSP can deal with route optimization under the constraint of fixed ends effectively, and the algorithm for the path-constrained TSP could benefit traffic problems in which some cities cannot be reached from one another.

  13. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke-width feature of characters, since in many cases they have a nearly constant stroke width. An image was segmented with a constrained Delaunay triangulation. Connected-component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke-width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.

  14. Warming, euxinia and sea level rise during the Paleocene–Eocene Thermal Maximum on the Gulf Coastal Plain: implications for ocean oxygenation and nutrient cycling

    NARCIS (Netherlands)

    Sluijs, A.; van Roij, L.; Harrington, G.J.; Schouten, S.; Sessa, J.A.; LeVay, L.J.; Reichart, G.-J.; Slomp, C.P.

    2014-01-01

    The Paleocene–Eocene Thermal Maximum (PETM, ~ 56 Ma) was a ~ 200 kyr episode of global warming, associated with massive injections of 13C-depleted carbon into the ocean–atmosphere system. Although climate change during the PETM is relatively well constrained, effects on marine oxygen concentrations

  16. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots" for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
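
    The final step (PCA of a correlation matrix estimated from an ensemble of structures) is easy to sketch. The crucial ingredient of the paper, the maximum likelihood superposition used to estimate that matrix, is replaced below by a plain sample correlation, so this is an illustration of the pipeline rather than the method itself.

      import numpy as np

      def correlation_pca(ensemble):
          """PCA of positional correlations for an ensemble shaped
          (n_models, n_atoms, 3); returns modes, largest first."""
          n_models = ensemble.shape[0]
          X = ensemble.reshape(n_models, -1)
          Xc = X - X.mean(axis=0)
          cov = Xc.T @ Xc / (n_models - 1)
          sd = np.sqrt(np.diag(cov))
          corr = cov / np.outer(sd, sd)              # correlation matrix
          vals, vecs = np.linalg.eigh(corr)
          return vals[::-1], vecs[:, ::-1]

      vals, modes = correlation_pca(np.random.randn(20, 50, 3))
      print(vals[:5])                                # dominant mode strengths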

  17. Maximum entropy production and the fluctuation theorem

    Energy Technology Data Exchange (ETDEWEB)

    Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]

    2005-05-27

    Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)

  18. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  19. Post-maximum near infrared spectra of SN 2014J: A search for interaction signatures

    CERN Document Server

    Sand, D J; Banerjee, D P K; Marion, G H; Diamond, T R; Joshi, V; Parrent, J T; Phillips, M M; Stritzinger, M D; Venkataraman, V

    2016-01-01

    We present near-infrared (NIR) spectroscopic and photometric observations of the nearby Type Ia SN 2014J. The seventeen NIR spectra span epochs from +15.3 to +92.5 days after $B$-band maximum light, while the $JHK_s$ photometry include epochs from $-$10 to +71 days. These data are used to constrain the progenitor system of SN 2014J utilizing the Pa$\beta$ line, following recent suggestions that this phase period and the NIR in particular are excellent for constraining the amount of swept-up hydrogen-rich material associated with a non-degenerate companion star. We find no evidence for Pa$\beta$ emission lines in our post-maximum spectra, with a rough hydrogen mass limit of $\lesssim$0.1 $M_{\odot}$, which is consistent with previous limits in SN 2014J from late-time optical spectra of the H$\alpha$ line. Nonetheless, the growing dataset of high-quality NIR spectra holds the promise of very useful hydrogen constraints.

  20. The discrete maximum principle for finite element approximations of anisotropic diffusion problems on arbitrary meshes

    Energy Technology Data Exchange (ETDEWEB)

    Svyatskiy, Daniil [Los Alamos National Laboratory]; Shashkov, Mikhail [Los Alamos National Laboratory]; Kuzmin, D [DORTMUND UNIV]

    2008-01-01

    A new approach to the design of constrained finite element approximations to second-order elliptic problems is introduced. This approach guarantees that the finite element solution satisfies the discrete maximum principle (DMP). To enforce these monotonicity constraints, sufficient conditions on the elements of the stiffness matrix are formulated. An algebraic splitting of the stiffness matrix is employed to separate the contributions of diffusive and antidiffusive numerical fluxes, respectively. In order to prevent the formation of spurious undershoots and overshoots, a symmetric slope limiter is designed for the antidiffusive part. The corresponding upper and lower bounds are defined using an estimate of the steepest gradient in terms of the maximum and minimum solution values at surrounding nodes. The recovery of nodal gradients is performed by means of a lumped-mass $L_2$ projection. The proposed slope limiting strategy preserves the consistency of the underlying discrete problem and the structure of the stiffness matrix (symmetry, zero row and column sums). A positivity-preserving defect correction scheme is devised for the nonlinear algebraic system to be solved. Numerical results and a grid convergence study are presented for a number of anisotropic diffusion problems in two space dimensions.

  1. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  2. Mantle Convection Models Constrained by Seismic Tomography

    Science.gov (United States)

    Durbin, C. J.; Shahnas, M.; Peltier, W. R.; Woodhouse, J. H.

    2011-12-01

    Perovskite-post-Perovskite transition (Murakami et al., 2004, Science) that appears to define the D" layer at the base of the mantle. In this initial phase of what will be a longer term project we are assuming that the internal mantle viscosity structure is spherically symmetric and compatible with the recent inferences of Peltier and Drummond (2010, Geophys. Res. Lett.) based upon glacial isostatic adjustment and Earth rotation constraints. The internal density structure inferred from the tomography model is assimilated into the convection model by continuously "nudging" the modification to the input density structure predicted by the convection model back towards the tomographic constraint at the long wavelengths that the tomography specifically resolves, leaving the shorter wavelength structure free to evolve, essentially "slaved" to the large scale structure. We focus upon the ability of the nudged model to explain observed plate velocities, including both their poloidal (divergence related) and toroidal (strike slip fault related) components. The true plate velocity field is then used as an additional field towards which the tomographically constrained solution is nudged.
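
    The "nudging" described above has a compact form: relax only the wavelengths that the tomography resolves toward the observed structure, leaving shorter scales free to evolve. A 1-D periodic toy version (all names and parameters illustrative):

      import numpy as np

      def nudge_long_wavelengths(field, target, k_cut, tau, dt):
          """Nudge wavenumbers k <= k_cut of `field` toward `target` with
          relaxation time tau; short wavelengths pass through untouched."""
          F, G = np.fft.rfft(field), np.fft.rfft(target)
          mask = np.arange(len(F)) <= k_cut          # "resolved by tomography"
          F[mask] += (dt / tau) * (G[mask] - F[mask])
          return np.fft.irfft(F, n=len(field))

      x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
      model = np.sin(3 * x) + 0.3 * np.sin(20 * x)   # long + short wavelengths
      obs = 1.5 * np.sin(3 * x)                      # long-wavelength constraint
      model = nudge_long_wavelengths(model, obs, k_cut=8, tau=10.0, dt=1.0)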

  3. Time-dependent constrained Hamiltonian systems and Dirac brackets

    Energy Technology Data Exchange (ETDEWEB)

    Leon, Manuel de [Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Madrid (Spain)]; Marrero, Juan C. [Departamento de Matematica Fundamental, Facultad de Matematicas, Universidad de La Laguna, La Laguna, Tenerife, Canary Islands (Spain)]; Martin de Diego, David [Departamento de Economia Aplicada Cuantitativa, Facultad de Ciencias Economicas y Empresariales, UNED, Madrid (Spain)]

    1996-11-07

    In this paper the canonical Dirac formalism for time-dependent constrained Hamiltonian systems is globalized. A time-dependent Dirac bracket which reduces to the usual one for time-independent systems is introduced. (author)

  4. A note on causality constraining higher curvature corrections to gravity

    Energy Technology Data Exchange (ETDEWEB)

    Gruzinov, A; Kleban, M [Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2007-07-07

    We show that causality constrains the sign of quartic Riemann corrections to the Einstein-Hilbert action. Our constraint constitutes a restriction on candidate theories of quantum gravity. (comments, replies and notes)

  5. Quantum gravity momentum representation and maximum energy

    Science.gov (United States)

    Moffat, J. W.

    2016-11-01

    We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλ μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.

  6. Maximum Information and Quantum Prediction Algorithms

    CERN Document Server

    McElwaine, J N

    1997-01-01

    This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.

  7. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  8. Maximum Segment Sum, Monadically (distilled tutorial)

    Directory of Open Access Journals (Sweden)

    Jeremy Gibbons

    2011-09-01

    Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
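
    The linear-time solution alluded to above is the classic Kadane scan: the best segment overall is the running maximum of the best segment ending at each position. A sketch in Python rather than the paper's functional setting, using a commonly quoted test vector:

      def max_segment_sum(xs):
          """Largest sum over contiguous segments (empty segment allowed)."""
          best = ending_here = 0
          for x in xs:
              ending_here = max(0, ending_here + x)  # best segment ending here
              best = max(best, ending_here)
          return best

      assert max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]) == 187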

  9. Maximum Spectral Luminous Efficacy of White Light

    CERN Document Server

    Murphy, T W

    2013-01-01

    As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
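
    The bandpass-limited bound can be estimated numerically by weighting a truncated spectrum with the photopic sensitivity curve and scaling by the definitional 683 lm/W peak at 555 nm. The sketch below uses a blackbody spectrum and a crude Gaussian stand-in for V(lambda) (width assumed), so it illustrates the computation rather than reproducing the paper's numbers.

      import numpy as np

      def truncated_blackbody_efficacy(T, lo=380e-9, hi=780e-9):
          """Spectral luminous efficacy (lm/W) of a blackbody at temperature
          T, truncated to the [lo, hi] bandpass."""
          lam = np.linspace(lo, hi, 2000)
          h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
          B = lam**-5.0 / (np.exp(h * c / (lam * kB * T)) - 1.0)  # Planck shape
          V = np.exp(-0.5 * ((lam - 555e-9) / 45e-9) ** 2)        # approx V(lambda)
          return 683.0 * np.sum(V * B) / np.sum(B)

      print(truncated_blackbody_efficacy(5800.0))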

  10. Video segmentation using Maximum Entropy Model

    Institute of Scientific and Technical Information of China (English)

    QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei

    2005-01-01

    Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.

  11. Maximum process problems in optimal control theory

    Directory of Open Access Journals (Sweden)

    Goran Peskir

    2005-01-01

    Full Text Available Given a standard Brownian motion (Bt)t≥0 and the equation of motion dXt=vtdt+2dBt, we set St=max0≤s≤tXs and consider the optimal control problem supv E(Sτ−cτ), where c>0 and the supremum is taken over all admissible controls v satisfying vt∈[μ0,μ1] for all t up to τ=inf{t>0|Xt∉(ℓ0,ℓ1)} with μ0g∗(St, where s↦g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.

  12. Maximum entropy principle and texture formation

    CERN Document Server

    Arminjon, Mayeul; Imbault, Didier

    2006-01-01

    The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.

  13. Constrained minimization of smooth functions using a genetic algorithm

    Science.gov (United States)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
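
    The flavor of the approach can be shown with a small real-coded GA. The sketch below substitutes a plain quadratic penalty for the paper's conversion of the necessary conditions into an unconstrained objective, and all hyperparameters are arbitrary.

      import numpy as np

      def ga_minimize(f, g, n_dim, pop=60, gens=200, mu=0.1, w=1e3, seed=0):
          """Minimize f(x) subject to g(x) <= 0 (componentwise) by a GA on
          the penalized objective f + w * sum(max(0, g)^2)."""
          rng = np.random.default_rng(seed)
          X = rng.uniform(-5.0, 5.0, (pop, n_dim))
          fit = lambda x: f(x) + w * np.sum(np.maximum(0.0, g(x)) ** 2)
          for _ in range(gens):
              order = np.argsort([fit(x) for x in X])
              elite = X[order[: pop // 2]]              # truncation selection
              moms = elite[rng.integers(0, len(elite), pop)]
              dads = elite[rng.integers(0, len(elite), pop)]
              alpha = rng.random((pop, n_dim))
              X = alpha * moms + (1.0 - alpha) * dads   # blend crossover
              X += mu * rng.standard_normal(X.shape)    # Gaussian mutation
          return min(X, key=fit)

      # minimize x^2 + y^2 subject to x + y >= 1, optimum near (0.5, 0.5)
      print(ga_minimize(lambda x: x @ x,
                        lambda x: np.array([1.0 - x[0] - x[1]]), 2))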

  14. Robust Solutions of Uncertain Complex-valued Quadratically Constrained Programs

    Institute of Scientific and Technical Information of China (English)

    Da Chuan XU; Zheng Hai HUANG

    2008-01-01

    In this paper, we discuss complex convex quadratically constrained optimization with uncertain data. Using the S-Lemma, we show that the robust counterpart of complex convex quadratically constrained optimization with an ellipsoidal or intersection-of-two-ellipsoids uncertainty set leads to a complex semidefinite program. By exploring the approximate S-Lemma, we give a complex semidefinite program which approximates the NP-hard robust counterpart of complex convex quadratic optimization with an intersection-of-ellipsoids uncertainty set.

  15. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.

  16. Canonical symmetry properties of the constrained singular generalized mechanical system

    Institute of Scientific and Technical Information of China (English)

    Li Ai-Min; Jiang Jin-Huan; Li Zi-Ping

    2003-01-01

    Based on generalized Appell-Chetaev constraint conditions, and taking the inherent constraints of a singular Lagrangian into account, the generalized canonical equations for a general mechanical system with a singular higher-order Lagrangian and subsidiary constraints are formulated. The canonical symmetries in phase space for such a system are studied, and the Noether theorem and its inversion theorem in the generalized canonical formalism are established.

  18. Geometric constrained variational calculus. II: The second variation (Part I)

    Science.gov (United States)

    Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico

    2016-10-01

    Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.

  19. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result generalizes many previous findings in the field of polynomial degree reduction. A solution method for constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  20. Algorithms for degree-constrained Euclidean Steiner minimal tree

    Institute of Scientific and Technical Information of China (English)

    Zhang Jin; Ma Liang; Zhang Liantang

    2008-01-01

    A new problem of the degree-constrained Euclidean Steiner minimal tree is discussed, which is quite useful in several fields. Although it is slightly different from the traditional degree-constrained minimal spanning tree, it is also NP-hard. Two intelligent algorithms are proposed in an attempt to solve this difficult problem. A series of numerical examples is tested, demonstrating that the algorithms also work well in practice.

  1. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    CERN Document Server

    Hall, Alex

    2016-01-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...

  2. Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.

    Science.gov (United States)

    Kim, Dae-Min; Kong, Yong-Ku

    2016-12-01

    A total of 25 males participated in an investigation of the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in cutting tasks.

  3. Constraining Gravity with LISA Detections of Binaries

    Science.gov (United States)

    Canizares, P.; Gair, J. R.; Sopuerta, C. F.

    2013-01-01

    General Relativity (GR) describes gravitation well at the energy scales which we have so far been able to achieve or detect. However, we do not know whether GR is behind the physics governing stronger gravitational field regimes, such as near neutron stars or massive black-holes (MBHs). Gravitational-wave (GW) astronomy is a promising tool to test and validate GR and/or potential alternative theories of gravity. The information that a GW waveform carries not only will allow us to map the strong gravitational field of its source, but also determine the theory of gravity ruling its dynamics. In this work, we explore the extent to which we could distinguish between GR and other theories of gravity through the detection of low-frequency GWs from extreme-mass-ratio inspirals (EMRIs) and, in particular, we focus on dynamical Chern-Simons modified gravity (DCSMG). To that end, we develop a framework that enables us, for the first time, to perform a parameter estimation analysis for EMRIs in DCSMG. Our model is described by a 15-dimensional parameter space, that includes the Chern-Simons (CS) parameter which characterises the deviation between the two theories, and our analysis is based on Fisher information matrix techniques together with a (maximum-mismatch) criterion to assess the validity of our results. In our analysis, we study a 5-dimensional parameter space, finding that a GW detector like the Laser Interferometer Space Antenna (LISA) or eLISA (evolved LISA) should be able to discriminate between GR and DCSMG with fractional errors below 5%, and hence place bounds four orders of magnitude better than current Solar System bounds.

  4. Constraining global methane emissions and uptake by ecosystems

    Directory of Open Access Journals (Sweden)

    R. Spahni

    2011-01-01

    Natural methane (CH4) emissions from wet ecosystems are an important part of today's global CH4 budget. Climate affects the exchange of CH4 between ecosystems and the atmosphere by influencing CH4 production, oxidation, and transport in the soil. The net CH4 exchange depends on ecosystem hydrology, soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH4 emissions for different ecosystems: northern peatlands (45°–90° N), naturally inundated wetlands (60° S–45° N), rice agriculture and wet mineral soils. Mineral soils are a potential CH4 sink, but can also be a source, with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003–2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH4 observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe, the inversions result in a significant reduction in the emissions from northern peatlands and suggest that LPJ-WHyMe maximum annual emissions peak about one month late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we adapt model parameters in LPJ-WHyMe and simulate the surface exchange of CH4 over the period 1990–2008. Over the whole period we infer an increase of global ecosystem CH4 emissions of +1.11 Tg CH4 yr−1, not considering potential additional changes in wetland extent. The increase in simulated CH4 emissions is attributed to enhanced soil respiration resulting from the observed rise in land temperature.

  6. 20 CFR 211.14 - Maximum creditable compensation.

    Science.gov (United States)

    2010-04-01

    20 CFR Employees' Benefits (2010-04-01), CREDITABLE RAILROAD COMPENSATION, § 211.14 Maximum creditable compensation. ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...

  7. Finding maximum JPEG image block code size

    Science.gov (United States)

    Lakhani, Gopal

    2012-07-01

    We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits are sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.

  8. Theoretical Estimate of Maximum Possible Nuclear Explosion

    Science.gov (United States)

    Bethe, H. A.

    1950-01-31

    The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)

  9. Maximum life spiral bevel reduction design

    Science.gov (United States)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-07-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  10. Proposed principles of maximum local entropy production.

    Science.gov (United States)

    Ross, John; Corlan, Alexandru D; Müller, Stefan C

    2012-07-12

    Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.

  11. Maximum entropy production and plant optimization theories.

    Science.gov (United States)

    Dewar, Roderick C

    2010-05-12

    Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.

  12. Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm

    Science.gov (United States)

    Megchelenbrink, Wout; Rossell, Sergio; Huynen, Martijn A.

    2015-01-01

    Motivation: Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determine the space of allowable reaction rates. This space is often large, and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven to be highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed to select flux distributions that are in agreement with experimental “omics” data, or by incorporating experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome-scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more “flexible” metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used that allows for an easier switch to other nutrient sources. A small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Results: Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that i) indeed, most of the measured fluxes agree with a high adaptability of the network, ii) this result can be used to further reduce the space of feasible solutions, and iii) this reduced space improves the quantitative predictions...
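
    For contrast with MMF, plain FBA reduces to a small linear program; a self-contained sketch on a hypothetical three-reaction network (uptake -> A, A -> B, B -> growth), with SciPy's linprog standing in for a dedicated solver:

        import numpy as np
        from scipy.optimize import linprog

        # Stoichiometric matrix S (metabolites x reactions): steady state requires S v = 0.
        S = np.array([
            [1.0, -1.0,  0.0],   # metabolite A: produced by uptake, consumed by conversion
            [0.0,  1.0, -1.0],   # metabolite B: produced by conversion, consumed by growth
        ])
        bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 flux units
        c = np.array([0.0, 0.0, -1.0])            # maximize growth = minimize its negative

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print(res.x)  # [10, 10, 10]: all available flux routed to growth

    MMF, as described above, goes beyond this single optimum by scoring how flexibly the network can redistribute such fluxes.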

  14. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  15. Evidence that the maximum electron energy in hotspots of FR II galaxies is not determined by synchrotron cooling

    CERN Document Server

    Araudo, Anabella T; Crilly, Aidan; Blundell, Katherine M

    2016-01-01

    It has been suggested that relativistic shocks in extragalactic sources may accelerate the highest energy cosmic rays. The maximum energy to which cosmic rays can be accelerated depends on the structure of magnetic turbulence near the shock but recent theoretical advances indicate that relativistic shocks are probably unable to accelerate particles to energies much larger than a PeV. We study the hotspots of powerful radiogalaxies, where electrons accelerated at the termination shock emit synchrotron radiation. The turnover of the synchrotron spectrum is typically observed between infrared and optical frequencies, indicating that the maximum energy of non-thermal electrons accelerated at the shock is < TeV for a canonical magnetic field of ~100 micro Gauss. Based on theoretical considerations we show that this maximum energy cannot be constrained by synchrotron losses as usually assumed, unless the jet density is unreasonably large and most of the jet upstream energy goes to non-thermal particles. We test ...

  16. Using qflux to constrain modeled Congo Basin rainfall in the CMIP5 ensemble

    Science.gov (United States)

    Creese, A.; Washington, R.

    2016-11-01

    Coupled models are the tools by which we diagnose and project future climate, yet in certain regions they are critically underevaluated. The Congo Basin is one such region which has received limited scientific attention, due to the severe scarcity of observational data. There is a large difference in the climatology of rainfall in global coupled climate models over the basin. This study attempts to address this research gap by evaluating modeled rainfall magnitude and distribution amongst global coupled models in the Coupled Model Intercomparison Project 5 (CMIP5) ensemble. Mean monthly rainfall between models varies by up to a factor of 5 in some months, and models disagree on the location of maximum rainfall. The ensemble mean, which is usually considered a "best estimate" of coupled model output, does not agree with any single model, and as such is unlikely to present a possible rainfall state. Moisture flux (qflux) convergence (which is assumed to be better constrained than parameterized rainfall) is found to have a strong relationship with rainfall; strongest correlations occur at 700 hPa in March-May (r = 0.70) and 850 hPa in June-August, September-November, and December-February (r = 0.66, r = 0.71, and r = 0.81). In the absence of observations, this relationship could be used to constrain the wide spectrum of modeled rainfall and give a better understanding of Congo rainfall climatology. Analysis of moisture transport pathways indicates that modeled rainfall is sensitive to the amount of moisture entering the basin. A targeted observation campaign at key Congo Basin boundaries could therefore help to constrain model rainfall.

  17. Dynamic Resource Management for Parallel Tasks in an Oversubscribed Energy-Constrained Heterogeneous Environment

    Energy Technology Data Exchange (ETDEWEB)

    Imam, Neena [ORNL; Koenig, Gregory A [ORNL; Machovec, Dylan [Colorado State University, Fort Collins; Khemka, Bhavesh [Colorado State University, Fort Collins; Pasricha, Sudeep [Colorado State University; Maciejewski, Anthony A [Colorado State University, Fort Collins; Siegel, Howard [Colorado State University, Fort Collins; Wright, Michael [Department of Defense; Hilton, Marcia [Department of Defense; Rambharos, Rejendra [Department of Defense

    2016-01-01

    The worth of completing parallel tasks is modeled using utility functions, which monotonically decrease with time and represent the importance and urgency of a task. These functions define the utility earned by a task at the time of its completion. The performance of such a system is measured as the total utility earned by all completed tasks over some interval of time (e.g., 24 hours). To maximize system performance when scheduling dynamically arriving parallel tasks onto a high performance computing (HPC) system that is oversubscribed and energy-constrained, we have designed, analyzed, and compared different heuristic techniques. Four utility-aware heuristics (i.e., Max Utility, Max Utility-per-Time, Max Utility-per-Resource, and Max Utility-per-Energy), three FCFS-based heuristics (Conservative Backfilling, EASY Backfilling, and FCFS with Multiple Queues), and a Random heuristic were examined in this study. A technique that is often used with the FCFS-based heuristics is the concept of a permanent reservation. We compare the performance of permanent reservations with temporary place-holders to demonstrate the advantages that place-holders can provide. We also present a novel energy filtering technique that constrains the maximum energy-per-resource used by each task. We conducted a simulation study to evaluate the performance of these heuristics and techniques in an energy-constrained oversubscribed HPC environment. With place-holders, energy filtering, and dropping tasks with low potential utility, our utility-aware heuristics are able to significantly outperform the existing FCFS-based techniques.
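
    As an illustration of the heuristics named above, a minimal sketch of a Max Utility-per-Time rule under an assumed linearly decaying utility (the task fields and numbers are hypothetical, not the study's workloads):

        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            exec_time: float   # projected execution time
            u0: float          # utility if completed immediately
            decay: float       # utility lost per unit of waiting time

            def utility_at(self, t: float) -> float:
                return max(self.u0 - self.decay * t, 0.0)

        def max_utility_per_time(queue, now):
            # Pick the task with the highest utility at projected completion
            # per unit of execution time.
            return max(queue, key=lambda tk: tk.utility_at(now + tk.exec_time) / tk.exec_time)

        queue = [Task("A", 4.0, 100.0, 5.0), Task("B", 1.0, 40.0, 1.0)]
        print(max_utility_per_time(queue, now=0.0).name)  # "B": 39.0/1 beats 80.0/4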

  18. Delay-Constrained Optimized Packet Aggregation in High-Speed Wireless Networks

    Institute of Scientific and Technical Information of China (English)

    Peyman Teymoori; Nasser Yazdani

    2013-01-01

    High-speed wireless networks such as IEEE 802.11n have been introduced, based on IEEE 802.11, to meet the growing demand for high-throughput and multimedia applications. It is known that the medium access control (MAC) efficiency of IEEE 802.11 decreases with increasing physical rate. To improve efficiency, a few solutions have been proposed, such as aggregation, which concatenates a number of packets into a larger frame and sends them at once to reduce protocol overhead. Since transmitting larger frames leads to a dramatic increase in delay and jitter at other nodes, bounding the maximum aggregated frame size is important for satisfying the delay requirements of multimedia applications in particular. In this paper, we propose a scheme called Optimized Packet Aggregation (OPA) which models the network as a constrained convex optimization problem to obtain the optimal aggregation size of each node with respect to the delay constraints of the other nodes. OPA attains proportionally fair sharing of the channel while satisfying delay constraints. Furthermore, reaching the optimal point is guaranteed in OPA with low complexity. Simulation results show that OPA can successfully bound delay and meet the requirements of nodes with only an insignificant throughput penalty due to limiting the aggregation size, even in dynamic conditions.

  19. Optimum placement of piezoelectric ceramic modules for vibration suppression of highly constrained structures

    Science.gov (United States)

    Belloli, Alberto; Ermanni, Paolo

    2007-10-01

    The vibration suppression efficiency of so-called shunted piezoelectric systems is decisively influenced by the number, shape, dimensions and position of the piezoelectric ceramic elements integrated into the structure. This paper presents a procedure based on evolutionary algorithms for optimum placement of piezoelectric ceramic modules on highly constrained lightweight structures. The optimization loop includes the CAD software CATIA V5, the FE package ANSYS and DynOPS, a proprietary software tool able to connect the Evolving Object library with any simulation software that can be started in batch mode. A user-defined piezoelectric shell element is integrated into ANSYS 9.0. The generalized electromechanical coupling coefficient is used as the optimization objective. Position, dimensions, orientation, embedding location in the composite lay-up and wiring of customized patches are determined for optimum vibration suppression under consideration of operational and manufacturing constraints, such as added mass, maximum strain and requirements on the control circuit. A rear wing of a racing car is investigated as the test object for complex, highly constrained geometries.

  20. The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2–119.4 for its maximum amplitude.

  1. Constraining the Kerr parameters via X-ray reflection spectroscopy

    CERN Document Server

    Ghasemi-Nodehi, M

    2016-01-01

    In a recent paper [Ghasemi-Nodehi & Bambi, EPJC 76 (2016) 290], we proposed a new parametrization for testing the Kerr nature of astrophysical black hole candidates. In the present work, we study the possibility of constraining the "Kerr parameters" of our proposal using X-ray reflection spectroscopy, the so-called iron line method. We simulate observations with the LAD instrument on board the future eXTP mission assuming an exposure time of 200 ks. We fit the simulated data to see if the Kerr parameters can be constrained. If we have the correct astrophysical model, 200 ks observations with LAD/eXTP can constrain all the Kerr parameters with the exception of $b_{11}$, whose impact on the iron line profile is extremely weak and whose measurement therefore looks very challenging.

  2. Optimal encoding on a discrete lattice with translationally invariant constraints using statistical algorithms

    CERN Document Server

    Duda, Jarek

    2007-01-01

    In this paper it is shown how to almost optimally encode information in valuations of a discrete lattice that satisfy translationally invariant constraints. The method is based on finding a statistical description of such valuations and turning it into a statistical algorithm which deterministically constructs a valuation with the given statistics. The optimal statistics allow us to generate valuations with a uniform distribution; in this way we obtain maximum information capacity. It is shown that with this approach we can get practically as close to the capacity of the model as we want (numerically: a loss of about 1e-10 bit/node for the Hard Square model). An alternative to Huffman coding is also presented, which is more precise and more practical for changing probability distributions.

  3. PASTIS: Bayesian extrasolar planet validation II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    CERN Document Server

    Santerne, A; Almenara, J -M; Bouchy, F; Deleuil, M; Figueira, P; Hébrard, G; Moutou, C; Rodionov, S; Santos, N C

    2015-01-01

    The statistical validation of transiting exoplanets has proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses provide us with useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars, using the model implemented in the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anti-correlated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We r...

  4. Genetic algorithm to solve constrained routing problem with applications for cruise missile routing

    Science.gov (United States)

    Latourell, James L.; Wallet, Bradley C.; Copeland, Bruce

    1998-03-01

    In this paper the use of a Genetic Algorithm to solve a constrained vehicle routing problem is explored. The problem is two-dimensional with obstacles represented as ellipses of uncertainty surrounding each obstacle point. A route is defined as a series of points through which the vehicle sequentially travels from the starting point to the ending point. The physical constraints of total route length and maximum turn angle are included and appear in the fitness function. In order to be valid, a route must go from start to finish without violating any constraint. The effects that different mutation rates and population sizes have on the algorithm's computation speed and ability to find a high quality route are also explored. Finally, possible applications of this algorithm to the problem of route planning for cruise missiles are discussed.
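
    A minimal sketch of the kind of penalized fitness function described above, where total length and per-waypoint turn angle enter as constraint violations; the limits and penalty weight are illustrative assumptions, not the paper's values:

        import numpy as np

        MAX_LENGTH, MAX_TURN_DEG, PENALTY = 50.0, 45.0, 1.0e3

        def route_fitness(waypoints):
            """waypoints: (n, 2) array from start to end; higher fitness is better."""
            segs = np.diff(waypoints, axis=0)
            seg_len = np.linalg.norm(segs, axis=1)
            length = seg_len.sum()
            unit = segs / seg_len[:, None]
            cosines = np.clip((unit[:-1] * unit[1:]).sum(axis=1), -1.0, 1.0)
            turns = np.degrees(np.arccos(cosines))        # turn at each interior waypoint
            violation = max(length - MAX_LENGTH, 0.0) \
                        + np.maximum(turns - MAX_TURN_DEG, 0.0).sum()
            return -length - PENALTY * violation          # the GA maximizes this value

        route = np.array([[0.0, 0.0], [10.0, 1.0], [20.0, -1.0], [30.0, 0.0]])
        print(route_fitness(route))  # feasible route: fitness is just its negated length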

  5. Analysis of censored exposure data by constrained maximization of the Shapiro-Wilk W statistic.

    Science.gov (United States)

    Flynn, Michael R

    2010-04-01

    A new method for estimating the mean and standard deviation from censored exposure data is presented. The method W(MAX) treats the censored data as variables in a constrained optimization problem. Values for the censored data are calculated by maximizing the Shapiro-Wilk W statistic subject to the constraint that the values are between 0 and the limit of detection (or other censoring limit). The methodology is illustrated here with the Microsoft Excel Solver tool using real exposure data sets subject to repeated censoring. For the data sets explored here, the W(MAX) estimates are comparable to those obtained using the restricted maximum likelihood method based on bias as the performance index.
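
    A hedged sketch of the same idea with SciPy in place of the Excel Solver: the censored observations become bounded decision variables chosen to maximize the Shapiro-Wilk W, here on log-transformed values (a common lognormal assumption for exposure data; all numbers are made up):

        import numpy as np
        from scipy.stats import shapiro
        from scipy.optimize import minimize

        detected = np.log([0.8, 1.1, 1.9, 2.4, 3.6, 5.2])  # observed exposures (log scale)
        lod, n_censored = 0.5, 3                           # censoring limit and count

        def neg_w(log_censored):
            # Shapiro-Wilk W of the completed sample; negated for minimization.
            return -shapiro(np.concatenate([detected, log_censored])).statistic

        res = minimize(neg_w, x0=np.full(n_censored, np.log(lod) - 1.0),
                       bounds=[(None, np.log(lod))] * n_censored, method="L-BFGS-B")
        completed = np.concatenate([np.exp(detected), np.exp(res.x)])
        print(completed.mean(), completed.std(ddof=1))     # W_MAX-style estimates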

  6. Does the Budyko curve reflect a maximum power state of hydrological systems? A backward analysis

    Science.gov (United States)

    Westhoff, Martijn; Zehe, Erwin; Archambeau, Pierre; Dewals, Benjamin

    2016-04-01

    Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and the gradients driving runoff and evaporation for a simple one-box model. We did this in an inverse manner, such that when the conductances are optimized with the maximum power principle, the steady-state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporation, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus, by constraining the model (optimized with the maximum power principle) with the asymptotes of the Budyko curve, we were able to derive more realistic values of the aridity and evaporation index without any parameter calibration. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.

  7. Constrained caloric curves and phase transition for hot nuclei

    CERN Document Server

    Borderie, Bernard; Rivet, M F; Raduta, Ad R; Ademard, G; Bonnet, E; Bougault, R; Chbihi, A; Frankland, J D; Galichet, E; Gruyer, D; Guinet, D; Lautesse, P; Neindre, N Le; Lopez, O; Marini, P; Parlog, M; Pawlowski, P; Rosato, E; Roy, R; Vigilante, M

    2013-01-01

    Simulations based on experimental data obtained from multifragmenting quasi-fused nuclei produced in central $^{129}$Xe + $^{nat}$Sn collisions have been used to deduce event by event freeze-out properties in the thermal excitation energy range 4-12 AMeV [Nucl. Phys. A809 (2008) 111]. From these properties and the temperatures deduced from proton transverse momentum fluctuations, constrained caloric curves have been built. At constant average volumes caloric curves exhibit a monotonic behaviour whereas for constrained pressures a backbending is observed. Such results support the existence of a first order phase transition for hot nuclei.

  8. Performance Comparison of Constrained Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Soudeh Babaeizadeh

    2015-06-01

    This study aims to evaluate, analyze and compare the performance of the constrained Artificial Bee Colony (ABC) algorithms available in the literature. In recent decades, many different variants of the ABC algorithm have been suggested to solve Constrained Optimization Problems (COPs). However, to the best of the authors' knowledge, there are few comparative studies on the numerical performance of those algorithms. This study considers a set of well-known benchmark problems from the test problems of the Congress on Evolutionary Computation 2006 (CEC2006).

  9. A Novel Approach to Constraining Uncertain Stellar Evolution Models

    Science.gov (United States)

    Rosenfield, Philip; Girardi, Leo; Dalcanton, Julianne; Johnson, L. C.; Williams, Benjamin F.; Weisz, Daniel R.; Bressan, Alessandro; Fouesneau, Morgan

    2017-01-01

    Stellar evolution models are fundamental to nearly all studies in astrophysics. They are used to interpret spectral energy distributions of distant galaxies, to derive the star formation histories of nearby galaxies, and to understand fundamental parameters of exoplanets. Despite the success in using stellar evolution models, some important aspects of stellar evolution remain poorly constrained and their uncertainties rarely addressed. We present results using archival Hubble Space Telescope observations of 10 stellar clusters in the Magellanic Clouds to simultaneously constrain the values and uncertainties of the strength of core convective overshooting, metallicity, interstellar extinction, cluster distance, binary fraction, and age.

  10. Augmented Lagrangian Method for Constrained Nuclear Density Functional Theory

    CERN Document Server

    Staszczak, A; Baran, A; Nazarewicz, W

    2010-01-01

    The augmented Lagrangian method (ALM), widely used in quantum chemistry constrained optimization problems, is applied in the context of nuclear Density Functional Theory (DFT) in the self-consistent constrained Skyrme Hartree-Fock-Bogoliubov (CHFB) variant. The ALM allows precise calculations of multidimensional energy surfaces in the space of collective coordinates that are needed to, e.g., determine fission pathways and saddle points; it improves the accuracy of computed derivatives with respect to collective variables that are used to determine collective inertia; and it is well adapted to supercomputer applications.
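
    The generic augmented-Lagrangian iteration that the ALM denotes can be sketched in a few lines; the toy objective and unit-circle constraint below are illustrative assumptions, not a Skyrme-HFB calculation:

        import numpy as np
        from scipy.optimize import minimize

        f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2   # objective
        c = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0             # equality constraint c(x) = 0

        x, lam, rho = np.array([0.0, 0.0]), 0.0, 10.0
        for _ in range(20):
            # inner unconstrained solve of L_A(x) = f + lam * c + (rho / 2) * c^2
            aug = lambda y: f(y) + lam * c(y) + 0.5 * rho * c(y) ** 2
            x = minimize(aug, x).x
            lam += rho * c(x)                                 # multiplier update
            if abs(c(x)) < 1e-10:
                break
        print(x, c(x))  # converges to a point on the unit circle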

  11. Fast Energy Minimization of large Polymers Using Constrained Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Todd D. Plantenga

    1998-10-01

    A new computational technique is described that uses distance constraints to calculate empirical potential energy minima of partially rigid molecules. A constrained minimization algorithm that works entirely in Cartesian coordinates is used. The algorithm does not obey the constraints until convergence, a feature that reduces ill-conditioning and allows constrained local minima to be computed more quickly than unconstrained minima. The computational speedup exceeds the 3-fold factor commonly obtained in constrained molecular dynamics simulations, where the constraints must be strictly obeyed at all times.

  12. Generalized constrained multiobjective games in locally FC-uniform spaces

    Institute of Scientific and Technical Information of China (English)

    DING Xie-ping; LEE Chin-san; YAO Jen-chih

    2008-01-01

    A new class of generalized constrained multiobjective games is introduced and studied in locally FC-uniform spaces without convexity structure, where the number of players may be finite or infinite and all payoff functions take their values in an infinite-dimensional space. By using a Himmelberg type fixed point theorem in locally FC-uniform spaces due to the author, some existence theorems of weak Pareto equilibria for the generalized constrained multiobjective games are established in locally FC-uniform spaces. These theorems improve, unify and generalize the corresponding results in the recent literature.

  13. A lexicographic approach to constrained MDP admission control

    Science.gov (United States)

    Panfili, Martina; Pietrabissa, Antonio; Oddi, Guido; Suraci, Vincenzo

    2016-02-01

    This paper proposes a reinforcement learning-based lexicographic approach to the call admission control problem in communication networks. The admission control problem is modelled as a multi-constrained Markov decision process. To overcome the problems of the standard approaches to the solution of constrained Markov decision processes, based on the linear programming formulation or on a Lagrangian approach, a multi-constraint lexicographic approach is defined, and an online implementation based on reinforcement learning techniques is proposed. Simulations validate the proposed approach.

  14. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  15. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno...

  16. A physically constrained classical description of the homogeneous nucleation of ice in water

    Science.gov (United States)

    Koop, Thomas; Murray, Benjamin J.

    2016-12-01

    Liquid water can persist in a supercooled state to below 238 K in the Earth's atmosphere, a temperature range where homogeneous nucleation becomes increasingly probable. However, the rate of homogeneous ice nucleation in supercooled water is poorly constrained, in part, because supercooled water eludes experimental scrutiny in the region of the homogeneous nucleation regime where it can exist only fleetingly. Here we present a new parameterization of the rate of homogeneous ice nucleation based on classical nucleation theory. In our approach, we constrain the key terms in classical theory, i.e., the diffusion activation energy and the ice-liquid interfacial energy, with physically consistent parameterizations of the pertinent quantities. The diffusion activation energy is related to the translational self-diffusion coefficient of water for which we assess a range of descriptions and conclude that the most physically consistent fit is provided by a power law. The other key term is the interfacial energy between the ice embryo and supercooled water whose temperature dependence we constrain using the Turnbull correlation, which relates the interfacial energy to the difference in enthalpy between the solid and liquid phases. The only adjustable parameter in our model is the absolute value of the interfacial energy at one reference temperature. That value is determined by fitting this classical model to a selection of laboratory homogeneous ice nucleation data sets between 233.6 K and 238.5 K. On extrapolation to temperatures below 233 K, into a range not accessible to standard techniques, we predict that the homogeneous nucleation rate peaks between about 227 and 231 K at a maximum nucleation rate many orders of magnitude lower than previous parameterizations suggest. This extrapolation to temperatures below 233 K is consistent with the most recent measurement of the ice nucleation rate in micrometer-sized droplets at temperatures of 227-232 K on very short time scales

  17. Geographic variation of surface energy partitioning in the climatic mean predicted from the maximum power limit

    CERN Document Server

    Dhara, Chirag; Kleidon, Axel

    2015-01-01

    Convective and radiative cooling are the two principle mechanisms by which the Earth's surface transfers heat into the atmosphere and that shape surface temperature. However, this partitioning is not sufficiently constrained by energy and mass balances alone. We use a simple energy balance model in which convective fluxes and surface temperatures are determined with the additional thermodynamic limit of maximum convective power. We then show that the broad geographic variation of heat fluxes and surface temperatures in the climatological mean compare very well with the ERA-Interim reanalysis over land and ocean. We also show that the estimates depend considerably on the formulation of longwave radiative transfer and that a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates.

  18. RESEARCH OF PINYIN-TO-CHARACTER CONVERSION BASED ON MAXIMUM ENTROPY MODEL

    Institute of Scientific and Technical Information of China (English)

    Zhao Yan; Wang Xiaolong; Liu Bingquan; Guan Yi

    2006-01-01

    This paper applied the Maximum Entropy (ME) model to Pinyin-To-Character (PTC) conversion instead of the Hidden Markov Model (HMM), which cannot include complicated and long-distance lexical information. Two ME models were built based on simple and complex templates respectively, and the complex one gave better conversion results. Furthermore, the conversion trigger pair yA → yB/cB was proposed to extract long-distance constraint features from the corpus; Average Mutual Information (AMI) was then used to select the conversion trigger pair features added to the ME model. The experiment shows that the conversion error of the ME model with conversion trigger pairs is reduced by 4% on a small training corpus, compared with an HMM smoothed by absolute smoothing.
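
    A small sketch of the AMI computation behind that selection step, scoring a candidate trigger pair from co-occurrence counts (the counts and corpus size are hypothetical):

        import math

        def ami(n_ab, n_a, n_b, n):
            """Average mutual information over the four presence/absence events of A and B."""
            p_a, p_b, p_ab = n_a / n, n_b / n, n_ab / n
            joint = {(1, 1): p_ab, (1, 0): p_a - p_ab, (0, 1): p_b - p_ab}
            joint[(0, 0)] = 1.0 - sum(joint.values())
            total = 0.0
            for (a, b), p in joint.items():
                if p > 0:
                    pa = p_a if a else 1.0 - p_a
                    pb = p_b if b else 1.0 - p_b
                    total += p * math.log2(p / (pa * pb))
            return total

        print(ami(120, 300, 400, 10_000))  # strongly associated pair: large AMI
        print(ami(12, 300, 400, 10_000))   # near-independent pair: AMI close to 0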

  19. Quality of service estimation based on maximum bottleneck algorithm for domain aggregation in backbone networks

    Institute of Scientific and Technical Information of China (English)

    WANG Yang; ZHAN Yi-chun; YU Shao-hua

    2007-01-01

    This paper investigates the routing among autonomous systems (ASs) with quality of service (QoS) requirements. Abstract QoS capability must be advertised among ASs to avoid the intractability of the problem, because QoS-constrained routing has been proved to be nondeterministic polynomial-time (NP) hard even inside an AS. This paper employs the modified Dijkstra algorithm to compute the maximum bottleneck bandwidth inside an AS. This approach lays a basis for the AS-level switching capability on which interdomain advertisement can be performed. Furthermore, the paper models the aggregated traffic in the backbone network with fractional Brownian motion (FBM), and by integrating along the time axis in short intervals, a good estimation of the distribution of queue length in the next short interval can be obtained. The proposed advertisement mechanism can be easily implemented with current interdomain routing protocols. Numerical study indicates that the presented scheme is effective and feasible.
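
    The modified Dijkstra used here is the classic widest-path variant: relax with the minimum of the path width so far and the link capacity, and always expand the widest known node. A sketch on a toy topology (node names and capacities are hypothetical):

        import heapq

        def max_bottleneck(graph, src, dst):
            """graph: {node: [(neighbor, capacity), ...]}; returns the widest-path bandwidth."""
            best = {src: float("inf")}
            heap = [(-float("inf"), src)]            # max-heap via negated widths
            while heap:
                neg_w, u = heapq.heappop(heap)
                width = -neg_w
                if u == dst:
                    return width
                for v, cap in graph.get(u, []):
                    w = min(width, cap)              # bottleneck along this extension
                    if w > best.get(v, 0.0):
                        best[v] = w
                        heapq.heappush(heap, (-w, v))
            return 0.0

        net = {"A": [("B", 100), ("C", 40)], "B": [("D", 30)], "C": [("D", 80)]}
        print(max_bottleneck(net, "A", "D"))         # 40: path A-C-D beats A-B-D's 30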

  20. Statistical optimization for passive scalar transport: maximum entropy production vs. maximum Kolmogorov–Sinay entropy

    Directory of Open Access Journals (Sweden)

    M. Mihelich

    2014-11-01

    We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.

  1. 40 CFR 94.107 - Determination of maximum test speed.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment (2010-07-01), § 94.107 Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test speed... ...specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the...

  2. 14 CFR 25.1505 - Maximum operating limit speed.

    Science.gov (United States)

    2010-01-01

    14 CFR Aeronautics and Space (2010-01-01), Operating Limitations, § 25.1505 Maximum operating limit speed. The maximum operating limit speed (VMO/MMO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...

  3. Maximum Performance Tests in Children with Developmental Spastic Dysarthria.

    Science.gov (United States)

    Wit, J.; And Others

    1993-01-01

    Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…

  4. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    of torsionally constrained DNA by free RNAP. We asked whether or not a newly synthesized RNA chain would limit transcription elongation. For this purpose we developed a method to immobilize covalently closed circular DNA to streptavidin-coated beads via a peptide nucleic acid (PNA)-biotin conjugate in principle...

  5. Adaptive double chain quantum genetic algorithm for constrained optimization problems

    Directory of Open Access Journals (Sweden)

    Kong Haipeng

    2015-02-01

    Full Text Available Optimization problems are often highly constrained, and evolutionary algorithms (EAs) are effective methods to tackle this kind of problem. To further improve the search efficiency and convergence rate of EAs, this paper presents an adaptive double chain quantum genetic algorithm (ADCQGA) for solving constrained optimization problems. ADCQGA makes use of double individuals to represent solutions that are classified as feasible and infeasible. Fitness (or evaluation) functions are defined for both types of solutions. Based on the fitness function, three types of step evolution (SE) are defined and utilized for judging evolutionary individuals. An adaptive rotation is proposed and used to facilitate updating individuals in different solutions. To further improve the search capability and convergence rate, ADCQGA utilizes an adaptive evolution process (AEP), adaptive mutation, and replacement techniques. ADCQGA was first tested on a widely used benchmark function to illustrate the relationship between initial parameter values and the convergence rate/search capability. The proposed ADCQGA was then successfully applied to twelve other benchmark functions and five well-known constrained engineering design problems. The multi-aircraft cooperative target allocation problem is a typical constrained optimization problem that requires efficient methods; finally, ADCQGA is successfully applied to this target allocation problem.
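
    ADCQGA itself is quantum-inspired and considerably more elaborate, but its core ingredient, classifying individuals as feasible or infeasible and scoring each class with its own fitness function, can be sketched with generic feasibility-rule selection. The toy objective, constraint, and mutation-only loop below are our assumptions, not the authors' algorithm:

```python
import random

def fitness_key(x, objective, violation):
    """Generic dual scoring for a constrained EA: feasible individuals
    compete on the objective, infeasible ones on constraint violation
    (Deb's feasibility rules, standing in for ADCQGA's two fitness
    functions). Smaller keys are better."""
    v = violation(x)
    return (0, objective(x)) if v == 0 else (1, v)

def evolve(pop, objective, violation, sigma=0.1, generations=300):
    """Toy mutation-only EA; ADCQGA's quantum chromosome encoding,
    adaptive rotation, and SE machinery are deliberately omitted."""
    for _ in range(generations):
        children = [[xi + random.gauss(0, sigma) for xi in random.choice(pop)]
                    for _ in range(len(pop))]
        pop = sorted(pop + children,
                     key=lambda x: fitness_key(x, objective, violation))
        pop = pop[:len(pop) // 2]        # keep the better half
    return pop[0]

# Toy problem: minimize x^2 + y^2 subject to x + y >= 1 (optimum at (0.5, 0.5)).
obj = lambda x: x[0] ** 2 + x[1] ** 2
vio = lambda x: max(0.0, 1.0 - (x[0] + x[1]))    # 0 iff feasible
pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(30)]
print(evolve(pop, obj, vio))
```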

  6. Reserve-constrained economic dispatch: Cost and payment allocations

    Energy Technology Data Exchange (ETDEWEB)

    Misraji, Jaime [Sistema Electrico Nacional Interconectado de la Republica Dominicana, Calle 3, No. 3, Arroyo Hondo 1, Santo Domingo, Distrito Nacional (Dominican Republic); Conejo, Antonio J.; Morales, Juan M. [Department of Electrical Engineering, Universidad de Castilla-La Mancha, Campus Universitario s/n, 13071 Ciudad Real (Spain)

    2008-05-15

    This paper extends basic economic dispatch analytical results to the reserve-constrained case. For this extended problem, a cost and payment allocation analysis is carried out and a detailed economic interpretation of the results is provided. Sensitivity values (Lagrange multipliers) are also analyzed. A case study is considered to illustrate the proposed analysis. Conclusions are duly drawn. (author)

  7. Constrained Hartree-Fock and quasi-spin projection

    Science.gov (United States)

    Cambiaggio, M. C.; Plastino, A.; Szybisz, L.

    1980-08-01

    The constrained Hartree-Fock approach of Elliott and Evans is studied in detail with reference to two quasi-spin models, and their predictions compared with those arising from a projection method. It is found that the new approach works fairly well, although limitations to its applicability are encountered.

  8. Dirac's Constrained Hamiltonian Dynamics from an Unconstrained Dynamics

    OpenAIRE

    Rothe, Heinz J.

    2003-01-01

    We derive the Hamilton equations of motion for a constrained system in the form given by Dirac, by a limiting procedure, starting from the Lagrangian for an unconstrained system. We thereby elucidate the role played by the primary constraints and their persistence in time.

  9. Joint Force Interdependence for a Fiscally Constrained Future

    Science.gov (United States)

    2013-03-01

    Joint Force Interdependence for a Fiscally Constrained Future, by Colonel Daniel P. Ray, United States Army; Dr. Richard Meinhart; United States Army War College Class of 2013. Distribution Statement A: Approved for public release; distribution is unlimited.

  10. Constrained Local UniversE Simulations: A Local Group Factory

    CERN Document Server

    Carlesi, Edoardo; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I; Pilipenko, Sergey V; Knebe, Alexander; Courtois, Helene; Tully, R Brent; Steinmetz, Matthias

    2016-01-01

    Near field cosmology is practiced by studying the Local Group (LG) and its neighbourhood. The present paper describes a framework for simulating the near field on the computer. Assuming the LCDM model as a prior and applying the Bayesian tools of the Wiener filter (WF) and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the LCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of halos must obey specific isolation, mass and separation criteria. At the second level the orbital angular momentum and energy are constrained and on the third one the phase of the orbit is constrained. Out of the 300 constrai...

  11. Bounds on the capacity of constrained two-dimensional codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2000-01-01

    Bounds on the capacity of constrained two-dimensional (2-D) codes are presented. The bounds of Calkin and Wilf apply to first-order symmetric constraints. The bounds are generalized in a weaker form to higher order and nonsymmetric constraints. Results are given for constraints specified by run...

  12. Constrained variational calculus: the second variation (part I)

    CERN Document Server

    Massa, Enrico; Pagani, Enrico; Luria, Gianvittorio

    2010-01-01

    This paper is a direct continuation of arXiv:0705.2362. The Hamiltonian aspects of the theory are further developed. Within the framework provided by the first paper, the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A necessary and sufficient condition for minimality is proved.

  13. Nonmonotonic Skeptical Consequence Relation in Constrained Default Logic

    Directory of Open Access Journals (Sweden)

    Mihaiela Lupea

    2010-12-01

    Full Text Available This paper presents a study of the nonmonotonic consequence relation which models the skeptical reasoning formalised by constrained default logic. The nonmonotonic skeptical consequence relation is defined using the sequent calculus axiomatic system. We study the formal properties desirable for a good nonmonotonic relation: supraclassicality, cut, cautious monotony, cumulativity, absorption, distribution. 

  14. Constrained control of a once-through boiler with recirculation

    DEFF Research Database (Denmark)

    Trangbæk, K

    2008-01-01

    There is an increasing need to operate power plants at low load for longer periods of time. When a once-through boiler operates at a sufficiently low load, recirculation is introduced, significantly altering the control structure. This paper illustrates the possibilities for using constrained con...

  15. 3D facial geometric features for constrained local model

    NARCIS (Netherlands)

    Cheng, Shiyang; Zafeiriou, Stefanos; Asthana, Akshay; Pantic, Maja

    2014-01-01

    We propose a 3D Constrained Local Model framework for deformable face alignment in depth image. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate the fusi

  16. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    A main characteristic of the Internet of Things networks is the large number of resource-constrained nodes, which, however, are required to perform reliable and fast data exchange; often of critical nature; over highly unpredictable and dynamic connections and network topologies. Reducing...

  17. Evaluating potentialities and constraints of Problem Based Learning curriculum

    DEFF Research Database (Denmark)

    Guerra, Aida

    2013-01-01

    This paper presents a research design to evaluate Problem Based Learning (PBL) curriculum potentialities and constraints for future changes. The PBL literature lacks examples of how to evaluate and analyse established PBL learning environments to address new challenges posed. The research design... in the curriculum and a means to choose cases for further case study (third phase)....

  18. Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests

    Science.gov (United States)

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  19. Constrained Superfields and Standard Realization of Nonlinear Supersymmetry

    CERN Document Server

    Luo, Hui; Zheng, Sibo

    2009-01-01

    A constrained superfield formalism has been proposed recently to analyze the low energy physics related to Goldstinos. We prove that this formalism can be reformulated in the language of standard realization of nonlinear supersymmetry. New relations have been uncovered in the standard realization of nonlinear supersymmetry.

  20. Steepest-Ascent Constrained Simultaneous Perturbation for Multiobjective Optimization

    DEFF Research Database (Denmark)

    McClary, Dan; Syrotiuk, Violet; Kulahci, Murat

    2011-01-01

    that leverages information about the known gradient to constrain the perturbations used to approximate the others. We apply SP(SA)(2) to the cross-layer optimization of throughput, packet loss, and end-to-end delay in a mobile ad hoc network (MANET), a self-organizing wireless network. The results show that SP...

  1. Constrained Transport vs. Divergence Cleanser Options in Astrophysical MHD Simulations

    Science.gov (United States)

    Lindner, Christopher C.; Fragile, P.

    2009-01-01

    In previous work, we presented results from global numerical simulations of the evolution of black hole accretion disks using the Cosmos++ GRMHD code. In those simulations we solved the magnetic induction equation using an advection-split form, which is known not to satisfy the divergence-free constraint. To minimize the build-up of divergence error, we used a hyperbolic cleanser function that simultaneously damped the error and propagated it off the grid. We have since found that this method produces qualitatively and quantitatively different behavior in high magnetic field regions than the results published by other research groups, particularly in the evacuated funnels of black-hole accretion disks where Poynting-flux jets are reported to form. The main difference between our earlier work and that of our competitors is their use of constrained-transport schemes to preserve a divergence-free magnetic field. Therefore, to study these differences directly, we have implemented a constrained transport scheme in Cosmos++. Because Cosmos++ uses a zone-centered, finite-volume method, we cannot use the traditional staggered-mesh constrained transport scheme of Evans & Hawley. Instead we must implement a more general scheme; we chose the Flux-CT scheme as described by Toth. Here we present comparisons of results using the divergence-cleanser and constrained transport options in Cosmos++.

  2. Multiply-Constrained Semantic Search in the Remote Associates Test

    Science.gov (United States)

    Smith, Kevin A.; Huber, David E.; Vul, Edward

    2013-01-01

    Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…

  3. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    real difference is how the coordinating master problem - a concave non-differentiable maximization problem - is solved. We show how the constrained shortest path problem can be solved efficiently, and present a number of different strategies for solving the master problem. The lower bound obtainable...

  4. Revenue Prediction in Budget-constrained Sequential Auctions with Complementarities

    NARCIS (Netherlands)

    S. Verwer (Sicco); Y. Zhang (Yingqian)

    2011-01-01

    When multiple items are auctioned sequentially, the ordering of auctions plays an important role in the total revenue collected by the auctioneer. This is true especially with budget constrained bidders and the presence of complementarities among items. In such sequential auction setting

  5. Robust discriminative response map fitting with constrained local models

    NARCIS (Netherlands)

    Asthana, Akshay; Zafeiriou, Stefanos; Cheng, Shiyang; Pantic, Maja

    2013-01-01

    We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, u

  6. Using Diagnostic Text Information to Constrain Situation Models

    NARCIS (Netherlands)

    Dutke, S.; Baadte, C.; Hähnel, A.; Hecker, U. von; Rinck, M.

    2010-01-01

    During reading, the model of the situation described by the text is continuously accommodated to new text input. The hypothesis was tested that readers are particularly sensitive to diagnostic text information that can be used to constrain their existing situation model. In 3 experiments, adult part

  7. Adaptive double chain quantum genetic algorithm for constrained optimization problems

    Institute of Scientific and Technical Information of China (English)

    Kong Haipeng; Li Ni; Shen Yuzhong

    2015-01-01

    Optimization problems are often highly constrained, and evolutionary algorithms (EAs) are effective methods to tackle this kind of problem. To further improve the search efficiency and convergence rate of EAs, this paper presents an adaptive double chain quantum genetic algorithm (ADCQGA) for solving constrained optimization problems. ADCQGA makes use of double individuals to represent solutions that are classified as feasible and infeasible. Fitness (or evaluation) functions are defined for both types of solutions. Based on the fitness function, three types of step evolution (SE) are defined and utilized for judging evolutionary individuals. An adaptive rotation is proposed and used to facilitate updating individuals in different solutions. To further improve the search capability and convergence rate, ADCQGA utilizes an adaptive evolution process (AEP), adaptive mutation, and replacement techniques. ADCQGA was first tested on a widely used benchmark function to illustrate the relationship between initial parameter values and the convergence rate/search capability. The proposed ADCQGA was then successfully applied to twelve other benchmark functions and five well-known constrained engineering design problems. The multi-aircraft cooperative target allocation problem is a typical constrained optimization problem that requires efficient methods; finally, ADCQGA is successfully applied to this target allocation problem.

  8. Non-rigid registration by geometry-constrained diffusion

    DEFF Research Database (Denmark)

    Andresen, Per Rønsholt; Nielsen, Mads

    1999-01-01

    are not given. We will advocate the viewpoint that the aperture and the 3D interpolation problem may be solved simultaneously by finding the simplest displacement field. This is obtained by a geometry-constrained diffusion which yields the simplest displacement field in a precise sense. The point registration...

  9. Reflections on How Color Term Acquisition Is Constrained

    Science.gov (United States)

    Pitchford, Nicola J.

    2006-01-01

    Compared with object word learning, young children typically find learning color terms to be a difficult linguistic task. In this reflections article, I consider two questions that are fundamental to investigations into the developmental acquisition of color terms. First, I consider what constrains color term acquisition and how stable these…

  10. Applications of a Constrained Mechanics Methodology in Economics

    Science.gov (United States)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the…

  11. Post-maximum Near-infrared Spectra of SN 2014J: A Search for Interaction Signatures

    Science.gov (United States)

    Sand, D. J.; Hsiao, E. Y.; Banerjee, D. P. K.; Marion, G. H.; Diamond, T. R.; Joshi, V.; Parrent, J. T.; Phillips, M. M.; Stritzinger, M. D.; Venkataraman, V.

    2016-05-01

    We present near-infrared (NIR) spectroscopic and photometric observations of the nearby Type Ia SN 2014J. The 17 NIR spectra span epochs from +15.3 to +92.5 days after B-band maximum light, while the JHKs photometry includes epochs from -10 to +71 days. These data are used to constrain the progenitor system of SN 2014J utilizing the Paβ line, following recent suggestions that this phase period and the NIR in particular are excellent for constraining the amount of swept-up hydrogen-rich material associated with a non-degenerate companion star. We find no evidence for Paβ emission lines in our post-maximum spectra, with a rough hydrogen mass limit of ≲ 0.1 M⊙, which is consistent with previous limits in SN 2014J from late-time optical spectra of the Hα line. Nonetheless, the growing data set of high-quality NIR spectra holds the promise of very useful hydrogen constraints. Based on observations obtained at the Gemini Observatory under program GN-2014A-Q-8 (PI: Sand). Gemini is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).

  12. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  13. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2012-07-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in some cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a single firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to allow quantification of the uncertainties.

  14. Improved parameterized complexity of the maximum agreement subtree and maximum compatible tree problems.

    Science.gov (United States)

    Berry, Vincent; Nicolas, François

    2006-01-01

    Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
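
    The paper's algorithms are linear-time and fixed-parameter tractable; the sketch below only illustrates the underlying notions of restricting rooted trees to a taxon subset and testing isomorphism via a canonical form, with an exponential brute-force MAST for tiny inputs (the tree encoding and helper names are ours):

```python
from itertools import combinations

def restrict(tree, taxa):
    """Restrict a rooted tree (nested tuples; leaves are strings) to a
    taxon subset, suppressing internal nodes left with a single child.
    Children are sorted, so the result is a canonical form."""
    if isinstance(tree, str):
        return tree if tree in taxa else None
    kids = [t for t in (restrict(c, taxa) for c in tree) if t is not None]
    if not kids:
        return None
    return kids[0] if len(kids) == 1 else tuple(sorted(kids, key=repr))

def mast_size(trees, taxa):
    """Brute-force MAST: largest taxon subset on which all restricted
    trees are equal (hence isomorphic, thanks to the canonical form).
    Exponential in |taxa|; for illustration only."""
    for k in range(len(taxa), 0, -1):
        for sub in combinations(sorted(taxa), k):
            r = [restrict(t, set(sub)) for t in trees]
            if all(x == r[0] for x in r[1:]):
                return k, sub
    return 0, ()

t1 = (("a", "b"), ("c", "d"))
t2 = (("a", "c"), ("b", "d"))
print(mast_size([t1, t2], {"a", "b", "c", "d"}))   # -> (2, ('a', 'b'))
```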

  15. Fusion of visible and infrared images using global entropy and gradient constrained regularization

    Science.gov (United States)

    Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Zang, Yue; Tao, Shuyin; Wang, Daodang

    2017-03-01

    Infrared and visible image fusion has been an important and popular topic in imaging science. Dual-band image fusion aims to extract both the target regions in the infrared image and the abundant detail information in the visible image into the fused result, preserving and even enhancing the information inherited from the source images. In our study, we propose an optimization-based fusion method combining global entropy and gradient constrained regularization. We design a cost function that takes global maximum entropy as the first term, together with a gradient constraint as the regularization term. In this cost function, global maximum entropy makes the fused result inherit as much information as possible from the sources, while the gradient constraint gives the fused result clear details and edges with noise suppression. The fusion is achieved by minimizing the cost function with a weight matrix added. Experimental results indicate that the proposed method performs well and has clear advantages over other typical algorithms in both subjective visual performance and objective criteria.
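
    The paper's exact functional is not reproduced in this record; a generic cost of the kind described, an entropy data term plus a gradient-constrained regularizer weighted by a matrix w(p) (the symbols lambda_1, lambda_2 and the reference gradient field are our assumptions), reads

    $$ F^{*}=\arg\min_{F}\Big[-\lambda_{1}\,H(F)+\lambda_{2}\sum_{p}w(p)\,\big\|\nabla F(p)-\nabla I_{\mathrm{ref}}(p)\big\|^{2}\Big],\qquad H(F)=-\sum_{k}p_{k}\log p_{k}, $$

    where p_k is the normalized histogram of the fused image F, so maximizing H(F) pulls information from both sources while the gradient term keeps edges sharp and suppresses noise.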

  16. Constraining Type Ia supernova models: SN 2011fe as a test case

    CERN Document Server

    Roepke, F K; Seitenzahl, I R; Pakmor, R; Sim, S A; Taubenberger, S; Ciaraldi-Schoolmann, F; Hillebrandt, W; Aldering, G; Antilogus, P; Baltay, C; Benitez-Herrera, S; Bongard, S; Buton, C; Canto, A; Cellier-Holzem, F; Childress, M; Chotard, N; Copin, Y; Fakhouri, H K; Fink, M; Fouchez, D; Gangler, E; Guy, J; Hachinger, S; Hsiao, E Y; Juncheng, C; Kerschhaggl, M; Kowalski, M; Nugent, P; Paech, K; Pain, R; Pecontal, E; Pereira, R; Perlmutter, S; Rabinowitz, D; Rigault, M; Runge, K; Saunders, C; Smadja, G; Suzuki, N; Tao, C; Thomas, R C; Tilquin, A; Wu, C

    2012-01-01

    The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs, realizations of explosion models appropriate for two of the most widely-discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is s...

  17. CONSTRAINING TYPE Ia SUPERNOVA MODELS: SN 2011fe AS A TEST CASE

    Energy Technology Data Exchange (ETDEWEB)

    Roepke, F. K.; Seitenzahl, I. R. [Institut fuer Theoretische Physik und Astrophysik, Universitaet Wuerzburg, Am Hubland, D-97074 Wuerzburg (Germany); Kromer, M.; Taubenberger, S.; Ciaraldi-Schoolmann, F.; Hillebrandt, W.; Benitez-Herrera, S. [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching (Germany); Pakmor, R. [Heidelberger Institut fuer Theoretische Studien, Schloss-Wolfsbrunnenweg 35, 69118 Heidelberg (Germany); Sim, S. A. [Research School of Astronomy and Astrophysics, Australian National University, Mount Stromlo Observatory, Cotter Road, Weston Creek, ACT 2611 (Australia); Aldering, G.; Childress, M.; Fakhouri, H. K. [Physics Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F. [Laboratoire de Physique Nucleaire et des Hautes Energies, Universite Pierre et Marie Curie Paris 6, Universite Paris Diderot Paris 7, CNRS-IN2P3, 4 place Jussieu, 75252 Paris Cedex 05 (France); Baltay, C. [Department of Physics, Yale University, New Haven, CT 06250-8121 (United States); Buton, C. [Physikalisches Institut, Universitaet Bonn, Nussallee 12, 53115 Bonn (Germany); Chotard, N.; Copin, Y. [Universite de Lyon, F-69622, Lyon (France); Universite de Lyon 1, Villeurbanne (France); CNRS/IN2P3, Institut de Physique Nucleaire de Lyon (France); and others

    2012-05-01

    The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs (WDs), realizations of explosion models appropriate for two of the most widely discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is still not justified. Observations at late epochs, however, hold promise for discriminating the explosion scenarios in a straightforward way, as a nucleosynthesis effect leads to differences in the ⁵⁵Co production. SN 2011fe is close enough to be followed sufficiently long to study this effect.

  18. Design, synthesis and evaluation of constrained methoxyethyl (cMOE) and constrained ethyl (cEt) nucleoside analogs.

    Science.gov (United States)

    Seth, Punit P; Siwkowski, Andrew; Allerson, Charles R; Vasquez, Guillermo; Lee, Sam; Prakash, Thazha P; Kinberger, Garth; Migawa, Michael T; Gaus, Hans; Bhat, Balkrishen; Swayze, Eric E

    2008-01-01

    Antisense drug discovery technology is a powerful method to modulate gene expression in animals and represents a novel therapeutic platform.(1) We have previously demonstrated that replacing 2'O-methoxyethyl (MOE, 2) residues in second generation antisense oligonucleotides (ASOs) with LNA (3) nucleosides improves the potency of some ASOs in animals. However, this was accompanied by a significant increase in the risk for hepatotoxicity.(2) We hypothesized that replacing LNA with novel nucleoside monomers that combine the structural elements of MOE and LNA might mitigate the toxicity of LNA while maintaining potency. To this end we designed and prepared novel nucleoside analogs 4 (S-constrained MOE, S-cMOE) and 5 (R-constrained MOE, R-cMOE), where the ethyl chain of the 2'O-MOE moiety is constrained back to the 4' position of the furanose ring. As part of the SAR series, we also prepared nucleoside analogs 7 (S-constrained ethyl, S-cEt) and 8 (R-constrained ethyl, R-cEt), where the methoxymethyl group in the cMOE nucleosides was replaced with a methyl substituent. A highly efficient synthesis of the nucleoside phosphoramidites with minimal chromatography purifications was developed starting from cheap commercially available starting materials. Biophysical evaluation revealed that the cMOE and cEt modifications hybridize complementary nucleic acids with the same affinity as LNA while greatly increasing nuclease stability. Biological evaluation of oligonucleotides containing the cMOE and cEt modifications in animals indicated that all of them possessed superior potency as compared to second generation MOE ASOs and a greatly improved toxicity profile as compared to LNA.

  19. Present and Last Glacial Maximum climates as states of maximum entropy production

    CERN Document Server

    Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere

    2011-01-01

    The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...

  20. Gravitino dark matter with neutralino NLSP in the constrained NMSSM

    CERN Document Server

    Panotopoulos, Grigoris

    2010-01-01

    The gravitino dark matter with neutralino NLSP hypothesis is investigated in the framework of the NMSSM. We consider both thermal and non-thermal gravitino production mechanisms and take into account all collider and cosmological constraints. The maximum allowed reheating temperature after inflation, as well as the maximum allowed gravitino mass, are determined.

  1. Entropy Bounds for Constrained Two-Dimensional Fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Justesen, Jørn

    1999-01-01

    The maximum entropy and thereby the capacity of 2-D fields given by certain constraints on configurations are considered. Upper and lower bounds are derived.
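
    For a concrete instance of such bounds: the per-site entropy of width-n strips of the hard-square constraint (no two 1s adjacent horizontally or vertically, a first-order symmetric constraint) follows from the largest transfer-matrix eigenvalue, and these strip entropies converge from above to the 2-D capacity; the Calkin-Wilf bounds are built from eigenvalue ratios of this same matrix. A sketch (not the authors' code):

```python
import numpy as np
from itertools import product

def strip_entropy(n):
    """Per-site entropy (bits) of a width-n strip under the hard-square
    constraint. States are the valid columns of height n; T[s, t] = 1
    when columns s and t may sit next to each other."""
    cols = [s for s in product((0, 1), repeat=n)
            if all(not (a and b) for a, b in zip(s, s[1:]))]
    T = np.array([[float(all(not (a and b) for a, b in zip(s, t)))
                   for t in cols] for s in cols])
    lam = np.linalg.eigvalsh(T).max()    # T is symmetric
    return np.log2(lam) / n

for n in (4, 6, 8, 10):
    print(n, strip_entropy(n))           # decreases toward C ~ 0.5879
```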

  2. Solving the constrained shortest path problem using random search strategy

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we propose an improved walk search strategy to solve the constrained shortest path problem. The proposed strategy is a local search algorithm that explores a network by a walker navigating through it. To analyze and evaluate the proposed strategy, we present the results of three computational studies in which the search algorithm is tested. Moreover, we compare the proposed algorithm with the ant colony algorithm and the k shortest paths algorithm. The analysis and comparison demonstrate that the proposed algorithm is an effective tool for solving the constrained shortest path problem: it not only can be used to solve the optimization problem on larger networks, but is also superior to the ant colony algorithm in terms of solution time and the optimal paths found.
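
    The walker heuristic itself is not described in enough detail in this record to reproduce; for reference, an exact baseline for the same problem can be written as a compact label-setting search that prunes dominated (cost, delay) labels (the names and the toy network are ours):

```python
import heapq

def constrained_shortest_path(graph, src, dst, budget):
    """Label-setting search for the constrained shortest path problem:
    minimize cost subject to a total-delay budget. Exponential in the
    worst case (the problem is NP-hard), but dominated (cost, delay)
    labels are pruned. graph: node -> list of (neighbor, cost, delay)."""
    labels = {src: [(0, 0)]}
    heap = [(0, 0, src)]
    while heap:
        cost, delay, u = heapq.heappop(heap)
        if u == dst:
            return cost, delay            # first dst pop is cost-optimal
        for v, c, d in graph.get(u, ()):
            nc, nd = cost + c, delay + d
            if nd > budget:
                continue                  # violates the delay constraint
            if any(pc <= nc and pd <= nd for pc, pd in labels.get(v, ())):
                continue                  # dominated by an existing label
            labels.setdefault(v, []).append((nc, nd))
            heapq.heappush(heap, (nc, nd, v))
    return None                           # no feasible path

g = {"s": [("a", 1, 5), ("b", 3, 1)], "a": [("t", 1, 5)], "b": [("t", 3, 1)]}
print(constrained_shortest_path(g, "s", "t", budget=4))   # (6, 2) via s-b-t
```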

  3. Bayesian methods for the analysis of inequality constrained contingency tables.

    Science.gov (United States)

    Laudy, Olav; Hoijtink, Herbert

    2007-04-01

    A Bayesian methodology for the analysis of inequality constrained models for contingency tables is presented. The problem of interest lies in obtaining estimates of functions of cell probabilities subject to inequality constraints, testing hypotheses, and selecting the best model. Constraints on conditional cell probabilities and on local, global, continuation, and cumulative odds ratios are discussed. A Gibbs sampler is used to obtain a discrete representation of the posterior distribution of the inequality constrained parameters; from this representation, credibility regions of functions of cell probabilities can be constructed. Posterior model probabilities are used for model selection, and hypotheses are tested using posterior predictive checks. The proposed Bayesian methodology is illustrated in two examples.
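
    One common computational shortcut in this literature is to sample the unconstrained (encompassing) posterior and retain only draws that satisfy the inequality constraint; the accepted draws represent the constrained posterior, and the acceptance rates yield a Bayes factor estimate. A sketch under a conjugate Dirichlet prior (the 2x2 table, prior value, and odds-ratio constraint are illustrative, and direct Dirichlet draws replace the paper's Gibbs sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_posterior(counts, constraint, n_draws=100_000, prior=1.0):
    """Sample the unconstrained Dirichlet posterior of the cell
    probabilities and keep draws satisfying the inequality constraint;
    the ratio of posterior to prior acceptance rates estimates the
    Bayes factor of the constrained model vs. the encompassing one."""
    alpha = np.asarray(counts, dtype=float).ravel() + prior
    post = rng.dirichlet(alpha, n_draws)
    prio = rng.dirichlet(np.full(alpha.size, prior), n_draws)
    ok_post = np.array([constraint(p) for p in post])
    ok_prio = np.array([constraint(p) for p in prio])
    bf = ok_post.mean() / ok_prio.mean()
    return post[ok_post], bf

# Constraint on a flattened 2x2 table: odds ratio >= 1 (positive association).
odds_ok = lambda p: p[0] * p[3] >= p[1] * p[2]
draws, bf = constrained_posterior([[30, 10], [12, 25]], odds_ok)
print(round(bf, 2), draws.mean(axis=0))   # constrained estimates of the cells
```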

  4. Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

    Energy Technology Data Exchange (ETDEWEB)

    HART,WILLIAM E.

    2000-06-01

    The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and to problems that are nonsmooth, have unbounded objective functions, or are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
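
    The step-size adaptation the theory analyzes can be sketched in a few lines: expand the mutation scale after an improving pattern step and contract it otherwise. This toy is unconstrained, mutation-only, and not the authors' framework:

```python
import random

def pattern_search_es(f, x0, step=1.0, expand=2.0, contract=0.5, iters=500):
    """Mutate along +/- coordinate pattern directions; expand the step
    after a success, contract it after a failure. This adaptive step
    rule is the ingredient the EPSA convergence theory analyzes."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        i = random.randrange(len(x))
        y = list(x)
        y[i] += random.choice((-1.0, 1.0)) * step
        fy = f(y)
        if fy < fx:
            x, fx, step = y, fy, step * expand    # success: be bolder
        else:
            step *= contract                      # failure: refine
    return x, fx

print(pattern_search_es(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [5.0, 5.0]))
```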

  5. Assessing working memory capacity through time-constrained elementary activities.

    Science.gov (United States)

    Lucidi, Annalisa; Loaiza, Vanessa; Camos, Valérie; Barrouillet, Pierre

    2014-01-01

    Working memory (WM) capacity measured through complex span tasks is among the best predictors of fluid intelligence (Gf). These tasks usually involve maintaining memoranda while performing complex cognitive activities that require a rather high level of education (e.g., reading comprehension, arithmetic), restricting their range of applicability. Because individual differences in such complex activities are nothing more than the concatenation of small differences in their elementary constituents, complex span tasks involving elementary processes should be as good predictors of Gf as traditional tasks. The present study showed that two latent variables, derived from either traditional or new span tasks involving time-constrained elementary activities, were similarly correlated with Gf. Moreover, a model with a single unitary WM factor had a similar fit as a model with two distinct WM factors. Thus, time-constrained elementary activities can be integrated in WM tasks, permitting the assessment of WM in a wider range of populations.

  6. Matter coupling in partially constrained vielbein formulation of massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Felice, Antonio De [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Gümrükçüoğlu, A. Emir [School of Mathematical Sciences, University of Nottingham, University Park, Nottingham, NG7 2RD (United Kingdom); Heisenberg, Lavinia [Institute for Theoretical Studies, ETH Zurich,Clausiusstrasse 47, 8092 Zurich (Switzerland); Mukohyama, Shinji [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Kavli Institute for the Physics and Mathematics of the Universe,Todai Institutes for Advanced Study, University of Tokyo (WPI),5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583 (Japan)

    2016-01-04

    We consider a linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display markedly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields slightly different allowed parameter space.

  7. Matter coupling in partially constrained vielbein formulation of massive gravity

    CERN Document Server

    De Felice, Antonio; Heisenberg, Lavinia; Mukohyama, Shinji

    2015-01-01

    We consider a consistent linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display markedly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields slightly different allowed parameter space.

  8. Matter coupling in partially constrained vielbein formulation of massive gravity

    Science.gov (United States)

    De Felice, Antonio; Gümrükçüoğlu, A. Emir; Heisenberg, Lavinia; Mukohyama, Shinji

    2016-01-01

    We consider a linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display markedly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields slightly different allowed parameter space.

  9. Global recoverable reserve estimation by covariance matching constrained kriging

    Energy Technology Data Exchange (ETDEWEB)

    Tercan, A.E. [Hacettepe University, Ankara (Turkey). Dept. of Mining Engineering

    2004-10-01

    A central problem in mining practice is the estimation of global recoverable reserves, i.e., recovered tonnage and mean quality varying with cut-off value over the whole deposit. This article describes the application of covariance matching constrained kriging to the estimation of global recoverable reserves in a lignite deposit in Turkey. Thickness and calorific value are the variables used in this study. The deposit is divided into 180 panels of 200 m × 200 m, and the mean calorific value of the panels is estimated by covariance matching constrained kriging. A quality-tonnage curve is constructed from the estimated mean values. For comparison, the quality-tonnage curve from ordinary kriging is also provided.

  10. A second-generation constrained reaction volume shock tube.

    Science.gov (United States)

    Campbell, M F; Tulgestke, A M; Davidson, D F; Hanson, R K

    2014-05-01

    We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance.

  11. A second-generation constrained reaction volume shock tube

    Science.gov (United States)

    Campbell, M. F.; Tulgestke, A. M.; Davidson, D. F.; Hanson, R. K.

    2014-05-01

    We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance.

  12. Application of constrained aza-valine analogs for Smac mimicry.

    Science.gov (United States)

    Chingle, Ramesh; Ratni, Sara; Claing, Audrey; Lubell, William D

    2016-05-01

    Constrained azapeptides were designed based on the Ala-Val-Pro-Ile sequence from the second mitochondria-derived activator of caspases (Smac) protein and tested for their ability to induce apoptosis in cancer cells. Diels-Alder cyclizations and Alder-ene reactions on azopeptides enabled construction of a set of constrained aza-valine dipeptide building blocks, which were introduced into mimics using effective coupling conditions to acylate bulky semicarbazide residues. Evaluation of azapeptides 7-11 in MCF-7 breast cancer cells indicated that aza-cyclohexanylglycine analog 11 induced cell death more efficiently than the parent tetrapeptide, likely via a caspase-9-mediated apoptotic pathway. © 2016 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 106: 235-244, 2016.

  13. Hamiltonian analysis of SO(4,1)-constrained BF theory

    Energy Technology Data Exchange (ETDEWEB)

    Durka, R; Kowalski-Glikman, J, E-mail: rdurka@ift.uni.wroc.p, E-mail: jkowalskiglikman@ift.uni.wroc.p [Institute for Theoretical Physics, University of Wroclaw, Pl. Maxa Borna 9, 50-204 Wroclaw (Poland)

    2010-09-21

    In this paper we discuss the canonical analysis of SO(4,1)-constrained BF theory. The action of this theory contains topological terms supplemented by a term that breaks the gauge symmetry down to the Lorentz subgroup SO(3,1). The equations of motion of this theory turn out to be the vacuum Einstein equations. By solving the B field equations one finds that the action contains not only the standard Einstein-Cartan term, but also the Holst term proportional to the inverse of the Immirzi parameter, as well as a combination of topological invariants. We show that the structure of the constraints of SO(4,1)-constrained BF theory is exactly that of gravity in the Holst formulation. We also briefly discuss the quantization of the theory.
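
    An action of the kind described, topological terms plus a symmetry-breaking term, is commonly written in the Freidel-Starodubtsev form; schematically (the coefficients and index conventions here are from memory and should be checked against the paper),

    $$ S=\int\Big(B^{IJ}\wedge F_{IJ}-\frac{\beta}{2}\,B^{IJ}\wedge B_{IJ}-\frac{\alpha}{4}\,\epsilon_{IJKL4}\,B^{IJ}\wedge B^{KL}\Big), $$

    where I, J are SO(4,1) indices and the fixed internal index 4 in the epsilon term is what breaks the gauge symmetry down to the Lorentz subgroup SO(3,1).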

  14. Functional coupling constrains craniofacial diversification in Lake Tanganyika cichlids

    Science.gov (United States)

    Tsuboi, Masahito; Gonzalez-Voyer, Alejandro; Kolm, Niclas

    2015-01-01

    Functional coupling, where a single morphological trait performs multiple functions, is a universal feature of organismal design. Theory suggests that functional coupling may constrain the rate of phenotypic evolution, yet empirical tests of this hypothesis are rare. In fish, the evolutionary transition from guarding the eggs on a sandy/rocky substrate (i.e. substrate guarding) to mouthbrooding introduces a novel function to the craniofacial system and offers an ideal opportunity to test the functional coupling hypothesis. Using a combination of geometric morphometrics and a recently developed phylogenetic comparative method, we found that head morphology evolution was 43% faster in substrate guarding species than in mouthbrooding species. Furthermore, for species in which females were solely responsible for mouthbrooding the males had a higher rate of head morphology evolution than in those with bi-parental mouthbrooding. Our results support the hypothesis that adaptations resulting in functional coupling constrain phenotypic evolution. PMID:25948565

  15. Origin of Constrained Maximal CP Violation in Flavor Symmetry

    CERN Document Server

    He, Hong-Jian; Xu, Xun-Jie

    2015-01-01

    Current data from neutrino oscillation experiments are in good agreement with $\delta=-\pi/2$ and $\theta_{23}=\pi/4$. We define the notion of "constrained maximal CP violation" for these features and study their origin in flavor symmetry models. We give various parametrization-independent definitions of constrained maximal CP violation and present a theorem on how it can be generated. This theorem takes advantage of residual symmetries in the neutrino and charged lepton mass matrices, and states that, up to a few exceptions, $\delta=\pm\pi/2$ and $\theta_{23}=\pi/4$ are generated when those symmetries are real. The often considered $\mu$-$\tau$ reflection symmetry, as well as specific discrete subgroups of $O(3)$, are special cases of our theorem.

  16. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network, described by a dynamic system, for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and the designed network does not include any adjustable parameter. Moreover, the neural network has lower model complexity: the number of its state variables equals the dimension of the optimization problem. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
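
    A projection neural network of this kind is an ODE whose right-hand side is a projected gradient step; simulated with forward Euler it settles at the saddle point. A sketch for a box-constrained quadratic minimax problem (the particular f(x, y), box bounds, and step sizes are illustrative, not from the paper):

```python
import numpy as np

def projection_network_minimax(Q, c, lo, hi, alpha=0.2, dt=0.05, steps=4000):
    """Forward-Euler simulation of the projection dynamic
        z' = P(z - alpha * F(z)) - z
    for the saddle point of f(x, y) = 0.5 x'Qx + c'x + x'y - 0.5 y'y
    over box constraints, where F stacks the x-gradient (descent) and
    the negated y-gradient (ascent) and P clips onto the box."""
    n = Q.shape[0]
    z = np.zeros(2 * n)
    proj = lambda w: np.clip(w, lo, hi)
    for _ in range(steps):
        x, y = z[:n], z[n:]
        F = np.concatenate([Q @ x + c + y,    # grad_x f
                            -(x - y)])        # -grad_y f
        z = z + dt * (proj(z - alpha * F) - z)
    return z[:n], z[n:]

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-1.0, 0.5])
x, y = projection_network_minimax(Q, c, lo=-1.0, hi=1.0)
print(x, y)   # settles near x = (1/3, -1/4), y = x for this interior saddle
```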

  17. Constraining the Charm Yukawa and Higgs-quark Universality

    CERN Document Server

    Perez, Gilad; Stamou, Emmanuel; Tobioka, Kohsaku

    2015-01-01

    We introduce four different types of data-driven analyses, with different levels of robustness, that constrain the size of the Higgs-charm Yukawa coupling: (i) recasting the vector-boson associated, Vh, analyses that search for a bottom-pair final state, using this mode to directly and model-independently constrain the Higgs to charm coupling; ... the direct searches for $h \to J/\psi\gamma$ give y_c/y_c^{SM} < 220; (iv) a global fit to the Higgs signal strengths, y_c/y_c^{SM} < 6.2. A comparison with $t\bar{t}h$ data allows us to show that current data eliminate the possibility that the Higgs couples to quarks in a universal way, consistent with the Standard Model (SM) prediction. Finally, we demonstrate how the experimental collaborations can further improve our direct bound by roughly an order of magnitude via charm tagging, as already used in new physics searches.

  18. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yu-Feng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models, including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy, the dispersive long wave (DLW) hierarchy, and the TB hierarchy, are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. In particular, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and tn-constrained flows, whose adjoint representations and Lax pairs are given.

  19. Lilith: a tool for constraining new physics from Higgs measurements

    Science.gov (United States)

    Bernon, Jérémy; Dumont, Béranger

    2015-09-01

    The properties of the observed Higgs boson with mass around 125 GeV can be affected in a variety of ways by new physics beyond the Standard Model (SM). The wealth of experimental results, targeting the different combinations for the production and decay of a Higgs boson, makes it a non-trivial task to assess the compatibility of a non-SM-like Higgs boson with all available results. In this paper we present Lilith, a new public tool for constraining new physics from signal strength measurements performed at the LHC and the Tevatron. Lilith is a Python library that can also be used in C and C++/ROOT programs. The Higgs likelihood is based on experimental results stored in an easily extensible XML database, and is evaluated from the user input, given in XML format in terms of reduced couplings or signal strengths. The results of Lilith can be used to constrain a wide class of new physics scenarios.

  20. Lilith: a tool for constraining new physics from Higgs measurements

    CERN Document Server

    Bernon, Jeremy

    2015-01-01

    The properties of the observed Higgs boson with mass around 125 GeV can be affected in a variety of ways by new physics beyond the Standard Model (SM). The wealth of experimental results, targeting the different combinations for the production and decay of a Higgs boson, makes it a non-trivial task to assess the compatibility of a non-SM-like Higgs boson with all available results. In this paper we present Lilith, a new public tool for constraining new physics from signal strength measurements performed at the LHC and the Tevatron. Lilith is a Python library that can also be used in C and C++/ROOT programs. The Higgs likelihood is based on experimental results stored in an easily extensible XML database, and is evaluated from the user input, given in XML format in terms of reduced couplings or signal strengths. The results of Lilith can be used to constrain a wide class of new physics scenarios.

  1. Applications of a constrained mechanics methodology in economics

    CERN Document Server

    Janová, Jitka

    2011-01-01

    The paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics, at a level appropriate for undergraduate physics education. The aim of the paper is: 1. to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level, and 2. to enable students to understand the principles and methods routinely used in mechanics more deeply by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations, and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approa...

  2. Quaternionic Kahler Manifolds, Constrained Instantons and the Magic Square: I

    CERN Document Server

    Dasgupta, Keshav; Wissanji, Alisha

    2007-01-01

    The classification of homogeneous quaternionic manifolds has been done by Alekseevskii, Wolf and others using transitive solvable groups of isometries. These manifolds are not generically symmetric, but there is a subset of quaternionic manifolds that are symmetric and Einstein. A further subset of these manifolds are the magic square manifolds. We show that all the symmetric quaternionic manifolds, including the magic square ones, can be succinctly classified by constrained instantons. These instantons are mostly semilocal, and their constructions for the magic square can be done from the corresponding Seiberg-Witten curves for certain N = 2 gauge theories that are in general not asymptotically free. Using these, we give possible constructions, such as the classical moduli space metrics, of constrained instantons with exceptional global symmetries. We also discuss the possibility of realising the Kahler manifolds in the magic square using other solitonic configurations in the theory, and point out an interesting new sequ...

  3. A Note on k-Limited Maximum Base

    Institute of Scientific and Technical Information of China (English)

    Yang Ruishun; Yang Xiaowei

    2006-01-01

    The problem of the k-limited maximum base was specialized to two cases, in which the subset D of the problem is taken to be an independent set and a circuit of the matroid, respectively. It was proved that in these cases the collections of k-limited bases satisfy the base axioms, so a new matroid is determined, and the problem of the k-limited maximum base is transformed into the problem of finding a maximum base of this new matroid. For the two special problems, two algorithms, which are in essence greedy algorithms based on the former matroid, were presented. They were proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
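
    The reduction described above ends in a maximum-base computation on a new matroid, which the generic greedy algorithm solves. A minimal Python sketch of that building block, assuming an independence oracle supplied by the caller (here a toy uniform matroid, not the k-limited construction of the paper):

      # Generic greedy algorithm for a maximum-weight base of a matroid.
      # `independent` is an independence oracle; greedily adding elements in
      # order of decreasing weight is optimal precisely because of the matroid
      # exchange property.
      def greedy_maximum_base(elements, weight, independent):
          base = []
          for e in sorted(elements, key=weight, reverse=True):
              if independent(base + [e]):
                  base.append(e)
          return base

      # toy example: uniform matroid of rank 3 (every set of size <= 3 is independent)
      weights = {"a": 5.0, "b": 4.0, "c": 4.5, "d": 1.0, "e": 2.5}
      is_independent = lambda s: len(s) <= 3
      print(greedy_maximum_base(weights, weights.get, is_independent))
      # -> ['a', 'c', 'b']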

  4. Search for passing-through-walls neutrons constrains hidden braneworlds

    Directory of Open Access Journals (Sweden)

    Michaël Sarrazin

    2016-07-01

    Full Text Available In many theoretical frameworks our visible world is a 3-brane, embedded in a multidimensional bulk, possibly coexisting with hidden braneworlds. Some works have also shown that matter swapping between braneworlds can occur. Here we report the results of an experiment – at the Institut Laue-Langevin (Grenoble, France) – designed to detect thermal neutron swapping to and from another braneworld, thus constraining the probability p² of such an event. The limit, p87 in Planck length units.

  5. Integrating factors and conservation theorems of constrained Birkhoffian systems

    Institute of Scientific and Technical Information of China (English)

    Qiao Yong-Fen; Zhao Shu-Hong; Li Ren-Jie

    2006-01-01

    In this paper the conservation theorems of the constrained Birkhoffian systems are studied by using the method of integrating factors. The differential equations of motion of the system are written. The definition of integrating factors is given for the system. The necessary conditions for the existence of the conserved quantity for the system are studied.The conservation theorem and its inverse for the system are established. Finally, an example is given to illustrate the application of the results.

  6. Lifetime of the solar nebula constrained by meteorite paleomagnetism

    Science.gov (United States)

    Wang, Huapei; Weiss, Benjamin P.; Bai, Xue-Ning; Downey, Brynna G.; Wang, Jun; Wang, Jiajun; Suavet, Clément; Fu, Roger R.; Zucolotto, Maria E.

    2017-02-01

    A key stage in planet formation is the evolution of a gaseous and magnetized solar nebula. However, the lifetime of the nebular magnetic field and nebula are poorly constrained. We present paleomagnetic analyses of volcanic angrites demonstrating that they formed in a near-zero magnetic field; a core dynamo on the angrite parent body did not initiate until about 4 to 11 million years after solar system formation.

  7. Anti-B-B Mixing Constrains Topcolor-Assisted Technicolor

    Energy Technology Data Exchange (ETDEWEB)

    Burdman, Gustavo; Lane, Kenneth; Rador, Tonguc

    2000-12-06

    We argue that extended technicolor augmented with topcolor requires that all mixing between the third and the first two quark generations resides in the mixing matrix of left-handed down quarks. Then, the anti-B_d--B_d mixing that occurs in topcolor models constrains the coloron and Z' boson masses to be greater than about 5 TeV. This implies fine tuning of the topcolor couplings to better than 1 percent.

  8. New Quasidilaton theory in Partially Constrained Vielbein Formalism

    CERN Document Server

    De Felice, Antonio; Heisenberg, Lavinia; Mukohyama, Shinji; Tanahashi, Norihiro

    2016-01-01

    In this work we study the partially constrained vielbein formulation of the new quasidilaton theory of massive gravity which couples to both physical and fiducial metrics simultaneously via a composite effective metric. This formalism improves the new quasidilaton model since the Boulware-Deser ghost is removed fully non-linearly at all scales. This also yields crucial implications in the cosmological applications. We derive the governing cosmological background evolution and study the stability of the attractor solution.

  9. Tulczyjew triples in the constrained dynamics of strings

    Science.gov (United States)

    Grabowski, J.; Grabowska, K.; Urbański, P.

    2016-09-01

    We show that there exists a natural Tulczyjew triple in the dynamics of objects for which the standard (kinematic) configuration space TM is replaced with ∧^n TM. In this framework, which is completely covariant, we geometrically derive the phase equations as well as the Euler-Lagrange equations, including nonholonomic constraints in the picture. Dynamics of strings and a constrained Plateau problem in statics are particular cases of this framework.

  10. Asynchronous parallel generating set search for linearly-constrained optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G.; Griffin, Joshua; Lewis, Robert Michael

    2007-04-01

    We describe an asynchronous parallel derivative-free algorithm for linearly-constrained optimization. Generating set search (GSS) is the basis of our method. At each iteration, a GSS algorithm computes a set of search directions and corresponding trial points and then evaluates the objective function value at each trial point. Asynchronous versions of the algorithm have been developed in the unconstrained and bound-constrained cases which allow the iterations to continue (and new trial points to be generated and evaluated) as soon as any other trial point completes. This enables better utilization of parallel resources and a reduction in overall runtime, especially for problems where the objective function takes minutes or hours to compute. For linearly-constrained GSS, the convergence theory requires that the set of search directions conforms to the nearby boundary. The complexity of developing the asynchronous algorithm for the linearly-constrained case has to do with maintaining a suitable set of search directions as the search progresses and is the focus of this research. We describe our implementation in detail, including how to avoid function evaluations by caching function values and using approximate look-ups. We test our implementation on every CUTEr test problem with general linear constraints and up to 1000 variables. Without tuning to individual problems, our implementation was able to solve 95% of the test problems with 10 or fewer variables, 75% of the problems with 11-100 variables, and nearly half of the problems with 100-1000 variables. To the best of our knowledge, these are the best results that have ever been achieved with a derivative-free method. Our asynchronous parallel implementation is freely available as part of the APPSPACK software.
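
    A serial, unconstrained sketch of the GSS iteration that the paper parallelizes may make the structure concrete; the coordinate directions serve as the generating set, the step is halved on failure, and the linear-constraint handling and asynchrony that are the paper's actual contributions are omitted:

      # Serial generating set search (GSS) for unconstrained minimization.
      import numpy as np

      def gss_minimize(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
          x = np.asarray(x0, dtype=float)
          n = x.size
          directions = np.vstack([np.eye(n), -np.eye(n)])   # generating set +/- e_i
          fx = f(x)
          for _ in range(max_iter):
              if step < tol:
                  break
              # evaluate all trial points (this loop is what gets parallelized)
              trials = [(f(x + step * d), x + step * d) for d in directions]
              fbest, xbest = min(trials, key=lambda t: t[0])
              if fbest < fx:            # success: accept, keep the step size
                  x, fx = xbest, fbest
              else:                     # failure: contract the step
                  step *= 0.5
          return x, fx

      rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
      print(gss_minimize(rosen, [-1.2, 1.0]))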

  11. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems in which the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  12. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    Science.gov (United States)

    2006-05-04

    D. Scope and Outline of the Paper: The main focus of the paper is on the analysis of a heuristic algorithm which effectively... CCRP Algorithms: In this section, we present a generic description of the Capacity Constrained Route Planner (CCRP). CCRP is a heuristic algorithm which... qualifies to be a candidate algorithm. E. Solution Quality of CCRP: Since CCRP is a heuristic algorithm, it does not produce optimal solutions for all...

  13. Effects of voluntary constraining of thoracic displacement during hypercapnia.

    Science.gov (United States)

    Chonan, T; Mulholland, M B; Cherniack, N S; Altose, M D

    1987-11-01

    The study evaluated the interrelationships between the extent of thoracic movements and respiratory chemical drive in shaping the intensity of the sensation of dyspnea. Normal subjects rated their sensations of dyspnea as PCO2 increased during free rebreathing and during rebreathing while ventilation was voluntarily maintained at a constant base-line level. Another trial evaluated the effects on the intensity of dyspnea of a voluntary reduction in the level of ventilation while PCO2 was held constant. During rebreathing, there was a power function relationship between changes in PCO2 and the intensity of dyspnea. At a given PCO2, constraining tidal volume and breathing frequency to the prerebreathing base-line level resulted in an increase in dyspnea. The fractional differences in the intensity of dyspnea between free and constrained rebreathing were independent of PCO2. However, the absolute difference in the intensity of dyspnea between free and constrained rebreathing enlarged with increasing hypercapnia. At PCO2 of 50 Torr, this difference correlated significantly with the increase in both minute ventilation (r = 0.675) and tidal volume (r = 0.757) above the base line during free rebreathing. Similarly, during steady-state hypercapnia at 50 Torr PCO2, the intensity of dyspnea increased progressively as ventilation was voluntarily reduced from the spontaneously adopted free-breathing level. These results indicate that dyspnea increases with the level of respiratory chemical drive but that the intensity of the sensation is further accentuated when ventilation is constrained below that demanded by the level of chemical drive. This may be explained by a loss of inhibitory feedback from lung or chest wall mechanoreceptors acting on brain stem and/or cortical centers.

  14. Receding horizon H∞ control for constrained time-delay systems

    Institute of Scientific and Technical Information of China (English)

    Lu Mei; Jin Chengbo; Shao Huihe

    2009-01-01

    A receding horizon H∞ control algorithm is presented for linear discrete time-delay systems in the presence of constrained inputs and disturbances. The disturbance attenuation level is optimized at each time instant, and the receding optimization problem includes several linear matrix inequality constraints. When a convex hull is applied to represent the saturating input, the algorithm achieves better performance, as a numerical example verifies.

  15. A Riccati approach for constrained linear quadratic optimal control

    Science.gov (United States)

    Sideris, Athanasios; Rodriguez, Luis A.

    2011-02-01

    An active-set method is proposed for solving linear quadratic optimal control problems subject to general linear inequality path constraints including mixed state-control and state-only constraints. A Riccati-based approach is developed for efficiently solving the equality constrained optimal control subproblems generated during the procedure. The solution of each subproblem requires computations that scale linearly with the horizon length. The algorithm is illustrated with numerical examples.
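
    The Riccati-based kernel can be illustrated by the standard backward recursion for an unconstrained discrete-time LQ problem; the dynamics, weights and horizon below are arbitrary toy values, and the active-set bookkeeping for inequality constraints is omitted:

      # Backward Riccati recursion for min sum_k x'Qx + u'Ru subject to
      # x_{k+1} = A x_k + B u_k; cost per solve is linear in the horizon N,
      # which is the property exploited for the equality-constrained subproblems.
      import numpy as np

      def lqr_riccati(A, B, Q, R, N):
          P = Q.copy()
          gains = []
          for _ in range(N):                          # backward pass
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
              P = Q + A.T @ P @ (A - B @ K)
              gains.append(K)
          return gains[::-1], P                       # K_0 ... K_{N-1}

      A = np.array([[1.0, 0.1], [0.0, 1.0]])          # double integrator, dt = 0.1
      B = np.array([[0.005], [0.1]])
      Q, R = np.eye(2), np.array([[0.1]])
      gains, _ = lqr_riccati(A, B, Q, R, N=50)
      x = np.array([1.0, 0.0])
      for K in gains:                                 # forward rollout, u_k = -K_k x_k
          x = A @ x - B @ (K @ x)
      print("final state:", x)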

  16. From global fits of neutrino data to constrained sequential dominance

    CERN Document Server

    Björkeroth, Fredrik

    2014-01-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We perform a global analysis on a class of CSD($n$) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  17. Multivariable controller for discrete stochastic amplitude-constrained systems

    Directory of Open Access Journals (Sweden)

    Hannu T. Toivonen

    1983-04-01

    Full Text Available A sub-optimal multivariable controller for discrete stochastic amplitude-constrained systems is presented. In the approach the regulator structure is restricted to the class of linear saturated feedback laws. The stationary covariances of the controlled system are evaluated by approximating the stationary probability distribution of the state by a Gaussian distribution. An algorithm for minimizing a quadratic loss function is given, and examples are presented to illustrate the performance of the sub-optimal controller.

  18. SCOR: Software-defined Constrained Optimal Routing Platform for SDN

    OpenAIRE

    Layeghy, Siamak; Pakzad, Farzaneh; Portmann, Marius

    2016-01-01

    A Software-defined Constrained Optimal Routing (SCOR) platform is introduced as a Northbound interface in SDN architecture. It is based on constraint programming techniques and is implemented in MiniZinc modelling language. Using constraint programming techniques in this Northbound interface has created an efficient tool for implementing complex Quality of Service routing applications in a few lines of code. The code includes only the problem statement and the solution is found by a general s...

  19. Dust Continuum Observations of Protostars: Constraining Properties with Simulations

    CERN Document Server

    Offner, Stella S R

    2012-01-01

    The properties of unresolved protostars and their local environment (e.g., disk, envelope and outflow characteristics) are frequently inferred from spectral energy distributions (SEDs) through comparison with idealized model SEDs. However, if it is not possible to image a source and its environment directly, it is difficult to constrain and evaluate the accuracy of these derived properties. In this proceeding, I present a brief overview of the reliability of SED modeling by analyzing dust continuum synthetic observations of realistic simulations.

  20. Distributionally Robust Joint Chance Constrained Problem under Moment Uncertainty

    Directory of Open Access Journals (Sweden)

    Ke-wei Ding

    2014-01-01

    Full Text Available We discuss and develop the convex approximation for robust joint chance constraints under uncertainty in the first- and second-order moments. Robust chance constraints are approximated by worst-case CVaR constraints, which can be reformulated as a semidefinite program; the chance-constrained problem can then be presented as a semidefinite program. We also find that the approximation for robust joint chance constraints has an equivalent individual quadratic approximation form.
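
    For a single chance constraint the moment-based worst case has a well-known second-order cone form: if xi has mean mu and covariance Sigma but an otherwise arbitrary distribution, requiring the worst-case Pr(xi'x > b) to be at most eps is equivalent to mu'x + sqrt((1-eps)/eps)*||Sigma^(1/2)x|| <= b. A cvxpy sketch on invented data follows; the joint-constraint semidefinite machinery of the paper is not reproduced here.

      # Worst-case (distributionally robust) individual chance constraint as an SOCP.
      import numpy as np
      import cvxpy as cp

      np.random.seed(0)
      n, eps, b = 4, 0.05, 10.0
      mu = np.array([1.0, 2.0, 0.5, 1.5])             # mean of the uncertain vector
      M = np.random.randn(n, n)
      Sigma = M @ M.T + 0.1 * np.eye(n)               # its covariance
      L = np.linalg.cholesky(Sigma)                   # Sigma = L L', so x'Sigma x = ||L'x||^2

      x = cp.Variable(n)
      kappa = np.sqrt((1 - eps) / eps)
      constraints = [mu @ x + kappa * cp.norm(L.T @ x, 2) <= b,  # robust chance constraint
                     x >= 0]
      prob = cp.Problem(cp.Maximize(mu @ x), constraints)
      prob.solve()
      print("optimal x:", np.round(x.value, 3), " objective:", round(prob.value, 3))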

  1. Search for passing-through-walls neutrons constrains hidden braneworlds

    Science.gov (United States)

    Sarrazin, Michaël; Pignol, Guillaume; Lamblin, Jacob; Pinon, Jonhathan; Méplan, Olivier; Terwagne, Guy; Debarsy, Paul-Louis; Petit, Fabrice; Nesvizhevsky, Valery V.

    2016-07-01

    In many theoretical frameworks our visible world is a 3-brane, embedded in a multidimensional bulk, possibly coexisting with hidden braneworlds. Some works have also shown that matter swapping between braneworlds can occur. Here we report the results of an experiment - at the Institut Laue-Langevin (Grenoble, France) - designed to detect thermal neutron swapping to and from another braneworld, thus constraining the probability p² of such an event. The limit, p 87 in Planck length units.

  2. Nonlinear algebraic multigrid for constrained solid mechanics problems using Trilinos

    OpenAIRE

    Gee, M.W.; R. S. Tuminaro

    2012-01-01

    The application of the finite element method to nonlinear solid mechanics problems results in the necessity to repeatedly solve a large nonlinear set of equations. In this paper we limit ourselves to problems arising in constrained solid mechanics. It is common to apply some variant of Newton's method or a Newton-Krylov method to such problems. Often, an analytic Jacobian matrix is formed and used in the above mentioned methods. However, if no analytic Jacobian is given, Newton metho...

  3. Moving forward to constrain the shear viscosity of QCD matter

    OpenAIRE

    Denicol, Gabriel; Monnai, Akihiko; Schenke, Bjoern

    2015-01-01

    We demonstrate that measurements of rapidity differential anisotropic flow in heavy ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio $\eta/s$ of QCD matter. Comparing results from hydrodynamic calculations with experimental data from RHIC, we find evidence for a small $\eta/s \approx 0.04$ in the QCD cross-over region and a strong temperature dependence in the hadronic phase. A temperature independent $\eta/s$ is disfavored by the data....

  4. Dynamical spacetimes and gravitational radiation in a Fully Constrained Formulation

    CERN Document Server

    Cordero-Carrión, Isabel; Ibáñez, José María

    2010-01-01

    This contribution summarizes the recent work carried out to analyze the behavior of the hyperbolic sector of the Fully Constrained Formulation (FCF) derived in Bonazzola et al. 2004. The numerical experiments presented here allow one to be confident in the performance of the upgraded version of the CoCoNuT code, in which the Conformally Flat Condition (CFC) approximation of the Einstein equations is replaced by the FCF.

  5. Dynamical spacetimes and gravitational radiation in a Fully Constrained Formulation

    Energy Technology Data Exchange (ETDEWEB)

    Cordero-Carrion, Isabel; Ibanez, Jose Maria [Departamento de Astronomia y Astrofisica, Universidad de Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Valencia (Spain); Cerda-Duran, Pablo, E-mail: isabel.cordero@uv.e, E-mail: cerda@mpa-garching.mpg.d, E-mail: jose.m.ibanez@uv.e [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Strasse 1, D-85741 Garching (Germany)

    2010-05-01

    This contribution summarizes the recent work carried out to analyze the behavior of the hyperbolic sector of the Fully Constrained Formulation (FCF) derived in Bonazzola et al. 2004. The numerical experiments presented here allow one to be confident in the performance of the upgraded version of the CoCoNuT code, in which the Conformally Flat Condition (CFC) approximation of the Einstein equations is replaced by the FCF.

  6. LINEAR SYSTEMS ASSOCIATED WITH NUMERICAL METHODS FOR CONSTRAINED OPITMIZATION

    Institute of Scientific and Technical Information of China (English)

    Y. Yuan

    2003-01-01

    Linear systems associated with numerical methods for constrained optimization are discussed in this paper. It is shown that the corresponding subproblems arising in most well-known methods, whether line search methods or trust region methods for constrained optimization, can be expressed as similar systems of linear equations. All these linear systems can be viewed as some kind of approximation to the linear system derived by the Lagrange-Newton method. Some properties of these linear systems are analyzed.

  7. An Interval Maximum Entropy Method for Quadratic Programming Problem

    Institute of Scientific and Technical Information of China (English)

    RUI Wen-juan; CAO De-xin; SONG Xie-wu

    2005-01-01

    Combining the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
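
    The maximum entropy function at the heart of this construction is the log-sum-exp smoothing of a pointwise maximum: max_i g_i(x) <= (1/p) ln sum_i exp(p g_i(x)) <= max_i g_i(x) + ln(m)/p. A minimal sketch on an invented toy QP, without the interval extension and region deletion rules that are the paper's contribution:

      # Maximum-entropy (log-sum-exp) smoothing of the constraint violation
      # max(0, g_1(x), ..., g_m(x)) inside a penalty method for a QP.
      import numpy as np
      from scipy.optimize import minimize

      # QP: minimize 0.5 x'Hx + c'x  subject to  A x - b <= 0  (toy data)
      H = np.array([[2.0, 0.5], [0.5, 1.0]])
      c = np.array([-2.0, -6.0])
      A = np.array([[1.0, 1.0], [-1.0, 2.0], [2.0, 1.0]])
      b = np.array([2.0, 2.0, 3.0])

      def penalized(x, p=50.0, sigma=100.0):
          g = A @ x - b
          # smooth surrogate for max(0, g_i): log-sum-exp over (p*g, 0)
          F = np.logaddexp.reduce(np.append(p * g, 0.0)) / p
          return 0.5 * x @ H @ x + c @ x + sigma * F

      res = minimize(penalized, x0=np.zeros(2), method="BFGS")
      print("approximate constrained minimizer:", np.round(res.x, 4))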

  8. 3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum

    Directory of Open Access Journals (Sweden)

    Maxim Kramar

    2016-08-01

    Full Text Available Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from 1.5 to 4 R_⊙ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in the 195 Å band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below ˜ 2.5 R_⊙. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the accuracy of the magnetic field approximations for coronal modeling.

  9. 3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum

    Science.gov (United States)

    Kramar, Maxim; Airapetian, Vladimir; Lin, Haosheng

    2016-08-01

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from 1.5 to 4 R_⊙ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in 195 Å band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below ˜ 2.5 R_⊙. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the accuracy of the magnetic field approximations for coronal modeling.

  10. 3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum

    CERN Document Server

    Kramar, Maxim; Lin, Haosheng

    2016-01-01

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from $1.5$ to $4\ \mathrm{R}_\odot$ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in the 195 \AA\ band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below $\sim 2.5\ \mathrm{R}_\odot$. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the a...

  11. Constraining the volatile fraction of planets from transit observations

    CERN Document Server

    Alibert, Yann

    2016-01-01

    The determination of the abundance of volatiles in extrasolar planets is very important as it can provide constraints on transport in protoplanetary disks and on the formation location of planets. However, constraining the internal structure of low-mass planets from transit measurements is known to be a degenerate problem. Using planetary structure and evolution models, we show how observations of transiting planets can be used to constrain their internal composition, in particular the amount of volatiles in the planetary interior, and consequently the amount of gas (defined in this paper to be only H and He) that the planet harbors. We show for low-mass gas-poor planets that are located close to their central star that assuming evaporation has efficiently removed the entire gas envelope, it is possible to constrain the volatile fraction of close-in transiting planets. We illustrate this method on the example of 55 Cnc e and show that under the assumption of the absence of gas, the measured mass and radius im...

  12. AN ADAPTIVE TRUST REGION METHOD FOR EQUALITY CONSTRAINED OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    ZHANG Juliang; ZHANG Xiangsun; ZHUO Xinjian

    2003-01-01

    In this paper, a trust region method for equality constrained optimization based on a nondifferentiable exact penalty is proposed. In this algorithm, the trial step is characterized by computation of its normal component being separated from computation of its tangential component; that is, only the tangential component of the trial step is constrained by the trust radius, while the normal component and the trial step itself are unconstrained. The other main characteristic of the algorithm is the choice of the trust region radius, which uses the information of the gradient of the objective function and the reduced Hessian. However, the Maratos effect can occur when the nondifferentiable exact penalty function is used as the merit function. In order to obtain superlinear convergence of the algorithm, we use a second-order correction technique. Because of the speciality of the adaptive trust region method, we apply the second-order correction when p = 0 (the definition is as in Section 2), which differs from traditional trust region methods for equality constrained optimization, so the computational cost of the algorithm is reduced. Moreover, we prove that the algorithm is globally and superlinearly convergent.

  13. Lifespan theorem for simple constrained surface diffusion flows

    CERN Document Server

    Wheeler, Glen

    2012-01-01

    We consider closed immersed hypersurfaces in $\R^3$ and $\R^4$ evolving by a special class of constrained surface diffusion flows. This class of constrained flows includes the classical surface diffusion flow. In this paper we present a Lifespan Theorem for these flows, which gives a positive lower bound on the time for which a smooth solution exists, and a small upper bound on the total curvature during this time. The hypothesis of the theorem is that the surface is not already singular in terms of concentration of curvature. This turns out to be a deep property of the initial manifold, as the lower bound on maximal time obtained depends precisely upon the concentration of curvature of the initial manifold in $L^2$ for $M^2$ immersed in $\R^3$ and additionally on the concentration in $L^3$ for $M^3$ immersed in $\R^4$. This is stronger than a previous result on a different class of constrained surface diffusion flows, as here we obtain an improved lower bound on maximal time, a better estimate during this peri...

  14. Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Weishang Gao

    2013-01-01

    Full Text Available Evolutionary algorithms (EAs) were shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and improper penalties in EAs with penalty functions can lead to losing a global optimum lying near or on the constrained boundary. Determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiagents guiding exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA potential advantage is detected by three kinds of agents. The scale and the density of agents change dynamically according to the emergence of potentially optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolving with multiagents can not only effectively avoid the problem of determining the penalty coefficient but also quickly converge to the global optimum lying near or on the constrained boundary. By examining the rapidity and veracity of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.

  15. Performance enhancement for GPS positioning using constrained Kalman filtering

    Science.gov (United States)

    Guo, Fei; Zhang, Xiaohong; Wang, Fuhong

    2015-08-01

    Over the past decades Kalman filtering (KF) algorithms have been extensively investigated and applied in the area of kinematic positioning. In the application of KF to kinematic precise point positioning (PPP), it is often the case that some known functional or theoretical relations exist among the unknown state parameters, which can and should be made use of to enhance the performance of kinematic PPP, especially in urban and forest environments. The central task of this paper is to effectively blend the commonly used GNSS data and additional internal/external constraint information to generate an optimal PPP solution. This paper first investigates the basic algorithm of constrained Kalman filtering. Then two types of PPP model, with speed constraints and trajectory constraints, respectively, are proposed. Further validation tests based on a variety of situations show that the positioning performance (positioning accuracy, reliability and continuity) of the constrained Kalman filter is significantly superior to that of the conventional Kalman filter, particularly under extremely poor observation conditions.
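
    One standard way to impose such known relations is estimate projection after the ordinary measurement update, sketched below; the filter model and the hard "known velocity" constraint are toy values, not the speed and trajectory constraint models developed in the paper:

      # Equality-constrained Kalman filtering by projecting the updated state
      # onto {x : D x = d} with the covariance-weighted (minimum-variance) projection.
      import numpy as np

      def kf_update(x, P, z, H, R):
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      def project_onto_constraint(x, P, D, d):
          # x_c = x - P D'(D P D')^{-1} (D x - d)
          W = P @ D.T @ np.linalg.inv(D @ P @ D.T)
          return x - W @ (D @ x - d), (np.eye(len(x)) - W @ D) @ P

      # toy position/velocity state with the hard constraint "velocity = 1.0"
      x, P = np.array([0.0, 0.8]), np.eye(2)
      H, R = np.array([[1.0, 0.0]]), np.array([[0.25]])    # position measurement
      x, P = kf_update(x, P, z=np.array([0.3]), H=H, R=R)
      x, P = project_onto_constraint(x, P, D=np.array([[0.0, 1.0]]), d=np.array([1.0]))
      print("constrained estimate:", x)                    # velocity is exactly 1.0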

  16. A Constrained CA Model for Planning Simulation Incorporating Institutional Constraints

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In recent years, it has become prevalent to simulate urban growth by means of cellular automata (CA for short) modeling, which is based on self-organizing theories and differs from system dynamics modeling. Since the urban system is decidedly complex, the CA models applied in urban growth simulation should take into consideration not only the neighborhood influence, but also other factors influencing urban development. We put forward the term complex constrained CA (CC-CA for short) model, which integrates the constrained conditions of neighborhood, macro socio-economy, space and institution. In particular, constrained construction zoning, as one institutional constraint, is considered in the CC-CA modeling. In the paper, the conceptual CC-CA model is introduced together with the transition rules. Based on the CC-CA model for Beijing, we discuss the complex constraints on the urban development of Beijing, and we show how to set institutional constraints in a planning scenario to control the urban growth pattern of Beijing.

  17. Constraining anisotropic models of early Universe with WMAP9 data

    CERN Document Server

    Ramazanov, Sabir

    2013-01-01

    We constrain several models of the early Universe that predict statistical anisotropy of the CMB sky. We make use of WMAP9 maps deconvolved with beam asymmetries. As compared to previous releases of WMAP data, they do not exhibit the anomalously large quadrupole of the statistical anisotropy. This allows us to strengthen limits on the parameters of models established earlier in the literature. In particular, the amplitude of the special quadrupole, whose direction is aligned with ecliptic poles, is now constrained as g_* = 0.002 \pm 0.041 at 95% CL (\pm 0.020 at 68% CL). An upper limit is obtained on the total number of e-folds N_{tot} in anisotropic inflation with the Maxwellian term non-minimally coupled to the inflaton. We also constrain models of the (pseudo-)Conformal Universe. The strongest constraint is obtained for spectator scenarios involving a long stage of subhorizon evolution after conformal rolling, which reads h^2 < 0.006 at 95% CL, in terms ...

  18. Applications of a constrained mechanics methodology in economics

    Energy Technology Data Exchange (ETDEWEB)

    Janova, Jitka, E-mail: janova@mendelu.cz [Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemedelska 1, 613 00 Brno (Czech Republic)

    2011-11-15

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economical growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of the solution interpretation in economics compared to mechanics is discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided and an economic interpretation of the Lagrange multipliers (possibly surprising for the students of physics) is carefully explained. This paper can be used by the undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.

  19. Counterexamples to convergence theorem of maximum-entropy clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生

    2003-01-01

    In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the maximum-entropy clustering algorithm is not new in essence, and construct two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function but to a saddle point. Based on these results, our paper shows that the convergence theorem of the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.

  20. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation and maximum cost... The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at an An/Ap ratio determined by the electrical resistance and heat conductivity of the considered materials.

  1. Integer Programming Model for Maximum Clique in Graph

    Institute of Scientific and Technical Information of China (English)

    YUAN Xi-bo; YANG You; ZENG Xin-hai

    2005-01-01

    The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier results, were designed in this paper. The programming model for the maximum independent set is then a corollary of the main results. These two models can be easily applied in computer algorithms and software, and are suitable for graphs of any scale. Finally the models are presented as Lingo algorithms, verified and compared by several examples.
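
    The classical binary-programming model encodes a clique through the complement graph: at most one endpoint of every non-edge may be selected. Below is a sketch of that model using the PuLP modelling library on an invented toy graph (not the Lingo implementation of the paper):

      # Integer program for maximum clique: maximize sum_v x_v subject to
      # x_u + x_v <= 1 for every NON-adjacent pair {u, v}, x_v binary.
      from itertools import combinations
      from pulp import LpProblem, LpMaximize, LpVariable, lpSum

      # toy graph: a 4-clique {0,1,2,3} plus the pendant vertex 4
      vertices = range(5)
      edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)}

      x = {v: LpVariable(f"x{v}", cat="Binary") for v in vertices}
      prob = LpProblem("max_clique", LpMaximize)
      prob += lpSum(x.values())                        # objective: clique size
      for u, v in combinations(vertices, 2):
          if (u, v) not in edges and (v, u) not in edges:
              prob += x[u] + x[v] <= 1                 # non-edge constraint
      prob.solve()
      print("maximum clique:", [v for v in vertices if x[v].value() == 1])
      # -> [0, 1, 2, 3]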

  2. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation

  3. Constrained statistical inference: sample-size tables for ANOVA and regression.

    Science.gov (United States)

    Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves

    2014-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses the gain in sample-size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample-size at a pre-specified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample-size decreases by 30-50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., β1 > β2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., β1 > 0).
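
    The flavor of such a Monte Carlo study can be conveyed by a much simpler simulation: the power gain of a one-sided (order-constrained) test over a two-sided one in a two-group comparison. The effect size, sample sizes and test below are illustrative choices, not the simulation design of the article:

      # Simulated power: constrained (one-sided) vs. unconstrained (two-sided) test.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def power(n, effect=0.4, reps=5000, alpha=0.05):
          one_sided = two_sided = 0
          for _ in range(reps):
              g1 = rng.normal(effect, 1.0, n)            # group with the larger mean
              g2 = rng.normal(0.0, 1.0, n)
              t, p = stats.ttest_ind(g1, g2)
              two_sided += p < alpha
              one_sided += (p / 2 < alpha) and (t > 0)   # imposes the order constraint
          return one_sided / reps, two_sided / reps

      for n in (25, 50, 100):
          p1, p2 = power(n)
          print(f"n={n:3d}: power constrained={p1:.2f}, unconstrained={p2:.2f}")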

  4. Does Aspartic Acid Racemization Constrain the Depth Limit of the Subsurface Biosphere?

    Science.gov (United States)

    Onstott, T C.; Magnabosco, C.; Aubrey, A. D.; Burton, A. S.; Dworkin, J. P.; Elsila, J. E.; Grunsfeld, S.; Cao, B. H.; Hein, J. E.; Glavin, D. P.; Kieft, T. L.; Silver, B. J.; Phelps, T. J.; Heerden, E. Van; Opperman, D. J.; Bada, J. L.

    2013-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of approximately 89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.

  5. Does aspartic acid racemization constrain the depth limit of the subsurface biosphere?

    Energy Technology Data Exchange (ETDEWEB)

    Onstott, T. C. [Princeton University; Aubrey, A.D. [Jet Propulsion Laboratory, Pasadena, CA; Kieft, T L [New Mexico Institute of Mining and Technology; Silver, B J [Jet Propulsion Laboratory, Pasadena, CA; Phelps, Tommy Joe [ORNL; Van Heerden, E. [University of the Free State; Opperman, D. J. [University of the Free State; Bada, J L. [Geosciences Research Division, Scripps Instition of Oceanography, Univesity of California San Diego,

    2014-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of ~89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.

  6. Does aspartic acid racemization constrain the depth limit of the subsurface biosphere?

    Science.gov (United States)

    Onstott, T C; Magnabosco, C; Aubrey, A D; Burton, A S; Dworkin, J P; Elsila, J E; Grunsfeld, S; Cao, B H; Hein, J E; Glavin, D P; Kieft, T L; Silver, B J; Phelps, T J; van Heerden, E; Opperman, D J; Bada, J L

    2014-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of ~89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.
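
    The turnover estimate rests on first-order reversible racemization kinetics, ln[(1 + D/L)/(1 - D/L)] = 2 k(T) t for aspartic acid (equilibrium constant near 1), with an Arrhenius temperature dependence for the rate k. In the sketch below the activation energy, pre-exponential factor and D/L values are illustrative placeholders, not the calibrated values used in the study:

      # Racemization-based upper bound on protein turnover time.
      import numpy as np

      R_GAS = 8.314          # J mol^-1 K^-1
      EA = 1.23e5            # J mol^-1, assumed activation energy (placeholder)
      A0 = 1.5e18            # yr^-1, assumed pre-exponential factor (placeholder)

      def k_racemization(T_celsius):
          return A0 * np.exp(-EA / (R_GAS * (T_celsius + 273.15)))

      def turnover_time(dl, T_celsius):
          """Years needed to reach the observed D/L if no protein were replaced."""
          return np.log((1 + dl) / (1 - dl)) / (2 * k_racemization(T_celsius))

      for T, dl in [(27.0, 0.05), (54.0, 0.05)]:
          print(f"T={T:.0f} C, D/L={dl}: turnover <= {turnover_time(dl, T):.1f} yr")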

  7. Use of Traffic Intent Information by Autonomous Aircraft in Constrained Operations

    Science.gov (United States)

    Wing, David J.; Barmore, Bryan E.; Krishnamurthy, Karthik

    2002-01-01

    This paper presents findings of a research study designed to provide insight into the issue of intent information exchange in constrained en-route air-traffic operations and its effect on pilot decision-making and flight performance. The piloted simulation was conducted in the Air Traffic Operations Laboratory at the NASA Langley Research Center. Two operational modes for autonomous flight management were compared under conditions of low and high operational complexity (traffic and airspace hazard density). The tactical mode was characterized primarily by the use of traffic state data for conflict detection and resolution and a manual approach to meeting operational constraints. The strategic mode involved the combined use of traffic state and intent information, provided the pilot an additional level of alerting, and allowed an automated approach to meeting operational constraints. Operational constraints applied in the experiment included separation assurance, schedule adherence, airspace hazard avoidance, flight efficiency, and passenger comfort. The strategic operational mode was found to be effective in reducing unnecessary maneuvering in conflict situations where the intruder's intended maneuvers would resolve the conflict. Conditions of high operational complexity and vertical maneuvering resulted in increased proliferation of conflicts, but both operational modes exhibited characteristics of stability based on observed conflict proliferation rates of less than 30 percent. Scenario case studies illustrated the need for maneuver flight restrictions to prevent the creation of new conflicts through maneuvering and the need for an improved user interface design that appropriately focuses the pilot's attention on conflict prevention information. Pilot real-time assessment of maximum workload indicated minimal sensitivity to operational complexity, providing further evidence that pilot workload is not the limiting factor for feasibility of an en-route distributed

  8. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  9. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    § 842.35 Depreciation and maximum allowances (Administrative Claims; Personnel Claims, 31 U.S.C. 3701, 3721). The military services have jointly established the “Allowance List-Depreciation Guide”...

  10. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
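
    The iteration itself is compact: memberships are Gibbs distributions over squared distances at an inverse temperature beta, and centers are the membership-weighted means; as beta grows the scheme approaches hard C-means. A minimal sketch on invented data (the choice of beta and any annealing schedule are left out):

      # Maximum-entropy (soft C-means style) clustering iteration.
      import numpy as np

      def max_entropy_clustering(X, k, beta=5.0, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          for _ in range(iters):
              d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)   # n x k
              logits = -beta * d2
              logits -= logits.max(axis=1, keepdims=True)               # stabilize exp
              p = np.exp(logits)
              p /= p.sum(axis=1, keepdims=True)                         # soft memberships
              centers = (p.T @ X) / p.sum(axis=0)[:, None]              # weighted means
          return centers, p

      rng = np.random.default_rng(42)
      X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
      centers, _ = max_entropy_clustering(X, k=2)
      print("centers:\n", np.round(centers, 2))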

  11. Maximum Atmospheric Entry Angle for Specified Retrofire Impulse

    Directory of Open Access Journals (Sweden)

    T. N. Srivastava

    1969-07-01

    Full Text Available Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated and it is shown that tangential retrofire impulse at the apogee results in the maximum entry angle. Equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.

  12. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
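
    The reweighting form this principle leads to is simple: the minimally perturbed weights are w_i proportional to w0_i exp(-lambda f_i), with the multiplier lambda fixed by the experimental constraint. A sketch on synthetic observable values (one observable and one constraint; real applications carry one multiplier per constraint):

      # Maximum-entropy reweighting of a simulated ensemble to match one
      # experimental average <f> = f_exp; observable values are synthetic.
      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(3)
      f = rng.normal(1.0, 0.5, size=1000)     # observable per simulation frame
      w0 = np.full(f.size, 1.0 / f.size)      # initial (uniform) weights
      f_exp = 1.2                             # target experimental average

      def reweighted_avg(lam):
          w = w0 * np.exp(-lam * f)
          w /= w.sum()
          return w @ f

      # the reweighted average is monotone decreasing in lambda, so root-find:
      lam = brentq(lambda l: reweighted_avg(l) - f_exp, -50.0, 50.0)
      w = w0 * np.exp(-lam * f); w /= w.sum()
      print(f"lambda = {lam:.3f}, reweighted <f> = {w @ f:.3f}")
      print("effective sample size:", round(1.0 / np.sum(w**2)))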

  13. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    § 838.711 Maximum former spouse survivor annuity (Orders Awarding Former Spouse Survivor Annuities; Limitations on Survivor Annuities). (a) Under CSRS, payments under a court order may not exceed the...

  14. 78 FR 13999 - Maximum Interest Rates on Guaranteed Farm Loans

    Science.gov (United States)

    2013-03-04

    ... September 30, 2008 (73 FR 56754-56756). The proposed rule included provisions tying maximum rates to widely... York Prime rate plus 4 percent. The maximums should be the same for all FOs, regardless of size... Review,'' and Executive Order 13563, ``Improving Regulation and Regulatory Review,'' direct agencies...

  15. 48 CFR 436.575 - Maximum workweek-construction schedule.

    Science.gov (United States)

    2010-10-01

    § 436.575 Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...

  16. 49 CFR 174.86 - Maximum allowable operating speed.

    Science.gov (United States)

    2010-10-01

    § 174.86 Maximum allowable operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...

  17. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    § 226.52 Total annuity subject to maximum (Computing Employee, Spouse, and Divorced Spouse Annuities; Railroad Retirement Family Maximum). ... rate effective on the date the supplemental annuity begins, before any reduction for a private pension...

  18. Distribution of maximum loss of fractional Brownian motion with drift

    OpenAIRE

    Çağlar, Mine; Vardar-Acar, Ceren

    2013-01-01

    In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
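
    The quantity in question is easy to explore by simulation; below is a sketch using exact fBm generation via a Cholesky factor of the covariance (parameters are toy values, and this approach is feasible only for modest path lengths):

      # Monte Carlo estimate of the maximum loss sup_{s<=t} (B_H(s) - B_H(t)).
      import numpy as np

      def fbm_paths(H, n_steps, T=1.0, n_paths=2000, seed=0):
          t = np.linspace(T / n_steps, T, n_steps)
          s, u = np.meshgrid(t, t)
          cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
          L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
          z = np.random.default_rng(seed).standard_normal((n_paths, n_steps))
          paths = z @ L.T                                    # rows are fBm paths
          return np.hstack([np.zeros((n_paths, 1)), paths])  # prepend B_H(0) = 0

      def max_loss(path):
          running_max = np.maximum.accumulate(path)
          return (running_max - path).max()                  # largest drawdown

      for H in (0.5, 0.7, 0.9):
          losses = np.apply_along_axis(max_loss, 1, fbm_paths(H, n_steps=200))
          print(f"H={H}: mean maximum loss ~ {losses.mean():.3f}")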

  19. 30 CFR 56.19066 - Maximum riders in a conveyance.

    Science.gov (United States)

    2010-07-01

    § 56.19066 Maximum riders in a conveyance (Metal and Nonmetal Mine Safety; Hoisting Procedures). In shafts inclined over 45...

  20. 30 CFR 57.19066 - Maximum riders in a conveyance.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...

  1. 46 CFR 151.45-6 - Maximum amount of cargo.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...

  2. 5 CFR 550.105 - Biweekly maximum earnings limitation.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...

  3. 5 CFR 550.106 - Annual maximum earnings limitation.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...

  4. Sensitivity of palaeotidal models of the northwest European shelf seas to glacial isostatic adjustment since the Last Glacial Maximum

    Science.gov (United States)

    Ward, Sophie L.; Neill, Simon P.; Scourse, James D.; Bradley, Sarah L.; Uehara, Katsuto

    2016-11-01

    The spatial and temporal distribution of relative sea-level change over the northwest European shelf seas has varied considerably since the Last Glacial Maximum, due to eustatic sea-level rise and a complex isostatic response to deglaciation of both near- and far-field ice sheets. Because of this complex pattern of relative sea-level change, the region is an ideal focus for modelling the impact of significant sea-level change on shelf sea tidal dynamics. Changes in tidal dynamics influence tidal range, the location of tidal mixing fronts, dissipation of tidal energy, shelf sea biogeochemistry and sediment transport pathways. Significant advancements in glacial isostatic adjustment (GIA) modelling of the region have been made in recent years, and earlier palaeotidal models of the northwest European shelf seas were developed using output from less well-constrained GIA models as input to generate palaeobathymetric grids. We use the most up-to-date and well-constrained GIA model for the region as palaeotopographic input for a new high resolution, three-dimensional tidal model (ROMS) of the northwest European shelf seas. Focusing on model output for 1 ka time slices from the Last Glacial Maximum (taken as being 21 ka BP) to the present day, we demonstrate that spatial and temporal changes in simulated tidal dynamics are very sensitive to relative sea-level distribution. The new high resolution palaeotidal model is considered a significant improvement on previous depth-averaged palaeotidal models, in particular where the outputs are to be used in sediment transport studies, where consideration of the near-bed stress is critical, and for constraining sea level index points.

  5. Experimental study on prediction model for maximum rebound ratio

    Institute of Scientific and Technical Information of China (English)

    LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong

    2007-01-01

    The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to, and slightly above, the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between estimated and field-recorded values validates the proposed model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or a borehole.

  6. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    We study reaction-diffusion equations with a general reaction function $f$ on one-dimensional lattices with continuous or discrete time, $u_x' = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$ (or, in discrete time, $\Delta_t u_x = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$), $x \in \mathbb{Z}$. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid only in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
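
    A minimal numpy sketch of the discrete-time (explicit Euler) version of this lattice equation with the bistable Nagumo nonlinearity $f(u) = u(1-u)(u-a)$: for a sufficiently small time step, the iterates of a step initial datum stay in $[0, 1]$, consistent with a weak maximum principle. The parameters and the periodic boundary treatment are illustrative choices, not the paper's.

    ```python
    import numpy as np

    # Discrete-time lattice reaction-diffusion with the bistable Nagumo
    # nonlinearity; parameter values are illustrative.
    k, a, dt, steps = 1.0, 0.3, 0.05, 400
    f = lambda u: u * (1.0 - u) * (u - a)   # zeros at 0, a, 1

    n = 101
    u = np.where(np.arange(n) < n // 2, 1.0, 0.0)  # step initial datum in [0, 1]

    for _ in range(steps):
        lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)  # discrete Laplacian (periodic)
        u = u + dt * (k * lap + f(u))

    # For small enough dt the weak maximum principle keeps u in [0, 1].
    print(f"min = {u.min():.4f}, max = {u.max():.4f}")
    ```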

  7. 21 CFR 888.3790 - Wrist joint metal constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint metal constrained cemented prosthesis... constrained cemented prosthesis. (a) Identification. A wrist joint metal constrained cemented prosthesis is a... as cobalt-chromium-molybdenum, and is limited to those prostheses intended for use with bone...

  8. Constrained model predictive control, state estimation and coordination

    Science.gov (United States)

    Yan, Jun

    In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to
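
    As a hedged illustration of the certainty-equivalence issue raised in part (i), the sketch below solves one MPC step for a toy double integrator with cvxpy, tightening the state constraint by a back-off proportional to the estimate's standard deviation so that the true state satisfies it with high probability. The model, gains and back-off rule are illustrative assumptions, not the dissertation's formulation.

    ```python
    import cvxpy as cp
    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator, dt = 0.1
    B = np.array([[0.005], [0.1]])
    N = 20                                   # prediction horizon
    x_hat = np.array([0.8, 0.0])             # current state estimate
    sigma_pos = 0.05                         # std of the position-estimate error
    beta = 2.0                               # back-off multiplier (~95% one-sided)

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x_hat]
    for t in range(N):
        cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
        constr += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                   cp.abs(u[:, t]) <= 1.0,
                   # chance constraint |position| <= 1 posed on the estimate,
                   # tightened so the true state satisfies it with high probability
                   cp.abs(x[0, t + 1]) <= 1.0 - beta * sigma_pos]

    cp.Problem(cp.Minimize(cost), constr).solve()
    print("first control move:", u.value[:, 0])
    ```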

  9. Constrained dynamics approach for motion synchronization and consensus

    Science.gov (United States)

    Bhatia, Divya

    In this research we propose to develop constrained-dynamics-based stable attitude synchronization, consensus and tracking (SCT) control laws for formations of rigid bodies. The generalized constrained-dynamics equations of motion (EOM) are developed using constraint potential energy functions that enforce communication constraints. Euler-Lagrange equations are employed to derive the non-linear constrained dynamics of multiple-vehicle systems. The constraint potential energy is synthesized from a graph-theoretic formulation of the vehicle-to-vehicle communication. Constraint stabilization is achieved via Baumgarte's method. The performance of these constrained-dynamics-based formations is evaluated under bounded control authority. The method has been applied to various cases, and results obtained from MATLAB simulations show stability, synchronization, consensus and tracking of the formations. The first case is an N-pendulum formation without external disturbances, in which springs and dampers connected between the pendulums act as the communication constraints. A damper helps stabilize the system by damping the motion, whereas a spring acts as a communication link relaying relative position information between two connected pendulums. A Lyapunov (energy-based) stabilization technique is employed to establish attitude stabilization and boundedness. Various scenarios involving different spring and damper values are simulated and studied. Motivated by the first case study, we study a formation of N 2-link robotic manipulators. The governing EOM for this system are derived using Euler-Lagrange equations. A generalized set of communication constraints is developed for this system using graph theory. The constraints are stabilized using Baumgarte's technique. Attitude SCT is established for this system, and results are shown for the special case of three 2-link robotic manipulators
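
    Baumgarte's method, used above for constraint stabilization, replaces the constraint condition $\ddot{c} = 0$ with $\ddot{c} + 2\alpha\dot{c} + \beta^2 c = 0$, so that drift in the holonomic constraint $c(q) = 0$ decays. Below is a minimal numpy sketch for a point mass constrained to a circle (a planar pendulum written as a constrained system); all gains and parameters are illustrative.

    ```python
    import numpy as np

    # Baumgarte-stabilized simulation of a point mass on the circle |q| = L.
    m, L, g = 1.0, 1.0, 9.81
    alpha, beta = 5.0, 5.0          # Baumgarte gains
    dt, steps = 1e-3, 5000

    q = np.array([L, 0.0])          # position on the constraint manifold
    v = np.array([0.0, 0.0])
    F = np.array([0.0, -m * g])     # gravity

    for _ in range(steps):
        c = q @ q - L**2            # holonomic constraint c(q) = |q|^2 - L^2 = 0
        J = 2.0 * q                 # constraint Jacobian (row vector)
        cdot = J @ v
        Jdotv = 2.0 * (v @ v)       # (dJ/dt) v
        # Solve (J M^-1 J^T) lam = -Jdotv - 2*alpha*cdot - beta^2*c - J M^-1 F
        rhs = -Jdotv - 2.0 * alpha * cdot - beta**2 * c - (J @ F) / m
        lam = rhs / (J @ J / m)
        acc = (F + J * lam) / m     # acceleration including constraint force
        v += dt * acc               # semi-implicit Euler
        q += dt * v

    print(f"constraint drift |c| = {abs(q @ q - L**2):.2e}")
    ```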

  10. Fast alternating projection methods for constrained tomographic reconstruction.

    Science.gov (United States)

    Liu, Li; Han, Yongxin; Jin, Mingwu

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems such as constrained reconstruction in X-ray computed tomography (CT). A typical method uses projection onto convex sets (POCS) for data fidelity and nonnegativity constraints, combined with total variation (TV) minimization (so-called TV-POCS), for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks a convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV, bounded data-fidelity error and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of a constrained objective function may not be the physically optimal solution. Breaking constrained reconstruction down into an intersection of several feasible sets can lead to faster convergence and to quantifying reconstruction parameters in a physically meaningful way rather than empirically by trial and error. In addition, since first-order methods are usually used for large-scale optimization problems, we derive the condition for convergence of gradient-based methods and also use a primal-dual hybrid gradient (PDHG) method for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality and quantification.
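
    A minimal numpy sketch of the POCS building block these methods share: alternating projections onto the hyperplanes of a linear data model (a Kaczmarz sweep) and onto the nonnegativity set. The toy random system stands in for a CT projection matrix; this is not the paper's FS-POCS algorithm, which additionally projects onto a bounded-TV set.

    ```python
    import numpy as np

    # Toy consistent linear system with a nonnegative solution (illustrative).
    rng = np.random.default_rng(1)
    A = rng.random((60, 40))            # system matrix (rows = measurements)
    x_true = np.abs(rng.normal(size=40))
    b = A @ x_true                      # noiseless data

    x = np.zeros(40)
    for sweep in range(50):
        for a_i, b_i in zip(A, b):      # project onto each hyperplane a_i . x = b_i
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
        x = np.maximum(x, 0.0)          # project onto the nonnegativity set

    print(f"relative error = {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3e}")
    ```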

  11. Pattern recognition constrains mantle properties, past and present

    Science.gov (United States)

    Atkins, S.; Rozel, A. B.; Valentine, A. P.; Tackley, P.; Trampert, J.

    2015-12-01

    Understanding and modelling mantle convection requires knowledge of many mantle properties, such as viscosity, chemical structure and thermal properties such as the radiogenic heating rate. However, many of these parameters are only poorly constrained. We demonstrate a new method for inverting present-day Earth observations for mantle properties. We use neural networks to represent the posterior probability density functions of many different mantle properties given the present structure of the mantle. We construct these probability density functions by sampling a wide range of possible mantle properties and running forward simulations, using the convection code StagYY. Our approach is particularly powerful because of its flexibility. Our samples are selected in the prior space, rather than being targeted towards a particular observation, as would normally be the case for probabilistic inversion. This means that the same suite of simulations can be used for inversions using a wide range of geophysical observations without the need to resample. Our method is probabilistic and non-linear and is therefore compatible with non-linear convection, avoiding some of the limitations associated with other methods for inverting mantle flow. This allows us to consider the entire history of the mantle. We also need relatively few samples for our inversion, making our approach computationally tractable when considering long periods of mantle history. Using the present thermal and density structure of the mantle, we can constrain rheological and compositional parameters such as viscosity and yield stress. We can also use the present-day mantle structure to make inferences about the initial conditions for convection 4.5 Gyr ago. We can constrain initial mantle conditions, including the initial concentration of heat-producing elements in the mantle and the initial thickness of primordial material at the CMB. Currently we use density and temperature structure for our inversions, but we can
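
    The record trains neural networks to represent the posterior densities; the toy sketch below conveys only the underlying prior-sampling workflow, using a crude accept-near-the-observation conditional in place of a network. The forward model and all numbers are purely illustrative.

    ```python
    import numpy as np

    # Draw parameters from the prior, run a (here trivial) forward model, then
    # approximate p(parameter | observation) from the samples whose predictions
    # fall near the observed value.
    rng = np.random.default_rng(2)

    theta = rng.uniform(18.0, 23.0, size=100_000)    # prior, e.g. a log-viscosity
    def forward(th):                                 # stand-in forward model
        return np.tanh(th - 20.0) + rng.normal(0, 0.05, size=np.shape(th))

    d = forward(theta)                               # simulated observables
    d_obs = 0.6                                      # "present-day" observation

    near = np.abs(d - d_obs) < 0.02                  # samples consistent with data
    post = theta[near]
    print(f"posterior mean = {post.mean():.3f}, std = {post.std():.3f}, n = {near.sum()}")
    ```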

  12. Constrained Spectral Conditioning for spatial sound level estimation

    Science.gov (United States)

    Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.

    2016-11-01

    Microphone arrays are utilized in aeroacoustic testing to spatially map the sound emitted from an article under study. Whereas a single microphone allows only the total sound level to be estimated at the measurement location, an array permits differentiation between the contributions of distinct components. The accuracy of these spatial sound estimates produced by post-processing the array outputs is continuously being improved. One way of increasing the estimation accuracy is to filter the array outputs before they become inputs to a post-processor. This work presents a constrained method of linear filtering for microphone arrays which minimizes the total signal present on the array channels while preserving the signal from a targeted spatial location. Thus, each single-channel, filtered output for a given targeted location estimates only the signal from that location, even when multiple and/or distributed sources have been measured simultaneously. The method is based on Conditioned Spectral Analysis and modifies the Wiener-Hopf equation in a manner similar to the Generalized Sidelobe Canceller. This modified form of Conditioned Spectral Analysis is embedded within an iterative loop and termed Constrained Spectral Conditioning. Linear constraints are derived which prevent the cancellation of targeted signal due to random statistical error as well as location error in the sensor and/or source positions. The increased spatial mapping accuracy of Constrained Spectral Conditioning is shown for a simulated dataset of point sources which vary in strength. An experimental point source is used to validate the efficacy of the constraints which yield preservation of the targeted signal at the expense of reduced filtering ability. The beamforming results of a cold, supersonic jet demonstrate the qualitative and quantitative improvement obtained when using this technique to map a spatially-distributed, complex, and possibly coherent sound source.
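
    Below is a sketch of the basic conditioned-spectral-analysis step that Constrained Spectral Conditioning builds on: removing from a channel $y$ the part linearly coherent with a reference channel $x$, which leaves the conditioned spectrum $S_{yy}(1 - \gamma_{xy}^2)$. The signals and parameters are illustrative, and this plain CSA step omits the paper's constraints and iterative loop.

    ```python
    import numpy as np
    from scipy.signal import welch, csd

    rng = np.random.default_rng(3)
    fs, n = 1024, 2**16
    s = rng.normal(size=n)                    # "targeted" source signal
    x = s + 0.1 * rng.normal(size=n)          # reference channel: mostly source
    y = 0.8 * s + rng.normal(size=n)          # measurement: source + noise

    f, Sxx = welch(x, fs=fs, nperseg=1024)
    _, Syy = welch(y, fs=fs, nperseg=1024)
    _, Sxy = csd(x, y, fs=fs, nperseg=1024)

    coh2 = np.abs(Sxy) ** 2 / (Sxx * Syy)     # ordinary coherence
    Syy_cond = Syy * (1.0 - coh2)             # y with the x-coherent part removed

    print(f"mean coherence = {coh2.mean():.2f}")
    print(f"conditioned / raw spectrum level = {Syy_cond.mean() / Syy.mean():.2f}")
    ```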

  13. Approximation algorithms for curvature-constrained shortest paths

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hongyan; Agarwal, P.K. [Duke Univ., Durham, NC (United States)]

    1996-12-31

    Let B be a point robot in the plane, whose path is constrained to have curvature of at most 1, and let $\Omega$ be a set of polygonal obstacles with n vertices. We study the collision-free, optimal path-planning problem for B. Given a parameter $\epsilon$, we present an $O((n^2/\epsilon^2)\log n)$-time algorithm for computing a collision-free, curvature-constrained path between two given positions, whose length is at most $(1 + \epsilon)$ times the length of an optimal robust path (a path is robust if it remains collision-free even if certain positions on the path are perturbed). Our algorithm thus runs significantly faster than the previously best known algorithm by Jacobs and Canny, whose running time is $O((n + L/\epsilon)^2 + n^2(n + 1/\epsilon)\log n)$, where L is the total edge length of the obstacles. More importantly, the running time of our algorithm does not depend on the size of the obstacles. The path returned by this algorithm is not necessarily robust. We present an $O((n/\epsilon)^{2.5}\log n)$-time algorithm that returns a robust path whose length is at most $(1 + \epsilon)$ times the length of an optimal robust path. We also give a stronger characterization of curvature-constrained shortest paths, which, apart from being crucial for our algorithm, is interesting in its own right. Roughly speaking, we prove that, except in some special cases, a shortest path touches obstacles only at points that have a visible vertex nearby.

  14. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...

  15. Vibration Suppression Analysis for Supporter with Constrained Layer Damping

    Institute of Scientific and Technical Information of China (English)

    杜华军; 邹振祝; 黄文虎

    2004-01-01

    By analyzing the correlation between modal calculations and modal experiments on a typical supporter, an effective finite element analysis (FEA) model of the actual aerospace supporter is created. Based on an analysis of constrained viscoelastic damping, PVC strategies are worked out and the correlation between modal calculations and modal experiments of the supporter is computed; an experiment is then designed based on the calculation results. The experimental results verify that the PVC strategy can effectively suppress vibration.

  16. New quasidilaton theory in partially constrained vielbein formalism

    Energy Technology Data Exchange (ETDEWEB)

    Felice, Antonio De [Center for Gravitational Physics, Yukawa Institute for Theoretical Physics,Kyoto University,606-8502, Kyoto (Japan); Gümrükçüoğlu, A. Emir [Theoretical Physics Group, Blackett Laboratory, Imperial College London,South Kensington Campus, London, SW7 2AZ (United Kingdom); Heisenberg, Lavinia [Institute for Theoretical Studies, ETH Zurich,Clausiusstrasse 47, 8092 Zurich (Switzerland); Mukohyama, Shinji [Center for Gravitational Physics, Yukawa Institute for Theoretical Physics,Kyoto University,606-8502, Kyoto (Japan); Kavli Institute for the Physics and Mathematics of the Universe (WPI),UTIAS, The University of Tokyo,Kashiwa, Chiba 277-8583 (Japan); Tanahashi, Norihiro [Department of Applied Mathematics and Theoretical Physics,Centre for Mathematical Sciences, University of Cambridge,Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2016-05-25

    In this work we study the partially constrained vielbein formulation of the new quasidilaton theory of massive gravity, in which the quasidilaton field couples to both the physical and fiducial metrics simultaneously via a composite effective metric and Lorentz violation is introduced by a constraint on the vielbein. This formalism improves on the new quasidilaton model, since the Boulware-Deser ghost is removed fully non-linearly at all scales. It also has crucial implications for cosmological applications. We derive the governing cosmological background evolution and study the stability of the attractor solution.

  17. CONSTRAINING THE DETERMINATION OF THE STAR FORMATION HISTORY OF GALAXIES

    Directory of Open Access Journals (Sweden)

    G. Magris

    2009-01-01

    We explore the ability of two different algorithms, GASPEX and DinBas2D, to derive the star formation history from a galaxy spectrum. The former is a non-parametric method which derives the galaxy mass fraction formed in a pre-selected set of epochs. The second is a new approach that finds the best combination of age and mass fraction of two simple stellar populations that fits the target spectrum. In order to constrain the advantages and limitations of this novel method, we apply it to simulated galaxy spectra that cover the Hubble sequence.
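
    A sketch in the spirit of the second approach: fit a target spectrum as a nonnegative combination of simple-stellar-population (SSP) basis spectra by nonnegative least squares. The random basis below stands in for real SSP models, and DinBas2D's actual search over the ages of two populations is replaced by a fixed basis; illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    n_wave, n_ssp = 500, 4
    ssp = np.abs(rng.normal(1.0, 0.3, size=(n_wave, n_ssp)))  # fake SSP spectra

    true_frac = np.array([0.7, 0.0, 0.3, 0.0])                # "young + old" mix
    target = ssp @ true_frac + rng.normal(0, 0.01, size=n_wave)

    frac, resid = nnls(ssp, target)       # nonnegative least-squares mass fractions
    print("recovered fractions:", np.round(frac / frac.sum(), 3))
    ```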

  18. New quasidilaton theory in partially constrained vielbein formalism

    Science.gov (United States)

    De Felice, Antonio; Gümrükçüoğlu, A. Emir; Heisenberg, Lavinia; Mukohyama, Shinji; Tanahashi, Norihiro

    2016-05-01

    In this work we study the partially constrained vielbein formulation of the new quasidilaton theory of massive gravity, in which the quasidilaton field couples to both the physical and fiducial metrics simultaneously via a composite effective metric and Lorentz violation is introduced by a constraint on the vielbein. This formalism improves on the new quasidilaton model, since the Boulware-Deser ghost is removed fully non-linearly at all scales. It also has crucial implications for cosmological applications. We derive the governing cosmological background evolution and study the stability of the attractor solution.

  19. Constraining interacting dark energy models with latest cosmological observations

    CERN Document Server

    Xia, Dong-Mei

    2016-01-01

    The local measurement of $H_0$ is in tension with the prediction of the $\Lambda$CDM model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations of the CMB, BAO, LSS, SNe, $H(z)$ and $H_0$ to constrain several interacting dark energy models. Our results show no significant indication of an interaction between dark energy and dark matter. The $H_0$ tension can be moderately alleviated, but not fully resolved.

  20. MERIT FUNCTION AND GLOBAL ALGORITHM FOR BOX CONSTRAINED VARIATIONAL INEQUALITIES

    Institute of Scientific and Technical Information of China (English)

    张立平; 高自友; 赖炎连

    2002-01-01

    The authors consider optimization methods for box-constrained variational inequalities. First, they study the KKT-conditions problem derived from the original problem. A merit function for the KKT-conditions problem is proposed, and some desirable properties of this merit function are established. Through the merit function, the original problem is reformulated as a minimization problem with simple constraints. The authors then show that any stationary point of this optimization problem is a solution of the original problem. Finally, a descent algorithm for the optimization problem is presented, and its global convergence is shown.
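
    A minimal numpy sketch of the natural-residual merit function for a box-constrained variational inequality VI(F, [l, u]): a point $x$ solves the VI iff $r(x) = x - \Pi_{[l,u]}(x - F(x)) = 0$, so $\theta(x) = \tfrac{1}{2}\|r(x)\|^2$ vanishes exactly at solutions. The affine monotone $F$ and the simple projection iteration below are illustrative stand-ins for the paper's merit function and descent algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 5
    R = rng.normal(size=(n, n))
    M = R @ R.T + n * np.eye(n)              # strongly monotone affine map F(x) = Mx + q
    q = rng.normal(size=n)
    F = lambda x: M @ x + q
    l, u = -np.ones(n), np.ones(n)           # the box
    proj = lambda x: np.clip(x, l, u)        # projection onto [l, u]

    # Natural-residual merit function: theta(x) = 0 iff x solves the VI.
    merit = lambda x: 0.5 * np.sum((x - proj(x - F(x))) ** 2)

    eigs = np.linalg.eigvalsh(M)
    step = eigs[0] / eigs[-1] ** 2           # contraction step size for this affine F
    x = np.zeros(n)
    for _ in range(2000):
        x = proj(x - step * F(x))            # simple projection iteration

    print(f"merit value at iterate: {merit(x):.2e}")
    ```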