WorldWideScience

Sample records for articulatorily constrained maximum

  1. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
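
The HMM machinery the abstract refers to can be illustrated with the standard forward algorithm, which computes the likelihood of an observation sequence by dynamic programming. The two-state model and all parameter values below are hypothetical, purely for illustration; this is not the articulatorily constrained model the record proposes.

```python
# Forward algorithm for a discrete-output HMM (illustrative toy model).
# Hypothetical 2-state HMM: states 0 and 1, observation symbols 'a' and 'b'.
init = [0.6, 0.4]                      # initial state probabilities
trans = [[0.7, 0.3],                   # trans[i][j] = P(next state j | state i)
         [0.4, 0.6]]
emit = {'a': [0.9, 0.2],               # emit[o][i] = P(observation o | state i)
        'b': [0.1, 0.8]}

def forward(obs):
    """Return P(observation sequence) under the HMM."""
    # alpha[i] = P(observations so far, current state = i)
    alpha = [init[i] * emit[obs[0]][i] for i in range(2)]
    for o in obs[1:]:
        alpha = [emit[o][j] * sum(alpha[i] * trans[i][j] for i in range(2))
                 for j in range(2)]
    return sum(alpha)

p = forward(['a', 'b', 'a'])
```

A quick sanity check on such a model: the probabilities of all observation sequences of a fixed length must sum to one.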

  2. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  3. Resource-constrained maximum network throughput on space networks

    Institute of Scientific and Technical Information of China (English)

    Yanling Xing; Ning Ge; Youzheng Wang

    2015-01-01

    This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear programming (MILP) model is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
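
The flow-maximization core of such a formulation can be sketched without the MILP: on a static network with link capacities, maximum throughput between a source and a sink is a classical max-flow computation. The sketch below uses the Edmonds-Karp algorithm on a hypothetical four-node topology; the storage and delay constraints of the DTN model, which are the paper's actual contribution, are omitted here.

```python
from collections import deque

def max_flow(cap, s, t):
    """Maximum s-t flow via Edmonds-Karp (BFS augmenting paths)."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:           # no augmenting path: flow is maximum
            return total
        # Bottleneck capacity along the path, then augment.
        v, bottleneck = t, float('inf')
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual capacity for reverse edge
            v = u
        total += bottleneck

# Hypothetical 4-node network: node 0 = source, node 3 = sink.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
throughput = max_flow(cap, 0, 3)
```

Here the cut into the sink (2 + 2) limits the throughput to 4, which the algorithm attains.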

  4. Exploring the Constrained Maximum Edge-weight Connected Graph Problem

    Institute of Scientific and Technical Information of China (English)

    Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen

    2009-01-01

    Given an edge-weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and the maximal weight sum. Here we study a special case, i.e. the Constrained Maximum Edge-Weight Connected Graph problem (CMECG), which is an MECG whose candidate subgraphs must include a given set of k edges, then also called the k-CMECG. We formulate the k-CMECG into an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose an exact algorithm and a heuristic algorithm respectively. We also propose a heuristic algorithm for the k-CMECG problem. Some simulations have been done to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem can lead to the solution of the general MECG problem.
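
Since the problem is NP-hard, tiny instances can still be solved exactly by enumeration, which makes the definition concrete: among connected subgraphs with exactly m edges that contain the required edge, pick the one of maximum total weight. The graph below is hypothetical, and this brute force is exponential in the edge count, so it is only a specification aid, not a substitute for the paper's ILP or heuristics.

```python
from itertools import combinations

# Hypothetical edge-weighted graph: (u, v) -> weight.
edges = {('a', 'b'): 4, ('b', 'c'): 3, ('c', 'd'): 5,
         ('a', 'c'): 1, ('b', 'd'): 2}

def connected(edge_set):
    """Check that the subgraph formed by edge_set is connected."""
    nodes = {v for e in edge_set for v in e}
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        # Visit the other endpoint of every edge incident to u.
        stack.extend(w for e in edge_set for w in e if u in e and w != u)
    return seen == nodes

def solve_1cmecg(edges, required, m):
    """Exact 1-CMECG by enumeration: best (weight, edge subset)."""
    best = None
    for subset in combinations(edges, m):
        if required in subset and connected(subset):
            w = sum(edges[e] for e in subset)
            if best is None or w > best[0]:
                best = (w, subset)
    return best

best = solve_1cmecg(edges, required=('a', 'b'), m=3)
```

On this instance the optimum is the path a-b-c-d with weight 4 + 3 + 5 = 12.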

  5. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Science.gov (United States)

    Trucco, Andrea; Traverso, Federico; Crocco, Marco

    2015-01-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987

  6. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    Full Text Available For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.

  7. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    Science.gov (United States)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix and therefore the residue matrices are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model that satisfies such motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  8. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    International Nuclear Information System (INIS)

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel–Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given. (paper)

  9. Modeling words with subword units in an articulatorily constrained speech recognition algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1997-11-20

    The goal of speech recognition is to find the most probable word given the acoustic evidence, i.e. a string of VQ codes or acoustic features. Speech recognition algorithms typically take advantage of the fact that the probability of a word, given a sequence of VQ codes, can be calculated.

  10. Maximum-likelihood constrained regularized algorithms: an objective criterion for the determination of regularization parameters

    Science.gov (United States)

    Lanteri, Henri; Roche, Muriel; Cuevas, Olga; Aime, Claude

    1999-12-01

    We propose regularized versions of Maximum Likelihood algorithms for Poisson processes with a non-negativity constraint. For such processes, the best-known (non-regularized) algorithm is that of Richardson-Lucy, extensively used for astronomical applications. Regularization is necessary to prevent an amplification of the noise during the iterative reconstruction; this can be done either by limiting the iteration number or by introducing a penalty term. In this Communication, we focus our attention on the explicit regularization using Tikhonov (Identity and Laplacian operator) or entropy terms (Kullback-Leibler and Csiszar divergences). The algorithms are established from the Kuhn-Tucker first order optimality conditions for the minimization of the Lagrange function and from the method of successive substitutions. The algorithms may be written in a `product form'. Numerical illustrations are given for simulated images corrupted by photon noise. The effects of the regularization are shown in the Fourier plane. The tests we have made indicate that a noticeable improvement of the results may be obtained for some of these explicitly regularized algorithms. We also show that a comparison with a Wiener filter can give the optimal regularizing conditions (operator and strength).
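
The baseline the abstract builds on can be sketched in a few lines: the Richardson-Lucy iteration for Poisson data with a non-negativity constraint, regularized here only by limiting the iteration count (the first of the two options the abstract names; the explicit Tikhonov and entropy penalty terms of the proposed product-form algorithms are not reproduced). The 1D blur kernel and test signal are hypothetical.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=20):
    """Plain Richardson-Lucy deconvolution (1D, no explicit penalty term)."""
    # A flat positive start keeps every iterate non-negative.
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode='same')
        # Multiplicative update: back-project the data/model ratio.
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode='same')
    return estimate

psf = np.array([0.25, 0.5, 0.25])                      # normalized blur kernel
truth = np.array([0., 0., 5., 0., 0., 3., 0., 0.])     # two point sources
observed = np.convolve(truth, psf, mode='same')        # noiseless blurred data
restored = richardson_lucy(observed, psf, n_iter=50)
```

With non-negative data, kernel, and starting point, every iterate stays non-negative, which is the constraint the abstract emphasizes; stopping at a fixed iteration count is what limits noise amplification in this sketch.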

  11. A practical computational framework for the multidimensional moment-constrained maximum entropy principle

    Science.gov (United States)

    Abramov, Rafail

    2006-01-01

    The maximum entropy principle is a versatile tool for evaluating smooth approximations of probability density functions with a least bias beyond given constraints. In particular, the moment-based constraints are often a common prior information about a statistical state in various areas of science, including that of a forecast ensemble or a climate in atmospheric science. With that in mind, here we present a unified computational framework for an arbitrary number of phase space dimensions and moment constraints for both Shannon and relative entropies, together with a practical, usable convex optimization algorithm based on the Newton method with additional preconditioning and a robust numerical integration routine. This optimization algorithm has already been used in three studies of predictability, and so far was found to be capable of producing reliable results in one- and two-dimensional phase spaces with moment constraints of up to order 4. The current work extensively references those earlier studies as practical examples of the applicability of the algorithm developed below.
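
The simplest instance of this principle is one dimension with a single first-moment constraint, where the Newton iteration on the dual variable can be written out directly: the maximum entropy density has the exponential-family form p_i proportional to exp(lam * x_i), and the multiplier lam is tuned until the mean matches its target. The grid and target value below are hypothetical; the paper's framework handles several dimensions and moments up to order 4.

```python
import numpy as np

# Discrete grid standing in for the phase space.
x = np.linspace(-1.0, 1.0, 201)

def maxent_mean(x, target, n_iter=50):
    """Maximize Shannon entropy subject to sum(p) = 1 and sum(p*x) = target.

    Newton's method on the Lagrange multiplier lam of the mean constraint;
    the derivative of the mean with respect to lam is the variance.
    """
    lam = 0.0
    for _ in range(n_iter):
        w = np.exp(lam * x)
        p = w / w.sum()                       # normalization is handled exactly
        mean = (p * x).sum()
        var = (p * x * x).sum() - mean ** 2   # d(mean)/d(lam) > 0
        lam -= (mean - target) / var
    return p

p = maxent_mean(x, target=0.3)
```

Because the mean is a strictly increasing function of lam, this one-dimensional Newton iteration converges rapidly for any feasible target inside the grid's range.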

  12. Constraining the Last Glacial Maximum climate by data-model (iLOVECLIM) comparison using oxygen stable isotopes

    Science.gov (United States)

    Caley, T.; Roche, D. M.; Waelbroeck, C.; Michel, E.

    2014-01-01

    We use the fully coupled atmosphere-ocean three-dimensional model of intermediate complexity iLOVECLIM to simulate the climate and oxygen stable isotopic signal during the Last Glacial Maximum (LGM, 21 000 yr). By using a model that is able to explicitly simulate the sensor (δ18O), results can be directly compared with data from climatic archives in the different realms. Our results indicate that iLOVECLIM reproduces well the main features of the LGM climate in the atmospheric and oceanic components. The annual mean δ18O in precipitation shows more depleted values in the northern and southern high latitudes during the LGM. The model reproduces very well the spatial gradient observed in ice core records over the Greenland ice sheet. We observe a general pattern toward more enriched values for continental calcite δ18O in the model at the LGM, in agreement with speleothem data. This can be explained by both a general atmospheric cooling in the tropical and subtropical regions and a reduction in precipitation, as confirmed by reconstructions derived from pollens and plant macrofossils. Data-model comparison for sea surface temperature indicates that iLOVECLIM is capable of satisfactorily simulating the change in oceanic surface conditions between the LGM and present. Our data-model comparison for calcite δ18O makes it possible to investigate the large discrepancies with respect to glacial temperatures recorded by different microfossil proxies in the North Atlantic region. The results argue for a strong mean annual cooling between the LGM and present (> 6°C), supporting the foraminifera transfer function reconstruction but in disagreement with alkenone and dinocyst reconstructions. The data-model comparison also reveals that the large positive calcite δ18O anomaly in the Southern Ocean may be explained by an important cooling, although the driver of this pattern is unclear. We deduce a large positive δ18Osw anomaly for the north Indian Ocean that contrasts with a large negative δ18Osw

  13. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario.

    Science.gov (United States)

    Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai

    2014-07-01

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates. PMID:24889372

  14. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario

    International Nuclear Information System (INIS)

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates. (paper)

  15. Constrained wormholes

    International Nuclear Information System (INIS)

    The large wormhole problem in Coleman's theory of the cosmological constant is presented in the framework of constrained wormholes. We use semi-classical methods, similar to those used to study constrained instantons in quantum field theory. A scalar field theory serves as a toy model to analyze the problems associated with large constrained instantons. In particular, these large instantons are found to suffer from large quantum fluctuations. In gravity we find the same situation: large quantum fluctuations around large wormholes. In both cases we expect that these large fluctuations are a signal that large constrained solutions are not important in the path integral. Thus, we argue that only small wormholes are important in Coleman's theory. (orig.)

  16. Constrained noninformative priors

    International Nuclear Information System (INIS)

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean, but diffuse enough to reflect great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given

  17. Maximum Fidelity

    CERN Document Server

    Kinkhabwala, Ali

    2013-01-01

    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

  18. The inverse maximum dynamic flow problem

    Institute of Scientific and Technical Information of China (English)

    BAGHERIAN; Mehri

    2010-01-01

    We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.

  19. Power-constrained supercomputing

    Science.gov (United States)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound

  20. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing the general constrained optimization problems using evolutionary algorithms. Broadly the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining lots of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  1. Constraining Galileon inflation

    Energy Technology Data Exchange (ETDEWEB)

    Regan, Donough; Anderson, Gemma J.; Hull, Matthew; Seery, David, E-mail: D.Regan@sussex.ac.uk, E-mail: G.Anderson@sussex.ac.uk, E-mail: Matthew.Hull@port.ac.uk, E-mail: D.Seery@sussex.ac.uk [Astronomy Centre, University of Sussex, Falmer, Brighton BN1 9QH (United Kingdom)

    2015-02-01

    In this short paper, we present constraints on the Galileon inflationary model from the CMB bispectrum. We employ a principal-component analysis of the independent degrees of freedom constrained by data and apply this to the WMAP 9-year data to constrain the free parameters of the model. A simple Bayesian comparison establishes that support for the Galileon model from bispectrum data is at best weak.

  2. Maximum Autocorrelation Factorial Kriging

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.;

    2000-01-01

    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an...

  3. Constrained Jastrow calculations

    International Nuclear Information System (INIS)

    An alternative to Pandharipande's lowest order constrained variational prescription for dense Fermi fluids is presented which is justified on both physical and strict variational grounds. Excellent results are obtained when applied to the 'homework problem' of Bethe, in sharp contrast to those obtained from the Pandharipande prescription. (Auth.)

  4. Constrained Canonical Correlation.

    Science.gov (United States)

    DeSarbo, Wayne S.; And Others

    1982-01-01

    A variety of problems associated with the interpretation of traditional canonical correlation are discussed. A response surface approach is developed which allows for investigation of changes in the coefficients while maintaining an optimum canonical correlation value. Also, a discrete or constrained canonical correlation method is presented. (JKS)

  5. Constrained superfields in Supergravity

    CERN Document Server

    Dall'Agata, Gianguido

    2015-01-01

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  6. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.; Kirkegaard, Casper C.; Auken, Esben

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted by ... using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes this by ...; the results are compatible with the data and, at the same time, favor sharp transitions. The focusing strategy can also be used to constrain the 1D solutions laterally, guaranteeing that lateral sharp transitions are retrieved without losing resolution. By means of real and synthetic datasets, sharp...

  7. Maximum power demand cost

    International Nuclear Information System (INIS)

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some

  8. Fuzzy Maximum Satisfiability

    OpenAIRE

    Halaby, Mohamed El; Abdalla, Areeg

    2016-01-01

    In this paper, we extend the Maximum Satisfiability (MaxSAT) problem to Łukasiewicz logic. The MaxSAT problem for a set of formulae Φ is the problem of finding an assignment to the variables in Φ that satisfies the maximum number of formulae. Three possible solutions (encodings) are proposed to the new problem: (1) Disjunctive Linear Relations (DLRs), (2) Mixed Integer Linear Programming (MILP) and (3) Weighted Constraint Satisfaction Problem (WCSP). Like its Boolean counterpart,...
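
The Boolean problem being generalized is easy to state in code: given clauses over Boolean variables, find the assignment satisfying the most clauses. The brute-force solver below (exponential in the number of variables, so only for tiny formulae) uses a hypothetical five-clause example; it illustrates the classical problem only, not the paper's Łukasiewicz-logic extension or its three encodings.

```python
from itertools import product

# Clauses in DIMACS-like form: positive integers are variables,
# negative integers their negations. Hypothetical example formula.
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3], [2, 3]]

def max_sat(clauses, n_vars):
    """Return (best_count, assignment) maximizing satisfied clauses."""
    best = (-1, None)
    for bits in product([False, True], repeat=n_vars):
        def lit(v):
            # Truth value of literal v under the current assignment.
            return bits[v - 1] if v > 0 else not bits[-v - 1]
        count = sum(any(lit(v) for v in clause) for clause in clauses)
        if count > best[0]:
            best = (count, bits)
    return best

best_count, assignment = max_sat(clauses, n_vars=3)
```

This particular formula happens to be satisfiable (e.g. x1 = True, x2 = False, x3 = True satisfies all five clauses), so the optimum here equals the clause count; MaxSAT becomes interesting precisely when no assignment satisfies everything.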

  9. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods: MAF seeks linear combinations of the original variables that maximize autocorrelation between...

  10. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...
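
    As a minimal illustration of scale-dependent Gaussian-derivative measurements (classical scale-normalized selection, not the maximum likelihood estimator of the abstract): the scale-normalized second-derivative response to a Gaussian blob of width sigma0 peaks at the measurement scale t = 2·sigma0², so a grid search over scales recovers the size of the local structure.

```python
import math

def normalized_laplacian_response(t, sigma0):
    """Scale-normalized second-derivative magnitude at the centre of a
    Gaussian blob of width sigma0, observed at scale t = sigma**2."""
    return t / (math.sqrt(2 * math.pi) * (sigma0 ** 2 + t) ** 1.5)

# The response peaks at t = 2 * sigma0**2, the scale "matching" the
# local structure; a grid search over scales recovers it.
sigma0 = 1.5
scales = [0.01 * i for i in range(1, 2000)]
t_star = max(scales, key=lambda t: normalized_laplacian_response(t, sigma0))
```

    Here t_star lands at (up to grid resolution) 2 · 1.5² = 4.5, i.e., the selected scale grows with the width of the underlying structure.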

  11. Maximum abundant isotopes correlation

    International Nuclear Information System (INIS)

    The neutron excess of the most abundant isotopes of the element shows an overall linear dependence upon the neutron number for nuclei between neutron closed shells. This maximum abundant isotopes correlation supports the arguments for a common history of the elements during nucleosynthesis. (Auth.)

  12. Maximum information photoelectron metrology

    CERN Document Server

    Hockett, P; Wollenhaupt, M; Baumert, T

    2015-01-01

    Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

  13. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  14. Maximum entropy methods

    International Nuclear Information System (INIS)

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  15. Shrinkage Effect in Ancestral Maximum Likelihood

    CERN Document Server

    Mossel, Elchanan; Steel, Mike

    2008-01-01

    Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...

  16. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
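
    For the Mean Energy Model the entropy maximizer is the familiar Gibbs/exponential family, which makes a small numerical sketch easy (an illustration of the standard formulation, not code from the paper): fix the state energies, then tune the Lagrange multiplier beta by bisection until the mean energy matches the prescribed moment.

```python
import math

def maxent_gibbs(energies, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy distribution under a mean-energy constraint.

    The maximizer is the Gibbs family p_i proportional to exp(-beta * E_i);
    bisection finds the beta whose mean energy equals the target.
    """
    def mean_energy(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z

    # mean_energy is decreasing in beta, so bisect on beta.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

# Three states with energies 0, 1, 2 and target mean energy 1: by symmetry
# beta = 0 and the maximum-entropy distribution is uniform.
p = maxent_gibbs([0.0, 1.0, 2.0], 1.0)
```

    Raising or lowering the target mean energy tilts the distribution toward high- or low-energy states, while any distribution other than the Gibbs one with the same mean has strictly lower entropy.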

  17. Probable maximum flood control

    International Nuclear Information System (INIS)

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  18. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  19. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  20. Antiprotons at Solar Maximum

    CERN Document Server

    Bieber, J W; Engel, R; Gaisser, T K; Roesler, S; Stanev, T; Bieber, John W.; Engel, Ralph; Gaisser, Thomas K.; Roesler, Stefan; Stanev, Todor

    1999-01-01

    New measurements with good statistics will make it possible to observe the time variation of cosmic antiprotons at 1 AU through the approaching peak of solar activity. We report a new computation of the interstellar antiproton spectrum expected from collisions between cosmic protons and the interstellar gas. This spectrum is then used as input to a steady-state drift model of solar modulation, in order to provide predictions for the antiproton spectrum as well as the antiproton/proton ratio at 1 AU. Our model predicts a surprisingly large, rapid increase in the antiproton/proton ratio through the next solar maximum, followed by a large excursion in the ratio during the following decade.

  1. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...... complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices....

  2. Maximum Likelihood Mosaics

    CERN Document Server

    Pires, Bernardo Esteves

    2010-01-01

    The majority of the approaches to the automatic recovery of a panoramic image from a set of partial views are suboptimal in the sense that the input images are aligned, or registered, pair by pair, e.g., consecutive frames of a video clip. These approaches lead to propagation errors that may be very severe, particularly when dealing with videos that show the same region at disjoint time intervals. Although some authors have proposed a post-processing step to reduce the registration errors in these situations, there have not been attempts to compute the optimal solution, i.e., the registrations leading to the panorama that best matches the entire set of partial views. This is our goal. In this paper, we use a generative model for the partial views of the panorama and develop an algorithm to compute in an efficient way the Maximum Likelihood estimate of all the unknowns involved: the parameters describing the alignment of all the images and the panorama itself.

  3. Numerical PDE-constrained optimization

    CERN Document Server

    De los Reyes, Juan Carlos

    2015-01-01

    This book introduces, in an accessible way, the basic elements of Numerical PDE-Constrained Optimization, from the derivation of optimality conditions to the design of solution algorithms. Numerical optimization methods in function-spaces and their application to PDE-constrained problems are carefully presented. The developed results are illustrated with several examples, including linear and nonlinear ones. In addition, MATLAB codes, for representative problems, are included. Furthermore, recent results in the emerging field of nonsmooth numerical PDE constrained optimization are also covered. The book provides an overview on the derivation of optimality conditions and on some solution algorithms for problems involving bound constraints, state-constraints, sparse cost functionals and variational inequality constraints.

  4. Bagging constrained equity premium predictors

    DEFF Research Database (Denmark)

    Hillebrand, Eric; Lee, Tae-Hwy; Medeiros, Marcelo

    2014-01-01

    regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. Monte Carlo simulations show that forecast gains can be achieved in realistic sample sizes for the stock...

  5. The Constrained Bottleneck Transportation Problem

    OpenAIRE

    Peerayuth Charnsethikul; Saeree Svetasreni

    2007-01-01

    Two classes of the bottleneck transportation problem with an additional budget constraint are introduced. An exact approach was proposed to solve both problem classes with proofs of correctness and complexity. Moreover, the approach was extended to solve a class of multi-commodity transportation network with a special case of the multi-period constrained bottleneck assignment problem.
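
    A brute-force sketch of the budget-constrained bottleneck assignment special case mentioned above (illustrative only; the paper's exact approach and complexity proofs are not reproduced here): enumerate assignments, discard those over budget, and keep the one whose largest single time is smallest.

```python
from itertools import permutations

def constrained_bottleneck_assignment(times, costs, budget):
    """Brute-force budget-constrained bottleneck assignment.

    Among all assignments of n workers to n jobs whose total cost stays
    within the budget, return (bottleneck, assignment) minimizing the
    largest single time; returns None if no assignment fits the budget.
    """
    n = len(times)
    best = None
    for perm in permutations(range(n)):
        cost = sum(costs[i][perm[i]] for i in range(n))
        if cost > budget:
            continue
        bottleneck = max(times[i][perm[i]] for i in range(n))
        if best is None or bottleneck < best[0]:
            best = (bottleneck, perm)
    return best

times = [[4, 9], [8, 3]]
costs = [[5, 1], [1, 5]]
# A tight budget admits only the cheap assignment (bottleneck 9); a looser
# budget buys the costlier assignment with the smaller bottleneck of 4.
tight = constrained_bottleneck_assignment(times, costs, budget=5)
loose = constrained_bottleneck_assignment(times, costs, budget=10)
```

    The example shows the trade-off the budget constraint introduces: relaxing the budget from 5 to 10 lowers the achievable bottleneck from 9 to 4.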

  6. Constrained Clustering With Imperfect Oracles.

    Science.gov (United States)

    Zhu, Xiatian; Loy, Chen Change; Gong, Shaogang

    2016-06-01

    While clustering is usually an unsupervised operation, there are circumstances where we have access to prior belief that pairs of samples should (or should not) be assigned with the same cluster. Constrained clustering aims to exploit this prior belief as constraint (or weak supervision) to influence the cluster formation so as to obtain a data structure more closely resembling human perception. Two important issues remain open: 1) how to exploit sparse constraints effectively and 2) how to handle ill-conditioned/noisy constraints generated by imperfect oracles. In this paper, we present a novel pairwise similarity measure framework to address the above issues. Specifically, in contrast to existing constrained clustering approaches that blindly rely on all features for constraint propagation, our approach searches for neighborhoods driven by discriminative feature selection for more effective constraint diffusion. Crucially, we formulate a novel approach to handling the noisy constraint problem, which has been unrealistically ignored in the constrained clustering literature. Extensive comparative results show that our method is superior to the state-of-the-art constrained clustering approaches and can generally benefit existing pairwise similarity-based data clustering algorithms, such as spectral clustering and affinity propagation. PMID:25622327

  7. Constrained Graph Optimization: Interdiction and Preservation Problems

    Energy Technology Data Exchange (ETDEWEB)

    Schild, Aaron V [Los Alamos National Laboratory

    2012-07-30

    The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices in which the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is smallest. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special cases of graphs, including planar graphs. The graphs in many of the listed applications are planar, so these algorithms have important practical implications.

  8. Constrained Multiobjective Biogeography Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hongwei Mo

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA.

  9. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  10. Constrained ballistics and geometrical optics

    OpenAIRE

    Epstein, Marcelo

    2014-01-01

    The problem of constant-speed ballistics is studied under the umbrella of non-linear non-holonomic constrained systems. The Newtonian approach is shown to be equivalent to the use of Chetaev's rule to incorporate the constraint within the initially unconstrained formulation. Although the resulting equations are not, in principle, obtained from a variational statement, it is shown that the trajectories coincide with those of geometrical optics in a medium with a suitably chosen refractive inde...

  11. Bagging Constrained Equity Premium Predictors

    OpenAIRE

    Tae-Hwy Lee; Eric Hillebrand; Marcelo Medeiros

    2013-01-01

    The literature on excess return prediction has considered a wide array of estimation schemes, among them unrestricted and restricted regression coefficients. We consider bootstrap aggregation (bagging) to smooth parameter restrictions. Two types of restrictions are considered: positivity of the regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. ...

  12. Enumeration of Maximum Acyclic Hypergraphs

    Institute of Scientific and Technical Information of China (English)

    Jian-fang Wang; Hai-zhu Li

    2002-01-01

    Acyclic hypergraphs are analogues of forests in graphs. They are very useful in the design of databases. In this article, the maximum size of an acyclic hypergraph is determined and the number of maximum r-uniform acyclic hypergraphs of order n is shown to be C(n, r-1) · (n(r-1) - r² + 2r)^(n-r-1).
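
    Reading the abstract's formula as C(n, r-1) · (n(r-1) - r² + 2r)^(n-r-1), a quick sanity check is that for r = 2 it collapses to n · n^(n-3) = n^(n-2), Cayley's count of labeled trees, which are exactly the maximum acyclic 2-uniform hypergraphs on n vertices.

```python
from math import comb

def max_acyclic_hypergraph_count(n, r):
    """C(n, r-1) * (n(r-1) - r^2 + 2r)^(n-r-1), the count quoted above."""
    return comb(n, r - 1) * (n * (r - 1) - r * r + 2 * r) ** (n - r - 1)

# r = 2: maximum acyclic 2-uniform hypergraphs are spanning trees, and the
# formula collapses to n * n^(n-3) = n^(n-2), Cayley's formula.
assert max_acyclic_hypergraph_count(5, 2) == 5 ** 3
assert max_acyclic_hypergraph_count(6, 2) == 6 ** 4
```

    The agreement with Cayley's formula at r = 2 supports this reading of the garbled binomial coefficient in the abstract.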

  13. Image compression using constrained relaxation

    Science.gov (United States)

    He, Zhihai

    2007-01-01

    In this work, we develop a new data representation framework, called constrained relaxation for image compression. Our basic observation is that an image is not a random 2-D array of pixels. They have to satisfy a set of imaging constraints so as to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 Intra frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.

  14. Constraining Lorentz violation with cosmology.

    Science.gov (United States)

    Zuntz, J A; Ferreira, P G; Zlosnik, T G

    2008-12-31

    The Einstein-aether theory provides a simple, dynamical mechanism for breaking Lorentz invariance. It does so within a generally covariant context and may emerge from quantum effects in more fundamental theories. The theory leads to a preferred frame and can have distinct experimental signatures. In this Letter, we perform a comprehensive study of the cosmological effects of the Einstein-aether theory and use observational data to constrain it. Allied to previously determined consistency and experimental constraints, we find that an Einstein-aether universe can fit experimental data over a wide range of its parameter space, but requires a specific rescaling of the other cosmological densities. PMID:19113765

  15. Compositions constrained by graph Laplacian minors

    CERN Document Server

    Braun, Benjamin; Harrison, Ashley; McKim, Jessica; Noll, Jenna; Taylor, Clifford

    2012-01-01

    Motivated by examples of symmetrically constrained compositions, super convex partitions, and super convex compositions, we initiate the study of partitions and compositions constrained by graph Laplacian minors. We provide a complete description of the multivariate generating functions for such compositions in the case of trees. We answer a question due to Corteel, Savage, and Wilf regarding super convex compositions, which we describe as compositions constrained by Laplacian minors for cycles; we extend this solution to the study of compositions constrained by Laplacian minors of leafed cycles. Connections are established and conjectured between compositions constrained by Laplacian minors of leafed cycles of prime length and algebraic/combinatorial properties of reflexive simplices.

  16. Quantum Annealing for Constrained Optimization

    Science.gov (United States)

    Hen, Itay; Spedalieri, Federico M.

    2016-03-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealers that promise to solve certain combinatorial optimization problems of practical relevance faster than their classical analogues. The applicability of such devices for many theoretical and real-world optimization problems, which are often constrained, is severely limited by the sparse, rigid layout of the devices' quantum bits. Traditionally, constraints are addressed by the addition of penalty terms to the Hamiltonian of the problem, which, in turn, requires prohibitively increasing physical resources while also restricting the dynamical range of the interactions. Here, we propose a method for encoding constrained optimization problems on quantum annealers that eliminates the need for penalty terms and thereby reduces the number of required couplers and removes the need for minor embedding, greatly reducing the number of required physical qubits. We argue the advantages of the proposed technique and illustrate its effectiveness. We conclude by discussing the experimental feasibility of the suggested method as well as its potential to appreciably reduce the resource requirements for implementing optimization problems on quantum annealers and its significance in the field of quantum computing.
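
    The penalty-term encoding that the proposed method aims to eliminate is easy to illustrate (a toy sketch, not the authors' construction): to enforce a one-hot constraint sum(x) = 1 in a QUBO, add P · (sum(x) - 1)², which for binary variables expands to -P on each linear coefficient and +2P on each pairwise coupler, at the cost of extra couplers and a restricted dynamic range.

```python
from itertools import product

def qubo_energy(x, linear, quadratic):
    """Energy of a QUBO: sum_i h_i x_i + sum_{i<j} J_ij x_i x_j."""
    e = sum(h * xi for h, xi in zip(linear, x))
    e += sum(j * x[i] * x[k] for (i, k), j in quadratic.items())
    return e

def with_one_hot_penalty(linear, quadratic, penalty):
    """Add penalty * (sum_i x_i - 1)^2 to enforce a one-hot constraint.

    Expanding the square for binary x_i (where x_i^2 = x_i) contributes
    -penalty to each linear term and +2*penalty to each pair; the additive
    constant is dropped since it does not affect the argmin.
    """
    n = len(linear)
    lin = [h - penalty for h in linear]
    quad = dict(quadratic)
    for i in range(n):
        for k in range(i + 1, n):
            quad[(i, k)] = quad.get((i, k), 0.0) + 2.0 * penalty
    return lin, quad

# Pick exactly one of three options with costs 3, 1, 2.
lin, quad = with_one_hot_penalty([3.0, 1.0, 2.0], {}, penalty=10.0)
best = min(product((0, 1), repeat=3),
           key=lambda x: qubo_energy(x, lin, quad))
# The minimizer selects the cheapest option and satisfies the constraint.
```

    With a sufficiently large penalty the ground state is feasible and picks the cheapest option; the dense pairwise couplers this adds are exactly the resource overhead the abstract's encoding seeks to avoid.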

  17. Maximum-entropy probability distributions under Lp-norm constraints

    Science.gov (United States)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
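
    For p = 2 and an unconstrained, zero-mean continuous random variable the maximizing density is Gaussian, and the straight-line relationship is h = ln(sigma) + ½ ln(2πe), linear in the logarithm of the L2 norm. A small numerical check of this standard case (an illustration, not the tabulated results of the report):

```python
import math

def gaussian_entropy_numeric(sigma, half_width=10.0, n=200000):
    """Numerically integrate -p(x) ln p(x) over [-10 sigma, 10 sigma]
    for a zero-mean Gaussian, by the midpoint rule."""
    h = 0.0
    dx = 2 * half_width * sigma / n
    for i in range(n):
        x = -half_width * sigma + (i + 0.5) * dx
        p = math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        if p > 0.0:
            h -= p * math.log(p) * dx
    return h

# For each sigma (the L2 norm of a zero-mean variable), the numerical
# entropy matches ln(sigma) + 0.5 * ln(2*pi*e): a straight line in ln(sigma).
for sigma in (0.5, 1.0, 2.0):
    closed_form = math.log(sigma) + 0.5 * math.log(2 * math.pi * math.e)
    assert abs(gaussian_entropy_numeric(sigma) - closed_form) < 1e-6
```

    Doubling the L2 norm raises the maximum differential entropy by exactly ln 2, the unit slope of the straight-line relationship the abstract describes.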

  18. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds on the...... capacity for 2-D channel models based on occurrences of neighboring 1s are considered....

  19. Time efficient spacecraft maneuver using constrained torque distribution

    Science.gov (United States)

    Cao, Xibin; Yue, Chengfei; Liu, Ming; Wu, Baolin

    2016-06-01

    This paper investigates the time efficient maneuver of rigid satellites with inertia uncertainty and bounded external disturbance. A redundant cluster of four reaction wheels is used to control the spacecraft. To make full use of the available control authority and to avoid frequent momentum unloading of the reaction wheels, a torque distribution method constrained by maximum output torque and maximum angular momentum is developed. Based on this distribution approach, the maximum allowable acceleration and velocity of the satellite are optimized during the maneuver. A novel braking curve is designed on the basis of the optimization strategy of the control torque distribution. A quaternion-based sliding mode control law is proposed to make the state track the braking curve strictly. The designed controller provides smooth control torque, time efficiency and high control precision. Finally, practical numerical examples are illustrated to show the effectiveness of the developed torque distribution strategy and control methodology.
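
    A minimal sketch of the kind of constrained distribution described above (hypothetical wheel geometry and limits, not the authors' algorithm): compute minimum-norm wheel torques through the pseudoinverse of the 3x4 distribution matrix, then scale them uniformly so no wheel exceeds its maximum output torque, which preserves the commanded torque direction at the cost of magnitude.

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def distribute(d, tau_body, tau_wheel_max):
    """Minimum-norm wheel torques u = D^T (D D^T)^{-1} tau, then uniform
    down-scaling so no wheel exceeds its maximum output torque.

    Scaling preserves the torque direction, trading maneuver speed for
    feasibility when the commanded torque saturates a wheel.
    """
    ddt = [[sum(d[i][k] * d[j][k] for k in range(4)) for j in range(3)]
           for i in range(3)]
    lam = solve3(ddt, tau_body)
    u = [sum(d[i][k] * lam[i] for i in range(3)) for k in range(4)]
    peak = max(abs(x) for x in u)
    scale = min(1.0, tau_wheel_max / peak) if peak > 0 else 1.0
    return [scale * x for x in u]

# Hypothetical arrangement: three orthogonal wheels plus one skewed wheel.
s = 3 ** -0.5
D = [[1.0, 0.0, 0.0, s],
     [0.0, 1.0, 0.0, s],
     [0.0, 0.0, 1.0, s]]
u = distribute(D, [0.3, 0.0, 0.0], tau_wheel_max=0.2)
```

    Here the unscaled minimum-norm solution would demand 0.25 N·m from one wheel, so all four commands are scaled by 0.8; the delivered body torque stays along the commanded axis, only smaller.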

  20. Constraining Cosmic Evolution of Type Ia Supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ~0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ~3% in the optical and growing toward the ultraviolet. The difference between the maximum-light low and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  1. iBGP and Constrained Connectivity

    CERN Document Server

    Dinitz, Michael

    2011-01-01

    We initiate the theoretical study of the problem of minimizing the size of an iBGP overlay in an Autonomous System (AS) in the Internet subject to a natural notion of correctness derived from the standard "hot-potato" routing rules. For both natural versions of the problem (where we measure the size of an overlay by either the number of edges or the maximum degree) we prove that it is NP-hard to approximate to a factor better than $\Omega(\log n)$ and provide approximation algorithms with ratio $\tilde{O}(\sqrt{n})$. In addition, we give a slightly worse $\tilde{O}(n^{2/3})$-approximation based on primal-dual techniques that has the virtue of being both fast and good in practice, which we show via simulations on the actual topologies of five large Autonomous Systems. The main technique we use is a reduction to a new connectivity-based network design problem that we call Constrained Connectivity. In this problem we are given a graph $G=(V,E)$, and for every pair of vertices $u,v \in V$ we are given a set $S(u,...

  2. Generalized Maximum Entropy Estimation of Discrete Sequential Move Games of Perfect Information

    OpenAIRE

    Wang, Yafeng; Graham, Brett

    2013-01-01

    We propose a data-constrained generalized maximum entropy (GME) estimator for discrete sequential move games of perfect information which can be easily implemented on optimization software with high-level interfaces such as GAMS. Unlike most other work on the estimation of complete information games, the method we propose is data constrained and does not require simulation or a normal distribution of random preference shocks. We formulate the GME estimation as a (convex) mixed-integer nonline...

  3. Maximum magnitude in the Lower Rhine Graben

    Science.gov (United States)

    Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry

    2014-05-01

    Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made depending on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained and large earthquakes are rarer, variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone, where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants, which should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date to occur. In a next step, we will compute hazard maps for different return periods based on the
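    The synthetic-catalog procedure can be sketched as follows; the Gutenberg-Richter a- and b-values, catalog length, and truncation magnitudes below are illustrative placeholders, not the Lower Rhine Graben fit:

```python
import math
import random

def synthetic_maximum_magnitudes(a=2.0, b=1.0, m_min=2.0, m_max=7.0,
                                 years=400, n_catalogs=200, seed=42):
    """Draw synthetic catalogs from a doubly truncated Gutenberg-Richter
    law and record the largest magnitude observed in each catalog window.
    Parameter values are illustrative, not fitted to any real region."""
    rng = random.Random(seed)
    annual_rate = 10.0 ** (a - b * m_min)   # events per year with M >= m_min
    lo, hi = 10.0 ** (-b * m_max), 10.0 ** (-b * m_min)
    maxima = []
    for _ in range(n_catalogs):
        n_events = int(annual_rate * years)
        largest = m_min
        for _ in range(n_events):
            # inverse-CDF sampling of the truncated G-R magnitude distribution
            g = hi - rng.random() * (hi - lo)
            largest = max(largest, -math.log10(g) / b)
        maxima.append(largest)
    return maxima
```

    The histogram of these per-window maxima is what the abstract compares against the real catalog: its mode sits near the magnitude whose mean recurrence time equals the window length.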

  4. Constrained correlation dynamics of QCD

    International Nuclear Information System (INIS)

    The complete version of constrained correlation dynamics of SU(N) gauge theories in temporal gauge and canonical form has been formulated in three steps. (1) With the aid of the generating-functional technique and in the framework of correlation dynamics, a closed set of equations of motion for correlation Green's functions has been established. (2) Gauge constraint conditions are analysed by means of Dirac theory. The algebraic representations of the Gauss law and Ward identities are given. In accordance with the truncation approximations of correlation dynamics, the conserved Gauss law and Ward identities due to residual gauge invariance are shifted to initial value problems. (3) The equations of motion for multi-time correlation Green's functions have been transformed into those for equal-time correlation Green's functions. In the two-body truncation approximation, a tractable set of equations of motion, Gauss law, and Ward identities is given explicitly

  5. Constrained Allocation Flux Balance Analysis

    CERN Document Server

    Mori, Matteo; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated to growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferr...

  6. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
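    The polynomial-time result for regular-language constraints in (1) rests on the classic product construction: run a shortest-path search on the product of the labeled graph and an automaton for the language. A minimal sketch (the graph encoding and DFA representation here are illustrative assumptions, not the paper's notation):

```python
import heapq

def regular_constrained_shortest_path(graph, dfa, start, goal):
    """Dijkstra on the product of a labeled graph and a DFA: the returned
    distance is that of the shortest walk whose label string the DFA accepts.
    graph: {u: [(v, label, weight), ...]}
    dfa:   (start_state, accepting_states, {(state, label): next_state})
    """
    q0, accepting, delta = dfa
    dist = {(start, q0): 0.0}
    heap = [(0.0, start, q0)]
    while heap:
        d, u, q = heapq.heappop(heap)
        if d > dist.get((u, q), float("inf")):
            continue                      # stale queue entry
        if u == goal and q in accepting:
            return d                      # first accepting pop is optimal
        for v, label, w in graph.get(u, []):
            q2 = delta.get((q, label))
            if q2 is None:
                continue                  # this label is disallowed here
            if d + w < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = d + w
                heapq.heappush(heap, (d + w, v, q2))
    return None
```

    With edge labels as travel modes, the DFA can encode a mode pattern such as "any number of road legs followed by rail", matching the intermodal-planning motivation.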

  7. Constraining Modified Gravity Theories With Cosmology

    OpenAIRE

    Martinelli, Matteo

    2012-01-01

    We study and constrain the Hu and Sawicki f(R) model using CMB and weak lensing forecasted data. We also use the same data to constrain extended theories of gravity and the subclass of f(R) theories using a general parameterization describing departures from General Relativity. Moreover we study and constrain also a Dark Coupling model where Dark Energy and Dark Matter are coupled together.

  8. Space-Constrained Interval Selection

    OpenAIRE

    Emek, Yuval; Halldorsson, Magnus M.; Rosen, Adi

    2012-01-01

    We study streaming algorithms for the interval selection problem: finding a maximum cardinality subset of disjoint intervals on the line. A deterministic 2-approximation streaming algorithm for this problem is developed, together with an algorithm for the special case of proper intervals, achieving improved approximation ratio of 3/2. We complement these upper bounds by proving that they are essentially best possible in the streaming setting: it is shown that an approximation ratio of $2 - \\e...
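    For contrast with the streaming bounds above: offline, with all intervals in memory, the problem is solved exactly by the classic earliest-right-endpoint greedy; the difficulty the paper studies is matching that quality with limited space. A sketch of the offline greedy (half-open intervals assumed, so touching endpoints do not overlap):

```python
def max_disjoint_intervals(intervals):
    """Classic offline greedy: sort by right endpoint and take every
    interval that starts at or after the end of the last one taken."""
    chosen = []
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if not chosen or left >= chosen[-1][1]:   # half-open [left, right)
            chosen.append((left, right))
    return chosen
```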

  9. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  10. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. End of the measurement is shown by a lamp switch out. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with cell test and calibration system

  11. Decomposition using Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2002-01-01

    ... normally we have an ordering of landmarks (variables) along the contour of the objects. For the case with observation ordering the maximum autocorrelation factor (MAF) transform was proposed for multivariate imagery in Switzer (1985). This corresponds to an R-mode analysis of the data...

  12. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino;

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite...... of the Markov random field defined by the 2-D constraint is estimated to be (upper bounded by) 0.8570 bits/symbol using the iterative technique of Belief Propagation on 2 × 2 finite lattices. Based on combinatorial bounding techniques the maximum entropy for the constraint was determined to be 0.848....
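    The paper's specific constraint and its 0.8570/0.848 bounds are not reproduced here; as a hedged illustration of the transfer-matrix machinery used in such 2-D entropy calculations, the sketch below computes the per-symbol entropy (capacity) of the classic hard-square constraint (no two adjacent 1s) on an m-row strip:

```python
import math

def strip_capacity(m, n_iter=200):
    """Per-symbol capacity of the 'no two adjacent 1s' constraint on an
    m-row strip: log2 of the Perron eigenvalue of the column-to-column
    transfer matrix, divided by m, found by power iteration."""
    # valid columns: m-bit patterns with no two vertically adjacent 1s
    cols = [c for c in range(1 << m) if c & (c << 1) == 0]
    v = [1.0] * len(cols)
    lam = 1.0
    for _ in range(n_iter):
        # adjacent columns are compatible iff they share no 1 in any row
        w = [sum(v[i] for i, c1 in enumerate(cols) if c1 & c2 == 0)
             for c2 in cols]
        lam = max(w)
        v = [x / lam for x in w]   # renormalize; lam -> largest eigenvalue
    return math.log2(lam) / m
```

    For m = 1 this reproduces log2 of the golden ratio ≈ 0.6942 bits/symbol; as m grows, the value decreases toward the hard-square entropy constant ≈ 0.5879.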

  13. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme is presented

  14. Positive Scattering Cross Sections using Constrained Least Squares

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-09-27

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme is presented.
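    A sketch of the equality-constrained core of such an algorithm (keeping the zeroth and first moments fixed while perturbing the higher moments as little as possible); the positivity constraint on the discrete-ordinates scattering matrix would sit on top of this, e.g. via an active-set loop, which is not shown:

```python
import numpy as np

def equality_constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d by solving the KKT system
    [[A^T A, C^T], [C, 0]] [x; mu] = [A^T b; d]."""
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]   # discard the Lagrange multipliers
```

    Here x would hold the modified moments, b the original truncated-library moments, and C the rows pinning the zeroth and first moments to their original values.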

  15. Constrained Allocation Flux Balance Analysis

    Science.gov (United States)

    Mori, Matteo; Hwa, Terence; Martin, Olivier C.

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated to growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325

  16. Gyrification from constrained cortical expansion

    CERN Document Server

    Tallinen, Tuomas; Biggins, John S; Mahadevan, L

    2015-01-01

    The exterior of the mammalian brain - the cerebral cortex - has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highl...

  17. Maximum Power Point Regulator System

    Science.gov (United States)

    Simola, J.; Savela, K.; Stenberg, J.; Tonicello, F.

    2011-10-01

    The target of the study done under the ESA contract No.17830/04/NL/EC (GSTP4) for Maximum Power Point Regulator System (MPPRS) was to investigate, design and test a modular power system (a core PCU) fulfilling the requirement for maximum power transfer even after a single failure in the Power System, by utilising a power concept without any potential and credible single point failure. The studied MPPRS concept is of a modular construction, able to track the MPP individually on each SA section, maintaining its functionality and full power capability after a loss of a complete MPPR module (by utilizing an N+1 module). Various add-on DC/DC converter topology candidates were investigated, and redundancy, failure mechanisms and protection aspects were studied

  18. Maximum matching on random graphs

    OpenAIRE

    Zhou, Haijun; Ou-Yang, Zhong-Can

    2003-01-01

    The maximum matching problem on random graphs is studied analytically by the cavity method of statistical physics. When the average vertex degree $c$ is larger than $2.7183$, groups of max-matching patterns which differ greatly from each other gradually emerge. An analytical expression for the max-matching size is also obtained, which agrees well with computer simulations. Discussion is made on this continuous glassy phase transition and the absence of such a glassy phase ...

  19. Maximum-likelihood absorption tomography

    International Nuclear Information System (INIS)

    Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)
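    The abstract does not spell out the iteration; for concreteness, here is the standard multiplicative ML-EM update for a linear forward model with Poisson counting statistics (the absorption case would replace the linear projection with Beer-Lambert attenuation of the illuminating beam, so this is an illustrative stand-in, not the authors' exact algorithm):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Multiplicative ML-EM update  x <- x * [A^T (y / A x)] / (A^T 1)
    for Poisson-distributed measurements y with linear forward model A."""
    x = np.ones(A.shape[1])                   # strictly positive start
    sensitivity = A.T @ np.ones(A.shape[0])   # column sums of A
    for _ in range(n_iter):
        forward = np.maximum(A @ x, 1e-12)    # guard against division by zero
        x = x * (A.T @ (y / forward)) / sensitivity
    return x
```

    The update preserves nonnegativity automatically, which is one reason such iterative schemes remain usable where direct inversion is ill-posed.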

  20. D(Maximum)=P(Argmaximum)

    CERN Document Server

    Remizov, Ivan D

    2009-01-01

    In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.

  1. Homogeneous determination of maximum magnitude

    OpenAIRE

    Meletti, C.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; D'Amico, V.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; Martinelli, F.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia

    2010-01-01

    This deliverable represents the result of the activities performed by a working group at INGV. The main objective of Task 3.5 is defined in the Description of Work: the task will produce a homogeneous assessment (possibly multiple models) of the distribution of the expected Maximum Magnitude for earthquakes expected in various tectonic provinces of Europe, to serve as input for the computation and validation of seismic hazard. This goal will be achieved by combining input from earthqu...

  2. Indistinguishability, symmetrisation and maximum entropy

    International Nuclear Information System (INIS)

    It is demonstrated that the distributions over single-particle states for Boltzmann, Bose-Einstein and Fermi-Dirac statistics describing N non-interacting identical particles follow directly from the principle of maximum entropy. It is seen that the notions of indistinguishability and coarse graining are secondary, if not irrelevant. A detailed examination of the structure of the Boltzmann limit is provided. (author)
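    The maximum-entropy route to these distributions can be made concrete: maximizing entropy subject to a fixed mean energy yields the Gibbs form p_i ∝ exp(-βE_i), with β fixed by the constraint. A small numerical sketch that finds β by bisection (the energy levels are illustrative):

```python
import math

def maxent_distribution(energies, mean_energy, tol=1e-10):
    """Maximum-entropy distribution with fixed mean energy: the Gibbs form
    p_i ∝ exp(-beta * E_i), with beta found by bisection on the constraint."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z

    lo, hi = -50.0, 50.0            # mean energy is decreasing in beta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w], beta
```

    Adding the Bose-Einstein or Fermi-Dirac occupancy constraints changes the functional being maximized but not this Lagrange-multiplier logic.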

  3. Solar maximum: solar array degradation

    International Nuclear Information System (INIS)

    The 5-year in-orbit power degradation of the silicon solar array aboard the Solar Maximum Satellite was evaluated. This was the first spacecraft to use Teflon® FEP as a coverglass adhesive, thus avoiding the necessity of an ultraviolet filter. The peak power tracking mode of the power regulator unit was employed to ensure consistent maximum power comparisons. Telemetry was normalized to account for the effects of illumination intensity, charged particle irradiation dosage, and solar array temperature. Reference conditions of 1.0 solar constant at air mass zero and 301 K (28 C) were used as a basis for normalization. Beginning-of-life array power was 2230 watts. Currently, the array output is 1830 watts. This corresponds to a 16 percent loss in array performance over 5 years. Comparison of Solar Maximum telemetry and predicted power levels indicates that array output is 2 percent less than predictions based on an annual 1.0 MeV equivalent electron fluence of 2.34 × 10¹³ per square centimeter space environment

  4. Groundwater availability as constrained by hydrogeology and environmental flows

    Science.gov (United States)

    Watson, Katelyn A.; Mayer, Alex S.; Reeves, Howard W.

    2014-01-01

    Groundwater pumping from aquifers in hydraulic connection with nearby streams has the potential to cause adverse impacts by decreasing flows to levels below those necessary to maintain aquatic ecosystems. The recent passage of the Great Lakes-St. Lawrence River Basin Water Resources Compact has brought attention to this issue in the Great Lakes region. In particular, the legislation requires the Great Lakes states to enact measures for limiting water withdrawals that can cause adverse ecosystem impacts. This study explores how both hydrogeologic and environmental flow limitations may constrain groundwater availability in the Great Lakes Basin. A methodology for calculating maximum allowable pumping rates is presented. Groundwater availability across the basin may be constrained by a combination of hydrogeologic yield and environmental flow limitations varying over both local and regional scales. The results are sensitive to factors such as pumping time, regional and local hydrogeology, streambed conductance, and streamflow depletion limits. Understanding how these restrictions constrain groundwater usage and which hydrogeologic characteristics and spatial variables have the most influence on potential streamflow depletions has important water resources policy and management implications.
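    One classical ingredient for such a maximum-allowable-pumping-rate calculation is the Glover-Balmer stream-depletion fraction for an idealized, fully penetrating stream; this is an illustrative assumption, not the paper's basin-specific methodology:

```python
import math

def max_pumping_rate(depletion_limit, distance, T, S, t):
    """Glover-Balmer depletion fraction  q/Q = erfc( sqrt(S d^2 / (4 T t)) )
    for a well at distance d from a fully penetrating stream in an aquifer
    with transmissivity T and storativity S, after pumping time t.
    The maximum allowable rate is the depletion limit over that fraction."""
    fraction = math.erfc(math.sqrt(S * distance ** 2 / (4.0 * T * t)))
    return depletion_limit / fraction
```

    As pumping time grows, the depletion fraction approaches 1 and the allowable rate collapses to the streamflow-depletion limit itself, illustrating why the results are sensitive to the assumed pumping time.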

  5. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based on...

  6. Constrained Deformable-Layer Tomography

    Science.gov (United States)

    Zhou, H.

    2006-12-01

    The improvement on traveltime tomography depends on improving data coverage and tomographic methodology. The data coverage depends on the spatial distribution of sources and stations, as well as the extent of lateral velocity variation that may alter the raypaths locally. A reliable tomographic image requires large enough ray hit count and wide enough angular range between traversing rays over the targeted anomalies. Recent years have witnessed the advancement of traveltime tomography in two aspects. One is the use of finite frequency kernels, and the other is the improvement on model parameterization, particularly that allows the use of a priori constraints. A new way of model parameterization is the deformable-layer tomography (DLT), which directly inverts for the geometry of velocity interfaces by varying the depths of grid points to achieve a best traveltime fit. In contrast, conventional grid or cell tomography seeks to determine velocity values of a mesh of fixed-in-space grids or cells. In this study, the DLT is used to map crustal P-wave velocities with first arrival data from local earthquakes and two LARSE active surveys in southern California. The DLT solutions along three profiles are constrained using known depth ranges of the Moho discontinuity at 21 sites from a previous receiver function study. The DLT solutions are generally well resolved according to restoration resolution tests. The patterns of 2D DLT models of different profiles match well at their intersection locations. In comparison with existing 3D cell tomography models in southern California, the new DLT models significantly improve the data fitness. In comparison with the multi-scale cell tomography conducted for the same data, while the data fitting levels of the DLT and the multi-scale cell tomography models are compatible, the DLT provides much higher vertical resolution and more realistic description of the undulation of velocity discontinuities. The constraints on the Moho depth

  7. Economics and Maximum Entropy Production

    Science.gov (United States)

    Lorenz, R. D.

    2003-04-01

    Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

  8. A Nonsmooth Maximum Principle for Optimal Control Problems with State and Mixed Constraints-Convex Case

    OpenAIRE

    Biswas, Md. Haider Ali; de Pinho, Maria do Rosario

    2013-01-01

    Here we derive a nonsmooth maximum principle for optimal control problems with both state and mixed constraints. Crucial to our development is a convexity assumption on the "velocity set". The approach consists of applying known penalization techniques for state constraints together with recent results for mixed constrained problems.

  9. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
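    The regularizer hinges on estimating the mutual information between (discretized) classification responses and true labels. A plug-in empirical estimate from counts, shown as a hedged sketch (the paper itself embeds an entropy estimate inside a gradient-descent learner, which is not reproduced here):

```python
import math
from collections import Counter

def empirical_mutual_information(responses, labels):
    """Plug-in estimate I(R; Y) = sum_{r,y} p(r,y) log[ p(r,y) / (p(r) p(y)) ]
    over discrete responses and labels, in nats."""
    n = len(responses)
    count_r = Counter(responses)
    count_y = Counter(labels)
    count_ry = Counter(zip(responses, labels))
    mi = 0.0
    for (r, y), c in count_ry.items():
        # p(r,y)/(p(r)p(y)) = c*n / (count_r[r]*count_y[y])
        mi += (c / n) * math.log(c * n / (count_r[r] * count_y[y]))
    return mi
```

    A perfectly informative classifier on balanced binary labels attains I = log 2 nats; an uninformative one attains 0, which is the quantity the proposed objective pushes upward during training.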

  10. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  11. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and 2 of these are constrained and correlated.

  12. Constrained Bimatrix Games in Wireless Communications

    OpenAIRE

    Firouzbakht, Koorosh; Noubir, Guevara; Salehi, Masoud

    2015-01-01

    We develop a constrained bimatrix game framework that can be used to model many practical problems in many disciplines, including jamming in packetized wireless networks. In contrast to the widely used zero-sum framework, in bimatrix games it is no longer required that the sum of the players' utilities to be zero or constant, thus, can be used to model a much larger class of jamming problems. Additionally, in contrast to the standard bimatrix games, in constrained bimatrix games the players' ...

  13. Constrained school choice : an experimental study

    OpenAIRE

    Calsamiglia, Caterina; Haeringer, Guillaume; Klijn, Flip

    2008-01-01

    The literature on school choice assumes that families can submit a preference list over all the schools they want to be assigned to. However, in many real-life instances families are only allowed to submit a list containing a limited number of schools. Subjects' incentives are drastically affected, as more individuals manipulate their preferences. Including a safety school in the constrained list explains most manipulations. Competitiveness across schools plays an important role. Constraining...

  14. Constraining pion interactions at very high energies by cosmic ray data

    CERN Document Server

    Ostapchenko, Sergey

    2016-01-01

    We demonstrate that a substantial part of the present uncertainties in model predictions for the average maximum depth of cosmic ray-induced extensive air showers is related to very high energy pion-air collisions. Our analysis shows that the position of the maximum of the muon production profile in air showers is strongly sensitive to the properties of such interactions. Therefore, the measurements of the maximal muon production depth by cosmic ray experiments provide a unique opportunity to constrain the treatment of pion-air interactions at very high energies and to reduce thereby model-related uncertainties for the shower maximum depth.

  15. Constraining pion interactions at very high energies by cosmic ray data

    Science.gov (United States)

    Ostapchenko, Sergey; Bleicher, Marcus

    2016-03-01

    We demonstrate that a substantial part of the present uncertainties in model predictions for the average maximum depth of cosmic ray-induced extensive air showers is related to very high energy pion-air collisions. Our analysis shows that the position of the maximum of the muon production profile in air showers is strongly sensitive to the properties of such interactions. Therefore, the measurements of the maximal muon production depth by cosmic ray experiments provide a unique opportunity to constrain the treatment of pion-air interactions at very high energies and to reduce thereby model-related uncertainties for the shower maximum depth.

  16. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    F W Giacobbe

    2003-03-01

    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 1030 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 1030 kg.

  17. The maximum drag reduction asymptote

    Science.gov (United States)

    Choueiri, George H.; Hof, Bjorn

    2015-11-01

Addition of long-chain polymers is one of the most efficient ways to reduce the drag of turbulent flows. Already very low polymer concentrations can lead to a substantial drag reduction, and upon further increase of the concentration the drag decreases until it reaches an empirically found limit, the so-called maximum drag reduction (MDR) asymptote, which is independent of the type of polymer used. We here carry out a detailed experimental study of the approach to this asymptote for pipe flow. Particular attention is paid to the recently observed state of elasto-inertial turbulence (EIT), which has been reported to occur in polymer solutions at sufficiently high shear. Our results show that upon the approach to MDR, Newtonian turbulence becomes marginalized (hibernation) and eventually disappears completely, being replaced by EIT. In particular, spectra of high Reynolds number MDR flows are compared to flows at high shear rates in small diameter tubes where EIT is found.

  18. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived based on conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharges at noise-impacted airports) on air travel are performed.
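The constrained formulation that this record proves equivalent to its dependence formulation is the classical maximum-entropy (gravity) trip-distribution model. A minimal sketch, assuming an exponential deterrence function and illustrative origin/destination totals (all names and numbers here are hypothetical, not from the paper), balanced by iterative proportional fitting:

```python
import numpy as np

def gravity_model(origins, destinations, cost, beta, iters=200):
    """Doubly constrained maximum-entropy trip distribution
    T_ij proportional to exp(-beta * c_ij), balanced so that row sums
    match origin totals O_i and column sums match destination totals D_j."""
    f = np.exp(-beta * cost)                  # deterrence function
    T = np.outer(origins, destinations) * f   # unbalanced seed matrix
    for _ in range(iters):
        T *= (origins / T.sum(axis=1))[:, None]       # enforce row totals
        T *= (destinations / T.sum(axis=0))[None, :]  # enforce column totals
    return T

# toy two-zone example
O = np.array([100.0, 200.0])                  # trips produced per origin
D = np.array([150.0, 150.0])                  # trips attracted per destination
c = np.array([[1.0, 2.0], [2.0, 1.0]])        # travel cost matrix
T = gravity_model(O, D, c, beta=0.5)
```

Each sweep rescales rows to the origin totals and columns to the destination totals; for a strictly positive deterrence matrix the iteration converges to the unique maximum-entropy distribution with those margins.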

  19. Forecasting Maximum Demand And Loadshedding

    Directory of Open Access Journals (Sweden)

    Dhabai Poonam. B

    2014-05-01

    Full Text Available The intention of this paper is to estimate the maximum demand (MD) in advance, during the running slots. Forecasting the MD helps to save the extra charges billed. The MD is calculated by two methods: graphically and mathematically. This helps to control the total demand and reduce the effective cost. With the help of a forecast MD, load shedding can also be performed whenever the MD would exceed the contract demand (CD). Load shedding is performed as per the load requirement. After load shedding, the MD can be brought under control, avoiding the extra charges that must be paid when the MD exceeds the CD. This scheme is being implemented in various industries. For forecasting the MD, various aspects have to be considered: load flow analysis, the relay safe operating area (SOA), the ratings of the installed equipment, etc. The estimation of MD and load shedding (LS) can also be automated, for example by programming in PLCs. An automated system is very much required in industrial zones: it saves valuable time as well as labour. PLC and SCADA software help greatly in this automation technique. To calculate the MD, the rating of every piece of equipment installed on the premises is considered. The MD estimation and LS program will save industries from paying huge penalties to the electricity companies. This gives the concept a bright future in the rapidly industrializing and energy sectors.

  20. A constrained two-layer compression technique for ECG waves.

    Science.gov (United States)

    Byun, Kyungguen; Song, Eunwoo; Shim, Hwan; Lim, Hyungjoon; Kang, Hong-Goo

    2015-08-01

    This paper proposes a constrained two-layer compression technique for electrocardiogram (ECG) waves, whose encoded parameters can be directly used for the diagnosis of arrhythmia. In the first layer, a single ECG beat is represented by one of the registered templates in the codebook. Since the only coding parameter required in this layer is the codebook index of the selected template, its compression ratio (CR) is very high. Note that the distribution of registered templates is also related to the characteristics of ECG waves, so it can be used as a metric to detect various types of arrhythmia. The residual error between the input and the selected template is encoded by wavelet-based transform coding in the second layer. The number of wavelet coefficients is constrained by a pre-defined maximum allowed distortion. The MIT-BIH arrhythmia database is used to evaluate the performance of the proposed algorithm, which achieves a CR of around 7.18 when the reference value of the percentage root mean square difference (PRD) is set to ten. PMID:26737691
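The PRD used as the distortion reference above is a standard ECG fidelity metric. A minimal sketch of its definition (the synthetic beat and reconstruction are illustrative stand-ins, not MIT-BIH data):

```python
import numpy as np

def prd(x, x_hat):
    """Percentage root-mean-square difference between an original
    signal x and its reconstruction x_hat (smaller is better)."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

# toy example: a crude QRS-like pulse and a slightly noisy reconstruction
t = np.linspace(0.0, 1.0, 500)
beat = np.exp(-((t - 0.5) ** 2) / 0.002)
recon = beat + 0.01 * np.sin(40 * np.pi * t)   # residual coding error
err = prd(beat, recon)
```

In a scheme like the one described, the second-layer coder would keep adding wavelet coefficients of the residual until the PRD of the reconstruction falls below the pre-defined threshold.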

  1. Hybrid Biogeography Based Optimization for Constrained Numerical and Engineering Optimization

    Directory of Open Access Journals (Sweden)

    Zengqiang Mi

    2015-01-01

    Full Text Available Biogeography based optimization (BBO) is a new competitive population-based algorithm inspired by biogeography. It simulates the migration of species in nature to share information. A new hybrid BBO (HBBO) is presented in this paper for constrained optimization. By reasonably combining the differential evolution (DE) mutation operator with the simulated binary crossover (SBX) of genetic algorithms (GAs), a new mutation operator is proposed to generate promising solutions instead of the random mutation in basic BBO. In addition, DE mutation is still employed to update one half of the population to further lead the evolution towards the global optimum, and a chaotic search is introduced to improve the diversity of the population. HBBO is tested on twelve benchmark functions and four engineering optimization problems. Experimental results demonstrate that HBBO is effective and efficient for constrained optimization; in contrast with other state-of-the-art evolutionary algorithms (EAs), the performance of HBBO is better, or at least comparable, in terms of the quality of the final solutions and computational cost. Furthermore, the influence of the maximum mutation rate is also investigated.
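The DE mutation operator hybridized into HBBO is, in its common DE/rand/1 form, v = x_r1 + F(x_r2 - x_r3). A minimal sketch under that assumption (the scale factor F and the random population are illustrative; this is not the paper's full HBBO loop):

```python
import numpy as np

rng = np.random.default_rng(1)

def de_rand1_mutation(pop, F=0.5):
    """DE/rand/1 mutation: for each target vector, pick three distinct
    other population members r1, r2, r3 and form v = x_r1 + F*(x_r2 - x_r3)."""
    n, d = pop.shape
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return mutants

pop = rng.uniform(-1.0, 1.0, size=(10, 3))   # 10 candidate solutions in 3-D
mutants = de_rand1_mutation(pop)
```

In the hybrid described by the abstract, such DE-style difference vectors replace basic BBO's purely random mutation, so new candidates are steered by the population's own geometry rather than drawn blindly.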

  2. Vibration control of cylindrical shells using active constrained layer damping

    Science.gov (United States)

    Ray, Manas C.; Chen, Tung-Huei; Baz, Amr M.

    1997-05-01

    The fundamentals of controlling the structural vibration of cylindrical shells treated with active constrained layer damping (ACLD) treatments are presented. The effectiveness of the ACLD treatments in enhancing the damping characteristics of thin cylindrical shells is demonstrated theoretically and experimentally. A finite element model (FEM) is developed to describe the dynamic interaction between the shells and the ACLD treatments. The FEM is used to predict the natural frequencies and the modal loss factors of shells which are partially treated with patches of the ACLD treatments. The predictions of the FEM are validated experimentally using stainless steel cylinders which are 20.32 cm in diameter, 30.4 cm in length and 0.05 cm in thickness. The cylinders are treated with ACLD patches of different configurations in order to target single or multiple modes of lobar vibration. The ACLD patches used are made of a DYAD 606 visco-elastic layer which is sandwiched between two layers of PVDF piezo-electric films. Vibration attenuations of 85% are obtained with a maximum control voltage of 40 volts. Such attenuations are attributed to the effectiveness of the ACLD treatment in increasing the modal damping ratios by about a factor of four over those of conventional passive constrained layer damping (PCLD) treatments. The obtained results suggest the potential of the ACLD treatments in controlling the vibration of cylindrical shells, which constitute the major building block of many critical structures such as aircraft cabins, submarine hulls, and the bodies of rockets and missiles.

  3. Constraining Ceres' interior from its Rotational Motion

    CERN Document Server

    Rambaux, Nicolas; Dehant, Véronique; Kuchynka, Petr

    2011-01-01

    Context. Ceres is the most massive body of the asteroid belt and contains about 25 wt.% (weight percent) of water. Understanding its thermal evolution and assessing its current state are major goals of the Dawn Mission. Constraints on internal structure can be inferred from various observations. Especially, detailed knowledge of the rotational motion can help constrain the mass distribution inside the body, which in turn can lead to information on its geophysical history. Aims. We investigate the signature of the interior on the rotational motion of Ceres and discuss possible future measurements performed by the spacecraft Dawn that will help to constrain Ceres' internal structure. Methods. We compute the polar motion, precession-nutation, and length-of-day variations. We estimate the amplitudes of the rigid and non-rigid response for these various motions for models of Ceres interior constrained by recent shape data and surface properties. Results. As a general result, the amplitudes of oscillations in the r...

  4. Towards weakly constrained double field theory

    Science.gov (United States)

    Lee, Kanghoon

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  5. Towards Weakly Constrained Double Field Theory

    CERN Document Server

    Lee, Kanghoon

    2015-01-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X- ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  6. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  7. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki;

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.

  8. Geometric constrained variational calculus. III: The second variation (Part II)

    Science.gov (United States)

    Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico

    2016-03-01

    The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.

  9. Constrained optimization of gradient waveforms for generalized diffusion encoding.

    Science.gov (United States)

    Sjölund, Jens; Szczepankiewicz, Filip; Nilsson, Markus; Topgaard, Daniel; Westin, Carl-Fredrik; Knutsson, Hans

    2015-12-01

    Diffusion MRI is a useful probe of tissue microstructure. The conventional diffusion encoding sequence, the single pulsed field gradient, has recently been challenged as more general gradient waveforms have been introduced. Out of these, we focus on q-space trajectory imaging, which generalizes the scalar b-value to a tensor valued entity. To take full advantage of its capabilities, it is imperative to respect the constraints imposed by the hardware, while at the same time maximizing the diffusion encoding strength. We provide a tool that achieves this by solving a constrained optimization problem that accommodates constraints on maximum gradient amplitude, slew rate, coil heating and positioning of radio frequency pulses. The method's efficacy and flexibility is demonstrated both experimentally and by comparison with previous work on optimization of isotropic diffusion sequences. PMID:26583528

  10. The Distance Field Model and Distance Constrained MAP Adaptation Algorithm

    Institute of Scientific and Technical Information of China (English)

    YUPeng; WANGZuoying

    2003-01-01

    Spatial structure information, i.e., the relative position information of phonetic states in the feature space, has yet to be carefully researched. In this paper, a new model named "Distance Field" is proposed to describe the spatial structure information. Based on this model, a modified MAP adaptation algorithm named distance constrained maximum a posteriori (DCMAP) is introduced. The distance field model gives a large penalty when the spatial structure is destroyed. As a result, DCMAP preserves the spatial structure information in the adaptation process. Experiments show the distance field model improves the performance of MAP adaptation. Further results show DCMAP has strong cross-state estimation ability, which is used to train a well-performed speaker-dependent model by data from only part of pho-

  11. Constraining white dwarf structure and neutrino physics in 47 Tucanae

    CERN Document Server

    Goldsbury, Ryan; Richer, Harvey; Kalirai, Jason; Tremblay, Pier-Emmanuel

    2016-01-01

    We present a robust statistical analysis of the white dwarf cooling sequence in 47 Tucanae. We combine HST UV and optical data in the core of the cluster, Modules for Experiments in Stellar Evolution (MESA) white dwarf cooling models, white dwarf atmosphere models, artificial star tests, and a Markov Chain Monte Carlo (MCMC) sampling method to fit white dwarf cooling models to our data directly. We use a technique known as the unbinned maximum likelihood to fit these models to our data without binning. We use these data to constrain neutrino production and the thickness of the hydrogen layer in these white dwarfs. The data prefer thicker hydrogen layers ($q_\mathrm{H}=3.2\times10^{-5}$) and we can strongly rule out thin layers ($q_\mathrm{H}=10^{-6}$). The neutrino rates currently in the models are consistent with the data. This analysis does not provide a constraint on the number of neutrino species.

  12. Constraining White Dwarf Structure and Neutrino Physics in 47 Tucanae

    Science.gov (United States)

    Goldsbury, R.; Heyl, J.; Richer, H. B.; Kalirai, J. S.; Tremblay, P. E.

    2016-04-01

    We present a robust statistical analysis of the white dwarf cooling sequence in 47 Tucanae. We combine Hubble Space Telescope UV and optical data in the core of the cluster, Modules for Experiments in Stellar Evolution (MESA) white dwarf cooling models, white dwarf atmosphere models, artificial star tests, and a Markov Chain Monte Carlo sampling method to fit white dwarf cooling models to our data directly. We use a technique known as the unbinned maximum likelihood to fit these models to our data without binning. We use these data to constrain neutrino production and the thickness of the hydrogen layer in these white dwarfs. The data prefer thicker hydrogen layers ($q_\mathrm{H}=3.2\times10^{-5}$) and we can strongly rule out thin layers ($q_\mathrm{H}=10^{-6}$). The neutrino rates currently in the models are consistent with the data. This analysis does not provide a constraint on the number of neutrino species.
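The unbinned maximum likelihood technique named in these records evaluates the likelihood event by event rather than on a histogram. A toy sketch of the idea using an exponential model in place of the MESA cooling models (all names, the seed, and the true scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=5000)   # event-level sample, never binned

def nll(tau, x):
    """Negative log-likelihood of f(x) = exp(-x/tau)/tau, summed over
    individual events -- no histogramming of the data."""
    return len(x) * np.log(tau) + np.sum(x) / tau

# scan the parameter on a fine grid and take the minimizer
taus = np.linspace(0.5, 5.0, 2001)
tau_hat = taus[np.argmin([nll(t, data) for t in taus])]
# for the exponential, the analytic unbinned MLE is the sample mean
```

Fitting without binning uses all the information in the individual measurements, which is why it is preferred when, as here, the sample is small or the model varies rapidly within any reasonable bin.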

  13. Constrained optimization of gradient waveforms for generalized diffusion encoding

    Science.gov (United States)

    Sjölund, Jens; Szczepankiewicz, Filip; Nilsson, Markus; Topgaard, Daniel; Westin, Carl-Fredrik; Knutsson, Hans

    2015-12-01

    Diffusion MRI is a useful probe of tissue microstructure. The conventional diffusion encoding sequence, the single pulsed field gradient, has recently been challenged as more general gradient waveforms have been introduced. Out of these, we focus on q-space trajectory imaging, which generalizes the scalar b-value to a tensor valued entity. To take full advantage of its capabilities, it is imperative to respect the constraints imposed by the hardware, while at the same time maximizing the diffusion encoding strength. We provide a tool that achieves this by solving a constrained optimization problem that accommodates constraints on maximum gradient amplitude, slew rate, coil heating and positioning of radio frequency pulses. The method's efficacy and flexibility is demonstrated both experimentally and by comparison with previous work on optimization of isotropic diffusion sequences.

  14. Constrained instanton and black hole creation

    Institute of Scientific and Technical Information of China (English)

    WU; Zhongchao; XU; Donghui

    2004-01-01

    A gravitational instanton is considered as the seed for the creation of a universe. However, there exist too few instantons. To include many interesting phenomena in the framework of quantum cosmology, the concept of constrained gravitational instanton is inevitable. In this paper we show how a primordial black hole is created from a constrained instanton. The quantum creation of a generic black hole in the closed or open background is completely resolved. The relation of the creation scenario with gravitational thermodynamics and topology is discussed.

  15. Weighted Constrained Egalitarianism in TU-Games

    OpenAIRE

    Koster, M.A.L.

    1999-01-01

    The constrained egalitarian solution of Dutta and Ray (1989) for TU-games is extended to asymmetric cases, using the notion of weight systems as in Kalai and Samet (1987, 1988). This weighted constrained egalitarian solution is based on the weighted Lorenz-criterion as an inequality measure. It is shown that in general there is at most one such weighted egalitarian solution for TU-games. Existence is proved for the class of convex games. Furthermore, the core of a positive valued convex game is...

  16. Constraining Initial Vacuum by CMB Data

    CERN Document Server

    Chandra, Debabrata

    2016-01-01

    We demonstrate how one can possibly constrain the initial vacuum using CMB data. Using a generic vacuum without any particular choice a priori, thereby keeping both the Bogolyubov coefficients in the analysis, we compute observable parameters from two- and three-point correlation functions. We are thus left with constraining four model parameters from the two complex Bogolyubov coefficients. We also demonstrate a method of finding out the constraint relations between the Bogolyubov coefficients using the theoretical normalization condition and observational data of power spectrum and bispectrum from CMB. We also discuss the possible pros and cons of the analysis.

  17. Murder and Self-constrained Modernity

    DEFF Research Database (Denmark)

    Hansen, Kim Toft

    Fracture”, 1999) deals with an unexplainable metaphysical horror. This short story lends a certain tragic sensibility to the narrative, which is no longer a stranger to crime fiction. Arne Dahl utilizes Aeschylus’ The Oresteia, as is the case for two episodes of the Danish TV-series Rejseholdet (Unit One, 2002)... In this paper I will approach an explanation from the point of view of what the Danish philosopher Hans Jørgen Schanz calls the self-constrained modernity: modernity has come to realize – he explicates – that it cannot provide complete explanations of reality and, thus, it becomes self-constrained. This...

  18. Ensemble and constrained clustering with applications

    OpenAIRE

    Abdala, D.D. (Daniel)

    2011-01-01

    This thesis presents new developments in ensemble and constrained clustering and makes the following main contributions: 1) a unification of constrained and ensemble clustering in a single framework; 2) a new method for measuring and visualizing the variability of ensembles; 3) a new random-walker-based procedure for ensemble clustering; 4) an application of ensemble clustering to image segmentation; 5) a new consensus function for ensemble cluste...

  19. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.

    Science.gov (United States)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835

  20. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.)

  1. Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    Energy Technology Data Exchange (ETDEWEB)

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.

  2. General Relativity as a constrained Gauge Theory

    OpenAIRE

    Cianci, R.; Vignolo, S.; Bruno, D

    2006-01-01

    The formulation of General Relativity presented in math-ph/0506077 and the Hamiltonian formulation of Gauge theories described in math-ph/0507001 are made to interact. The resulting scheme allows to see General Relativity as a constrained Gauge theory.

  3. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number of...

  4. INSTRUMENT CHOICE AND BUDGET-CONSTRAINED TARGETING

    OpenAIRE

    Horan, Richard D.; Claassen, Roger; Agapoff, Jean; Zhang, Wei

    2004-01-01

    We analyze how choosing to use a particular type of instrument for agri-environmental payments, when these payments are constrained by the regulatory authority's budget, implies an underlying targeting criterion with respect to costs, benefits, participation, and income, and the tradeoffs among these targeting criteria. The results provide insight into current policy debates.

  5. Neutron Powder Diffraction and Constrained Refinement

    DEFF Research Database (Denmark)

    Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.

    1977-01-01

    The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement...

  6. Nonlinear wave equations and constrained harmonic motion

    OpenAIRE

    Deift, Percy; Lund, Fernando; Trubowitz, Eugene

    1980-01-01

    The study of the Korteweg-deVries, nonlinear Schrödinger, Sine-Gordon, and Toda lattice equations is simply the study of constrained oscillators. This is likely to be true for any nonlinear wave equation associated with a second-order linear problem.

  7. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power itself are each plotted as a function of the time of day.
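
    The differentiation step described in the abstract can be sketched numerically. The single-diode panel model, the function names, and all parameter values below are illustrative stand-ins, not taken from the article; a fine grid search for the peak of P(V) stands in for solving dP/dV = 0 analytically.

```python
import math

# Hypothetical single-diode panel model: I(V) = I_sc - I_0*(exp(V/V_t) - 1).
# All parameter values are illustrative, not taken from the article.
I_SC, I_0, V_T = 5.0, 1e-8, 1.0  # short-circuit current (A), saturation current (A), voltage scale (V)

def current(v):
    return I_SC - I_0 * (math.exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

# Grid search for the peak of P(V) = V*I(V), numerically equivalent to
# locating the zero of dP/dV as the article describes.
v_mp, p_max = max(((v, power(v)) for v in (i * 0.001 for i in range(25001))),
                  key=lambda t: t[1])
i_mp = current(v_mp)
print(f"V_mp = {v_mp:.3f} V, I_mp = {i_mp:.3f} A, P_max = {p_max:.3f} W")
```

    Repeating this search for panel parameters measured at different times of day would reproduce the article's plots of maximum-power voltage, current, and power versus time.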

  8. Double-sided fuzzy chance-constrained linear fractional programming approach for water resources management

    Science.gov (United States)

    Cui, Liang; Li, Yongping; Huang, Guohe

    2016-06-01

    A double-sided fuzzy chance-constrained fractional programming (DFCFP) method is developed for planning water resources management under uncertainty. In DFCFP the system's marginal benefit per unit of input under uncertainty can also be balanced. The DFCFP is applied to a real case of water resources management in the Zhangweinan River Basin, China. The results show that the amounts of water allocated to the two cities (Anyang and Handan) would differ under the minimum and maximum reliability degrees. The marginal benefit of the system solved by DFCFP is larger than the system benefit under both the minimum and maximum reliability degrees, which not only improves overall economic efficiency but also remedies the water deficiency. Compared with the traditional double-sided fuzzy chance-constrained programming (DFCP) method, the solutions obtained from DFCFP are significantly higher, and the DFCFP has advantages in water conservation.

  9. Positivity-Preserving Finite Difference WENO Schemes with Constrained Transport for Ideal Magnetohydrodynamic Equations

    OpenAIRE

    Christlieb, Andrew J.; Liu, Yuan; Tang, Qi; Xu, Zhengfu

    2014-01-01

    In this paper, we utilize the maximum-principle-preserving flux limiting technique, originally designed for high order weighted essentially non-oscillatory (WENO) methods for scalar hyperbolic conservation laws, to develop a class of high order positivity-preserving finite difference WENO methods for the ideal magnetohydrodynamic (MHD) equations. Our schemes, under the constrained transport (CT) framework, can achieve high order accuracy, a discrete divergence-free condition and positivity of...

  10. Output power and efficiency of electromagnetic energy harvesting systems with constrained range of motion

    International Nuclear Information System (INIS)

    In some energy harvesting systems, the maximum displacement of the seismic mass is limited due to the physical constraints of the device. This is especially the case where energy is harvested from a vibration source with large oscillation amplitude (e.g., marine environment). For the design of inertial systems, the maximum permissible displacement of the mass is a limiting condition. In this paper the maximum output power and the corresponding efficiency of linear and rotational electromagnetic energy harvesting systems with a constrained range of motion are investigated. A unified form of output power and efficiency is presented to compare the performance of constrained linear and rotational systems. It is found that rotational energy harvesting systems have a greater capability in transferring energy to the load resistance than linear directly coupled systems, due to the presence of an extra design variable, namely the ball screw lead. Also, in this paper it is shown that for a defined environmental condition and a given proof mass with constrained throw, the amount of power delivered to the electrical load by a rotational system can be higher than the amount delivered by a linear system. The criterion that guarantees this favourable design has been obtained. (paper)

  11. THE POLITICAL ECONOMY OF FOOD STANDARD DETERMINATION: INTERNATIONAL EVIDENCE FROM MAXIMUM RESIDUE LIMITS

    OpenAIRE

    Li, Yuan; Xiong, Bo; Beghin, John C.

    2013-01-01

    Food safety standards have proliferated as multilateral and bilateral trade agreements constrain traditional barriers to agricultural trade. Stringent food standards can be driven by rising consumer and public concern about food safety and other social objectives, or by the lobbying efforts from domestic industries in agriculture. We investigate the economic and political determinants of the maximum residue limits (MRLs) on pesticides and veterinary drugs. Using a political economy framework ...

  12. Estimation of Maximum Wind Speeds in Tornadoes

    OpenAIRE

    Dergarabedian, Paul; Fendell, Francis

    2011-01-01

    A method is proposed for rapidly estimating the maximum value of the azimuthal velocity component (maximum swirling speed) in tornadoes and waterspouts. The method requires knowledge of the cloud-deck height and a photograph of the funnel cloud—data usually available. Calculations based on this data confirm that the lower maximum wind speeds suggested by recent workers (roughly one-quarter of the sonic speed for sea-level air) are more plausible for tornadoes than the sonic speed sometimes ci...

  13. Solving maximum cut problems by simulated annealing

    OpenAIRE

    Myklebust, Tor G. J.

    2015-01-01

    This paper gives a straightforward implementation of simulated annealing for solving maximum cut problems and compares its performance to that of some existing heuristic solvers. The formulation used is classical, dating to a 1989 paper of Johnson, Aragon, McGeoch, and Schevon. This implementation uses no structure peculiar to the maximum cut problem, but its low per-iteration cost allows it to find better solutions than were previously known for 40 of the 89 standard maximum cut instances te...
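
    The classical recipe the abstract refers to (random single-vertex flips accepted with a Metropolis probability under a geometric cooling schedule) can be sketched as follows. This toy solver illustrates the general technique only; it is not the paper's implementation, and all parameter values are illustrative.

```python
import math
import random

def simulated_annealing_maxcut(edges, n, t0=2.0, t_min=1e-3, alpha=0.995,
                               steps_per_t=100, seed=0):
    """Toy simulated-annealing max-cut solver (illustrative, not the paper's
    implementation). edges: list of (u, v, w) with vertices 0..n-1.
    Returns (best cut value seen, final side assignment)."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    adj = [[] for _ in range(n)]  # adjacency lists for O(deg) flip evaluation
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    cut = sum(w for u, v, w in edges if side[u] != side[v])
    best = cut
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            u = rng.randrange(n)
            # Flipping u gains same-side edge weight and loses cross-edge weight
            delta = sum(w if side[u] == side[v] else -w for v, w in adj[u])
            if delta >= 0 or rng.random() < math.exp(delta / t):
                side[u] ^= 1
                cut += delta
                best = max(best, cut)
        t *= alpha
    return best, side

# A 4-cycle with unit weights: the optimal cut alternates sides and has value 4.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
best, side = simulated_annealing_maxcut(edges, 4)
print(best)  # optimal cut for this tiny instance is 4
```

    The low per-iteration cost noted in the abstract comes from exactly this kind of O(deg) incremental evaluation of a single-vertex flip.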

  14. Cosmogenic photons strongly constrain UHECR source models

    CERN Document Server

    van Vliet, Arjen

    2016-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  15. Constraining dark energy interacting models with WMAP

    CERN Document Server

    Olivares, G; Pavón, D; Olivares, German; Atrio-Barandela, Fernando; Pavon, Diego

    2006-01-01

    We determine the range of parameter space of an interacting quintessence (IQ) model that best fits the luminosity distance of type Ia supernovae data and the recent WMAP measurements of Cosmic Microwave Background temperature anisotropies. Models in which quintessence decays into dark matter provide a clean explanation for the coincidence problem. We focus on cosmological models of zero spatial curvature. We show that if the dark energy (DE) decays into cold dark matter (CDM) at a rate that brings the ratio of matter to dark energy to a constant at late times, the supernovae data are not sufficient to constrain the interaction parameter. On the contrary, WMAP data constrain it to $c^2 < 10^{-2}$ at the $3\sigma$ level. Accurate measurements of the Hubble constant and the dark energy density, independent of the CMB data, would support or disprove this set of models.

  16. Hyperbolicity and Constrained Evolution in Linearized Gravity

    CERN Document Server

    Matzner, R A

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly "more correct" to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic-hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with th...

  17. Constraining the braneworld with gravitational wave observations.

    Science.gov (United States)

    McWilliams, Sean T

    2010-04-01

    Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at approximately the 1 μm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 μm. PMID:20481929

  18. Doubly Constrained Robust Blind Beamforming Algorithm

    Directory of Open Access Journals (Sweden)

    Xin Song

    2013-01-01

    We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and the Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields better signal capture performance, achieves higher array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.

  19. Efficient caching for constrained skyline queries

    DEFF Research Database (Denmark)

    Mortensen, Michael Lind; Chester, Sean; Assent, Ira; Magnani, Matteo

    Constrained skyline queries retrieve all points that optimize some user’s preferences subject to orthogonal range constraints, but at significant computational cost. This paper is the first to propose caching to improve constrained skyline query response time. Because arbitrary range constraints are unlikely to match a cached query exactly, our proposed method identifies and exploits similar cached queries to reduce the computational overhead of subsequent ones. We consider interactive users posing a string of similar queries and show how these can be classified into four cases based on how they overlap cached queries. For each we present a specialized solution. For the general case of independent users, we introduce the Missing Points Region (MPR), that minimizes disk reads, and an approximation of the MPR. An extensive experimental evaluation reveals that the querying for an...

  20. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.

  1. Maximum mass, moment of inertia and compactness of relativistic stars

    Science.gov (United States)

    Breu, Cosima; Rezzolla, Luciano

    2016-06-01

    A number of recent works have highlighted that it is possible to express the properties of general-relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities. These functions are normally referred to as 'universal relations' and have been found to apply, within limits, both to static or stationary isolated stars and to fully dynamical and merging binary systems. Further extending the idea that universal relations can be valid also away from stability, we show that a universal relation is exhibited also by equilibrium solutions that are not stable. In particular, the mass of rotating configurations on the turning-point line shows a universal behaviour when expressed in terms of the normalized Keplerian angular momentum. In turn, this allows us to compute the maximum mass allowed by uniform rotation, M_max, simply in terms of the maximum mass of the non-rotating configuration, M_TOV, finding that M_max ≃ (1.203 ± 0.022) M_TOV for all the equations of state we have considered. We further introduce an improvement to previously published universal relations by Lattimer & Schutz between the dimensionless moment of inertia and the stellar compactness, which could provide an accurate tool to constrain the equation of state of nuclear matter when measurements of the moment of inertia become available.
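
    The quoted universal relation lends itself to a one-line estimate. The helper below simply applies M_max ≃ (1.203 ± 0.022) M_TOV from the abstract; the function name and the example M_TOV value are hypothetical.

```python
def max_rotating_mass(m_tov, coeff=1.203, err=0.022):
    """Apply the universal relation M_max ≈ (1.203 ± 0.022) M_TOV quoted in the
    abstract; returns (central, lower, upper) in the same units as m_tov."""
    return coeff * m_tov, (coeff - err) * m_tov, (coeff + err) * m_tov

# Hypothetical equation of state with a non-rotating maximum mass of 2.2 M_sun
central, lo, hi = max_rotating_mass(2.2)
print(f"M_max ≈ {central:.3f} M_sun (1-sigma band {lo:.3f}–{hi:.3f})")
```

    The point of the relation is exactly this independence from the equation of state: only M_TOV is needed to bound the uniformly rotating maximum mass.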

  2. The consequence of maximum thermodynamic efficiency in Daisyworld.

    Science.gov (United States)

    Pujol, Toni

    2002-07-01

    The imaginary planet of Daisyworld is the simplest model used to illustrate the implications of the Gaia hypothesis. The dynamics of daisies and their radiative interaction with the environment are described by fundamental equations of population ecology theory and physics. The parameterization of the turbulent energy flux between areas of different biological cover is similar to the diffusive-type approximation used in simple climate models. Here I show that the small variation of the planetary diffusivity adopted in the classical version of Daisyworld limits the range of values for the solar insolation for which biota may grow on the planet. Recent studies suggest that heat transport in a turbulent medium is constrained to maximize its efficiency. This condition is almost equivalent to maximizing the rate of entropy production due to non-radiative sources. Here, I apply the maximum entropy principle (MEP) to Daisyworld. I conclude that the MEP sets the maximum range of values for the solar insolation with a non-zero amount of daisies. Outside this range, daisies cannot grow on the planet for any physically realistic climate distribution. Inside this range, I assume a distribution of daisies in agreement with the MEP. The results substantially enlarge the range of climate stability, due to the biota, in comparison to the classical version of Daisyworld. A very stable temperature is found when two different species grow on the planet. PMID:12183130

  3. Capacity constrained assignment in spatial databases

    DEFF Research Database (Denmark)

    U, Leong Hou; Yiu, Man Lung; Mouratidis, Kyriakos;

    2008-01-01

    Given a point set P of customers (e.g., WiFi receivers) and a point set Q of service providers (e.g., wireless access points), where each q ∈ Q has a capacity q.k, the capacity constrained assignment (CCA) is a matching M ⊆ Q × P such that (i) each point q ∈ Q (p ∈ P) appears at most k times (at most...
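
    A minimal greedy heuristic conveys the flavour of capacity-constrained assignment, though it is not the paper's CCA algorithm and ignores the optimization criteria the paper studies. All names and the toy data below are illustrative.

```python
import math

def greedy_cca(providers, customers):
    """Simplified greedy capacity-constrained assignment (illustrative only,
    not the paper's CCA algorithm). providers: list of (x, y, capacity);
    customers: list of (x, y). Each customer, taken in input order, is
    assigned to the nearest provider with remaining capacity."""
    remaining = [cap for _, _, cap in providers]
    assignment = []
    for cx, cy in customers:
        best_j, best_d = None, math.inf
        for j, (px, py, _) in enumerate(providers):
            d = math.hypot(px - cx, py - cy)
            if remaining[j] > 0 and d < best_d:
                best_j, best_d = j, d
        if best_j is None:
            assignment.append(None)  # every provider is at capacity
        else:
            remaining[best_j] -= 1
            assignment.append(best_j)
    return assignment

# Two access points with capacity 1 each; three receivers on a line
providers = [(0.0, 0.0, 1), (10.0, 0.0, 1)]
customers = [(1.0, 0.0), (2.0, 0.0), (9.0, 0.0)]
result = greedy_cca(providers, customers)
print(result)  # → [0, 1, None]
```

    The third receiver is left unassigned because both access points are full, which is precisely the situation the capacity constraint q.k formalizes.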

  4. Resource allocation for delay constrained wireless communications

    OpenAIRE

    Chen, J.

    2010-01-01

    The ultimate goal of future generation wireless communications is to provide ubiquitous seamless connections between mobile terminals such as mobile phones and computers so that users can enjoy high-quality services at anytime anywhere without wires. The feature to provide a wide range of delay constrained applications with diverse quality of service (QoS) requirements, such as delay and data rate requirements, will require QoS-driven wireless resource allocation mechanisms to efficiently ...

  5. Constrained optimization in expensive simulation: novel approach.

    OpenAIRE

    Jack P. C. Kleijnen; van Beers, Wim; VAN NIEUWENHUYSE, Inneke

    2010-01-01

    This article presents a novel heuristic for constrained optimization of computationally expensive random simulation models. One output is selected as the objective to be minimized, while the other outputs must satisfy given threshold values. Moreover, the simulation inputs must be integer and satisfy linear or nonlinear constraints. The heuristic combines (i) sequentialized experimental designs to specify the simulation input combinations, (ii) Kriging (or Gaussian process or spatial correlation model...

  6. Constrained optimization in simulation: a novel approach.

    OpenAIRE

    Jack P. C. Kleijnen; van Beers, W.C.M.; van Nieuwenhuyse, I.

    2008-01-01

    This paper presents a novel heuristic for constrained optimization of random computer simulation models, in which one of the simulation outputs is selected as the objective to be minimized while the other outputs need to satisfy prespecified target values. Besides the simulation outputs, the simulation inputs must meet prespecified constraints including the constraint that the inputs be integer. The proposed heuristic combines (i) experimental design to specify the simulation input combinations...

  7. Performance Characteristics of Active Constrained Layer Damping

    OpenAIRE

    A. Baz; J. Ro

    1995-01-01

    Theoretical and experimental performance characteristics of the new class of actively controlled constrained layer damping (ACLD) are presented. The ACLD consists of a viscoelastic damping layer sandwiched between two layers of piezoelectric sensor and actuator. The composite ACLD when bonded to a vibrating structure acts as a “smart” treatment whose shear deformation can be controlled and tuned to the structural response in order to enhance the energy dissipation mechanism and improve the vi...

  8. NEW SIMULATED ANNEALING ALGORITHMS FOR CONSTRAINED OPTIMIZATION

    OpenAIRE

    LINET ÖZDAMAR; CHANDRA SEKHAR PEDAMALLU

    2010-01-01

    We propose a Population based dual-sequence Non-Penalty Annealing algorithm (PNPA) for solving the general nonlinear constrained optimization problem. The PNPA maintains a population of solutions that are intermixed by crossover to supply a new starting solution for simulated annealing throughout the search. Every time the search gets stuck at a local optimum, this crossover procedure is triggered and simulated annealing search re-starts from a new subspace. In both the crossover and simulate...

  9. NTRU software implementation for constrained devices

    OpenAIRE

    Monteverde Giacomino, Mariano

    2008-01-01

    The NTRUEncrypt is a public-key cryptosystem based on the shortest vector problem. Its main characteristics are the low memory and computational requirements while providing a high security level. This document presents an implementation and optimization of the NTRU public-key cryptosystem for constrained devices. Specifically, the NTRU cryptosystem has been implemented on the ATMega128 and the ATMega163 microcontrollers. This has turned into a major effort in order to reduce t...

  10. Modelling time-constrained software development

    OpenAIRE

    Powell, A.

    2004-01-01

    Commercial pressures on time-to-market often require the development of software in situations where deadlines are very tight and non-negotiable. This type of development can be termed ‘time-constrained software development.’ The need to compress development timescales influences both the software process and the way it is managed. Conventional approaches to modelling tend to treat the development process as being linear, sequential and static. Whereas, the processes used to achieve timescale...

  11. Cosmicflows Constrained Local UniversE Simulations

    Science.gov (United States)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h⁻¹ Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s⁻¹, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h⁻¹ Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h⁻¹ Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.

  12. Constrained simulation of the Bullet Cluster

    International Nuclear Information System (INIS)

    In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full two-dimensional (2D) observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with Navarro-Frenk-White dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocities. The study gives insight into the astrophysical processes at play during a galaxy cluster merger, and constrains the strength and coherence length of the magnetic fields. The techniques developed here to create realistic, stable, triaxial clusters, and to utilize the totality of the 2D image data, will be applicable to future simulation studies of other merging clusters. This approach of constrained simulation, when applied to well-measured systems, should be a powerful complement to present tools for understanding X-ray clusters and their magnetic fields, and the processes governing their formation.

  13. Hybrid evolutionary programming for heavily constrained problems.

    Science.gov (United States)

    Myung, H; Kim, J H

    1996-01-01

    A hybrid of evolutionary programming (EP) and a deterministic optimization procedure is applied to a series of non-linear and quadratic optimization problems. The hybrid scheme is compared with other existing schemes such as EP alone, two-phase (TP) optimization, and EP with a non-stationary penalty function (NS-EP). The results indicate that the hybrid method can outperform the other methods when addressing heavily constrained optimization problems in terms of computational efficiency and solution accuracy. PMID:8833746

  14. Optimal auctions with financially constrained bidders

    OpenAIRE

    Pai, Mallesh; Rakesh V. Vohra

    2008-01-01

    We consider an environment where potential buyers of an indivisible good have liquidity constraints, in that they cannot pay more than their 'budget' regardless of their valuation. A buyer's valuation for the good as well as her budget are her private information. We derive constrained-efficient and revenue maximizing auctions for this setting. In general, the optimal auction requires 'pooling' both at the top and in the middle despite the maintained assumption of a monotone hazard rate. ...

  15. Constrained efficient locations under delivered pricing

    OpenAIRE

    Pires, Cesaltina

    2005-01-01

    In this article, we extend previous results on competitive delivered pricing by considering the second-best problem in which the social planner can regulate firms' locations but not their pricing. Assuming constant marginal costs, we show that the constrained socially optimal locations are an equilibrium of the location-price game when: (i) demand is perfectly inelastic and (ii) demand is price sensitive but firms practice first-degree price discrimination. However, with elastic demand ...

  16. Pricing behaviour at capacity constrained facilities

    OpenAIRE

    Huric Larsen, Jesper Fredborg

    2012-01-01

    Entry of new firms can be difficult or even impossible at capacity constrained facilities, even though the actual cost of entering is low. Using a game theoretic model of incumbent firms' pricing behaviour under these conditions, it is found that under the assumption of Bertrand competition and firms having different costs, the optimal pricing behaviour implies price stickiness and upward pricing. The findings further suggest that incumbents behave competitively, disposing of weaker opponents only ...

  17. Classical Dynamics as Constrained Quantum Dynamics

    OpenAIRE

    Bartlett, Stephen D.; Rowe, David J.

    2002-01-01

    We show that the classical mechanics of an algebraic model are implied by its quantizations. An algebraic model is defined, and the corresponding classical and quantum realizations are given in terms of a spectrum generating algebra. Classical equations of motion are then obtained by constraining the quantal dynamics of an algebraic model to an appropriate coherent state manifold. For the cases where the coherent state manifold is not symplectic, it is shown that there exist natural projectio...

  18. Capturing Hotspots For Constrained Indoor Movement

    OpenAIRE

    Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua

    2013-01-01

    Finding the hotspots in large indoor spaces is very important for identifying overloaded locations and for security, crowd management, indoor navigation and guidance. The tracking data produced by indoor tracking are huge in volume and not readily usable for finding hotspots. This paper presents a graph-based model for constrained indoor movement that can map the tracking records into mapping records representing the entry and exit times of an object at a particular location. Then it discusses the...

  19. Constraining RRc candidates using SDSS colours

    CERN Document Server

    Bányai, E; Molnár, L; Dobos, L; Szabó, R

    2016-01-01

    The light variations of first-overtone RR Lyrae stars and contact eclipsing binaries can be difficult to distinguish. The Catalina Periodic Variable Star catalog contains several misclassified objects, despite the classification efforts by Drake et al. (2014). They used metallicity and surface gravity derived from spectroscopic data (from the SDSS database) to rule out binaries. Our aim is to further constrain the catalog using SDSS colours to estimate physical parameters for stars that did not have spectroscopic data.

  20. Cosmicflows Constrained Local UniversE Simulations

    CERN Document Server

    Sorce, Jenny G; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M; Steinmetz, Matthias; Tully, R Brent; Pomarede, Daniel; Carlesi, Edoardo

    2015-01-01

    This paper combines observational datasets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighborhood. With measurements of radial peculiar velocities in the Local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 Mpc/h scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observatio...

  1. 13 CFR 130.440 - Maximum grant.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum grant. 130.440 Section 130.440 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS DEVELOPMENT CENTERS § 130.440 Maximum grant. No recipient shall receive an SBDC grant exceeding the greater of the minimum statutory amount, or its pro rata share of...

  2. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...... predictions. The results imply that physical processes control maximum particle concentrations in planktonic systems....

  3. An axiomatic characterization of the strong constrained egalitarian solution

    Science.gov (United States)

    Llerena, Francesc; Vilella, Cori

    2012-09-01

    In this paper we axiomatize the strong constrained egalitarian solution (Dutta and Ray, 1991) over the class of weak superadditive games using constrained egalitarianism, order-consistency, and converse order-consistency.

  4. Deriving N-soliton solutions via constrained flows

    OpenAIRE

    Zeng, Yunbo

    2000-01-01

    The soliton equations can be factorized by two commuting x- and t-constrained flows. We propose a method to derive N-soliton solutions of soliton equations directly from the x- and t-constrained flows.

  5. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
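
    The alternating-projection idea at the heart of PAPA can be illustrated on a toy problem. The sketch below is not the paper's preconditioned algorithm: it is a plain projections-onto-convex-sets (POCS) iteration that finds a point in the intersection of two simple convex sets (a disk and a half-plane), where both proximity operators reduce to exact projections.

```python
import math

def proj_disk(x, y, r=1.0):
    """Project (x, y) onto the closed disk of radius r centered at the origin."""
    d = math.hypot(x, y)
    if d <= r:
        return x, y
    return r * x / d, r * y / d

def proj_halfplane(x, y):
    """Project (x, y) onto the half-plane {(x, y) : x + y >= 1}."""
    s = x + y - 1.0
    if s >= 0.0:
        return x, y
    # Move back along the (unnormalized) normal direction (1, 1).
    return x - s / 2.0, y - s / 2.0

def alternating_projections(x, y, iters=200):
    """Alternate the two projections; converges to a point in the intersection."""
    for _ in range(iters):
        x, y = proj_disk(x, y)
        x, y = proj_halfplane(x, y)
    return x, y

x, y = alternating_projections(2.0, 2.0)
```

    The fixed point satisfies both constraints simultaneously, mirroring how the paper's fixed-point equations couple the two proximity operators.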

  6. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes....... Numerical results for the capacities are presented....

  7. An axiomatic characterization of the strong constrained egalitarian solution

    OpenAIRE

    Llerena Garrés, Francesc; Vilella Bach, Misericòrdia

    2012-01-01

    In this paper we axiomatize the strong constrained egalitarian solution (Dutta and Ray, 1991) over the class of weak superadditive games using constrained egalitarianism, order-consistency, and converse order-consistency. JEL classification: C71, C78. Keywords: Cooperative TU-game, strong constrained egalitarian solution, axiomatization.

  8. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint polymer constrained prosthesis. 888... constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device intended... generic type of device includes prostheses that consist of a single flexible across-the-joint...

  9. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint polymer constrained prosthesis. 888.3780 Section 888.3780 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  10. The Maximum Likelihood Threshold of a Graph

    OpenAIRE

    Gross, Elizabeth; Sullivant, Seth

    2014-01-01

    The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $n-1$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...

  11. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  12. Incomplete Dirac reduction of constrained Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Chandre, C., E-mail: chandre@cpt.univ-mrs.fr

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.

  13. Estimation in chance-constrained problem

    Czech Academy of Sciences Publication Activity Database

    Houda, Michal

    Hradec Králové : Gaudeamus, 2005 - (Skalská, H.), s. 134-139 ISBN 978-80-7041-535-1. [Mathematical Methods in Economics 2005 /23./. Hradec Králové (CZ), 14.09.2005-16.09.2005] R&D Projects: GA ČR GD402/03/H057; GA ČR GA402/04/1294; GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance-constrained problem * estimation * economic applications Subject RIV: BB - Applied Statistics, Operational Research

  14. Utility Constrained Energy Minimization In Aloha Networks

    CERN Document Server

    Khodaian, Amir Mahdi; Talebi, Mohammad S

    2010-01-01

    In this paper we consider the issue of energy efficiency in random access networks and show that optimizing transmission probabilities of nodes can enhance network performance in terms of energy consumption and fairness. First, we propose a heuristic power control method that improves throughput, and then we model the Utility Constrained Energy Minimization (UCEM) problem in which the utility constraint takes into account single and multi node performance. UCEM is modeled as a convex optimization problem and Sequential Quadratic Programming (SQP) is used to find optimal transmission probabilities. Numerical results show that our method can achieve fairness, reduce energy consumption and enhance lifetime of such networks.
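
    A heavily simplified version of the UCEM trade-off can be worked out by hand for a symmetric slotted-ALOHA network. Assuming energy use grows with the transmission probability p, and noting that per-node throughput p(1-p)^(n-1) is increasing on (0, 1/n], the energy-minimal feasible p is simply the smallest one meeting the throughput constraint. The paper's actual formulation is a general convex program solved with SQP; the bisection below is only an illustrative special case, and n and s_req are made-up values.

```python
def throughput(p, n):
    """Per-node success probability in slotted ALOHA with n symmetric nodes."""
    return p * (1.0 - p) ** (n - 1)

def min_energy_probability(n, s_req, tol=1e-10):
    """Smallest transmission probability meeting the throughput constraint.

    throughput(p, n) is increasing on (0, 1/n], so the energy-minimal
    feasible p is found by bisection on that interval.
    """
    lo, hi = 0.0, 1.0 / n
    if throughput(hi, n) < s_req:
        raise ValueError("throughput constraint infeasible for this n")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if throughput(mid, n) < s_req:
            lo = mid
        else:
            hi = mid
    return hi

p_opt = min_energy_probability(n=10, s_req=0.02)
```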

  15. How peer-review constrains cognition

    DEFF Research Database (Denmark)

    Cowley, Stephen

    2015-01-01

    ‘cognition’ describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as ‘symbolizations’, replicable patterns that use technologically enabled activity. On this bio-cognitive...... came to be re-aggregated: agonistic review drove reformatting of argument structure, changes in rhetorical ploys and careful choice of wordings. For this reason, the paper’s knowledge-claims can be traced to human activity that occurs in distributed cognitive systems. Peer-review is on the frontline in...

  16. Constraining Milky Way mass with Hypervelocity Stars

    CERN Document Server

    Fragione, Giacomo

    2016-01-01

    We show that hypervelocity stars (HVSs) ejected from the center of the Milky Way galaxy can be used to constrain the mass of its halo. The asymmetry in the radial velocity distribution of halo stars due to escaping HVSs depends on the halo potential (escape speed) as long as the round trip orbital time is shorter than the stellar lifetime. Adopting a characteristic HVS travel time of $300$ Myr, which corresponds to the average mass of main sequence HVSs ($3.2$ M$_{\\odot}$), we find that current data favors a mass for the Milky Way in the range $(1.2$-$1.7)\\times 10^{12} \\mathrm{M}_\\odot$.

  17. On Types of Observables in Constrained Theories

    CERN Document Server

    Anderson, Edward

    2016-01-01

    The Kuchar observables notion is shown to apply only to a limited range of theories. Relational mechanics, slightly inhomogeneous cosmology and supergravity are used as examples that require further notions of observables. A suitably general notion of A-observables is then given to cover all of these cases. `A' here stands for `algebraic substructure'; A-observables can be defined by association with each closed algebraic substructure of a theory's constraints. Both constrained algebraic structures and associated notions of A-observables form bounded lattices.

  18. Constrained control problems of discrete processes

    CERN Document Server

    Phat, Vu Ngoc

    1996-01-01

    The book gives a novel treatment of recent advances on constrained control problems with emphasis on the controllability, reachability of dynamical discrete-time systems. The new proposed approach provides the right setting for the study of qualitative properties of general types of dynamical systems in both discrete-time and continuous-time systems with possible applications to some control engineering models. Most of the material appears for the first time in a book form. The book is addressed to advanced students, postgraduate students and researchers interested in control system theory and

  19. ADAPTIVE SUBOPTIMAL CONTROL OF INPUT CONSTRAINED PLANTS

    Directory of Open Access Journals (Sweden)

    Valerii Azarskov

    2011-03-01

    This paper deals with adaptive regulation of a discrete-time linear time-invariant plant with arbitrary bounded disturbances whose control input is constrained to lie within certain limits. The adaptive control algorithm exploits the one-step-ahead control strategy and a gradient projection type estimation procedure using a modified dead zone. The convergence property of the estimation algorithm is shown to be ensured. Sufficient conditions guaranteeing the global asymptotic stability and simultaneously the suboptimality of the closed-loop system are derived. Numerical examples and simulations are presented to support the theoretical results.

  20. Incomplete Dirac reduction of constrained Hamiltonian systems

    International Nuclear Information System (INIS)

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified

  1. Can Neutron stars constrain Dark Matter?

    DEFF Research Database (Denmark)

    Kouvaris, Christoforos; Tinyakov, Peter

    2010-01-01

    We argue that observations of old neutron stars can impose constraints on dark matter candidates even with very small elastic or inelastic cross section, and self-annihilation cross section. We find that old neutron stars close to the galactic center or in globular clusters can maintain a surface...... temperature that could in principle be detected. Due to their compactness, neutron stars can acrete WIMPs efficiently even if the WIMP-to-nucleon cross section obeys the current limits from direct dark matter searches, and therefore they could constrain a wide range of dark matter candidates....

  2. 3D constrained inversion of geophysical and geological information applying Spatial Mutually Constrained Inversion.

    Science.gov (United States)

    Nielsen, O. F.; Ploug, C.; Mendoza, J. A.; Martínez, K.

    2009-05-01

    The need for increased accuracy and reduced ambiguity in inversion results has focused attention on the development of more advanced inversion methods for geophysical data. Over the past few years more advanced inversion techniques have been developed to improve the results. Real 3D inversion is time consuming and therefore often not the best solution from a cost-efficiency perspective. This has motivated the development of 3D constrained inversions, where 1D models are constrained in 3D, also known as Spatially Constrained Inversion (SCI). Moreover, inversion of several different data types in one inversion has been developed, known as Mutually Constrained Inversion (MCI). In this paper a presentation of a Spatial Mutually Constrained Inversion (SMCI) method is given. This method allows 1D inversion applied to different geophysical datasets and geological information constrained in 3D. Application of two or more types of geophysical methods in the inversion has proved to reduce the equivalence problem and to increase the resolution of the inversion results. Geological information from borehole data or digital geological models can be integrated into the inversion. In the SMCI, a 1D inversion code is used to model soundings that are constrained in three dimensions according to their relative position in space. This solution enhances the accuracy of the inversion and produces distinct layer thicknesses and resistivities. It is very efficient in the mapping of a layered geology, but still capable of mapping layer discontinuities that are, in many cases, related to fracturing and faulting or due to valley fills. Geological information may be included in the inversion directly or used only to form a starting model for the individual soundings. In order to show the effectiveness of the method, examples are presented from both synthetic and real data. The examples include DC soundings as well as land-based and airborne TEM data.

  3. Lepton Flavour Violation in the Constrained MSSM with Constrained Sequential Dominance

    CERN Document Server

    Antusch, Stefan

    2008-01-01

    We consider charged Lepton Flavour Violation (LFV) in the Constrained Minimal Supersymmetric Standard Model, extended to include the see-saw mechanism with Constrained Sequential Dominance (CSD), where CSD provides a natural see-saw explanation of tri-bimaximal neutrino mixing. When charged lepton corrections to tri-bimaximal neutrino mixing are included, we discover characteristic correlations among the LFV branching ratios, depending on the mass ordering of the right-handed neutrinos, with a pronounced dependence on the leptonic mixing angle $\\theta_{13}$ (and in some cases also on the Dirac CP phase $\\delta$).

  4. The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms

    Institute of Scientific and Technical Information of China (English)

    HE Zhong-qiu; LI Dao-ben

    2003-01-01

    This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1~3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are deduced subject to a new constraint condition. They are both implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms, and simulations illustrate the resulting performance comparisons.

  5. Remarks on the maximum correlation coefficient

    OpenAIRE

    Dembo, Amir; Kagan, Abram; Shepp, Lawrence A.

    2001-01-01

    The maximum correlation coefficient between partial sums of independent and identically distributed random variables with finite second moment equals the classical (Pearson) correlation coefficient between the sums, and thus does not depend on the distribution of the random variables. This result is proved, and relations between the linearity of regression of each of two random variables on the other and the maximum correlation coefficient are discussed.
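
    The distribution-free claim is easy to check numerically: for i.i.d. variables with finite variance, Cov(S_m, S_n) = m·sigma^2, so corr(S_m, S_n) = sqrt(m/n) regardless of the underlying distribution. A quick Monte Carlo sketch (sample sizes and distributions chosen arbitrarily for illustration):

```python
import math
import random

def corr_partial_sums(draw, m, n, trials=20000, seed=1):
    """Sample Pearson correlation between S_m and S_n (m < n) for i.i.d. draws."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(trials):
        sample = [draw(rng) for _ in range(n)]
        xs.append(sum(sample[:m]))   # S_m
        ys.append(sum(sample))       # S_n
    mx = sum(xs) / trials
    my = sum(ys) / trials
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / trials
    vx = sum((a - mx) ** 2 for a in xs) / trials
    vy = sum((b - my) ** 2 for b in ys) / trials
    return cov / math.sqrt(vx * vy)

# corr(S_3, S_12) should be close to sqrt(3/12) = 0.5 for either distribution.
r_uniform = corr_partial_sums(lambda rng: rng.random(), 3, 12)
r_expo = corr_partial_sums(lambda rng: rng.expovariate(1.0), 3, 12)
```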

  6. The maximum entropy technique. System's statistical description

    CERN Document Server

    Belashev, B Z

    2002-01-01

    The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into account the maximum entropy requirement, the characteristics of the system and the connection conditions. This makes MENT applicable to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and non-equilibrium states, as well as states far from thermodynamic equilibrium.
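
    As a minimal concrete instance of the technique, the maximum-entropy distribution on a finite support subject to a fixed mean is the exponential family p_i proportional to exp(lam * x_i). The sketch below (the support and target mean are arbitrary choices, not values from this record) finds the Lagrange multiplier by bisection, using the fact that the resulting mean is increasing in lam:

```python
import math

def maxent_with_mean(support, mu, lam_lo=-50.0, lam_hi=50.0, tol=1e-12):
    """Maximum-entropy distribution on a finite support with a fixed mean.

    The solution has the form p_i ~ exp(lam * x_i); lam is found by
    bisection because the constrained mean is monotone increasing in lam.
    """
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = lam_lo, lam_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < mu:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# Distribution on {0, 1, 2} with mean constrained to 1.2 (uniform would give 1.0).
p = maxent_with_mean([0, 1, 2], 1.2)
```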

  7. Probalistic logic programming under maximum entropy

    OpenAIRE

    Lukasiewicz, Thomas; Kern-Isberner, Gabriele

    1999-01-01

    In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming characterization for the problem of deciding whether a probabilistic logic program is satisfiable. Finally, and as a main result of this paper, we introduce an efficient technique for approximative p...

  8. Maximum confidence measurements via probabilistic quantum cloning

    International Nuclear Information System (INIS)

    Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented

  9. Linear inverse problems the maximum entropy connection

    CERN Document Server

    Gzyl, Henryk

    2011-01-01

    This book describes a useful tool for solving linear inverse problems subject to convex constraints. The method of maximum entropy in the mean automatically takes care of the constraints. It consists of a technique for transforming a large dimensional inverse problem into a small dimensional non-linear variational problem. A variety of mathematical aspects of the maximum entropy method are explored as well. Supplementary materials are not included with eBook edition (CD-ROM)

  10. Simulated Maximum Likelihood using Tilted Importance Sampling

    OpenAIRE

    Christian N. Brinch

    2008-01-01

    This paper develops the important distinction between tilted and simple importance sampling as methods for simulating likelihood functions for use in simulated maximum likelihood. It is shown that tilted importance sampling removes a lower bound to simulation error for given importance sample size that is inherent in simulated maximum likelihood using simple importance sampling, the main method for simulating likelihood functions in the statistics literature. In addit...

  11. Which quantile is the most informative? Maximum likelihood, maximum entropy and quantile regression

    OpenAIRE

    Bera, A. K.; Galvao Jr, A. F.; Montes-Rojas, G.; Park, S. Y.

    2010-01-01

    This paper studies the connections among quantile regression, the asymmetric Laplace distribution, maximum likelihood and maximum entropy. We show that the maximum likelihood problem is equivalent to the solution of a maximum entropy problem where we impose moment constraints given by the joint consideration of the mean and median. Using the resulting score functions we propose an estimator based on the joint estimating equations. This approach delivers estimates for the slope parameters toge...
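
    The link between the asymmetric Laplace likelihood and quantile regression runs through the check (pinball) loss: maximizing the asymmetric Laplace likelihood is equivalent to minimizing the pinball loss, whose minimizer is the tau-quantile. A self-contained sketch on toy data (not from the paper):

```python
def pinball_loss(q, ys, tau):
    """Check (pinball) loss: the asymmetric Laplace negative log-likelihood,
    up to additive and multiplicative constants."""
    return sum(tau * (y - q) if y >= q else (1.0 - tau) * (q - y) for y in ys)

def sample_quantile(ys, tau):
    """The tau-quantile minimizes the pinball loss; for a finite sample a
    minimizer can always be found among the data points themselves."""
    return min(ys, key=lambda q: pinball_loss(q, ys, tau))

data = [1, 2, 3, 4, 100]
med = sample_quantile(data, 0.5)  # tau = 0.5 recovers the sample median
```

    Note that the median is insensitive to the outlier 100, which is exactly the robustness property quantile regression inherits from this loss.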

  12. Constraining the braking indices of magnetars

    Science.gov (United States)

    Gao, Z. F.; Li, X.-D.; Wang, N.; Yuan, J. P.; Wang, P.; Peng, Q. H.; Du, Y. J.

    2016-02-01

    Because of the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index n of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of the mean braking indices of eight magnetars with SNRs, and find that they cluster in the range of 1-42. Five magnetars have smaller mean braking indices of n < 3, consistent with wind-aided braking. The larger mean braking indices of n > 3 for the other three magnetars are attributed to the decay of the external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with n < 3 within the updated magneto-thermal evolution models. Although the constrained range of the magnetars' braking indices is tentative, as a result of the uncertainties in the SNR ages due to distance uncertainties and the unknown conditions of the expanding shells, our method provides an effective way to constrain the magnetars' braking indices if the measurements of the SNR ages are reliable, which can be improved by future observations.
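
    The age-based estimate described above can be sketched in a few lines. Assuming a constant braking index and a birth period much shorter than the current one, the spin-down age is t = P / ((n - 1) * Pdot), which inverts to n = 1 + P / (t * Pdot). The numbers below are hypothetical magnetar-like values, not figures from the paper:

```python
SECONDS_PER_YEAR = 3.156e7  # close enough for an order-of-magnitude estimate

def mean_braking_index(period_s, pdot, snr_age_yr):
    """Mean braking index inferred from an SNR age estimate.

    Assumes constant n and a birth period much shorter than the current
    one, so t = P / ((n - 1) * Pdot), hence n = 1 + P / (t * Pdot).
    """
    t = snr_age_yr * SECONDS_PER_YEAR
    return 1.0 + period_s / (t * pdot)

# Hypothetical inputs: P = 8 s, Pdot = 2e-11 s/s, SNR age = 5 kyr.
n = mean_braking_index(8.0, 2e-11, 5.0e3)
```

    The dominant uncertainty is the SNR age itself, which is why the paper stresses distance and shell-expansion uncertainties.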

  13. Constraining the Braking Indices of Magnetars

    CERN Document Server

    Gao, Z F; Wang, N; Yuan, J P; Peng, Q H; Du, Y J

    2015-01-01

    Due to the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index $n$ of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of $n$ of nine magnetars with SNRs, and find that they cluster in a range of $1\sim41$. Six magnetars have smaller braking indices of $n<3$, consistent with wind-aided braking, while the larger braking indices of $n>3$ for the other three magnetars are attributed to the decay of external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with $n<3$ within the updated magneto-thermal evolution models. We point out that there could be some connections between a magnetar's anti-glitch event and its braking index, and the magnitude of $n$ should be taken into account when explaining the event. Although the constrained range of the magnetars' braking indices is tentative, our method provides an effective way to constrain the magnetars' braking indices if th...

  14. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    The concept of pole placement plays an important role in linear, multi-variable, control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical and engineering constraints such as gain limitation and controller structure to be introduced right into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function which must be closed-loop minimized. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated to design controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure on overall performance of a control system

  15. Constraining dark matter through 21-cm observations

    Science.gov (United States)

    Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.

    2007-05-01

    Beyond the reionization epoch, cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and future facilities such as the Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.

  16. Constraining the braneworld with gravitational wave observations

    CERN Document Server

    McWilliams, Sean T

    2009-01-01

    Braneworld models containing large extra dimensions may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model, the asymptotic AdS radius of curvature of the extra dimension supports a single bound state of the massless graviton on the brane, thereby avoiding gross violations of Newton's law. However, one possible consequence of this model is an enormous increase in the amount of Hawking radiation emitted by black holes. This consequence has been employed by other authors to attempt to constrain the AdS radius of curvature through the observation of black holes. I present two novel methods for constraining the AdS curvature. The first method results from the effect of this enhanced mass loss on the event rate for extreme mass ratio inspirals (EMRIs) detected by the space-based LISA interferometer. The second method results from the observation of an individually resolvable galactic black hole binary with LISA. I show that the ...

  17. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science, three main approaches to framework change can be detected in the humanist tradition: (1) in both the pre-theoretical and theoretical domains, changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton); (2) changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard); (3) between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, and provided transcendental criticism of the above positions. It suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was proposed in which changes in epistemic frameworks occur according to a pattern that is neither completely random nor rigidly constrained, so that change is dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.

  18. Constraining the halo mass function with observations

    Science.gov (United States)

    Castro, Tiago; Marra, Valerio; Quartin, Miguel

    2016-08-01

    The abundances of dark matter halos in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behavior through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can directly constrain the HMF. The observables we consider are galaxy cluster number counts, the galaxy cluster power spectrum and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the halo mass function. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.

  19. SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH

    Directory of Open Access Journals (Sweden)

    Pandya A M

    2011-04-01

    Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
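The demarking-point rule described in this abstract amounts to a three-way threshold test. A minimal sketch, using the reported thresholds; the function name and the assumption that the lengths are in millimetres are mine, not the paper's:

```python
# Demarking points from the abstract, in mm: (definitely-female below, definitely-male above).
RIGHT_DP = (379.99, 476.70)
LEFT_DP = (385.73, 484.49)

def classify_femur(max_length_mm, side):
    """Return 'male', 'female', or 'indeterminate' from maximum femoral length."""
    female_dp, male_dp = RIGHT_DP if side == "right" else LEFT_DP
    if max_length_mm > male_dp:
        return "male"
    if max_length_mm < female_dp:
        return "female"
    return "indeterminate"

print(classify_femur(480.0, "right"))  # male
print(classify_femur(430.0, "right"))  # indeterminate
```

Lengths falling between the two demarking points stay indeterminate, which is consistent with the low identification percentages the study reports.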

  20. Maximum Entropy Approaches to Living Neural Networks

    Directory of Open Access Journals (Sweden)

    John M. Beggs

    2010-01-01

    Full Text Available Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
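The pairwise maximum entropy model this review discusses has the form P(x) ∝ exp(Σ h_i x_i + Σ J_ij x_i x_j), constrained only by firing rates and pairwise correlations. A brute-force sketch for three binary neurons, fitting h and J by moment matching against illustrative (assumed, not recorded) target statistics; real ensembles need sampling-based fitting instead of exact enumeration:

```python
import itertools
import math

states = list(itertools.product([0, 1], repeat=3))  # all 2^3 spike patterns
pairs = [(0, 1), (0, 2), (1, 2)]

def model_moments(h, J):
    """Exact means and pairwise moments of P(x) proportional to exp(h.x + sum J_ij x_i x_j)."""
    w = [math.exp(sum(h[i] * s[i] for i in range(3)) +
                  sum(J[k] * s[i] * s[j] for k, (i, j) in enumerate(pairs)))
         for s in states]
    Z = sum(w)
    p = [wi / Z for wi in w]
    means = [sum(p[n] * s[i] for n, s in enumerate(states)) for i in range(3)]
    corrs = [sum(p[n] * s[i] * s[j] for n, s in enumerate(states))
             for (i, j) in pairs]
    return means, corrs

# Illustrative target statistics (assumptions for the sketch, not real data).
target_means = [0.20, 0.30, 0.25]
target_corrs = [0.10, 0.08, 0.12]

h, J, lr = [0.0] * 3, [0.0] * 3, 0.5
for _ in range(8000):  # gradient ascent on the log-likelihood = moment matching
    means, corrs = model_moments(h, J)
    h = [hi + lr * (t - m) for hi, t, m in zip(h, target_means, means)]
    J = [Jk + lr * (t - c) for Jk, t, c in zip(J, target_corrs, corrs)]

means, corrs = model_moments(h, J)
print(max(abs(t - m) for t, m in zip(target_means, means)))  # residual near zero
```

The fitted distribution reproduces the target rates and pairwise correlations exactly, while leaving higher-order structure unconstrained, which is precisely the property the criticisms cited in the review take aim at for larger ensembles.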

  1. A multi-level solver for Gaussian constrained CMB realizations

    CERN Document Server

    Seljebotn, D S; Jewell, J B; Eriksen, H K; Bull, P

    2013-01-01

    We present a multi-level solver for drawing constrained Gaussian realizations or finding the maximum likelihood estimate of the CMB sky, given noisy sky maps with partial sky coverage. The method converges substantially faster than existing Conjugate Gradient (CG) methods for the same problem. For instance, for the 143 GHz Planck frequency channel, only 3 multi-level W-cycles result in an absolute error smaller than 1 microkelvin in any pixel. Using 16 CPU cores, this translates to a computational expense of 6 minutes wall time per realization, plus 8 minutes wall time for a power spectrum-dependent precomputation. Each additional W-cycle reduces the error by more than an order of magnitude, at an additional computational cost of 2 minutes. For comparison, we have never been able to achieve similar absolute convergence with conventional CG methods for this high signal-to-noise data set, even after thousands of CG iterations and employing expensive preconditioners. The solver is part of the Commander 2 code, w...

  2. Maximum magnitude earthquakes induced by fluid injection

    Science.gov (United States)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
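The bound stated above (maximum seismic moment limited by injected volume times the modulus of rigidity) is easy to evaluate. A sketch in which the 30 GPa rigidity and the 10^6 m^3 injection volume are assumed, illustrative values; the conversion to magnitude uses the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m:

```python
import math

def max_seismic_moment(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound on seismic moment: M0 <= G * dV (the abstract's relation).
    The default rigidity of 30 GPa is an assumed typical crustal value."""
    return rigidity_pa * injected_volume_m3

def moment_magnitude(m0_newton_m):
    """Standard moment magnitude from seismic moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

m0 = max_seismic_moment(1.0e6)  # 10^6 m^3 of injected fluid (illustrative)
print(round(moment_magnitude(m0), 2))  # ~4.9
```

For roughly a million cubic metres of injected fluid the bound sits near magnitude 5, in line with the largest wastewater-disposal events mentioned in the abstract.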

  3. A constrained-transport magnetohydrodynamics algorithm with near-spectral resolution

    CERN Document Server

    Maron, Jason; Oishi, Jeffrey

    2007-01-01

    Numerical simulations including magnetic fields have become important in many fields of astrophysics. Evolution of magnetic fields by the constrained transport algorithm preserves magnetic divergence to machine precision, and thus represents one preferred method for the inclusion of magnetic fields in simulations. We show that constrained transport can be implemented with volume-centered fields and hyperresistivity on a high-order finite difference stencil. Additionally, the finite-difference coefficients can be tuned to enhance high-wavenumber resolution. Similar techniques can be used for the interpolations required for dealiasing corrections at high wavenumber. Together, these measures yield an algorithm with a wavenumber resolution that approaches the theoretical maximum achieved by spectral algorithms. Because this algorithm uses finite differences instead of fast Fourier transforms, it runs faster and isn't restricted to periodic boundary conditions. Also, since the finite differences are spatially loca...

  4. Perceived visual speed constrained by image segmentation

    Science.gov (United States)

    Verghese, P.; Stone, L. S.

    1996-01-01

    Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.

  5. Sampling Motif-Constrained Ensembles of Networks

    Science.gov (United States)

    Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.

    2015-10-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but which often fail in practice, either due to model inconsistency or due to the impossibility to sample networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.
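The Wang-Landau method used here is a flat-histogram scheme for estimating a density of states. A minimal sketch on a toy system (a six-spin open Ising chain, chosen because its exact density of states g(E) = 2·C(N-1, k) is known, not the constrained network ensembles of the Letter); roughly speaking, the Letter applies the same multicanonical idea with network moves and motif counts playing the role of spin flips and energies:

```python
import math
import random

random.seed(1)
N = 6  # spins in an open 1D Ising chain (toy stand-in, not a network ensemble)

def energy(s):
    return -sum(s[i] * s[i + 1] for i in range(N - 1))

levels = [-(N - 1) + 2 * k for k in range(N)]  # k = number of broken bonds
lng = {E: 0.0 for E in levels}                 # running estimate of ln g(E)
hist = {E: 0 for E in levels}

s = [random.choice([-1, 1]) for _ in range(N)]
E = energy(s)
lnf = 1.0
while lnf > 1e-5:
    for _ in range(2000):
        i = random.randrange(N)
        s[i] = -s[i]                 # propose a single spin flip
        Enew = energy(s)
        dln = lng[E] - lng[Enew]
        if dln >= 0 or random.random() < math.exp(dln):
            E = Enew                 # accept: rare energy levels are favoured
        else:
            s[i] = -s[i]             # reject: undo the flip
        lng[E] += lnf
        hist[E] += 1
    if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
        lnf /= 2.0                   # histogram flat enough: refine the update
        hist = {E: 0 for E in hist}

# Ratio g(E=-3)/g(E=-5) should approach C(5,1)/C(5,0) = 5.
ratio = math.exp(lng[levels[1]] - lng[levels[0]])
print(ratio)
```

Because acceptance is weighted by the inverse of the running density-of-states estimate, the walker visits all energy levels roughly uniformly, which is what makes rare constrained configurations (here rare energies, in the Letter rare motif counts) reachable in polynomial time.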

  6. Sampling motif-constrained ensembles of networks

    CERN Document Server

    Fischer, Rico; Peixoto, Tiago P; Altmann, Eduardo G

    2015-01-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but which often fail in practice, either due to model inconsistency, or due to the impossibility to sample networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this paper we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.

  7. Constraining dark sectors with monojets and dijets

    International Nuclear Information System (INIS)

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  8. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-01-01

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc.

  9. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  10. How alive is constrained SUSY really?

    CERN Document Server

    Bechtle, Philip; Dreiner, Herbert K; Hamer, Matthias; Krämer, Michael; O'Leary, Ben; Porod, Werner; Sarrazin, Björn; Stefaniak, Tim; Uhlenbrock, Mathias; Wienemann, Peter

    2014-01-01

    Constrained supersymmetric models like the CMSSM might look less attractive nowadays because of fine-tuning arguments. They also might look less probable in terms of Bayesian statistics. The question of how well the model under study describes the data, however, is answered by frequentist p-values. Thus, for the first time, we calculate a p-value for a supersymmetric model by performing dedicated global toy fits. We combine constraints from low-energy and astrophysical observables, Higgs boson mass and rate measurements as well as the non-observation of new physics in searches for supersymmetry at the LHC. Using the framework Fittino, we perform global fits of the CMSSM to the toy data and find that this model is excluded at more than 95% confidence level.

  11. A Constrained Tectonics Model for Coronal Heating

    CERN Document Server

    Ng, C S; 10.1086/525518

    2011-01-01

    An analytical and numerical treatment is given of a constrained version of the tectonics model developed by Priest, Heyvaerts, & Title [2002]. We begin with an initial uniform magnetic field ${\bf B} = B_0 \hat{\bf z}$ that is line-tied at the surfaces $z = 0$ and $z = L$. This initial configuration is twisted by photospheric footpoint motion that is assumed to depend on only one coordinate ($x$) transverse to the initial magnetic field. The geometric constraints imposed by our assumption preclude the occurrence of reconnection and secondary instabilities, but enable us to follow for long times the dissipation of energy due to the effects of resistivity and viscosity. In this limit, we demonstrate that when the coherence time of random photospheric footpoint motion is smaller by several orders of magnitude than the resistive diffusion time, the heating due to Ohmic and viscous dissipation becomes independent of the resistivity of the plasma. Furthermore, we obtain scaling relations that su...

  12. Constraining decaying dark matter with neutron stars

    CERN Document Server

    Perez-Garcia, M Angeles

    2015-01-01

    We propose that the existing population of neutron stars in the galaxy can help constrain the nature of decaying dark matter. The amount of decaying dark matter accumulated in the central regions of neutron stars and the energy deposition rate from decays may set a limit on the neutron star survival rate against transitions to more compact stars and, correspondingly, on the dark matter particle decay time, $\tau_{\chi}$. We find that for lifetimes ${\tau_{\chi}}\lesssim 6.3\times 10^{15}$ s, we can exclude particle masses $(m_{\chi}/\rm TeV) \gtrsim 50$ or $(m_{\chi}/\rm TeV) \gtrsim 8 \times 10^2$ in the bosonic and fermionic cases, respectively. In addition, we also compare our findings with the present status of allowed phase space regions using kinematical variables for decaying dark matter, obtaining complementary results.

  13. On Quantum Communication Channels with Constrained Inputs

    CERN Document Server

    Holevo, A S

    1997-01-01

    The purpose of this work is to extend the results of the previous papers quant-ph/9611023 and quant-ph/9703013 to quantum channels with additive constraints on the input signal, by showing that the capacity of such a channel is equal to the supremum of the entropy bound with respect to all a priori distributions satisfying the constraint. We also make an extension to channels with continuous alphabet. As an application we prove the formula for the capacity of the quantum Gaussian channel with constrained energy of the signal, establishing the asymptotic equivalence of this channel to the semiclassical photon channel. We also study the lower bounds for the reliability function of the pure-state Gaussian channel.

  14. Disappearance and Creation of Constrained Amorphous Phase

    Science.gov (United States)

    Cebe, Peggy; Lu, Sharon X.

    1997-03-01

    We report observation of the disappearance and recreation of rigid, or constrained, amorphous phase by sequential thermal annealing. Temperature modulated differential scanning calorimetry (MDSC) is used to study the glass transition and lower melting endotherm after annealing. Cold crystallization of poly(phenylene sulfide), PPS, at a temperature just above Tg creates an initial large fraction of rigid amorphous phase (RAP). Brief, rapid annealing to a higher temperature causes RAP to almost completely disappear. Subsequent reannealing at the original lower temperature restores RAP to its original value. At the same time that RAP is being removed, Tg decreases; when RAP is restored, Tg also returns to its initial value. The crystal fraction remains unaffected by the annealing sequence.

  15. Multiple Clustering Views via Constrained Projections

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Assent, Ira; Bailey, James

    2012-01-01

    Clustering, the grouping of data based on mutual similarity, is often used as one of the principal tools to analyze and understand data. Unfortunately, most conventional techniques aim at finding only a single clustering over the data. For many practical applications, especially those described in high dimensional data, it is common to see that the data can be grouped into different yet meaningful ways. This gives rise to the recently emerging research area of discovering alternative clusterings. In this preliminary work, we propose a novel framework to generate multiple clustering views. The framework relies on a constrained data projection approach by which we ensure that a novel alternative clustering being found is not only qualitatively strong but also distinctively different from a reference clustering solution. We demonstrate the potential of the proposed framework using both...

  16. Statistical mechanics of budget-constrained auctions

    Science.gov (United States)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  17. Statistical mechanics of budget-constrained auctions

    International Nuclear Information System (INIS)

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise

  18. Scheduling of resource-constrained projects

    CERN Document Server

    Klein, Robert

    2000-01-01

    Project management has become a widespread instrument enabling organizations to efficiently master the challenges of steadily shortening product life cycles, global markets and decreasing profit margins. With projects increasing in size and complexity, their planning and control represents one of the most crucial management tasks. This is especially true for scheduling, which is concerned with establishing execution dates for the sub-activities to be performed in order to complete the project. The ability to manage projects where resources must be allocated between concurrent projects or even sub-activities of a single project requires the use of commercial project management software packages. However, the results yielded by the solution procedures included are often rather unsatisfactory. Scheduling of Resource-Constrained Projects develops more efficient procedures, which can easily be integrated into software packages by incorporated programming languages, and thus should be of great interest for practiti...

  19. Constraining dark sectors with monojets and dijets

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); McCullough, Matthew [European Organization for Nuclear Research (CERN), Geneva (Switzerland). Theory Div.

    2015-03-15

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  20. Maximum-Bandwidth Node-Disjoint Paths

    Directory of Open Access Journals (Sweden)

    Mostafa H. Dahshan

    2012-03-01

    Full Text Available This paper presents a new method for finding the node-disjoint paths with maximum combined bandwidth in communication networks. This problem is NP-complete and can be optimally solved in exponential time using integer linear programming (ILP). The presented method uses a maximum-cost variant of the Dijkstra algorithm and a virtual-node representation to obtain the maximum-bandwidth node-disjoint path. Through several simulations, we compare the performance of our method to a modern heuristic technique and to the ILP solution. We show that our proposed method, running in polynomial time, produces results that are almost identical to those of ILP at a significantly lower execution time
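Since a path's bandwidth is the minimum bandwidth of its links, the "maximum-cost Dijkstra" idea corresponds to the classic widest-path modification of Dijkstra: propagate the best bottleneck bandwidth instead of a distance sum. A sketch of that subroutine only, on a made-up graph; the paper's virtual-node construction for enforcing node-disjointness is not reproduced here:

```python
import heapq

def widest_path(graph, src, dst):
    """Maximum-bottleneck-bandwidth (widest) path via a Dijkstra variant.
    graph: {node: [(neighbor, link_bandwidth), ...]}. Returns the best
    achievable path bandwidth, or 0.0 if dst is unreachable."""
    best = {src: float("inf")}          # best bottleneck bandwidth found so far
    heap = [(-best[src], src)]          # max-heap via negated keys
    while heap:
        neg_bw, u = heapq.heappop(heap)
        bw = -neg_bw
        if bw < best.get(u, 0.0):
            continue                    # stale heap entry
        if u == dst:
            return bw
        for v, cap in graph[u]:
            cand = min(bw, cap)         # bottleneck along the extended path
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return 0.0

g = {
    "A": [("B", 10), ("C", 4)],
    "B": [("D", 5)],
    "C": [("D", 8)],
    "D": [],
}
print(widest_path(g, "A", "D"))  # 5 (via A-B-D: min(10, 5))
```

Replacing standard Dijkstra's sum-and-minimize rule with min-and-maximize keeps the greedy argument valid, because the bottleneck value is monotone non-increasing along any path extension.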

  1. On the Maximum Enstrophy Growth in Burgers Equation

    International Nuclear Information System (INIS)

    The regularity of solutions of the three-dimensional Navier-Stokes equation is controlled by the boundedness of the enstrophy ε. The best estimate available to date for its rate of growth is dε/dt ≤ Cε³, where C > 0, which was recently found to be sharp by Lu and Doering (2008). Applying straightforward time-integration to this instantaneous estimate leads to the possibility of loss of regularity in finite time, the so-called blow-up, and therefore the central question is to establish sharpness of such finite-time bounds. We consider an analogous problem for Burgers equation which is used as a toy model. The problem of saturation of finite-time estimates for the enstrophy growth is stated as a PDE-constrained optimization problem, where the control variable φ represents the initial condition, which is solved numerically for a wide range of time windows T > 0 and initial enstrophies ε0. We find that the maximum enstrophy growth in finite time scales as ε0^α with α ≈ 3/2. The exponent is smaller than α = 3 predicted by analytic means, therefore suggesting lack of sharpness of analytical estimates.
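For the Burgers toy model the enstrophy is commonly defined as ε(u) = ½∫(∂u/∂x)² dx, and on a periodic domain it is straightforward to evaluate spectrally. A sketch with an illustrative initial condition, not the optimal φ found by the paper's optimization:

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x)  # illustrative initial condition, not the paper's optimal one

k = np.fft.fftfreq(n, d=1.0 / n)                   # integer wavenumbers
ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # spectral derivative du/dx
enstrophy = 0.5 * np.mean(ux ** 2) * 2.0 * np.pi   # 0.5 * integral of u_x^2
print(round(enstrophy, 4))  # 0.5 * integral of cos^2(x) = pi/2 = 1.5708
```

In the PDE-constrained optimization, this quantity evaluated at the final time T is the objective maximized over initial conditions φ with prescribed initial enstrophy ε0.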

  2. Reconstructing the history of dark energy using maximum entropy

    CERN Document Server

    Zunckel, C

    2007-01-01

    We present a Bayesian technique based on a maximum entropy method to reconstruct the dark energy equation of state $w(z)$ in a non-parametric way. This MaxEnt technique allows one to incorporate relevant prior information while adjusting the degree of smoothing of the reconstruction in response to the structure present in the data. After demonstrating the method on synthetic data, we apply it to current cosmological data, separately analysing type Ia supernovae measurements from the HST/GOODS program and the first year Supernovae Legacy Survey (SNLS), complemented by cosmic microwave background and baryonic acoustic oscillations data. We find that the SNLS data are compatible with $w(z) = -1$ at all redshifts $0 \leq z \lesssim 1100$, with errorbars of order 20% for the most constraining choice of priors and model. The HST/GOODS data exhibit a slight (about $1\sigma$ significance) preference for $w>-1$ at $z\sim 0.5$ and a drift towards $w>-1$ at larger redshifts, which however is not robust with respect to changes ...

  3. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.;

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used or... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
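First-Fit-Increasing and First-Fit-Decreasing, named in this abstract, differ only in how items are sorted before the classic First-Fit placement. A sketch of the shared First-Fit core for the standard unit-capacity formulation; this illustrates the two heuristics generically and does not reproduce the paper's exact maximum resource variant:

```python
def first_fit(items, capacity=1.0):
    """First-Fit: put each item in the first bin with room, else open a new bin.
    Sorting items descending first gives First-Fit-Decreasing; ascending gives
    First-Fit-Increasing."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:
                b.append(item)
                break
        else:
            bins.append([item])     # no existing bin fits: open a new one
    return bins

items = [0.5, 0.7, 0.3, 0.2, 0.4, 0.8]
ffd = first_fit(sorted(items, reverse=True))  # First-Fit-Decreasing
ffi = first_fit(sorted(items))                # First-Fit-Increasing
print(len(ffd), len(ffi))  # 3 4
```

On this instance the increasing order opens four bins where the decreasing order needs only three, which hints at why the increasing variant is of interest when, as in the maximum resource setting, using more bins is the goal.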

  4. Maximum confidence measurements via probabilistic quantum cloning

    Institute of Scientific and Technical Information of China (English)

    Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu

    2013-01-01

    Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.

  5. Maximum specific runoff; 1 : 2 000 000

    International Nuclear Information System (INIS)

    On this map the maximum specific runoff (map scale 1 : 2 000 000) on the territory of the Slovak Republic is shown. Isolines express the maximum specific runoff (m³ s⁻¹ km⁻²) with an occurrence probability of once in 100 years. These specific runoffs were derived from hydrological records for the reference period 1931 - 1980. Processing was based on 140 hydrological records for catchments with an area under 100 km² and approximately 40 catchments with an area below 250 km². (authors)

  6. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu;

    2009-01-01

    The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  7. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  8. Maximum earthquake magnitudes along different sections of the North Anatolian fault zone

    Science.gov (United States)

    Bohnhoff, Marco; Martínez-Garzón, Patricia; Bulut, Fatih; Stierle, Eva; Ben-Zion, Yehuda

    2016-04-01

    Constraining the maximum likely magnitude of future earthquakes on continental transform faults has fundamental consequences for the expected seismic hazard. Since the recurrence time for those earthquakes is typically longer than a century, such estimates rely primarily on well-documented historical earthquake catalogs, when available. Here we discuss the maximum observed earthquake magnitudes along different sections of the North Anatolian Fault Zone (NAFZ) in relation to the age of the fault activity, cumulative offset, slip rate and maximum length of coherent fault segments. The findings are based on a newly compiled catalog of historical earthquakes in the region, using the extensive literary sources that exist owing to the long civilization record. We find that the largest M7.8-8.0 earthquakes are exclusively observed along the older eastern part of the NAFZ that also has longer coherent fault segments. In contrast, the maximum observed events on the younger western part where the fault branches into two or more strands are smaller. No first-order relations between maximum magnitudes and fault offset or slip rates are found. The results suggest that the maximum expected earthquake magnitude in the densely populated Marmara-Istanbul region would probably not exceed M7.5. The findings are consistent with available knowledge for the San Andreas Fault and Dead Sea Transform, and can help in estimating hazard potential associated with different sections of large transform faults.

  9. Probable maximum floods: Making a collective judgment

    International Nuclear Information System (INIS)

    A critical review is presented of current procedures for estimation of the probable maximum flood (PMF). The historical development of the concept and the flaws in current PMF methodology are discussed. The probable maximum flood concept has been criticized by eminent hydrologists on the basis that it violates scientific principles, and has been questioned from a philosophical viewpoint particularly with regard to the implications of a no-risk criterion. The PMF is not a probable maximum flood, and is less by an arbitrary amount. A more appropriate term would be 'conceivable catastrophic flood'. The methodology for estimating probable maximum precipitation is reasonably well defined and has to a certain extent been verified. The methodology for estimating PMF is not well defined and has not been verified. The use of the PMF concept primarily reflects a need for engineering expediency and does not meet the standards for scientific truth. As the PMF is an arbitrary concept, collective judgement is an important component of making PMF estimates. The Canadian Dam Safety Association should play a leading role in developing guidelines and standards. 18 refs

  10. Weak scale from the maximum entropy principle

    International Nuclear Information System (INIS)

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN²/(M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass

  11. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN²/(M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  12. Weak Scale From the Maximum Entropy Principle

    CERN Document Server

    Hamada, Yuta; Kawana, Kiyoharu

    2015-01-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
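The quoted scaling can be sanity-checked numerically. A minimal sketch, assuming standard values for the electron mass, the Planck mass, and a BBN temperature of about 1 MeV (none of these numbers appear in the record itself):

```python
# Back-of-the-envelope check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# All input values are standard numbers inserted for illustration;
# the abstract quotes only the parametric form.
import math

m_e   = 0.511e-3                      # electron mass [GeV]
v_obs = 246.0                         # observed Higgs expectation value [GeV]
y_e   = math.sqrt(2) * m_e / v_obs    # electron Yukawa coupling, ~2.9e-6
T_BBN = 1e-3                          # BBN temperature, ~1 MeV [GeV]
M_pl  = 1.22e19                       # Planck mass [GeV]

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")         # lands at O(300 GeV), as claimed
```

The steep y_e⁻⁵ dependence is why the estimate is only an order-of-magnitude statement.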

  13. 5 CFR 1600.22 - Maximum contributions.

    Science.gov (United States)

    2010-01-01

    ... election. (3) A participant who has both a civilian and a uniformed services account can make catch-up... contribution will be limited only by the provisions of the Internal Revenue Code (26 U.S.C.). (2) CSRS and uniformed services percentage limit. The maximum employee contribution from basic pay for a CSRS...

  14. Maximum Phonation Time: Variability and Reliability

    NARCIS (Netherlands)

    R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings

    2010-01-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia and a group of control subjects.

  15. Connectome graphs and maximum flow problems

    OpenAIRE

    Daugulis, Peteris

    2014-01-01

    We propose to study maximum flow problems for connectome graphs. We suggest a few computational problems: finding vertex pairs with maximal flow, finding new edges which would increase the maximal flow. Initial computation results for some publicly available connectome graphs are described.
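The kind of computation proposed here can be sketched with a plain BFS-based max-flow routine (Edmonds-Karp); the tiny capacity graph below is invented for illustration and is not a connectome:

```python
# Minimal max-flow sketch (Edmonds-Karp: repeatedly augment along
# shortest paths in the residual graph found by BFS).
from collections import defaultdict, deque

def max_flow(capacity, source, sink):
    """capacity: dict {u: {v: c}}; returns the value of a maximum s-t flow."""
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # collect the path, push the bottleneck amount along it
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# toy "graph": two disjoint unit-capacity paths from s to t
g = {"s": {"a": 1, "b": 1}, "a": {"t": 1}, "b": {"t": 1}}
print(max_flow(g, "s", "t"))  # 2
```

Ranking vertex pairs by this value, as the note suggests, is then a double loop over candidate (source, sink) pairs.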

  16. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.; Bach Andersen, J.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum...

  17. Maximum Possible Transverse Velocity in Special Relativity.

    Science.gov (United States)

    Medhekar, Sarang

    1991-01-01

    Using a physical picture, an expression for the maximum possible transverse velocity and orientation required for that by a linear emitter in special theory of relativity has been derived. A differential calculus method is also used to derive the expression. (Author/KR)

  18. Comparing maximum pressures in internal combustion engines

    Science.gov (United States)

    Sparrow, Stanwood W; Lee, Stephen M

    1922-01-01

    Thin metal diaphragms form a satisfactory means for comparing maximum pressures in internal combustion engines. The diaphragm is clamped between two metal washers in a spark plug shell and its thickness is chosen such that, when subjected to explosion pressure, the exposed portion will be sheared from the rim in a short time.

  19. On maximum cycle packings in polyhedral graphs

    Directory of Open Access Journals (Sweden)

    Peter Recht

    2014-04-01

    Full Text Available This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.

  20. On maximum cycle packings in polyhedral graphs

    OpenAIRE

    Peter Recht; Stefan Stehling

    2014-01-01

    This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.

  1. Instance optimality of the adaptive maximum strategy

    OpenAIRE

    Diening, Lars; Kreuzer, Christian; Stevenson, Rob

    2013-01-01

    In this paper, we prove that the standard adaptive finite element method with a (modified) `maximum marking strategy' is `instance optimal' for the `total error', being the sum of the energy error and the oscillation. This result will be derived in the model setting of Poisson's equation on a polygon, linear finite elements, and conforming triangulations created by newest vertex bisection.

  2. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the...

  3. The 2011 Northern Hemisphere Solar Maximum

    Science.gov (United States)

    Altrock, Richard C.

    2013-01-01

    Altrock (1997, Solar Phys. 170, 411) discusses a process in which Fe XIV 530.3 nm emission features appear at high latitudes and gradually migrate towards the equator, merging with the sunspot "butterfly diagram". In cycles 21 - 23 solar maximum occurred when the number of Fe XIV emission regions per day > 0.19 (averaged over 365 days and both hemispheres) first reached latitudes 18°, 21° and 21°, for an average of 20° ± 1.7°. Another high-latitude process is the "Rush to the Poles" of polar crown prominences and their associated coronal emission, including Fe XIV. The Rush is a harbinger of solar maximum (cf. Altrock, 2003, Solar Phys. 216, 343). Solar maximum in cycles 21 - 23 occurred when the center line of the Rush reached a critical latitude. These latitudes were 76°, 74° and 78°, respectively, for an average of 76° ± 2°. Cycle 24 displays an intermittent Rush that is only well-defined in the northern hemisphere. In 2009 an initial slope of 4.6°/yr was found in the north, compared to an average of 9.4 ± 1.7°/yr in the previous three cycles. However, in 2010 the slope increased to 7.5°/yr. Extending that rate to 76° ± 2° indicates that the solar maximum smoothed sunspot number in the northern hemisphere already occurred at 2011.6 ± 0.3. In the southern hemisphere the Rush is very poorly defined; a linear fit to several maxima would reach 76° in the south at 2014.2. In 1999, persistent Fe XIV coronal emission connected with the ESC appeared near 70° in the north and began migrating towards the equator at a rate 40% slower than in the previous two solar cycles. A fit to the early ESC would not reach 20° until 2019.8. However, in 2009 and 2010 an acceleration occurred. Currently the greatest number of emission regions is at 21° in the north and 24° in the south. This indicates that solar maximum is occurring now in the north but not yet in the south. The latest global smoothed sunspot numbers show an inflection point in late 2011.

  4. Accelerated gradient methods for constrained image deblurring

    International Nuclear Information System (INIS)

    In this paper we propose a special gradient projection method for the image deblurring problem, in the framework of the maximum likelihood approach. We present the method in a very general form and we give convergence results under standard assumptions. Then we consider the deblurring problem, and the generality of the proposed algorithm allows us to add an energy conservation constraint to the maximum likelihood problem. In order to improve the convergence rate, we devise appropriate scaling strategies and steplength updating rules, especially designed for this application. The effectiveness of the method is evaluated by means of a computational study on astronomical images corrupted by Poisson noise. Comparisons with standard methods for image restoration, such as the expectation maximization algorithm, are also reported.
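The expectation-maximization baseline mentioned at the end corresponds, for this Poisson imaging model, to the Richardson-Lucy multiplicative update. A minimal 1-D sketch with a made-up blur matrix and signal (the paper's own method adds scaling, steplength rules, and an energy conservation constraint on top of this):

```python
# Richardson-Lucy (EM) updates for Poisson maximum likelihood deblurring,
# on an invented 1-D toy problem: y ~ Poisson(A x), x >= 0.
import numpy as np

rng = np.random.default_rng(0)
n = 32
A = np.zeros((n, n))                  # toy blur: 5-point moving average
for i in range(n):
    for j in range(max(0, i - 2), min(n, i + 3)):
        A[i, j] = 1.0
A /= A.sum(axis=1, keepdims=True)     # rows sum to 1

x_true = np.zeros(n)
x_true[10], x_true[20] = 50.0, 80.0
y = rng.poisson(A @ x_true).astype(float)   # Poisson-corrupted blurred data

def neg_loglik(x):
    Ax = A @ x + 1e-12
    return float(np.sum(Ax - y * np.log(Ax)))

x0 = np.full(n, y.mean() + 1e-6)      # flat positive starting point
x = x0.copy()
colsum = A.sum(axis=0)
for _ in range(200):
    Ax = A @ x + 1e-12
    x = x * (A.T @ (y / Ax)) / colsum # multiplicative EM step, keeps x >= 0

improved = neg_loglik(x) < neg_loglik(x0)
print("EM update improved the Poisson fit:", improved)
```

The multiplicative form preserves nonnegativity automatically, which is exactly the constraint a projected-gradient scheme enforces explicitly.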

  5. Maximum entropy distribution of stock price fluctuations

    Science.gov (United States)

    Bartiromo, Rosario

    2013-04-01

    In this paper we propose to use the principle of absence of arbitrage opportunities in its entropic interpretation to obtain the distribution of stock price fluctuations by maximizing its information entropy. We show that this approach leads to a physical description of the underlying dynamics as a random walk characterized by a stochastic diffusion coefficient and constrained to a given value of the expected volatility, in this way taking into account the information provided by the existence of an option market. The model is validated by a comprehensive comparison with observed distributions of both price return and diffusion coefficient. Expected volatility is the only parameter in the model and can be obtained by analysing option prices. We give an analytic formulation of the probability density function for price returns which can be used to extract expected volatility from stock option data.

  6. Hard Instances of the Constrained Discrete Logarithm Problem

    OpenAIRE

    Mironov, Ilya; Mityagin, Anton; Nissim, Kobbi

    2006-01-01

    The discrete logarithm problem (DLP) generalizes to the constrained DLP, where the secret exponent $x$ belongs to a set known to the attacker. The complexity of generic algorithms for solving the constrained DLP depends on the choice of the set. Motivated by cryptographic applications, we study sets with succinct representation for which the constrained DLP is hard. We draw on earlier results due to Erdős et al. and Schnorr, develop geometric tools such as generalized Menelaus' theorem for ...

  7. A simple procedure for computing strong constrained egalitarian allocations

    OpenAIRE

    Francesc Llerena; Carles Rafels; Cori Vilella

    2015-01-01

    This paper deals with the strong constrained egalitarian solution introduced by Dutta and Ray (1991). We show that this solution yields the weak constrained egalitarian allocations (Dutta and Ray, 1989) associated to a finite family of convex games. This relationship makes it possible to define a systematic way of computing the strong constrained egalitarian allocations for any arbitrary game, using the well-known Dutta-Ray algorithm for convex games. We also characterize non-emptiness and ...

  8. Transient stability-constrained optimal power flow

    OpenAIRE

    Bettiol, Arlan; Ruiz-Vega, Daniel; Ernst, Damien; Wehenkel, Louis; Pavella, Mania

    1999-01-01

    This paper proposes a new approach able to maximize the interface flow limits in power systems and to find a new operating state that is secure with respect to both dynamic (transient stability) and static security constraints. It combines the Maximum Allowable Transfer (MAT) method, recently developed for the simultaneous control of a set of contingencies, and an Optimal Power Flow (OPF) method for maximizing the interface power flow. The approach and its performances are illustrated by ...

  9. Constrained Subjective Assessment of Student Learning

    Science.gov (United States)

    Saliu, Sokol

    2005-09-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a past course in microcomputer system design (MSD). CQA criteria were articulated in fuzzy terms and sets, and the assessment procedure was cast as a fuzzy inference rule base. An interactive graphic interface provided for transparent assessment, student "backwash," and support to the teacher when compiling the tests. Grade intervals, obtained from a departmental poll, were used to compile a fuzzy "grade" set. Assessment results were compared to those of a former standard method and to those of a modified version of it (but with fewer criteria). The three methods yielded similar results, supporting the application of CQA. The method improved assessment reliability by means of the consensus embedded in the fuzzy grade set, and improved assessment validity by integrating fuzzy criteria into the assessment procedure.

  10. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu

    2010-05-07

    This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip spectrum corner wave numbers (kc) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of kc by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency domain inversion method. These tests reveal that due to smoothing constraints used to stabilize the inversion process, kc tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between kc and the peak ground acceleration (PGA), using a k⁻² kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that kc follows a lognormal distribution, with similar standard deviations for both methods.

  11. Dynamic Nuclear Polarization as Kinetically Constrained Diffusion

    Science.gov (United States)

    Karabanov, A.; Wiśniewski, D.; Lesanovsky, I.; Köckenberger, W.

    2015-07-01

    Dynamic nuclear polarization (DNP) is a promising strategy for generating a significantly increased nonthermal spin polarization in nuclear magnetic resonance (NMR) and its applications that range from medicine diagnostics to material science. Being a genuine nonequilibrium effect, DNP circumvents the need for strong magnetic fields. However, despite intense research, a detailed theoretical understanding of the precise mechanism behind DNP is currently lacking. We address this issue by focusing on a simple instance of DNP—so-called solid effect DNP—which is formulated in terms of a quantum central spin model where a single electron is coupled to an ensemble of interacting nuclei. We show analytically that the nonequilibrium buildup of polarization heavily relies on a mechanism which can be interpreted as kinetically constrained diffusion. Beyond revealing this insight, our approach furthermore permits numerical studies of ensembles containing thousands of spins that are typically intractable when formulated in terms of a quantum master equation. We believe that this represents an important step forward in the quest of harnessing nonequilibrium many-body quantum physics for technological applications.

  12. Constrained filter optimization for subsurface landmine detection

    Science.gov (United States)

    Torrione, Peter A.; Collins, Leslie; Clodfelter, Fred; Lulich, Dan; Patrikar, Ajay; Howard, Peter; Weaver, Richard; Rosen, Erik

    2006-05-01

    Previous large-scale blind tests of anti-tank landmine detection utilizing the NIITEK ground penetrating radar indicated the potential for very high anti-tank landmine detection probabilities at very low false alarm rates for algorithms based on adaptive background cancellation schemes. Recent data collections under more heterogeneous multi-layered road-scenarios seem to indicate that although adaptive solutions to background cancellation are effective, the adaptive solutions to background cancellation under different road conditions can differ significantly, and misapplication of these adaptive solutions can reduce landmine detection performance in terms of PD/FAR. In this work we present a framework for the constrained optimization of background-estimation filters that specifically seeks to optimize PD/FAR performance as measured by the area under the ROC curve between two FARs. We also consider the application of genetic algorithms to the problem of filter optimization for landmine detection. Results indicate robust results for both static and adaptive background cancellation schemes, and possible real-world advantages and disadvantages of static and adaptive approaches are discussed.

  13. Joint Chance-Constrained Dynamic Programming

    Science.gov (United States)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
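The dualize-then-root-find idea can be illustrated on a one-stage toy problem (the action set and numbers below are invented; the paper performs this inside a full dynamic program):

```python
# Toy illustration of dualizing a chance constraint: for a fixed dual
# multiplier lam, the primal step picks the action minimizing
# cost + lam * risk; bisection on lam (a simple root-finding scheme)
# then tightens lam until the chosen action meets the risk bound Delta.

actions = [(1.0, 0.30), (2.0, 0.10), (4.0, 0.01)]  # (cost, failure risk)
Delta = 0.05                                       # allowed failure risk

def best(lam):
    """Primal step: minimize the dualized objective cost + lam * risk."""
    return min(actions, key=lambda a: a[0] + lam * a[1])

lo, hi = 0.0, 100.0        # assume best(hi) already satisfies the bound
for _ in range(60):        # bisection: risk of best(lam) is nonincreasing
    mid = 0.5 * (lo + hi)
    if best(mid)[1] <= Delta:
        hi = mid
    else:
        lo = mid

cost, risk = best(hi)
print(cost, risk)          # cheapest action whose risk meets the bound
```

The monotonicity of risk in the dual multiplier is what makes a simple one-dimensional root search sufficient, mirroring the exponential convergence claim in the abstract.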

  14. Testing constrained sequential dominance models of neutrinos

    Science.gov (United States)

    Björkeroth, Fredrik; King, Stephen F.

    2015-12-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We analyze a class of CSD(n) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the ‘atmospheric’ and ‘solar’ neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. These coupling patterns may arise in indirect family symmetry models based on A4. With two right-handed neutrinos, using a χ² test, we find a good agreement with data for CSD(3) and CSD(4), where the entire Pontecorvo-Maki-Nakagawa-Sakata mixing matrix is controlled by a single phase η, which takes simple values, leading to accurate predictions for mixing angles and the magnitude of the oscillation phase |δ_CP|. We carefully study the perturbing effect of a third ‘decoupled’ right-handed neutrino, leading to a bound on the lightest physical neutrino mass m_1 ≲ 1 meV for the viable cases, corresponding to a normal neutrino mass hierarchy. We also discuss a direct link between the oscillation phase δ_CP and leptogenesis in CSD(n) due to the same see-saw phase η appearing in both the neutrino mass matrix and leptogenesis.

  15. Constraining the Oblateness of Kepler Planets

    CERN Document Server

    Zhu, Wei; Zhou, George; Lin, D N C

    2014-01-01

    We use Kepler short cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside of the solar system, measuring an oblateness of $0.22 \\pm 0.11$ for the 18 $M_J$ mass brown dwarf Kepler 39b (KOI-423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI-686.01, and KOI-197.01 to be < 0.067, < 0.251, and < 0.186, respectively. Using the Q'-values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI-686.01 and KOI-197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI-423.01 (Kepler 39b) suggests that the Q'-value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants ...

  16. The Constrained Exceptional Supersymmetric Standard Model

    CERN Document Server

    Athron, P; Miller, D J; Moretti, S; Nevzorov, R

    2009-01-01

    We propose and study a constrained version of the Exceptional Supersymmetric Standard Model (E6SSM), which we call the cE6SSM, based on a universal high energy scalar mass m_0, trilinear scalar coupling A_0 and gaugino mass M_{1/2}. We derive the Renormalisation Group (RG) Equations for the cE6SSM, including the extra U(1)_{N} gauge factor and the low energy matter content involving three 27 representations of E6. We perform a numerical RG analysis for the cE6SSM, imposing the usual low energy experimental constraints and successful Electro-Weak Symmetry Breaking (EWSB). Our analysis reveals that the sparticle spectrum of the cE6SSM involves a light gluino, two light neutralinos and a light chargino. Furthermore, although the squarks, sleptons and Z' boson are typically heavy, the exotic quarks and squarks can also be relatively light. We finally specify a set of benchmark points which correspond to particle spectra, production modes and decay patterns peculiar to the cE6SSM, altogether leading to spectacular...

  17. Constraining the oblateness of Kepler planets

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Wei [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Huang, Chelsea X. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Zhou, George [Research School of Astronomy and Astrophysics, Australian National University, Cotter Road, Weston Creek, ACT 2611 (Australia); Lin, D. N. C., E-mail: weizhu@astronomy.ohio-state.edu [UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States)

    2014-11-20

    We use Kepler short-cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside the solar system, measuring an oblateness of 0.22 ± 0.11 for the 18 M_J mass brown dwarf Kepler 39b (KOI 423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI 686.01, and KOI 197.01 to be <0.067, <0.251, and <0.186, respectively. Using the Q' values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI 686.01, and KOI 197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI 423.01 (Kepler 39b) suggests that the Q' value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants to avoid being tidally spun down.

  18. How Constrained is the cMSSM?

    CERN Document Server

    Ghosh, Diptimoy; Raychaudhuri, Sreerup; Sengupta, Dipan

    2012-01-01

    We study the allowed parameter space of the constrained minimal supersymmetric Standard Model (cMSSM) in the light of direct searches, constraints from $B$-physics (including the recent measurement of the branching ratio for $B_s \\to \\mu^+\\mu^-$) and the dark matter relic density. For low or moderate values of $\\tan\\beta$, the strongest constraints are those imposed by direct searches, and therefore, large areas of the parameter space are still allowed. In the large $\\tan \\beta$ limit, however, the $B$-physics constraints are more restrictive, effectively forcing the squark and gluino masses to lie close to or above a TeV. A light Higgs boson could dramatically change the allowed parameter space, but we need to know its mass precisely for this to be effective. We emphasize that it is still too early to write off the cMSSM, even in the large $\\tan\\beta$ limit. Finally we explore strategies to extend the LHC search for cMSSM signals beyond the present reach of the ATLAS and CMS Collaborations.

  19. Constraining the Properties of Cold Interstellar Clouds

    Science.gov (United States)

    Spraggs, Mary Elizabeth; Gibson, Steven J.

    2016-01-01

    Since the interstellar medium (ISM) plays an integral role in star formation and galactic structure, it is important to understand the evolution of clouds over time, including the processes of cooling and condensation that lead to the formation of new stars. This work aims to constrain and better understand the physical properties of the cold ISM by utilizing large surveys of neutral atomic hydrogen (HI) 21 cm spectral line emission and absorption, carbon monoxide (CO) 2.6 mm line emission, and multi-band infrared dust thermal continuum emission. We identify areas where the gas may be cooling and forming molecules using HI self-absorption (HISA), in which cold foreground HI absorbs radiation from warmer background HI emission. We are developing an algorithm that uses total gas column densities inferred from Planck and other FIR/sub-mm data in parallel with CO and HISA spectral line data to determine the gas temperature, density, molecular abundance, and other properties as functions of position. We can then map these properties to study their variation throughout an individual cloud as well as any dependencies on location or environment within the Galaxy. Funding for this work was provided by the National Science Foundation, the NASA Kentucky Space Grant Consortium, the WKU Ogden College of Science and Engineering, and the Carol Martin Gatton Academy for Mathematics and Science in Kentucky.

  20. Constraining New Physics with D meson decays

    International Nuclear Information System (INIS)

    The latest lattice results on D meson form factors, evaluated from first principles, show that the Standard Model (SM) predictions for the branching ratios of the leptonic D_s → ℓν_ℓ decays and of the semileptonic D0 and D+ meson decays are in good agreement with the world-average experimental measurements. It is therefore possible to disprove New Physics hypotheses or to place bounds on several models beyond the SM. Using the observed leptonic and semileptonic branching ratios of the D meson decays, we performed a combined analysis to constrain non-standard interactions which mediate the cs̄ → ℓν̄ transition. This is done either in a model-independent way, through the corresponding Wilson coefficients, or in a model-dependent way, by finding the respective bounds on the relevant parameters of some models beyond the Standard Model. In particular, we obtain bounds for the Two Higgs Doublet Model Type II and Type III, the Left-Right model, the Minimal Supersymmetric Standard Model with explicit R-parity violation, and leptoquarks. Finally, we estimate the transverse polarization of the lepton in the D0 decay and find it can be as high as P_T = 0.23.

  1. Should we still believe in constrained supersymmetry?

    CERN Document Server

    Balázs, Csaba; Carter, Daniel; Farmer, Benjamin; White, Martin

    2012-01-01

    We calculate Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with $M_0$ and $M_{1/2}$ below 2 TeV), reducing its posterior odds by a factor of approximately ...

  2. Constraining Binary Stellar Evolution With Pulsar Timing

    Science.gov (United States)

    Ferdman, Robert D.; Stairs, I. H.; Backer, D. C.; Burgay, M.; Camilo, F.; D'Amico, N.; Demorest, P.; Faulkner, A.; Hobbs, G.; Kramer, M.; Lorimer, D. R.; Lyne, A. G.; Manchester, R.; McLaughlin, M.; Nice, D. J.; Possenti, A.

    2006-06-01

    The Parkes Multibeam Pulsar Survey has yielded a significant number of very interesting binary and millisecond pulsars. Two of these objects are part of an ongoing timing study at the Green Bank Telescope (GBT). PSR J1756-2251 is a double-neutron star (DNS) binary system. It is similar to the original Hulse-Taylor binary pulsar system PSR B1913+16 in its orbital properties, thus providing another important opportunity to test the validity of General Relativity, as well as the evolutionary history of DNS systems through mass measurements. PSR J1802-2124 is part of the relatively new and unstudied "intermediate-mass" class of binary systems, which typically have spin periods in the tens of milliseconds, and/or relatively massive (> 0.7 solar masses) white dwarf companions. With our GBT observations, we have detected the Shapiro delay in this system, allowing us to constrain the individual masses of the neutron star and white dwarf companion, and thus the mass-transfer history, in this unusual system.

  3. Constraining the halo mass function with observations

    CERN Document Server

    Castro, Tiago; Quartin, Miguel

    2016-01-01

    The abundances of matter halos in the universe are described by the so-called halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behavior through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can directly constrain the HMF. The observables we consider are galaxy cluster number counts, the galaxy cluster power spectrum, and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints, while both Euclid and J-PAS can give constraints on the HMF parameters which are comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even mo...

  4. String Theory Origin of Constrained Multiplets

    CERN Document Server

    Kallosh, Renata; Wrase, Timm

    2016-01-01

    We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector $A_\mu$, three complex scalars $\phi^i$ and four 4d fermions $\lambda^0$, $\lambda^i$. These transform, in addition to the more familiar N=4 linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear N=1 multiplets: four chiral multiplets $S$, $Y^i$ that satisfy $S^2=SY^i=0$ and contain the worldvolume fermions $\lambda^0$ and $\lambda^i$; and four chiral multiplets $W_\alpha$, $H^i$ that satisfy $S W_\alpha=0$ and $S \bar D_{\dot \alpha} \bar H^{\bar \imath}=0$ and contain the vector $A_\mu$ and the scalars $\phi^i$. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet $\Phi$ that satisfies $S(\Phi-\bar \Phi)=0$, which is particularly interesting for in...

  5. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John; /CERN /King's Coll. London; Mustafayev, Azar; /Minnesota U., Theor. Phys. Inst.; Olive, Keith A.; /Minnesota U., Theor. Phys. Inst. /Minnesota U. /Stanford U., Phys. Dept. /SLAC

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale M_in above the GUT scale M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ̃_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A, H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  6. Optimal performance of constrained control systems

    International Nuclear Information System (INIS)

    This paper presents a method to compute optimal open-loop trajectories for systems subject to state and control inequality constraints in which the cost function is quadratic and the state dynamics are linear. For the case in which inequality constraints are decentralized with respect to the controls, optimal Lagrange multipliers enforcing the inequality constraints may be found at any time through Pontryagin’s minimum principle. In so doing, the set of differential algebraic Euler–Lagrange equations is transformed into a nonlinear two-point boundary-value problem for states and costates whose solution meets the necessary conditions for optimality. The optimal performance of inequality constrained control systems is calculable, allowing for comparison to previous, sub-optimal solutions. The method is applied to the control of damping forces in a vibration isolation system subjected to constraints imposed by the physical implementation of a particular controllable damper. An outcome of this study is the best performance achievable given a particular objective, isolation system, and semi-active damper constraints. (paper)

  7. Distributed Constrained Optimization with Semicoordinate Transformations

    Science.gov (United States)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow for bounded-rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  8. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ... civil monetary penalties per the Inflation Act. See 74 FR 68701 (December 29, 2009). FRA's maximum and... materials violation was $275. 69 FR 30590, May 28, 2004. To implement these SAFETEA-LU amendments to the maximum and minimum penalties, FRA issued a final rule that was published on December 26, 2006, 71...

  9. Theoretical Analysis of Maximum Flow Declination Rate versus Maximum Area Declination Rate in Phonation

    Science.gov (United States)

    Titze, Ingo R.

    2006-01-01

    Purpose: Maximum flow declination rate (MFDR) in the glottis is known to correlate strongly with vocal intensity in voicing. This declination, or negative slope on the glottal airflow waveform, is in part attributable to the maximum area declination rate (MADR) and in part to the overall inertia of the air column of the vocal tract (lungs to…

  10. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope. PMID:26159097

  11. Nonparametric Maximum Entropy Estimation on Information Diagrams

    CERN Document Server

    Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn

    2016-01-01

    Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...

  12. Maximum speed of dewetting on a fiber

    CERN Document Server

    Chan, Tak Shing; Snoeijer, Jacco H

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed of dewetting. For all radii we find the maximum speed occurs at vanishing apparent contact angle. To further investigate the transition we numerically determine the bifurcation diagram for steady menisci. It is found that the meniscus profiles on thick fibers are smooth, even when there is a film deposited between the bath and the contact line, while profiles on thin fibers exhibit strong oscillations. We discuss how this could lead to different experimental scenarios of film deposition.

  13. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
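The single-constraint maximum-entropy argument summarized in this record can be checked numerically. The sketch below is illustrative rather than taken from the paper: the finite support k = 1..N, the target value of ⟨ln k⟩, and the bisection bracket are all my assumptions. Maximizing the Shannon entropy subject to a fixed mean of ln k gives the Lagrangian solution p_k ∝ k^(−λ), i.e. a pure power law, and the code solves for the exponent λ that matches the constraint:

```python
import math

def maxent_power_law(target_mean_log, N=1000):
    """Maximize Shannon entropy subject to a fixed mean of ln(k) on k = 1..N.

    The Lagrangian solution is p_k proportional to k**(-lam); we bisect on lam
    so that the constraint <ln k> = target_mean_log holds.
    """
    def mean_log(lam):
        w = [k ** (-lam) for k in range(1, N + 1)]
        Z = sum(w)
        return sum(wk * math.log(k) for wk, k in zip(w, range(1, N + 1))) / Z

    # mean_log is strictly decreasing in lam on this bracket (assumed adequate)
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mean_log(mid) > target_mean_log:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    Z = sum(k ** (-lam) for k in range(1, N + 1))
    p = [k ** (-lam) / Z for k in range(1, N + 1)]
    return lam, p
```

By construction the resulting distribution is an exact power law in k, so the ratio p_k / p_1 equals k^(−λ) for every k.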

  14. Zipf's law, power laws and maximum entropy

    Science.gov (United States)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.

  15. Zipf's law, power laws, and maximum entropy

    CERN Document Server

    Visser, Matt

    2012-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.

  16. Model Fit after Pairwise Maximum Likelihood.

    Science.gov (United States)

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  17. Throwing and jumping for maximum horizontal range

    CERN Document Server

    Linthorne, N P

    2006-01-01

    Optimum projection angles for achieving maximum horizontal range in throwing and jumping events are considerably less than 45 degrees. This unexpected result arises because an athlete can generate a greater projection velocity at low projection angles than at high angles. The range of a projectile is strongly dependent on projection speed, and so the optimum projection angle is biased towards low projection angles. Here we examine the velocity-angle relation and the optimum projection angle in selected throwing and jumping events.
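The velocity-angle trade-off described in this record can be illustrated with a toy model. The linear decline of projection speed with angle, v(θ) = v0 − a·θ, and the numerical values below are entirely hypothetical (the paper works with measured athlete data); the point is only that maximizing R = v(θ)² sin(2θ)/g with an angle-dependent speed pushes the optimum below 45°:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def optimum_angle(v0=30.0, slope=0.15):
    """Scan projection angles (0.1 degree steps) for the maximum of
    R = v(theta)^2 * sin(2*theta) / g, with the hypothetical relation
    v(theta) = v0 - slope * theta_deg modelling a speed-angle trade-off."""
    best_theta, best_range = 0.0, 0.0
    for tenth_deg in range(0, 900):
        theta_deg = tenth_deg / 10.0
        v = v0 - slope * theta_deg
        r = v * v * math.sin(2.0 * math.radians(theta_deg)) / G
        if r > best_range:
            best_theta, best_range = theta_deg, r
    return best_theta, best_range
```

With slope = 0 the scan recovers the textbook 45° optimum; any positive slope biases the optimum towards lower angles, as the abstract argues.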

  18. Maximum Profit Configurations of Commercial Engines

    OpenAIRE

    Yiran Chen

    2011-01-01

    An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...

  19. Maximum Economic Recovery of Federal Coal

    OpenAIRE

    Watson, William D; Richard Bernknopf

    1984-01-01

    The federal coal leasing program recently established by the Department of the Interior (DOI) (Federal Register, July 19, 1979) includes a requirement that operators mining federal coal achieve "maximum economic recovery" (MER) of coal from federal leases. The MER requirement, the focus of this paper, has its legislative origins in the Federal Coal Leasing Amendments Act of 1976 which directs that "the Secretary (of Interior) shall evaluate and compare the effects of recovering coal by deep m...

  20. Nonlocal maximum principles for active scalars

    CERN Document Server

    Kiselev, Alexander

    2010-01-01

    Active scalars appear in many problems of fluid dynamics. The most common examples of active scalar equations are 2D Euler, Burgers, and 2D surface quasi-geostrophic equations. Many questions about regularity and properties of solutions of these equations remain open. We develop the idea of nonlocal maximum principle, formulating a more general criterion and providing new applications. The most interesting application is finite time regularization of weak solutions in the supercritical regime.

  1. Regularized Maximum Likelihood for Intrinsic Dimension Estimation

    CERN Document Server

    Gupta, Mithun Das

    2012-01-01

    We propose a new method for estimating the intrinsic dimension of a dataset by applying the principle of regularized maximum likelihood to the distances between close neighbors. We propose a regularization scheme which is motivated by divergence minimization principles. We derive the estimator by a Poisson process approximation, argue about its convergence properties and apply it to a number of simulated and real datasets. We also show it has the best overall performance compared with two other intrinsic dimension estimators.
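The regularized estimator in this record builds on the classical Levina-Bickel maximum-likelihood estimator applied to distances between close neighbors. A sketch of that unregularized baseline follows (this is not the paper's regularized version; the neighborhood size k and the brute-force distance computation are my choices):

```python
import math

def mle_intrinsic_dimension(points, k=8):
    """Levina-Bickel maximum-likelihood intrinsic dimension (unregularized).

    For each point x, the local estimate is
        m_hat(x) = [ (1/(k-1)) * sum_{j<k} log(T_k(x) / T_j(x)) ]^{-1},
    where T_j(x) is the distance from x to its j-th nearest neighbor;
    the global estimate averages m_hat over all points.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    estimates = []
    for i, p in enumerate(points):
        # brute-force nearest-neighbor distances (fine for small samples)
        d = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        T = d[:k]
        s = sum(math.log(T[k - 1] / T[j]) for j in range(k - 1))
        estimates.append((k - 1) / s)
    return sum(estimates) / len(estimates)
```

For points sampled along a one-dimensional curve embedded in 3D, the estimate comes out near 1 (with the known small-k upward bias), well below the ambient dimension.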

  2. Global characterization of the Holocene Thermal Maximum

    OpenAIRE

    Renssen, H.; Seppä, H.; Crosta, X.; H. Goosse; D. M. Roche

    2012-01-01

    We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the influence of variations in orbital parameters and atmospheric greenhouse gases and the early-Holocene deglaciation of the Laurentide Ice sheet (LIS). Considering the LIS deglaciation, we quantify se...

  3. Dynamic Programming, Maximum Principle and Vintage Capital

    OpenAIRE

    Fabbri, Giorgio; Iacopetta, Maurizio

    2007-01-01

    We present an application of the Dynamic Programming (DP) and of the Maximum Principle (MP) to solve an optimization over time when the production function is linear in the stock of capital (Ak model). Two views of capital are considered. In one, which is embraced by the great majority of macroeconomic models, capital is homogeneous and depreciates at a constant exogenous rate. In the other view each piece of capital has its own finite productive life cycle (vintage capital). The interpretatio...

  4. A Convnet for Non-maximum Suppression

    OpenAIRE

    Hosang, J.; Benenson, R.; Schiele, B.

    2015-01-01

    Non-maximum suppression (NMS) is used in virtually all state-of-the-art object detection pipelines. While essential object detection ingredients such as features, classifiers, and proposal methods have been extensively researched, surprisingly little work has aimed to systematically address NMS. The de-facto standard for NMS is based on greedy clustering with a fixed distance threshold, which forces a trade-off between recall and precision. We propose a convnet designed to perform NMS of a given s...
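For reference, the greedy-clustering baseline that this record calls the de-facto standard can be sketched as follows. This is a minimal illustration of standard greedy NMS, not the proposed convnet; the (x1, y1, x2, y2) box format and the IoU threshold are my assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, suppress boxes whose IoU
    with it exceeds the threshold, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```

The fixed threshold is exactly the knob the abstract criticizes: raising it improves recall of nearby objects but admits duplicate detections, and vice versa.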

  5. The maximum temperatures of interstellar grains

    International Nuclear Information System (INIS)

    The maximum temperature a typical interstellar grain will attain upon absorption of a photon or chemical bond formation of molecules on its surface is calculated by considering the exact Debye theory of dielectrics. Other contributions to the specific heats of solids are discussed. It is shown that the use of the approximate Debye theory, where C_v is proportional to T, will lead to serious errors in the calculation of velocities of desorption of molecules from grain surfaces. (Auth.)

  6. Maximum Estrada Index of Bicyclic Graphs

    CERN Document Server

    Wang, Long; Wang, Yi

    2012-01-01

    Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\dots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
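The definition above is directly computable from the adjacency spectrum. A minimal sketch (assuming numpy is available; the example graph below is mine, not from the paper):

```python
import math
import numpy as np

def estrada_index(adj):
    """EE(G) = sum_i exp(lambda_i) over the eigenvalues of the (symmetric)
    adjacency matrix, computed with a dense symmetric eigensolver."""
    eigs = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.exp(eigs)))

# Example: a single edge (K2) has eigenvalues +1 and -1, so EE = e + 1/e.
```

Since eigvalsh runs in O(n^3), this is practical for the graph orders typically considered in extremal spectral graph theory.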

  7. The Maximum Principle for Replicator Equations

    OpenAIRE

    K. Sigmund

    1984-01-01

    By introducing a non-Euclidean metric on the unit simplex, it is possible to identify an interesting class of gradient systems within the ubiquitous "replicator equations" of evolutionary biomathematics. In the case of homogeneous potentials, this leads to maximum principles governing the increase of the average fitness, both in population genetics and in chemical kinetics. This research was carried out as part of the Dynamics of Macrosystems Feasibility Study in the System and Decision ...

  8. On Using Unsatisfiability for Solving Maximum Satisfiability

    OpenAIRE

    Marques-Silva, Joao; Planes, Jordi

    2007-01-01

    Maximum Satisfiability (MaxSAT) is a well-known optimization problem with several practical applications. The most widely known MaxSAT algorithms are ineffective at solving hard problem instances from practical application domains. Recent work proposed using efficient Boolean Satisfiability (SAT) solvers for solving the MaxSAT problem, based on identifying and eliminating unsatisfiable subformulas. However, these algorithms do not scale in practice. This paper analyzes existing MaxSAT al...

  9. Maximum Principles and some Related Topics

    Czech Academy of Sciences Publication Activity Database

    Marek, Ivo

    Ostrava : ÚGN AV ČR, 2007 - (Blaheta, R.; Starý, J.) ISBN 978-80-86407-12-8. [SNA '07. Seminar on Numerical Analysis. Ostrava (CZ), 22.01.2007-26.01.2007] R&D Projects: GA AV ČR 1ET400300415; GA ČR GA201/02/0595 Institutional research plan: CEZ:AV0Z10300504 Keywords : maximum principle * numerical methods for PDEs * Markov chains

  10. Minimal Length, Friedmann Equations and Maximum Density

    CERN Document Server

    Awad, Adel

    2014-01-01

    Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar--Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...

  11. Probable maximum flood estimates in Canada

    International Nuclear Information System (INIS)

    The derivation of the probable maximum flood (PMF) for high hazard dams is one of the most important components of a dam safety program. Ontario Hydro defines the probable maximum flood as a hypothetical flood for a selected location on a given stream whose magnitude is such that there is virtually no chance of being exceeded. Different design assumptions are used in the derivation of PMF by various agencies to reflect historical hydrometeorological events and possible future events, for the geographic area under consideration and for the time period of interest. Details are presented of the design assumptions relating to probable maximum precipitation (PMP) determination for British Columbia Hydro, Alberta Environment and Hydro-Quebec, which is used by the agencies in PMF studies. The computer model used by many of the Canadian agencies in the derivation of PMF is the Streamflow Synthesis and Reservoir Regulation (SSARR) model developed by the U.S. Army Corps of Engineers. The PMP is the most important design input data used in the derivation of PMF, and under- and over-estimates of this parameter will significantly affect the PMF. Suggestions to aid in minimizing over- and under-estimation of PMF are presented. 18 refs., 2 figs., 5 tabs

  12. Maximum entropy analysis of cosmic ray composition

    Science.gov (United States)

    Nosek, Dalibor; Ebr, Jan; Vícha, Jakub; Trávníček, Petr; Nosková, Jana

    2016-03-01

    We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the superposition model. We present two examples that demonstrate what consequences can be drawn for energy dependent changes in the primary composition.

  13. Maximum entropy analysis of cosmic ray composition

    CERN Document Server

    Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana

    2016-01-01

    We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...

  14. Maximum-biomass prediction of homofermentative Lactobacillus.

    Science.gov (United States)

    Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei

    2016-07-01

    Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02)·Y_X/P·C. PMID:26896862
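The prediction equation reported in this record can be applied directly. A trivial sketch follows; only the fitted coefficient 0.59 (± 0.02) comes from the abstract, while the function name and the numerical inputs are made up for illustration:

```python
def predict_max_biomass(x0, yield_per_lactate, mic_lactate, coeff=0.59):
    """Predicted maximum biomass from the abstract's fitted relation
        X_max - X_0 = coeff * Y_X/P * C,
    where Y_X/P is the biomass yield per unit lactate produced and C is
    the MIC of lactate; coeff = 0.59 +/- 0.02 is the reported fit."""
    return x0 + coeff * yield_per_lactate * mic_lactate
```

With a hypothetical inoculum of 1.0 g/L, a yield of 0.1 g biomass per g lactate, and an MIC of 200 g/L, the relation predicts a ceiling of 12.8 g/L.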

  15. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  16. Logical consistency and sum-constrained linear models

    NARCIS (Netherlands)

    van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.

    2006-01-01

    A topic that has received quite some attention in the seventies and eighties is logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a

  17. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Toe joint polymer constrained prosthesis. 888.3720 Section 888.3720 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  18. Informed constrained spherical deconvolution (iCSD).

    Science.gov (United States)

    Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Philips, Wilfried; Leemans, Alexander; Sijbers, Jan

    2015-08-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a noninvasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. However, the voxel sizes used in DW-MRI are relatively large, making DW-MRI prone to significant partial volume effects (PVE). These PVEs can be caused both by complex (e.g. crossing) WM fiber configurations and non-WM tissue, such as gray matter (GM) and cerebrospinal fluid. High angular resolution diffusion imaging methods have been developed to correctly characterize complex WM fiber configurations, but significant non-WM PVEs are also present in a large proportion of WM voxels. In constrained spherical deconvolution (CSD), the full fiber orientation distribution function (fODF) is deconvolved from clinically feasible DW data using a response function (RF) representing the signal of a single coherently oriented population of fibers. Non-WM PVEs cause a loss of precision in the detected fiber orientations and an emergence of false peaks in CSD, more prominently in voxels with GM PVEs. We propose a method, informed CSD (iCSD), to improve the estimation of fODFs under non-WM PVEs by modifying the RF to account for non-WM PVEs locally. In practice, the RF is modified based on tissue fractions estimated from high-resolution anatomical data. Results from simulation and in-vivo bootstrapping experiments demonstrate a significant improvement in the precision of the identified fiber orientations and in the number of false peaks detected under GM PVEs. Probabilistic whole brain tractography shows fiber density is increased in the major WM tracts and decreased in subcortical GM regions. The iCSD method significantly improves the fiber orientation estimation at the WM-GM interface, which is especially important in connectomics, where the connectivity between GM regions is analyzed. PMID:25660002

  19. Constraining the Evolution of Poor Clusters

    Science.gov (United States)

    Broming, Emma J.; Fuse, C. R.

    2012-01-01

    There currently exists no method by which to quantify the evolutionary state of poor clusters (PCs). Research by Broming & Fuse (2010) demonstrated that the evolution of Hickson compact groups (HCGs) is constrained by the correlation between the X-ray luminosities of point sources and diffuse gas. The current investigation adopts an analogous approach to understanding PCs. Plionis et al. (2009) proposed a theory to define the evolution of poor clusters. The theory asserts that cannibalism of galaxies causes a cluster to become more spherical and to develop increased velocity dispersion, X-ray temperature, and gas luminosity. Data used to quantify the evolution of the poor clusters were compiled across multiple wavelengths. The sample includes 162 objects from the WBL catalogue (White et al. 1999), 30 poor clusters in the Chandra X-ray Observatory archive, and 15 Abell poor clusters observed with BAX (Sadat et al. 2004). Preliminary results indicate that the cluster velocity dispersion and X-ray gas and point source luminosities can be used to highlight a weak correlation. An evolutionary trend was observed for multiple correlations detailed herein. The current study is a continuation of the work by Broming & Fuse examining point sources and their properties to determine the evolutionary stage of compact groups, poor clusters, and their proposed remnants, isolated ellipticals and fossil groups. Preliminary data suggest that compact groups and their high-mass counterpart, poor clusters, evolve along tracks identified in the X-ray gas - X-ray point source relation. While compact groups likely evolve into isolated elliptical galaxies, fossil groups display properties that suggest they are the remains of fully coalesced poor clusters.

  20. Laterally constrained inversion for CSAMT data interpretation

    Science.gov (United States)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM, and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix: a weighting matrix is applied to the Jacobian to balance the sensitivities of the model parameters, so that the resolution with respect to different parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than traditional single-station inversion; for the noisy data, the true model is recovered even at a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. We then re-invert two CSAMT datasets collected, respectively, in a watershed and in a coal mine area in Northern China and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search by simulated annealing (SA) in the watershed shows that, although both methods deliver very similar results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey also reveal a conductive water-bearing zone that was not identified by the previous inversions. This further demonstrates that the method presented in this paper works well for CSAMT data inversion.
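    The sensitivity-balancing step described above can be illustrated with a minimal numpy sketch. Column-normalizing the Jacobian is one simple choice of weighting matrix; the paper's actual scheme may differ:

```python
import numpy as np

def precondition_jacobian(J, eps=1e-12):
    """Balance parameter sensitivities by scaling each Jacobian column
    to unit norm.  This is one simple realization of the weighting
    matrix described in the abstract; the authors' actual scheme may
    differ.  Returns the scaled Jacobian and the weights needed to map
    a step of the scaled problem back to physical parameters."""
    w = np.linalg.norm(J, axis=0)
    w = np.where(w > eps, w, 1.0)   # guard columns with no sensitivity
    return J / w, w

# Columns with wildly different sensitivities become comparable:
J = np.array([[1.0, 1.0e4],
              [2.0, 3.0e4]])
Js, w = precondition_jacobian(J)
print(np.linalg.norm(Js, axis=0))  # -> [1. 1.]
```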

  1. Constraining blazar physics with polarization signatures

    Science.gov (United States)

    Zhang, Haocheng; Boettcher, Markus; Li, Hui

    2016-01-01

    Blazars are active galactic nuclei whose jets are directed very close to our line of sight. They emit nonthermal-dominated emission from radio to gamma-rays, with the radio to optical emissions known to be polarized. Both radiation and polarization signatures can be strongly variable. Observations have shown that sometimes strong multiwavelength flares are accompanied by drastic polarization variations, indicating active participation of the magnetic field during flares. We have developed a 3D multi-zone time-dependent polarization-dependent radiation transfer code, which enables us to study the spectral and polarization signatures of blazar flares simultaneously. By combining this code with a Fokker-Planck nonthermal particle evolution scheme, we are able to derive simultaneous fits to time-dependent spectra, multiwavelength light curves, and time-dependent optical polarization signatures of a well-known multiwavelength flare with 180 degree polarization angle swing of the blazar 3C279. Our work shows that with detailed consideration of light travel time effects, the apparently symmetric time-dependent radiation and polarization signatures can be naturally explained by a straight, helically symmetric jet pervaded by a helical magnetic field, without the need of any asymmetric structures. Also our model suggests that the excess in the nonthermal particles during flares can originate from magnetic reconnection events, initiated by a shock propagating through the emission region. Additionally, the magnetic field should generally revert to its initial topology after the flare. We conclude that such shock-initiated magnetic reconnection event in an emission environment with relatively strong magnetic energy can be the driver of multiwavelength flares with polarization angle swings. 
Future statistics on such observations will constrain general features of such events, while magneto-hydrodynamic simulations will provide physical scenarios for the magnetic field evolution.

  2. India's emissions in a climate constrained world

    International Nuclear Information System (INIS)

    Scientific studies have repeatedly shown the need to prevent the increase in global emissions so that the planet's average temperature does not exceed 2 deg. C over pre-industrial levels. While the divisions between Annex 1 and non-Annex nations continue to prevent the realization of a comprehensive global climate treaty, all members of the G-20 (incidentally also major emitters) have agreed to prevent the rise in global temperatures above 2 deg. C. This requires that nations consider budgeting their carbon emissions. India presents a unique case study to examine how a major emitter facing a desperate need to increase energy consumption will meet this challenge. The Greenhouse Development Rights (GDR) framework, perhaps considered the most favorable with respect to the responsibility and capacity of India to reduce emissions, was used to explore India's emissions trajectory. India's emissions have been pegged to the pathway required to meet the 2 deg. C target by non-Annex countries. The results have been compared to the expected emissions from 11 energy fuel mix scenarios up to the year 2031 forecasted by the Planning Commission of India. Results reveal that none of the 11 energy scenarios would help India meet its emissions target if it were to follow the 2 deg. C pathway. A thought experiment is followed to explore how India may meet this target. This includes a sensitivity analysis targeting coal consumption, the biggest contributor to India's emissions. - Highlights: → Reducing emissions within a 2 deg. C climate constrained world requires budgeting of global carbon. → The Greenhouse Development Rights Framework can operationalize articles of the UNFCCC. → India's Integrated Energy Policy 11 scenarios reveal that it will exceed the 2 deg. C target. → Framing India's emissions within a 2 deg. C pathway helps envision the required energy mix.

  3. Mars, Moon, Mercury: Magnetometry Constrains Planetary Evolution

    Science.gov (United States)

    Connerney, John E. P.

    2015-04-01

    We have long appreciated that magnetic measurements obtained about a magnetized planet are of great value in probing the deep interior. The existence of a substantial planetary magnetic field implies dynamo action, requiring an electrically conducting, fluid core in convective motion and a source of energy to maintain it. Application of the well-known Lowes spectrum may in some cases identify the dynamo outer radius; where secular variation can be measured, the outer radius can be estimated using the frozen flux approximation. Magnetic induction may be used to probe the electrical conductivity of the mantle and crust. These are useful constraints from which, together with gravity and/or other observables, we may infer the state of the interior and gain insight into planetary evolution. But only recently has it become clear that space magnetometry can do much more, particularly about a planet that once sustained a dynamo that has since disappeared. Mars is the best example of this class: the Mars Global Surveyor spacecraft globally mapped a remanent crustal field left behind after the demise of the dynamo. This map is a magnetic record of the planet's evolution. I will argue that this map may be interpreted to constrain the era of dynamo activity within Mars; to establish the reversal history of the Mars dynamo; to infer the magnetization intensity of Mars crustal rock and the depth of the magnetized crustal layer; and to establish that plate tectonics is not unique to planet Earth, as has so often been claimed. The Lunar magnetic record is in contrast one of weakly magnetized and scattered sources, not easily interpreted as yet in terms of the interior. Magnetometry about Mercury is more difficult to interpret owing to the relatively weak field and proximity to the sun, but MESSENGER (and ultimately BepiColombo) may yet map crustal anomalies (induced and/or remanent).

  4. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    ... measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
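    The "tractable likelihood" being exploited can be illustrated in the complete-data case: for a fully observed Ornstein-Uhlenbeck process, maximum likelihood estimation has a closed form via the exact AR(1) discretization. This sketch is only that baseline; the paper's incomplete-observation setting requires the simulated EM-algorithm instead:

```python
import numpy as np

def ou_mle(x, dt):
    """Closed-form ML estimates for a fully observed Ornstein-Uhlenbeck
    process dX = -theta*(X - mu)*dt + sigma*dW sampled at spacing dt.

    The exact discretization is an AR(1): X_{k+1} = a*X_k + b + eps
    with a = exp(-theta*dt) and Var(eps) = sigma^2*(1 - a^2)/(2*theta),
    so the estimates follow from an ordinary least-squares fit."""
    x0, x1 = x[:-1], x[1:]
    A = np.column_stack([x0, np.ones_like(x0)])
    (a, b), *_ = np.linalg.lstsq(A, x1, rcond=None)
    theta = -np.log(a) / dt
    mu = b / (1.0 - a)
    var_eps = np.var(x1 - (a * x0 + b))
    sigma = np.sqrt(2.0 * theta * var_eps / (1.0 - a ** 2))
    return theta, mu, sigma

# Synthetic check (not the paper's data): simulate, then re-estimate.
rng = np.random.default_rng(0)
theta, mu, sigma, dt, n = 1.0, 0.5, 0.3, 0.01, 100_000
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1.0 - a ** 2) / (2.0 * theta))
x = np.empty(n)
x[0] = mu
for k in range(n - 1):
    x[k + 1] = a * x[k] + mu * (1.0 - a) + sd * rng.standard_normal()
print(ou_mle(x, dt))  # roughly (1.0, 0.5, 0.3)
```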

  5. Modified maximum entropy method in plasma tomography

    International Nuclear Information System (INIS)

    A modified maximum entropy method is proposed for applications to problems of diagnostics of low-temperature plasma. The developed algorithm makes it possible to reconstruct local two-dimensional distributions of coefficients for radiation emission and absorption from their integral characteristics. The reconstruction algorithm is numerically studied on model problems, and its main characteristics are determined. The noise stability of the algorithm, i.e., the dependence of reconstruction accuracy on the noise amplitude in the projection data, is also studied. The results of reconstruction are compared with those of the widely used method of backward projection with filtration (the Shepp-Logan filter)

  6. Decomposition of spectra using maximum autocorrelation factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2001-01-01

    ... classification or regression type analyses. A featured method for low-dimensional representation of multivariate datasets is Hotelling's principal components transform. We extend principal components analysis by incorporating new information into the algorithm. This new information consists of the fact that, given a spectrum, we have a natural order of the input variables. This is similar to Switzer's maximum autocorrelation factors, where a natural order of observations (pixels) in multispectral images is utilized. However, in order to utilize...
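    Switzer's construction referenced above reduces to a generalized eigenproblem. The sketch below implements the classical observation-ordered MAF (not Larsen's variable-ordered extension) and is only an illustration of the idea:

```python
import numpy as np

def maf(X):
    """Maximum autocorrelation factors of X (n_samples, n_vars), where
    the rows have a natural order (e.g. pixels along a transect).

    Maximizing the lag-one autocorrelation of a projection w'X is the
    generalized eigenproblem S_d w = lam * S w, with S the covariance
    of X and S_d the covariance of lag-one differences; the factor's
    autocorrelation equals 1 - lam/2, so small lam is best.  Solved
    here by whitening with S^(-1/2)."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    Sd = np.cov(Xc[1:] - Xc[:-1], rowvar=False)
    evals, E = np.linalg.eigh(S)
    S_inv_half = E @ np.diag(1.0 / np.sqrt(evals)) @ E.T
    lam, U = np.linalg.eigh(S_inv_half @ Sd @ S_inv_half)
    W = S_inv_half @ U        # ascending lam = descending autocorrelation
    return Xc @ W, 1.0 - lam / 2.0

# A smoothly varying component plus pure noise: the first factor
# isolates the smooth one.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)
X = np.column_stack([np.sin(t) + 0.05 * rng.standard_normal(t.size),
                     rng.standard_normal(t.size)])
F, ac = maf(X)
print(ac)  # first autocorrelation near 1, second near 0
```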

  7. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  8. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
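    The MAP selection rule at the heart of the decoder can be sketched as follows. This simplified version keeps the a posteriori scoring over hypothesized signals but omits the random-phase estimation stage of the patented estimator-correlator; all waveforms and parameters are illustrative:

```python
import numpy as np

def map_decode(r, signals, priors, noise_var):
    """Return the index of the hypothesized signal with maximum a
    posteriori probability given received samples r in additive white
    Gaussian noise:  log p(s_k | r) = log(prior_k)
    - ||r - s_k||^2 / (2 * noise_var) + const."""
    scores = [np.log(p) - np.sum((r - s) ** 2) / (2.0 * noise_var)
              for s, p in zip(signals, priors)]
    return int(np.argmax(scores))

# Two hypothesized phase-coded waveforms (illustrative only)
t = np.arange(64)
s0 = np.cos(2 * np.pi * t / 8.0)
s1 = np.cos(2 * np.pi * t / 8.0 + np.pi / 2)
rng = np.random.default_rng(2)
r = s1 + 0.3 * rng.standard_normal(t.size)
print(map_decode(r, [s0, s1], [0.5, 0.5], noise_var=0.09))  # -> 1
```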

  9. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and...
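    The shared-fundamental structure of this model can be sketched with a simple grid search. The per-channel statistic below (energy explained by a least-squares harmonic fit) is a standard white-noise surrogate for the likelihood, not the paper's exact estimator:

```python
import numpy as np

def ml_pitch(channels, fs, f0_grid, n_harm=4):
    """Grid-search sketch of multi-channel ML pitch estimation.

    Every channel shares one fundamental f0 but has its own harmonic
    amplitudes, phases, and noise level, so a per-channel fit score can
    be computed independently and summed across channels before
    maximizing over f0."""
    best, best_score = None, -np.inf
    for f0 in f0_grid:
        score = 0.0
        for x in channels:
            x = x - x.mean()
            t = np.arange(len(x)) / fs
            # Real harmonic basis: cos/sin at each harmonic of f0
            Z = np.column_stack(
                [fn(2 * np.pi * f0 * h * t)
                 for h in range(1, n_harm + 1) for fn in (np.cos, np.sin)])
            coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
            score += np.sum((Z @ coef) ** 2)   # energy explained by f0
        if score > best_score:
            best, best_score = f0, score
    return best

# Two channels, same 100 Hz fundamental, different amplitudes/phases/noise
rng = np.random.default_rng(3)
fs, n = 8000, 1024
t = np.arange(n) / fs
ch1 = (np.sin(2 * np.pi * 100 * t)
       + 0.4 * np.sin(2 * np.pi * 200 * t + 1.0)
       + 0.2 * rng.standard_normal(n))
ch2 = 0.7 * np.cos(2 * np.pi * 100 * t + 0.5) + 0.4 * rng.standard_normal(n)
print(ml_pitch([ch1, ch2], fs, np.arange(80.0, 121.0, 1.0)))  # -> 100.0
```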

  10. A maximum entropy framework for nonexponential distributions

    OpenAIRE

    Peterson, Jack; Dixit, Purushottam D.; Dill, Ken A.

    2013-01-01

    Many statistical distributions, particularly among social and biological systems, have “heavy tails,” which are situations where rare events are not as improbable as would have been guessed from more traditional statistics. Heavy-tailed distributions are the basis for the phrase “the rich get richer.” Here, we propose a basic principle underlying systems with heavy-tailed distributions. We show that it is the same principle (maximum entropy) used in statistical physics and statistics to estim...

  11. Maximum-expectation matching under recourse

    OpenAIRE

    Pedroso, João Pedro; Ikeda, Shiro

    2016-01-01

    This paper addresses the problem of maximizing the expected size of a matching in the case of unreliable vertices and/or edges. The assumption is that upon failure, remaining vertices that have not been matched may be subject to a new assignment. This process may be repeated a given number of times, and the objective is to end with the overall maximum number of matched vertices. The origin of this problem is in kidney exchange programs, going on in several countries, where a vertex is an inco...

  12. On the maximum drawdown during speculative bubbles

    CERN Document Server

    Rotundo, G; Navarra, Mauro; Rotundo, Giulia

    2006-01-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed and is the core of the risk measure estimated here.
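    The drawdown quantities analysed above are simple to compute; a minimal sketch (illustrative prices, not index data from the paper):

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough relative fall of a price series, the
    drawdown statistic tracked during a bubble's rise.  O(n) via a
    running maximum."""
    p = np.asarray(prices, dtype=float)
    peaks = np.maximum.accumulate(p)   # highest price seen so far
    return float((1.0 - p / peaks).max())

print(max_drawdown([100, 120, 90, 130, 70]))  # ≈ 0.4615 (peak 130, trough 70)
```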

  13. COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT

    Directory of Open Access Journals (Sweden)

    PETRU SERGIU SERBAN

    2016-06-01

    Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.

  14. Dynamical maximum entropy approach to flocking

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  15. Maximum Probability Domains for Hubbard Models

    CERN Document Server

    Acke, Guillaume; Claeys, Pieter W; Van Raemdonck, Mario; Poelmans, Ward; Van Neck, Dimitri; Bultinck, Patrick

    2015-01-01

    The theory of Maximum Probability Domains (MPDs) is formulated for the Hubbard model in terms of projection operators and generating functions for both exact eigenstates as well as Slater determinants. A fast MPD analysis procedure is proposed, which is subsequently used to analyse numerical results for the Hubbard model. It is shown that the essential physics behind the considered Hubbard models can be exposed using MPDs. Furthermore, the MPDs appear to be in line with what is expected from Valence Bond Theory-based knowledge.

  16. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  17. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors while avoiding the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method.
The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  18. An Improved Maximum Neural Network with Stochastic Dynamics Characteristic for Maximum Clique Problem

    Science.gov (United States)

    Yang, Gang; Tang, Zheng; Dai, Hongwei

    Through analyzing the dynamics of a maximum neural network with an added vertex, we find that the solution quality is mainly determined by the added vertex weights. To increase the maximum neural network's ability, a stochastic nonlinear self-feedback and a flexible annealing strategy are embedded in the maximum neural network, which makes the network more powerful in escaping local minima and independent of the initial values. At the same time, we show that the solving ability of the maximum neural network depends on the problem, and we introduce a new parameter into our network to improve that ability. Simulations on random graphs and some DIMACS clique instances from the second DIMACS challenge show that our improved network is superior to other algorithms in terms of solution quality and CPU time.

  19. A new unfolding code combining maximum entropy and maximum likelihood for neutron spectrum measurement

    International Nuclear Information System (INIS)

    We present a new spectrum unfolding code, the Maximum Entropy and Maximum Likelihood Unfolding Code (MEALU), based on the maximum likelihood method combined with the maximum entropy method, which can determine a neutron spectrum without requiring an initial guess spectrum. The Normal or Poisson distributions can be used for the statistical distribution. MEALU can treat full covariance data for a measured detector response and response function. The algorithm was verified through an analysis of mock-up data and its performance was checked by applying it to measured data. The results for measured data from the Joyo experimental fast reactor were also compared with those obtained by the conventional J-log method for neutron spectrum adjustment. It was found that MEALU has potential advantages over conventional methods with regard to preparation of a priori information and uncertainty estimation. (author)

  20. Bandwidth Constrained Multi-interface Networks

    Science.gov (United States)

    D'Angelo, Gianlorenzo; di Stefano, Gabriele; Navarra, Alfredo

    In heterogeneous networks, devices can communicate by means of multiple wired or wireless interfaces. By switching among interfaces or by combining the available interfaces, each device might establish several connections. A connection is established when the devices at its endpoints share at least one active interface. Each interface is assumed to require an activation cost and to provide a communication bandwidth. In this paper, we consider the problem of activating the cheapest set of interfaces in a network G = (V, E) in order to guarantee a minimum bandwidth B of communication between two specified nodes. Nodes V represent the devices, and edges E represent the connections that can be established. In practical cases, a bounded number k of different interfaces among all the devices can be considered. Despite this assumption, the problem turns out to be NP-hard even for small values of k and Δ, where Δ is the maximum degree of the network. In particular, the problem is NP-hard for any fixed k ≥ 2 and Δ ≥ 3, while it is polynomially solvable when k = 1, or when Δ ≤ 2 and k = O(1). Moreover, we show that the problem is not approximable within η log B or Ω(log log |V|) for any fixed k ≥ 3, Δ ≥ 3, and for a certain constant η, unless P = NP. We then provide an approximation algorithm with ratio guarantee b_max, where b_max is the maximum communication bandwidth allowed among all the available interfaces. Finally, we focus on particular cases by providing complexity results and polynomial algorithms for Δ ≤ 2.

  1. States of Maximum Thermodynamic Efficiency In Daisyworld

    Science.gov (United States)

    Pujol, T.

    Daisyworld is the simplest example used to illustrate the implications of the Gaia hypothesis. The interaction between the environment and the biota follows from the assumption of using daisies with different colours (i.e., albedos) than that of the bare earth. Then, the amount of daisies may modify the energy absorbed by the planet. In the classical version of Daisyworld, turbulent fluxes adopt a diffusive approximation, which clearly constrains the range of values for the solar insolation for which biota may grow on the planet. Here we apply the maximum entropy principle (MEP) to Daisyworld. We conclude that the MEP sets the maximum range of values for the solar insolation with a non-zero amount of daisies. Outside this range, daisies cannot grow on the planet for any physically realistic heat flux. Inside this range, the distribution of daisies is set to agree with the MEP. The range of values for the solar insolation for which biota stabilises the climate is substantially enlarged in comparison with the classical version of Daisyworld.

  2. Maximum Flux Transition Paths of Conformational Change

    CERN Document Server

    Zhao, Ruijun; Skeel, Robert D

    2009-01-01

    Given two metastable states A and B of a biomolecular system, the problem is to calculate the likely paths of the transition from A to B. Such a calculation is more informative and more manageable if done for a reduced set of collective variables chosen so that paths cluster in collective variable space. The computational task becomes that of computing the "center" of such a cluster. A good way to define the center employs the concept of a committor, whose value at a point in collective variable space is the probability that a trajectory at that point will reach B before A. The committor "foliates" the transition region into a collection of isocommittors. The maximum flux transition path is defined as a path that crosses each isocommittor at a point which (locally) has the highest crossing rate of distinct reactive trajectories. (This path is different from that of the MaxFlux method of Huo and Straub.) To make the calculation tractable, three approximations are introduced. It is shown that a maximum flux tra...

  3. Constraining the margins of Neoproterozoic ice masses: depositional signature, palaeoflow and glaciodynamics

    Science.gov (United States)

    Busfield, Marie; Le Heron, Daniel

    2016-04-01

    The scale and distribution of Neoproterozoic ice masses remains poorly understood. The classic Snowball Earth hypothesis argues for globally extensive ice sheets, separated by small ocean refugia, yet the positions of palaeo-ice sheet margins and the extent of these open water regions are unknown. Abundant evidence worldwide for multiple cycles of ice advance and recession is suggestive of much more dynamic mass balance changes than previously predicted. Sedimentological analysis enables an understanding of the changing ice margin position to be gained through time, in some cases allowing it to be mapped. Where the maximum extent of ice advance varies within a given study area, predictions can also be made on the morphology of the ice margin, and the underlying controls on this morphology e.g. basin configuration. This can be illustrated using examples from the Neoproterozoic Kingston Peak Formation in the Death Valley region of western USA. Throughout the Sperry Wash, northern Kingston Range and southern Kingston Range study sites the successions show evidence of multiple cycles of ice advance and retreat, but the extent of maximum ice advance is extremely variable, reaching ice-contact conditions at Sperry Wash but only ice-proximal settings in the most distal southern Kingston Range. The overall advance is also much more pronounced at Sperry Wash, from ice-distal to ice-contact settings, as compared to ice-distal to ice-proximal settings in the southern Kingston Range. Therefore, the position of the ice margin can be located at the Sperry Wash study site, where the more pronounced progradation is used to argue for topographically constrained ice, feeding the unconstrained shelf through the northern into the southern Kingston Range. This raises the question as to whether Neoproterozoic ice masses could be defined as topographically constrained ice caps, or larger ice sheets feeding topographically constrained outlet glaciers.

  4. KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR

    Directory of Open Access Journals (Sweden)

    John A. Mercer

    2005-06-01

    Full Text Available It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, plays the critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since the velocity ranges were not similar between conditions, resulting in the analysis of six subjects. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/m·s-1; SL2.5: -0.003 ± 0.14 BW/m·s-1; p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/m·s-1; SL2.5: 1.39 ± 1.51 %/m·s-1; p < 0.05). Stride length was an important factor
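The analysis described above (fit a line of each impact parameter against running velocity per subject, then compare slopes across conditions with a paired t-test) can be sketched as follows. This is an illustrative sketch only, not the study's analysis code; all numeric data below are fabricated for demonstration.

```python
def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def paired_t(a, b):
    """Paired t statistic for two matched samples."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    md = sum(d) / n
    var = sum((x - md) ** 2 for x in d) / (n - 1)
    return md / (var / n) ** 0.5

# one subject's (velocity, impact force) pairs per condition (fabricated)
vel = [2.5, 3.0, 3.5, 4.0]
force_psl = [2.0, 2.1, 2.2, 2.3]   # preferred stride length
force_sl25 = [2.1, 2.1, 2.1, 2.1]  # stride constrained at 2.5 m

print(slope(vel, force_psl))   # positive: force rises with velocity
print(slope(vel, force_sl25))  # near zero: constraint removes the trend

# per-subject slopes (fabricated) for six subjects, one value per condition
slopes_psl = [0.30, 0.05, 0.21, 0.25, 0.10, 0.16]
slopes_sl25 = [0.10, -0.15, 0.05, -0.10, 0.02, 0.05]
print(paired_t(slopes_psl, slopes_sl25))  # compare to critical t, df = 5
```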

  5. How We Can Constrain Aerosol Type Globally

    Science.gov (United States)

    Kahn, Ralph

    2016-01-01

    In addition to aerosol number concentration, aerosol size and composition are essential attributes needed to adequately represent aerosol-cloud interactions (ACI) in models. As the nature of ACI varies enormously with environmental conditions, global-scale constraints on particle properties are indicated. And although advanced satellite remote-sensing instruments can provide categorical aerosol-type classification globally, detailed particle microphysical properties are unobtainable from space with currently available or planned technologies. For the foreseeable future, only in situ measurements can constrain particle properties at the level of detail required for ACI, as well as reduce uncertainties in regional-to-global-scale direct aerosol radiative forcing (DARF). The limitation of in situ measurements for this application is sampling. However, there is a simplifying factor: for a given aerosol source, in a given season, particle microphysical properties tend to be repeatable, even if the amount varies from day to day and year to year, because the physical nature of the particles is determined primarily by the regional environment. So, if the PDFs of particle properties from major aerosol sources can be adequately characterized, they can be used to add the missing microphysical detail to the better-sampled satellite aerosol-type maps. This calls for Systematic Aircraft Measurements to Characterize Aerosol Air Masses (SAM-CAAM). We are defining a relatively modest and readily deployable, operational aircraft payload capable of measuring key aerosol absorption, scattering, and chemical properties in situ, and a program for statistically characterizing these properties for the major aerosol air mass types, at a level of detail unobtainable from space. It is aimed at: (1) enhancing satellite aerosol-type retrieval products with better aerosol climatology assumptions, and (2) improving the translation between satellite-retrieved aerosol optical properties and

  6. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake...

  7. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum passenger seating configuration. 23... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum passenger seating configuration must be established....

  8. Post-maximum near infrared spectra of SN 2014J: A search for interaction signatures

    CERN Document Server

    Sand, D J; Banerjee, D P K; Marion, G H; Diamond, T R; Joshi, V; Parrent, J T; Phillips, M M; Stritzinger, M D; Venkataraman, V

    2016-01-01

    We present near infrared (NIR) spectroscopic and photometric observations of the nearby Type Ia SN 2014J. The seventeen NIR spectra span epochs from +15.3 to +92.5 days after $B$-band maximum light, while the $JHK_s$ photometry includes epochs from $-$10 to +71 days. These data are used to constrain the progenitor system of SN 2014J utilizing the Pa$\beta$ line, following recent suggestions that this phase period and the NIR in particular are excellent for constraining the amount of swept-up hydrogen-rich material associated with a non-degenerate companion star. We find no evidence for Pa$\beta$ emission lines in our post-maximum spectra, with a rough hydrogen mass limit of $\lesssim$0.1 $M_{\odot}$, which is consistent with previous limits in SN 2014J from late-time optical spectra of the H$\alpha$ line. Nonetheless, the growing dataset of high-quality NIR spectra holds the promise of very useful hydrogen constraints.

  9. A connection theory for a nonlinear differential constrained system

    Institute of Scientific and Technical Information of China (English)

    许志新; 郭永新; 吴炜

    2002-01-01

    An Ehresmann connection on a constrained state bundle defined by nonlinear differential constraints is constructed for nonlinear nonholonomic systems. A set of differential constraints is integrable if and only if the curvature of the Ehresmann connection vanishes. Based on a geometric interpretation of d-δ commutation relations in constrained dynamics given in this paper, the complete integrability conditions for the differential constraints are proven to be equivalent to three requirements upon the conditional variation in mechanics: (1) the variations belong to the constrained manifold; (2) the time derivative commutes with the variational operator; (3) the variations satisfy Chetaev's conditions.

  10. "Resource-constrained innovation" : Classification and implications for multinational firms

    OpenAIRE

    Zeschky, Marco; Winterhalter, Stephan; Gassmann, Oliver

    2014-01-01

    Recent developments show that it is no longer enough to serve high-margin markets with high-tech products but that firms must also be able to serve resource-constrained markets with products that deliver high value at ultra-low costs. Resource-constrained consumers are often found in the lower part of the economic pyramid and exist not only in emerging but also in developed markets. This article discusses the different types of resource-constrained innovations: cost, good-enough, frugal, an...

  11. Processing Constrained K Closest Pairs Query in Spatial Databases

    Institute of Scientific and Technical Information of China (English)

    LIU Xiaofeng; LIU Yunsheng; XIAO Yingyuan

    2006-01-01

    In this paper, the constrained K closest pairs query is introduced, which retrieves the K closest pairs satisfying a given spatial constraint from two datasets. For datasets indexed by R-trees in spatial databases, three algorithms are presented for answering this kind of query. Among them, the two-phase Range+Join and Join+Range algorithms adopt the strategy of changing the execution order of the range and closest-pairs queries, while the constrained heap-based algorithm utilizes extended distance functions to prune the search space and minimize the pruning distance. Experimental results show that the constrained heap-based algorithm has better applicability and performance than the two-phase algorithms.
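As a point of reference for the query defined above, here is a brute-force sketch (deliberately ignoring the paper's R-tree algorithms): return the K closest pairs, one point from each dataset, where both points satisfy a spatial constraint. The rectangular region predicate and all coordinates are hypothetical.

```python
import heapq
import math

def constrained_k_closest_pairs(A, B, k, in_region):
    """Brute-force constrained K closest pairs between point sets A and B."""
    candidates = []
    for a in A:
        for b in B:
            # keep only pairs whose points both satisfy the constraint
            if in_region(a) and in_region(b):
                candidates.append((math.dist(a, b), a, b))
    return heapq.nsmallest(k, candidates)  # k pairs with smallest distance

A = [(0, 0), (5, 5), (9, 9)]
B = [(1, 1), (6, 5), (9, 8)]
inside = lambda p: 0 <= p[0] <= 8 and 0 <= p[1] <= 8  # constraint region

for d, a, b in constrained_k_closest_pairs(A, B, 2, inside):
    print(round(d, 3), a, b)  # (9, 9) and (9, 8) are excluded by the region
```

An R-tree-based method would prune entire subtrees using the same predicate and distance bounds instead of enumerating all pairs.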

  12. Characterizing Local Optima for Maximum Parsimony.

    Science.gov (United States)

    Urheim, Ellen; Ford, Eric; St John, Katherine

    2016-05-01

    Finding the best phylogenetic tree under the maximum parsimony optimality criterion is computationally difficult. We quantify the occurrence of such optima for well-behaved sets of data. When nearest neighbor interchange operations are used, multiple local optima can occur even for "perfect" sequence data, which results in hill-climbing searches that never reach a global optimum. In contrast, we show that when neighbors are defined via the subtree prune and regraft metric, there is a single local optimum for perfect sequence data, and thus, every such search finds a global optimum quickly. We further characterize conditions for which sequences simulated under the Cavender-Farris-Neyman and Jukes-Cantor models of evolution yield well-behaved search spaces. PMID:27234257

  13. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.

  14. Diffusivity Maximum in a Reentrant Nematic Phase

    Directory of Open Access Journals (Sweden)

    Martin Schoen

    2012-06-01

    Full Text Available We report molecular dynamics simulations of confined liquid crystals using the Gay–Berne–Kihara model. Upon isobaric cooling, the standard sequence of isotropic–nematic–smectic A phase transitions is found. Upon further cooling a reentrant nematic phase occurs. We investigate the temperature dependence of the self-diffusion coefficient of the fluid in the nematic, smectic and reentrant nematic phases. We find a maximum in diffusivity upon isobaric cooling. Diffusion increases dramatically in the reentrant phase due to the high orientational molecular order. As the temperature is lowered, the diffusion coefficient follows an Arrhenius behavior. The activation energy of the reentrant phase is found in reasonable agreement with the reported experimental data. We discuss how repulsive interactions may be the underlying mechanism that could explain the occurrence of reentrant nematic behavior for polar and non-polar molecules.
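The Arrhenius behavior mentioned above implies D = D0 · exp(-E_a / (kB·T)), so the activation energy follows from the slope of ln(D) against 1/T. The sketch below is a generic illustration of that fit, not the paper's analysis; the temperature and diffusivity values are synthetic, generated from an assumed E_a.

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps, diffs):
    """E_a (eV) from the least-squares slope of ln(D) vs 1/T."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(d) for d in diffs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * KB  # slope of ln D vs 1/T is -E_a / kB

# synthetic data generated with E_a = 0.30 eV and D0 = 1 (arbitrary units)
temps = [250.0, 275.0, 300.0, 325.0]
diffs = [math.exp(-0.30 / (KB * t)) for t in temps]
print(round(activation_energy(temps, diffs), 3))  # recovers 0.3
```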

  15. Quantum optimization and maximum clique problems

    Science.gov (United States)

    Yatsenko, Vitaliy A.; Pardalos, Panos M.; Chiarini, Bruno H.

    2004-08-01

    This paper describes a new approach to global optimization and control that uses geometric methods and modern quantum mathematics. Polynomial extremal problems (PEPs) are considered. PEPs constitute one of the most important subclasses of nonlinear programming models. Their distinctive feature is that the objective function and constraints can be expressed by polynomial functions in one or several variables. A general approach to optimization based on quantum holonomic computing algorithms and the instanton mechanism is presented. An optimization method based on geometric Lie-algebraic structures on Grassmann manifolds and related to Lax-type flows is proposed. Making use of differential geometric techniques, it is shown that the associated holonomy groups properly realizing quantum computation can be effectively found for polynomial problems. Two examples, demonstrating calculational aspects of holonomic quantum computing and maximum clique problems in very large graphs, are considered in detail.

  16. Maximum practical efficiency of helium temperature refrigerators

    International Nuclear Information System (INIS)

    An ideal refrigerator using a perfect gas working fluid is defined which gives the efficiency of a refrigerator as a function of compressor and expander efficiency, heat exchanger temperature difference, and heat exchanger pressure drop. Although not suited to detailed hardware design, this approach clearly relates the overall cycle efficiency to component efficiencies. In contrast, computer studies of specific cycles using real fluid properties are usually such that the details tend to overshadow major trends. The results of the study show that in an efficient cycle the major losses are in the compressor and the cold end expansion device. For current compressor and expander efficiencies the maximum practical helium temperature refrigerator efficiency is about 37% of Carnot. (author)

  17. Video segmentation using Maximum Entropy Model

    Institute of Scientific and Technical Information of China (English)

    QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei

    2005-01-01

    Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, although the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.

  18. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the...... equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...... any influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the cointegration degree but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegration does...

  19. Co-Clustering under the Maximum Norm

    Directory of Open Access Journals (Sweden)

    Laurent Bulteau

    2016-02-01

    Full Text Available Co-clustering, that is, partitioning a numerical matrix into “homogeneous” submatrices, has many applications ranging from bioinformatics to election analysis. Many interesting variants of co-clustering are NP-hard. We focus on the basic variant of co-clustering where the homogeneity of a submatrix is defined in terms of minimizing the maximum distance between two entries. In this context, we spot several NP-hard cases, as well as a number of relevant polynomial-time solvable special cases, thus charting the border of tractability for this challenging data clustering problem. For instance, we provide polynomial-time solvability when having to partition the rows and columns into two subsets each (meaning that one obtains four submatrices). When partitioning rows and columns into three subsets each, however, we encounter NP-hardness, even for input matrices containing only values from {0, 1, 2}.
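The homogeneity notion described above can be made concrete: given row and column partitions, the cost of a co-clustering under the maximum norm is the largest spread (max minus min) within any induced submatrix. The sketch below evaluates that cost for a hypothetical matrix and partition; it is an illustration of the objective, not of the paper's algorithms.

```python
def max_norm_cost(matrix, row_groups, col_groups):
    """Largest (max - min) spread over all submatrices induced by the
    row and column partitions; the co-clustering objective to minimize."""
    cost = 0
    for rows in row_groups:
        for cols in col_groups:
            vals = [matrix[r][c] for r in rows for c in cols]
            cost = max(cost, max(vals) - min(vals))
    return cost

M = [
    [0, 0, 2, 2],
    [1, 0, 2, 2],
    [5, 5, 9, 9],
]
rows = [[0, 1], [2]]      # partition rows into two subsets
cols = [[0, 1], [2, 3]]   # partition columns into two subsets
print(max_norm_cost(M, rows, cols))  # 1: the top-left block spans {0, 1}
```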

  20. On Using Unsatisfiability for Solving Maximum Satisfiability

    CERN Document Server

    Marques-Silva, Joao

    2007-01-01

    Maximum Satisfiability (MaxSAT) is a well-known optimization problem, with several practical applications. The most widely known MaxSAT algorithms are ineffective at solving hard problem instances from practical application domains. Recent work proposed using efficient Boolean Satisfiability (SAT) solvers for solving the MaxSAT problem, based on identifying and eliminating unsatisfiable subformulas. However, these algorithms do not scale in practice. This paper analyzes existing MaxSAT algorithms based on unsatisfiable subformula identification. Moreover, the paper proposes a number of key optimizations to these MaxSAT algorithms and a new alternative algorithm. The proposed optimizations and the new algorithm provide significant performance improvements on MaxSAT instances from practical applications. Moreover, the efficiency of the new generation of unsatisfiability-based MaxSAT solvers becomes effectively indexed to the ability of modern SAT solvers to prove unsatisfiability and identify unsatisfi...
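To make the problem statement above concrete (though not the unsatisfiability-based algorithms the paper studies), here is a brute-force MaxSAT evaluator: clauses are lists of literals encoded as signed integers, and we enumerate all assignments to find the maximum number of simultaneously satisfiable clauses. This only works for tiny instances; real solvers avoid the exponential enumeration.

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Maximum number of clauses satisfiable by any assignment.
    A clause is a list of nonzero ints: k means variable k is true,
    -k means variable k is false."""
    best = 0
    for bits in product([False, True], repeat=n_vars):
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x1 or x2), (not x1 or x2), (x1 or not x2), (not x1 or not x2):
# an unsatisfiable set where any assignment satisfies exactly 3 clauses
clauses = [[1, 2], [-1, 2], [1, -2], [-1, -2]]
print(max_sat(clauses, 2))  # 3
```

The unsatisfiability-based approach the abstract describes instead asks a SAT solver for an unsatisfiable subformula and relaxes it, repeating until the remainder is satisfiable.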

  1. Approximation for Maximum Surjective Constraint Satisfaction Problems

    CERN Document Server

    Bach, Walter

    2011-01-01

    Maximum surjective constraint satisfaction problems (Max-Sur-CSPs) are computational problems where we are given a set of variables denoting values from a finite domain B and a set of constraints on the variables. A solution to such a problem is a surjective mapping from the set of variables to B such that the number of satisfied constraints is maximized. We study the approximation performance that can be achieved by algorithms for these problems, mainly by investigating their relation with Max-CSPs (which are the corresponding problems without the surjectivity requirement). Our work gives a complexity dichotomy for Max-Sur-CSP(B) between PTAS and APX-complete, under the assumption that there is a complexity dichotomy for Max-CSP(B) between PO and APX-complete, which has already been proved on the Boolean domain and 3-element domains.

  2. Spacecraft Maximum Allowable Concentrations for Airborne Contaminants

    Science.gov (United States)

    James, John T.

    2008-01-01

    The enclosed table lists official spacecraft maximum allowable concentrations (SMACs), which are guideline values set by the NASA/JSC Toxicology Group in cooperation with the National Research Council Committee on Toxicology (NRCCOT). These values should not be used for situations other than human space flight without careful consideration of the criteria used to set each value. The SMACs take into account a number of unique factors such as the effect of space-flight stress on human physiology, the uniform good health of the astronauts, and the absence of pregnant or very young individuals. Documentation of the values is given in a 5-volume series of books entitled "Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants" published by the National Academy Press, Washington, D.C. These books can be viewed electronically at http://books.nap.edu/openbook.php?record_id=9786&page=3. Short-term (1 and 24 hour) SMACs are set to manage accidental releases aboard a spacecraft and permit risk of minor, reversible effects such as mild mucosal irritation. In contrast, the long-term SMACs are set to fully protect healthy crewmembers from adverse effects resulting from continuous exposure to specific air pollutants for up to 1000 days. Crewmembers with allergies or unusual sensitivity to trace pollutants may not be afforded complete protection, even when long-term SMACs are not exceeded. Crewmember exposures involve a mixture of contaminants, each at a specific concentration (C(sub n)). These contaminants could interact to elicit symptoms of toxicity even though individual contaminants do not exceed their respective SMACs. The air quality is considered acceptable when the toxicity index (T(sub grp)) for each toxicological group of compounds is less than 1, where T(sub grp) is calculated as follows: T(sub grp) = C(sub 1)/SMAC(sub 1) + C(sub 2)/SMAC(sub 2) + ... + C(sub n)/SMAC(sub n).
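The toxicity-index rule quoted above is a simple sum of concentration-to-SMAC ratios per toxicological group. A minimal sketch, with entirely hypothetical concentrations and SMAC values (the real values are in the cited NRC volumes):

```python
def toxicity_index(conc_smac_pairs):
    """T_grp = sum(C_n / SMAC_n) over the contaminants in one group."""
    return sum(c / smac for c, smac in conc_smac_pairs)

# (measured concentration, SMAC) for one group, same units throughout;
# values are made up for illustration
group = [
    (0.20, 1.00),
    (0.10, 0.50),
    (0.05, 0.25),
]

t_grp = toxicity_index(group)
print(t_grp)       # 0.6
print(t_grp < 1)   # True: air quality acceptable for this group
```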

  3. Maximum entropy imaging of radio astrophysical data

    Energy Technology Data Exchange (ETDEWEB)

    Holdaway, M.A.

    1990-01-01

    Imaging and deconvolution of linear polarization P in radio synthesis has generally been accomplished through the same means as imaging total intensity I, namely, by Fourier inversion followed by application of the CLEAN algorithm (Hogbom, 1974). CLEAN images each Stokes parameter I, Q, U, and V independently, or at best, Q and U are imaged simultaneously with a complex CLEAN. In very long baseline interferometry (VLBI) and the sources which can be investigated by VLBI, poor (u,v) coverage, lower SNR in the polarization data, high intrinsic degrees of polarization in the observed sources, and knotty source structure can result in nonphysical I and P images in which features are imaged to be more than 100% polarized. The maximum entropy method (MEM) is flexible enough to permit simultaneous imaging of I and P to ensure the image is everywhere less than 100% polarized. Here, the algorithm of Cornwell and Evans (1985) is applied to imaging I and P on milliarcsecond scales. The success of MEM in imaging VLBI data is gauged by simulations on test data. Presented here are the first I and P radio images deconvolved simultaneously by MEM and generated from real data. The jet in quasar 3C273 is found to be polarized with the electric field perpendicular to the jet out to 70 milliarcseconds (mas). The jet in quasar 1928 + 738 is found to be polarized out to 75 mas with the electric field perpendicular to the jet. In the quasar 3C345, one component is found to be approximately 50% polarized, close to the maximum for synchrotron radiation and indicating a highly ordered magnetic field. Polarized features are associated with changes in jet direction in 1928 + 738 and 3C345.

  4. Maximum entropy imaging of radio astrophysical data

    International Nuclear Information System (INIS)

    Imaging and deconvolution of linear polarization P in radio synthesis has generally been accomplished through the same means as imaging total intensity I, namely, by Fourier inversion followed by application of the CLEAN algorithm (Hogbom, 1974). CLEAN images each Stokes parameter I, Q, U, and V independently, or at best, Q and U are imaged simultaneously with a complex CLEAN. In very long baseline interferometry (VLBI) and the sources which can be investigated by VLBI, poor (u,v) coverage, lower SNR in the polarization data, high intrinsic degrees of polarization in the observed sources, and knotty source structure can result in nonphysical I and P images in which features are imaged to be more than 100% polarized. The maximum entropy method (MEM) is flexible enough to permit simultaneous imaging of I and P to ensure the image is everywhere less than 100% polarized. Here, the algorithm of Cornwell and Evans (1985) is applied to imaging I and P on milliarcsecond scales. The success of MEM in imaging VLBI data is gauged by simulations on test data. Presented here are the first I and P radio images deconvolved simultaneously by MEM and generated from real data. The jet in quasar 3C273 is found to be polarized with the electric field perpendicular to the jet out to 70 milliarcseconds (mas). The jet in quasar 1928 + 738 is found to be polarized out to 75 mas with the electric field perpendicular to the jet. In the quasar 3C345, one component is found to be ∼50% polarized, close to the maximum for synchrotron radiation and indicating a highly ordered magnetic field. Polarized features are associated with changes in jet direction in 1928 + 738 and 3C345

  5. Network constrained wind integration on Vancouver Island

    International Nuclear Information System (INIS)

    The aim of this study is to determine the costs and carbon emissions associated with operating a hydro-dominated electricity generation system (Vancouver Island, Canada) with varying degrees of wind penetration. The focus is to match the wind resource, system demand and abilities of extant generating facilities on a temporal basis, resulting in an operating schedule that minimizes system cost over a given period. This is performed by taking the perspective of a social planner who desires to find the lowest-cost mix of new and existing generation facilities. Unlike other studies, this analysis considers variable efficiency for thermal and hydro-generators, resulting in a fuel cost that varies with respect to generator part load. Since this study and others have shown that wind power may induce a large variance on existing dispatchable generators, forcing more frequent operation at reduced part load, inclusion of increased fuel cost at part load is important when investigating wind integration as it can significantly reduce the economic benefits of utilizing low-cost wind. Results indicate that the introduction of wind power may reduce system operating costs, but this depends heavily on whether the capital cost of the wind farm is considered. For the Vancouver Island mix with its large hydro-component, operating cost was reduced by a maximum of 15% at a wind penetration of 50%, with a negligible reduction in operating cost when the wind farm capital cost was included

  6. Constrained reconstructions for 4D intervention guidance

    Science.gov (United States)

    Kuntz, J.; Flach, B.; Kueres, R.; Semmler, W.; Kachelrieß, M.; Bartling, S.

    2013-05-01

    Image-guided interventions are an increasingly important part of clinical minimally invasive procedures. However, up to now they cannot be performed under 4D (3D + time) guidance due to the exceedingly high x-ray dose. In this work we investigate the applicability of compressed sensing reconstructions for highly undersampled CT datasets combined with the incorporation of prior images in order to yield low-dose 4D intervention guidance. We present a new reconstruction scheme, prior image dynamic interventional CT (PrIDICT), that accounts for specific image features in intervention guidance and compare it to PICCS and ASD-POCS. The optimal parameters for the dose per projection and the number of projections per reconstruction are determined in phantom simulations and measurements. In vivo experiments in six pigs are performed in a cone-beam CT; measured doses are compared to the current gold-standard intervention guidance represented by a clinical fluoroscopy system. Phantom studies show maximum image quality for identical overall doses in the range of 14 to 21 projections per reconstruction. In vivo studies reveal that interventional materials can be followed in 4D visualization and that PrIDICT, compared to PICCS and ASD-POCS, shows superior reconstruction results and fewer artifacts in the periphery, with dose on the order of biplane fluoroscopy. These results suggest that 4D intervention guidance can be realized with today's flat detector and gantry systems using the herein presented reconstruction scheme.

  7. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    CERN Document Server

    Hall, Alex

    2016-01-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...

  8. Carbon-constrained scenarios. Final report

    International Nuclear Information System (INIS)

    This report provides the results of the study entitled 'Carbon-Constrained Scenarios' that was funded by FONDDRI from 2004 to 2008. The study was achieved in four steps: (i) Investigating the stakes of a strong carbon constraint for the industries participating in the study, not only looking at the internal decarbonization potential of each industry but also exploring the potential shifts of the demand for industrial products. (ii) Developing a hybrid modelling platform based on a tight dialog between the sectoral energy model POLES and the macro-economic model IMACLIM-R, in order to achieve a consistent assessment of the consequences of an economy-wide carbon constraint on energy-intensive industrial sectors, while taking into account technical constraints, barriers to the deployment of new technologies and general economic equilibrium effects. (iii) Producing several scenarios up to 2050 with different sets of hypotheses concerning the driving factors for emissions, in particular the development styles. (iv) Establishing an iterative dialog between researchers and industry representatives on the results of the scenarios so as to improve them, but also to facilitate the understanding and the appropriate use of these results by the industrial partners. This report provides the results of the different scenarios computed in the course of the project. It is a partial synthesis of the work that has been accomplished and of the numerous exchanges that this study has induced between modellers and stakeholders. The first part was written in April 2007 and describes the first reference scenario and the first mitigation scenario designed to achieve stabilization at 450 ppm CO2 at the end of the 21st century. This scenario has been called 'mimetic' because it has been built on the assumption that the ambitious climate policy would coexist with a progressive convergence of development paths toward the current paradigm of industrialized countries: urban sprawl, general

  9. FXR agonist activity of conformationally constrained analogs of GW 4064

    Energy Technology Data Exchange (ETDEWEB)

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y.; Caldwell, Richard D.; Caravella, Justin A.; Chen, Lihong; Creech, Katrina L.; Deaton, David N.; Madauss, Kevin P.; Marr, Harry B.; McFadyen, Robert B.; Miller, Aaron B.; Navas, III, Frank; Parks, Derek J.; Spearing, Paul K.; Todd, Dan; Williams, Shawn P.; Wisely, G. Bruce; (GSKNC)

    2010-09-27

    Two series of conformationally constrained analogs of the FXR agonist GW 4064 1 were prepared. Replacement of the metabolically labile stilbene with either benzothiophene or naphthalene rings led to the identification of potent full agonists 2a and 2g.

  10. Free and constrained symplectic integrators for numerical general relativity

    CERN Document Server

    Richter, Ronny

    2008-01-01

We consider symplectic time integrators in numerical General Relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively 1+1-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild space-time. We compare symplectic and non-symplectic integrators for free evolution, showing very diffe...
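The base scheme of this record, the Stoermer-Verlet method, can be sketched on a toy Hamiltonian. The following minimal example applies the kick-drift-kick form to a one-dimensional harmonic oscillator (an illustrative stand-in, not the ADM equations of the paper); the bounded long-time energy error is the property that motivates symplectic integration here.

```python
def stormer_verlet(q, p, grad_V, dt, steps):
    """Symplectic Stoermer-Verlet (kick-drift-kick) for dq/dt = p, dp/dt = -grad_V(q)."""
    for _ in range(steps):
        p = p - 0.5 * dt * grad_V(q)   # half kick
        q = q + dt * p                 # full drift
        p = p - 0.5 * dt * grad_V(q)   # half kick
    return q, p

# Toy Hamiltonian H = (p^2 + q^2) / 2 (harmonic oscillator): for a symplectic
# integrator the energy error stays bounded over long runs instead of drifting.
q0, p0 = 1.0, 0.0
q, p = stormer_verlet(q0, p0, lambda s: s, dt=0.01, steps=10_000)
energy_drift = abs((p * p + q * q) / 2 - (p0 * p0 + q0 * q0) / 2)
```

After 10,000 steps the energy error remains of order dt^2, which is the hallmark behaviour the record exploits.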

  11. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a hermitian symmetric space and the constrained KP Lax formulation, and these approaches are shown to be equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  12. Time-dependent constrained Hamiltonian systems and Dirac brackets

    International Nuclear Information System (INIS)

    In this paper the canonical Dirac formalism for time-dependent constrained Hamiltonian systems is globalized. A time-dependent Dirac bracket which reduces to the usual one for time-independent systems is introduced. (author)

  13. Bayesian item selection in constrained adaptive testing using shadow tests

    OpenAIRE

    Bernard P. Veldkamp

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item selection process. The Shadow Test Approach is a general purpose algorithm for administering constrained CAT. In this paper it is shown how the approac...

  14. Kaon photoproduction on the nucleon with constrained parameters

    CERN Document Server

    Nelson, R

    2009-01-01

The new experimental data of kaon photoproduction on the nucleon, gamma p -> K+ Lambda, have been analyzed by means of a multipoles model. Unlike previous models, in this analysis the resonance decay widths are constrained to the values given by the Particle Data Group (PDG). The result indicates that constraining these parameters to the PDG values could dramatically change the conclusions about the important resonances in this reaction found in previous studies.

  15. Refined Algebraic Quantization of Constrained Systems with Structure Functions

    OpenAIRE

    Shvedov, Oleg Yu.

    2001-01-01

    The method of refined algebraic quantization of constrained systems which is based on modification of the inner product of the theory rather than on imposing constraints on the physical states is generalized to the case of constrained systems with structure functions and open gauge algebras. A new prescription for inner product for the open-algebra systems is suggested. It is illustrated on a simple example. The correspondence between refined algebraic and BRST-BFV quantizations is investigat...

  16. Husain-Kuchar model as a constrained BF theory

    CERN Document Server

    Montesinos, Merced

    2008-01-01

    We write the Husain-Kuchar model as a constrained BF theory by giving an action principle for it. The action principle turns out to be very close to Plebanski action for general relativity; the difference with respect to gravity lies in the fact that the condition on the Lagrange multipliers of Plebanski's formulation is not present anymore in the constrained BF action for the Husain-Kuchar model reported in this paper.

  17. Projector fields in the formulation of constrained dynamics

    International Nuclear Information System (INIS)

With the aid of a configuration-velocity-dependent projector field constructed from the constraint conditions, the fundamentals of a geometric model for classical constrained Lagrangian dynamics are established. The projector behaves as a singular metric field. Only constraints which are homogeneous of the first degree in the velocities are considered. A generalized variational Hamiltonian principle is established as a function of the projector field. The formalism developed can be the starting point for the construction of a Hamiltonian constrained formalism. (Author)

  18. Constraining the volatile fraction of planets from transit observations

    OpenAIRE

    Alibert, Yann

    2016-01-01

    The determination of the abundance of volatiles in extrasolar planets is very important as it can provide constraints on transport in protoplanetary disks and on the formation location of planets. However, constraining the internal structure of low-mass planets from transit measurements is known to be a degenerate problem. Using planetary structure and evolution models, we show how observations of transiting planets can be used to constrain their internal composition, in particular the amount...

  19. Constrained minimization of smooth functions using a genetic algorithm

    Science.gov (United States)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
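The conversion of a constrained minimum into an unconstrained search can be illustrated with a quadratic-penalty formulation; note this is a generic stand-in, since the paper converts the necessary conditions for a constrained minimum, a different construction. The objective, constraint, and all GA parameters below are assumptions for illustration.

```python
import random

def penalized(x, w=100.0):
    # Hypothetical problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1,
    # converted to an unconstrained objective via a quadratic penalty.
    f = x[0] ** 2 + x[1] ** 2
    g = x[0] + x[1] - 1.0
    return f + w * g * g

def ga_minimize(fitness, dim=2, pop_size=60, gens=200, seed=0):
    """Minimal genetic algorithm: truncation selection, arithmetic crossover,
    Gaussian mutation. The best individual is always retained (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                        # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # arithmetic crossover
            child = [c + rng.gauss(0, 0.1) for c in child]   # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga_minimize(penalized)
# The constrained optimum is x = (0.5, 0.5); the GA should land close to it.
```

With a finite penalty weight the unconstrained minimum sits slightly off the exact constrained optimum, which is the usual trade-off of penalty methods.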

  20. Constraining the Initial Phase in Water-Fat Separation

    OpenAIRE

    Bydder, Mark; Yokoo, Takeshi; Yu, Huanzhou; Carl, Michael; Reeder, Scott B.; Sirlin, Claude B.

    2010-01-01

An algorithm is described for use in chemical shift based water-fat separation to constrain the phase of both species to be equal at an echo-time of zero. This constraint is physically reasonable since the initial phase should be a property of the excitation pulse and receiver coil only. The advantages of phase-constrained water-fat separation, namely improved noise performance and/or reduced data requirements (fewer echoes), are demonstrated in simulations and experiments.

  1. Augmented Lagrangian Method for Constrained Nuclear Density Functional Theory

    OpenAIRE

    Staszczak, A.; Stoitsov, M.; Baran, A.; W. Nazarewicz

    2010-01-01

The augmented Lagrangian method (ALM), widely used in quantum chemistry constrained optimization problems, is applied in the context of the nuclear Density Functional Theory (DFT) in the self-consistent constrained Skyrme Hartree-Fock-Bogoliubov (CHFB) variant. The ALM allows precise calculations of multidimensional energy surfaces in the space of collective coordinates that are needed to, e.g., determine fission pathways and saddle points; it improves accuracy of computed derivatives with re...

  2. Optimal preliminary propeller design using nonlinear constrained mathematical programming technique

    OpenAIRE

    Radojčić, D.

    1985-01-01

    Presented is a nonlinear constrained optimization technique applied to optimal propeller design at the preliminary design stage. The optimization method used is Sequential Unconstrained Minimization Technique - SUMT, which can treat equality and inequality, or only inequality constraints. Both approaches are shown. Application is given for Wageningen B-series and Gawn series propellers. The problem is solved on an Apple II microcomputer. One of the advantages of treating the constrained ...

  3. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.
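Constrained degree reduction can be sketched with a generic sampled least-squares approach; this is not the paper's closed-form Hahn-weight solution, only an illustration of the problem it solves. A cubic Bézier curve is approximated by a quadratic whose endpoints are constrained to match, leaving a single free control point.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{n,i} evaluated at t (scalar or array)."""
    return comb(n, i) * t**i * (1 - t) ** (n - i)

def reduce_degree(P3, samples=200):
    """Least-squares reduction of a cubic Bezier (4 control points, shape (4,2))
    to a quadratic with matched endpoints; only the middle control point is free."""
    t = np.linspace(0, 1, samples)
    curve = sum(np.outer(bernstein(3, i, t), P3[i]) for i in range(4))
    # Endpoint constraints: Q0 = P3[0], Q2 = P3[3]; solve only for Q1.
    fixed = np.outer(bernstein(2, 0, t), P3[0]) + np.outer(bernstein(2, 2, t), P3[3])
    basis = bernstein(2, 1, t)                    # coefficient of the free point Q1
    resid = curve - fixed
    Q1 = basis @ resid / (basis @ basis)          # normal-equation solution per coordinate
    return np.array([P3[0], Q1, P3[3]])

# Demo: a cubic that is an exactly degree-elevated quadratic should be recovered.
Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
P3 = np.array([Q[0], (Q[0] + 2 * Q[1]) / 3, (2 * Q[1] + Q[2]) / 3, Q[2]])
reduced = reduce_degree(P3)
```

The paper's contribution is an exact weighted solution in a Jacobi norm; the sampled unweighted fit above merely shows the shape of the constrained problem.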

  4. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two absolutely different results for the benchmark problem were obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained; its optimality is verified by the Kuhn-Tucker conditions.

  5. Constraining global methane emissions and uptake by ecosystems

    Directory of Open Access Journals (Sweden)

    R. Spahni

    2011-01-01

Natural methane (CH4) emissions from wet ecosystems are an important part of today's global CH4 budget. Climate affects the exchange of CH4 between ecosystems and the atmosphere by influencing CH4 production, oxidation, and transport in the soil. The net CH4 exchange depends on ecosystem hydrology, soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH4 emissions for different ecosystems: northern peatlands (45°–90° N), naturally inundated wetlands (60° S–45° N), rice agriculture and wet mineral soils. Mineral soils are a potential CH4 sink, but can also be a source with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003–2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH4 observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe the inversions result in a significant reduction in the emissions from northern peatlands and suggest that LPJ-WHyMe maximum annual emissions peak about one month late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we adapt model parameters in LPJ-WHyMe and simulate the surface exchange of CH4 over the period 1990–2008. Over the whole period we infer an increase of global ecosystem CH4 emissions of +1.11 Tg CH4 yr−1, not considering potential additional changes in wetland extent. The increase in simulated CH4 emissions is attributed to enhanced soil respiration resulting from the observed rise in land temperature

  6. Constraining global methane emissions and uptake by ecosystems

    Directory of Open Access Journals (Sweden)

    R. Spahni

    2011-06-01

Natural methane (CH4) emissions from wet ecosystems are an important part of today's global CH4 budget. Climate affects the exchange of CH4 between ecosystems and the atmosphere by influencing CH4 production, oxidation, and transport in the soil. The net CH4 exchange depends on ecosystem hydrology, soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH4 emissions for different ecosystems: northern peatlands (45°–90° N), naturally inundated wetlands (60° S–45° N), rice agriculture and wet mineral soils. Mineral soils are a potential CH4 sink, but can also be a source with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003–2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH4 observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe the inversions result in a significant reduction in the emissions from northern peatlands and suggest that LPJ-WHyMe maximum annual emissions peak about one month late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we diagnose model parameters in LPJ-WHyMe and simulate the surface exchange of CH4 over the period 1990–2008. Over the whole period we infer an increase of global ecosystem CH4 emissions of +1.11 Tg CH4 yr−1, not considering potential additional changes in wetland extent. The increase in simulated CH4 emissions is attributed to enhanced soil respiration resulting from the observed rise in land

  7. Theoretical Estimate of Maximum Possible Nuclear Explosion

    Science.gov (United States)

    Bethe, H. A.

    1950-01-31

The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)

  8. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been validated in experiments, and it provides much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
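The time-arrival-difference core of this approach can be sketched with plain cross-correlation. The example below uses synthetic white noise with a known 37-sample delay (not real leak data); the paper's Maximum Likelihood window would enter as an additional frequency-domain weighting before the inverse transform, which is omitted in this sketch.

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Estimate how much y lags x, in seconds, from the cross-correlation peak.
    An ML-type window would be applied as a frequency-domain weighting on the
    cross-spectrum here; this plain sketch uses uniform weighting."""
    n = len(x)
    X = np.fft.rfft(x, 2 * n)                     # zero-pad to avoid wraparound
    Y = np.fft.rfft(y, 2 * n)
    cc = np.fft.irfft(np.conj(X) * Y, 2 * n)      # cross-correlation via FFT
    cc = np.concatenate((cc[-(n - 1):], cc[:n]))  # reorder to lags -(n-1)..(n-1)
    lag = int(np.argmax(cc)) - (n - 1)
    return lag / fs

# Synthetic sensor pair: white noise and a copy delayed by a known 37 samples.
fs = 1000.0
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
delay_samples = 37
y = np.roll(x, delay_samples)   # circular shift stands in for a delayed arrival
estimated = estimate_delay(x, y, fs)
```

With the sensor spacing and wave speed known, the leak position follows directly from this delay, which is the time-arrival-difference method the record describes.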

  9. Mammographic image restoration using maximum entropy deconvolution

    CERN Document Server

    Jannetta, A; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution fe...

  10. Laser Transmission Holograms Maximum Permissible Exposure

    Science.gov (United States)

    Dawson, Paula; Wilksch, P. A.

    2010-05-01

The laser illumination of holograms for public display is governed by the international standard IEC 60825-3, to which the Australian Standard AS/NZS 2211.3 conforms. These standards do not accommodate vital mitigating factors of hologram replay that impinge on the level of laser power, i.e. the angle of the replay reference beam, the divergence of the beam, the distance of the viewer from the holographic plate and the diffraction efficiency of the hologram plate itself. Such factors indicate that a more meaningful calculation of the radiation level would be obtained from direct measurement at the position of the viewer of the hologram. The purpose of this paper is to demonstrate the importance of these factors in realistically determining the maximum permissible exposure (MPE) for viewers of large format holograms. Materials and methods: a comparison is made between measurements based on the power or energy that can pass through a fully open pupil for Class 3B and Class 4 lasers (1. medical copper bromide laser, 2. diode laser, and 3. argon continuous wave laser), and the actual power levels when the measurement is taken from the beholder's point of view. Discussion and conclusion: these results indicate a need to review the current standards.

  11. Maximum hailstone size: Relationship with meteorological variables

    Science.gov (United States)

    Palencia, Covadonga; Giaiotti, Dario; Stel, Fulvio; Castro, Amaya; Fraile, Roberto

    2010-05-01

    The damage caused to property by hail mainly depends on the size of the hailstones. This paper explores the possibility of forecasting the maximum hailstone size registered on a particular day using sounding data. The data employed for the study are those provided by hail events registered over an 11-year period in the hailpad network in the plain of Friuli-Venezia-Giulia, in Italy. As for the description of the atmosphere, the most common weather variables (stability indices, layer thickness, kinetic variables, temperatures, etc.) were obtained from the daily sounding carried out at Udine, a city almost in the middle of the Friulian plain. Only the days with sounding data and with dents on the hailpads were considered for the study: a minimum of 10 dents per plate was established as the lower threshold. The final sample that fulfilled these conditions included 313 days. A detailed study was carried out on the relationship between the weather variables before the hail event and daily data on hail size. The results show that the variable that relates best to hail size is the drop in surface pressure in the 12 h immediately prior to the hail event, as well as the lifted index. Principal component analysis was applied to the weather variables. The first eight principal components were used together with the drop in pressure to establish a linear forecast model. The result improves considerably when the smaller hailstones are not considered, with sizes smaller than 10 or 15 mm.
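The forecast pipeline of this record, principal component analysis of the sounding variables followed by a linear model on the leading components, can be sketched with numpy. The data below are a synthetic stand-in (the real predictors, e.g. stability indices and the 12-h surface-pressure drop, are not reproduced here), so only the structure of the method is illustrative.

```python
import numpy as np

# Synthetic stand-in for 313 hail days with 9 sounding-derived predictors.
rng = np.random.default_rng(1)
n, p = 313, 9
X = rng.standard_normal((n, p))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.standard_normal(n)   # proxy for maximum hail size

# Principal components of the centred predictors.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:8].T                          # keep the first eight PCs, as in the abstract

# Linear forecast model on the PC scores (with intercept).
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
```

Dropping the trailing components trades a little explained variance for a more stable regression, which is the usual motivation for regressing on PC scores rather than raw predictors.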

  12. GOOD GOVERNANCE, SUSTAINABLE DEVELOPMENT & MAXIMUM SOCIAL ADVANTAGE

    Directory of Open Access Journals (Sweden)

    Surinder Singh Parihar

    2012-10-01

Good governance encompasses full respect of human rights, the rule of law, effective participation, multi-actor partnership, political pluralism, transparent and accountable processes and institutions, an efficient and effective public sector, legitimacy, access to knowledge, information and education, political empowerment of people, equity, sustainability, and attitudes and values that foster responsibility, solidarity and tolerance, in all areas that affect political, social, cultural and religious life. Good governance is the process whereby public institutions conduct public affairs, manage public resources and guarantee the realisation of human rights in a manner essentially free of abuse and corruption, and with due regard for the rule of law. The true test of good governance is the degree to which it delivers on the promise of human rights: civil, cultural, political and social rights. The key questions are: are the institutions of governance guaranteeing the right to health, the right to adequate housing, adequate food, safe drinking water, quality education, access to electricity, fair justice and personal security? Sustainable development is development that sustains the economy. It is development without destruction. It is development that seeks the conservation of the natural capital stock and of environmental resources. It meets the needs of the present generations without compromising the needs of future generations. But to ensure sustainable development, governance needs to be efficient. Good governance and sustainable development should be the test for the Principle of Maximum Social Advantage.

  13. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

§ 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the...

  14. Evidence that the maximum electron energy in hotspots of FR II galaxies is not determined by synchrotron cooling

    CERN Document Server

    Araudo, Anabella T; Crilly, Aidan; Blundell, Katherine M

    2016-01-01

It has been suggested that relativistic shocks in extragalactic sources may accelerate the highest energy cosmic rays. The maximum energy to which cosmic rays can be accelerated depends on the structure of magnetic turbulence near the shock, but recent theoretical advances indicate that relativistic shocks are probably unable to accelerate particles to energies much larger than a PeV. We study the hotspots of powerful radiogalaxies, where electrons accelerated at the termination shock emit synchrotron radiation. The turnover of the synchrotron spectrum is typically observed between infrared and optical frequencies, indicating that the maximum energy of non-thermal electrons accelerated at the shock is less than a TeV for a canonical magnetic field of ~100 microgauss. Based on theoretical considerations we show that this maximum energy cannot be constrained by synchrotron losses as usually assumed, unless the jet density is unreasonably large and most of the jet upstream energy goes to non-thermal particles. We test ...

  15. The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.

  16. Constrained Local UniversE Simulations: a Local Group factory

    Science.gov (United States)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  17. Constrained quantities in uncertainty quantification. Ambiguity and tips to follow

    International Nuclear Information System (INIS)

    The nuclear community relies heavily on computer codes and numerical tools. The results of such computations can only be trusted if they are augmented by proper sensitivity and uncertainty (S and U) studies. This paper presents some aspects of S and U analysis when constrained quantities are involved, such as the fission spectrum or the isotopic distribution of elements. A consistent theory is given for the derivation and interpretation of constrained sensitivities as well as the corresponding covariance matrix normalization procedures. It is shown that if the covariance matrix violates the “generic zero column and row sum” condition, normalizing it is equivalent to constraining the sensitivities, but since both can be done in many ways different sensitivity coefficients and uncertainties can be derived. This makes results ambiguous, underlining the need for proper covariance data. It is also highlighted that the use of constrained sensitivity coefficients derived with a constraining procedure that is not idempotent can lead to biased results in uncertainty propagation. The presented theory is demonstrated on an analytical case and a numerical example involving the fission spectrum, both confirming the main conclusions of this research. (author)

  18. Estimation of a maximum Lu diffusion rate in a natural eclogite garnet

    International Nuclear Information System (INIS)

Lutetium zoning in garnet within eclogites from the Zermatt-Saas Fee zone, Western Alps, reveals sharp, exponentially decreasing central peaks. They can be used to constrain maximum Lu volume diffusion in garnets. A prograde garnet growth temperature interval of ~450-600 °C has been estimated based on pseudo-section calculations and garnet-clinopyroxene thermometry. The maximum pre-exponential diffusion coefficient which fits the measured central peak is on the order of D0 = 5.7 x 10^-6 m^2/s, taking an estimated activation energy of 270 kJ/mol based on diffusion experiments for other rare earth elements in garnet. This corresponds to a maximum diffusion rate of D(~600 °C) = 4.0 x 10^-22 m^2/s. The diffusion estimate of Lu can be used to estimate the minimum closure temperature, Tc, for Sm-Nd and Lu-Hf age data that have been obtained in eclogites of the Western Alps, postulating, based on a literature review, that D(Hf) < D(Lu). Tc calculations, using the Dodson equation, yielded minimum closure temperatures of about 630 °C, assuming a rapid initial exhumation rate of 50 °C/m.y. and an average crystal size of garnets (r = 1 mm). This suggests that Sm/Nd and Lu/Hf isochron age differences in eclogites from the Western Alps, where peak temperatures rarely exceeded 600 °C, must be interpreted in terms of prograde metamorphism. (author)
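The Dodson closure-temperature calculation mentioned above can be reproduced approximately by fixed-point iteration. The diffusion parameters are those quoted in the abstract; the geometry factor A = 55 (sphere) is an assumption not stated in the text, so this is a sketch of the calculation, not the authors' exact computation.

```python
import math

# Fixed-point solve of Dodson's closure-temperature equation,
#   Tc = E / (R * ln(A * R * Tc^2 * D0 / (E * (dT/dt) * a^2))).
R = 8.314               # gas constant, J/(mol K)
E = 270e3               # activation energy, J/mol (from the abstract)
D0 = 5.7e-6             # pre-exponential diffusion coefficient, m^2/s
a = 1.0e-3              # effective grain radius, m (r = 1 mm)
dTdt = 50.0 / 3.156e13  # cooling rate: 50 C per million years, converted to K/s
A = 55.0                # Dodson geometry factor for a sphere (assumed)

Tc = 800.0              # initial guess in kelvin
for _ in range(50):
    Tc = E / (R * math.log(A * R * Tc**2 * D0 / (E * dTdt * a**2)))

Tc_celsius = Tc - 273.15
```

The iteration converges in a few steps and lands near the ~630 °C minimum closure temperature quoted in the abstract, which is a useful sanity check on the parameter set.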

  19. Which data provide the most useful information about maximum earthquake magnitudes?

    Science.gov (United States)

    Zoeller, G.; Holschneider, M.

    2013-12-01

In recent publications, it has been shown that earthquake catalogs are useful to estimate the maximum expected earthquake magnitude in a future time horizon Tf. However, earthquake catalogs alone do not allow one to estimate the maximum possible magnitude M (Tf = ∞) in a study area. Therefore, we focus on the question of which data might be helpful to constrain M. Assuming a doubly-truncated Gutenberg-Richter law and independent events, optimal estimates of M depend solely on the largest observed magnitude μ regardless of all the other details in the catalog. For other models of the frequency-magnitude relation, this result holds in approximation. We show that the maximum observed magnitude μT in a known time interval T in the past provides the most powerful information on M in terms of the smallest confidence intervals. However, if high levels of confidence are required, the upper bound of the confidence interval may diverge. Geological or tectonic data, e.g. strain rates, might be helpful if μT is not available; but these quantities can only serve as proxies for μT and will always lead to a higher degree of uncertainty and, therefore, to larger confidence intervals of M.

  20. Constraining the brachial plexus does not compromise regional control in oropharyngeal carcinoma

    International Nuclear Information System (INIS)

    Accumulating evidence suggests that brachial plexopathy following head and neck cancer radiotherapy may be underreported and that this toxicity is associated with a dose–response. Our purpose was to determine whether the dose to the brachial plexus (BP) can be constrained, without compromising regional control. The radiation plans of 324 patients with oropharyngeal carcinoma (OPC) treated with intensity-modulated radiation therapy (IMRT) were reviewed. We identified 42 patients (13%) with gross nodal disease <1 cm from the BP. Normal tissue constraints included a maximum dose of 66 Gy and a D05 of 60 Gy for the BP. These criteria took precedence over planning target volume (PTV) coverage of nodal disease near the BP. There was only one regional failure in the vicinity of the BP, salvaged with neck dissection (ND) and regional re-irradiation. There have been no reported episodes of brachial plexopathy to date. In combined-modality therapy, including ND as salvage, regional control did not appear to be compromised by constraining the dose to the BP. This approach may improve the therapeutic ratio by reducing the long-term risk of brachial plexopathy

  1. Level repulsion in constrained Gaussian random-matrix ensembles

    International Nuclear Information System (INIS)

    Introducing sets of constraints, we define new classes of random-matrix ensembles, the constrained Gaussian unitary (CGUE) and the deformed Gaussian unitary (DGUE) ensembles. The latter interpolate between the GUE and the CGUE. We derive a sufficient condition for GUE-type level repulsion to persist in the presence of constraints. For special classes of constraints, we extend this approach to the orthogonal and to the symplectic ensembles. A generalized Fourier theorem relates the spectral properties of the constraining ensembles with those of the constrained ones. We find that in the DGUEs, level repulsion always prevails at a sufficiently short distance and may be lifted only in the limit of strictly enforced constraints

  2. Level repulsion in constrained Gaussian random-matrix ensembles

    Science.gov (United States)

    Papenbrock, T.; Pluhar, Z.; Weidenmüller, H. A.

    2006-08-01

    Introducing sets of constraints, we define new classes of random-matrix ensembles, the constrained Gaussian unitary (CGUE) and the deformed Gaussian unitary (DGUE) ensembles. The latter interpolate between the GUE and the CGUE. We derive a sufficient condition for GUE-type level repulsion to persist in the presence of constraints. For special classes of constraints, we extend this approach to the orthogonal and to the symplectic ensembles. A generalized Fourier theorem relates the spectral properties of the constraining ensembles with those of the constrained ones. We find that in the DGUEs, level repulsion always prevails at a sufficiently short distance and may be lifted only in the limit of strictly enforced constraints.

  3. A chiral soliton model constrained by gA/gV

    International Nuclear Information System (INIS)

    We present one example of a smooth chiral confinement model of the nucleon constrained (within a mean-field theory) by the measured gA/gV of the neutron. The resulting confining scalar potential for the quarks inside the nucleon has a maximum at the surface and approaches its asymptotic value from above. Low-energy properties of the nucleon (three quarks in their ground state) are not spoiled by this peculiar surface behaviour. The 'helicity argument' (only spin-carrying fields inside the nucleon contribute to gA/gV) employed here further sheds new light on the modelling of hadrons in terms of hybrid skyrmions and on the description of the Nπ decay mode of excited baryon states.

  4. Interdependence of climate, soil, and vegetation as constrained by the Budyko curve

    Science.gov (United States)

    Gentine, Pierre; D'Odorico, Paolo; Lintner, Benjamin R.; Sivandran, Gajan; Salvucci, Guido

    2012-10-01

    The Budyko curve is an empirical relation among evapotranspiration, potential evapotranspiration, and precipitation observed across a variety of landscapes and biomes around the world. Using data from more than three hundred catchments and a simple water balance model, the Budyko curve is inverted to explore the ecohydrological controls of the soil water balance. Comparing the results across catchments reveals that aboveground transpiration efficiency and belowground rooting structure have adapted to the dryness index and to the phase lag between peak seasonal radiation and precipitation. The vertical and/or lateral extent of the rooting zone exhibits a maximum in semi-arid catchments or when peak radiation and precipitation are out of phase. This demonstrates how plants in Mediterranean climates cope with water stress: the deeper rooting structure buffers the phase difference between precipitation and radiation. Results from this study can be used to constrain land-surface parameterizations in ungauged basins or general circulation models.
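    For reference, the curve itself admits classical closed forms. A minimal sketch using Budyko's (1974) analytic expression (an assumption here; the paper inverts the curve with a water-balance model rather than evaluating this formula directly):

```python
import math

def budyko(phi):
    """Evaporative fraction E/P as a function of the dryness index
    phi = PET / P, using Budyko's classical analytic form:
    E/P = sqrt(phi * tanh(1/phi) * (1 - exp(-phi)))."""
    return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

# Energy-limited (wet) catchments sit near E/P ~ phi; water-limited
# (dry) catchments approach E/P -> 1.
for phi in (0.25, 1.0, 4.0):
    print(f"dryness index {phi:4.2f} -> E/P = {budyko(phi):.3f}")
```

    Semi-arid catchments sit near the bend of this curve (dryness index around one), which is where the paper finds the rooting-zone extent to peak.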

  5. Ultrafine-grained structure development and deformation behavior of aluminium processed by constrained groove pressing

    International Nuclear Information System (INIS)

    The severe plastic deformation method known as constrained groove pressing was used to produce an ultrafine-grained microstructure in recrystallized aluminium (99.99%) at room temperature. The impact of repeated groove pressing upon microstructure refinement was investigated by transmission electron microscopy of thin foils. Changes in mechanical properties measured by tensile and hardness tests were related to the microstructure development. The formation of a banded subgrain microstructure with dislocation cells and the appearance of polygonal subgrains were common features observed in the deformed plate after the first pass. A substantial strain-induced increase in strength was observed after the first pressings. The yield stress and ultimate tensile strength reached a maximum after four passes. A loss of ductility was observed in all processed plates. Hardness values measured in different areas of the deformed plates indicated a heterogeneous strain distribution even after large degrees of straining.

  6. Performance Comparison of Constrained Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Soudeh Babaeizadeh

    2015-06-01

    This study aims to evaluate, analyze, and compare the performance of the constrained Artificial Bee Colony (ABC) algorithms available in the literature. In recent decades, many different variants of the ABC algorithm have been suggested for solving Constrained Optimization Problems (COPs). However, to the best of the authors' knowledge, there are few comparative studies on the numerical performance of those algorithms. This study considers a set of well-known benchmark problems from the test problems of the Congress on Evolutionary Computation 2006 (CEC2006).

  7. Constraining the Axion Portal with B -> K l+ l-

    OpenAIRE

    Freytsis, Marat; Ligeti, Zoltan; Thaler, Jesse

    2009-01-01

    We investigate the bounds on axionlike states from flavor-changing neutral current b->s decays, assuming the axion couples to the standard model through mixing with the Higgs sector. Such GeV-scale axions have received renewed attention in connection with observed cosmic ray excesses. We find that existing B->K l+ l- data impose stringent bounds on the axion decay constant in the multi-TeV range, relevant for constraining the "axion portal" model of dark matter. Such bounds also constrain light Higgs scenarios in the NMSSM.

  8. Constraining the Axion Portal with B -> K l+ l-

    CERN Document Server

    Freytsis, Marat; Thaler, Jesse

    2009-01-01

    We investigate the bounds on axion-like states from flavor-changing neutral current b->s decays, assuming the axion couples to the standard model through mixing with the Higgs sector. Such GeV-scale axions have received renewed attention in connection with observed cosmic ray excesses. We find that existing B->K l+ l- data impose stringent bounds on the axion decay constant in the multi-TeV range, relevant for constraining the "axion portal" model of dark matter. Such bounds also constrain light Higgs scenarios in the NMSSM. These bounds can be improved by dedicated searches in B-factory data and at LHCb.

  9. Asymmetric biclustering with constrained von Mises-Fisher models

    Science.gov (United States)

    Watanabe, Kazuho; Wu, Hsiang-Yun; Takahashi, Shigeo; Fujishiro, Issei

    2016-03-01

    As a probability distribution on the high-dimensional sphere, the von Mises-Fisher (vMF) distribution is widely used for directional statistics and data analysis methods based on correlation. We consider a constrained vMF distribution for block modeling, which provides a probabilistic model of an asymmetric biclustering method that uses correlation as the similarity measure of data features. We derive the variational Bayesian inference algorithm for the mixture of the constrained vMF distributions. It is applied to a multivariate data visualization method implemented with enhanced parallel coordinate plots.

  10. Constrained caloric curves and phase transition for hot nuclei

    CERN Document Server

    Borderie, Bernard; Rivet, M F; Raduta, Ad R; Ademard, G; Bonnet, E; Bougault, R; Chbihi, A; Frankland, J D; Galichet, E; Gruyer, D; Guinet, D; Lautesse, P; Neindre, N Le; Lopez, O; Marini, P; Parlog, M; Pawlowski, P; Rosato, E; Roy, R; Vigilante, M

    2013-01-01

    Simulations based on experimental data obtained from multifragmenting quasi-fused nuclei produced in central $^{129}$Xe + $^{nat}$Sn collisions have been used to deduce event-by-event freeze-out properties in the thermal excitation energy range 4-12 AMeV [Nucl. Phys. A809 (2008) 111]. From these properties and the temperatures deduced from proton transverse momentum fluctuations, constrained caloric curves have been built. At constant average volumes, caloric curves exhibit a monotonic behaviour, whereas for constrained pressures a backbending is observed. Such results support the existence of a first-order phase transition for hot nuclei.

  11. Fast Energy Minimization of large Polymers Using Constrained Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Todd D. Plantenga

    1998-10-01

    A new computational technique is described that uses distance constraints to calculate empirical potential energy minima of partially rigid molecules. A constrained minimization algorithm that works entirely in Cartesian coordinates is used. The algorithm does not obey the constraints until convergence, a feature that reduces ill-conditioning and allows constrained local minima to be computed more quickly than unconstrained minima. The computational speedup exceeds the 3-fold factor commonly obtained in constrained molecular dynamics simulations, where the constraints must be strictly obeyed at all times.
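    A minimal sketch of the general idea, working entirely in Cartesian coordinates on a hypothetical two-point toy problem (a quadratic-penalty illustration, not the paper's algorithm): the distance constraint is violated throughout the iterations and only approached as the penalty weight grows toward convergence.

```python
import math

def penalty_minimize(a, b, d=1.0, steps=4000, lr=0.01):
    """Minimize 0.5|x1 - a|^2 + 0.5|x2 - b|^2 subject to |x1 - x2| = d,
    in Cartesian coordinates, via a quadratic penalty whose weight grows
    with the iteration count: the constraint is violated along the way
    and only honoured as the run converges."""
    x1, x2 = list(a), list(b)
    for k in range(steps):
        mu = 1.0 + 0.01 * k                    # increasing penalty weight
        dx = [x1[i] - x2[i] for i in range(2)]
        r = math.hypot(dx[0], dx[1])
        c = r - d                              # constraint violation
        for i in range(2):
            g = mu * c * dx[i] / r             # gradient of (mu/2) * c^2
            x1[i] -= lr * ((x1[i] - a[i]) + g)
            x2[i] -= lr * ((x2[i] - b[i]) - g)
    return x1, x2

x1, x2 = penalty_minimize((0.0, 0.0), (3.0, 0.0))
print(x1, x2)   # close to (1, 0) and (2, 0), roughly unit distance apart
```

    With a finite final penalty weight the residual constraint violation is small but nonzero, mirroring the paper's point that the constraints need only hold at convergence.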

  12. Fermi Constrains Dark Matter Origin of High Energy Positron Anomaly

    OpenAIRE

    Pohl, M.; Eichler, D.

    2009-01-01

    Fermi measurements of the high-latitude gamma-ray background strongly constrain a decaying-dark-matter origin for the 1--100 GeV Galactic positron anomaly measured with PAMELA. Inverse-Compton scattering of the microwave background by the emergent positrons produces a bump in the diffuse 100-200 MeV gamma-ray background that would protrude from the observed background at these energies. The positrons are thus constrained to emerge from the decay process at a typical energy between ~100 GeV an...

  13. A lexicographic approach to constrained MDP admission control

    Science.gov (United States)

    Panfili, Martina; Pietrabissa, Antonio; Oddi, Guido; Suraci, Vincenzo

    2016-02-01

    This paper proposes a reinforcement learning-based lexicographic approach to the call admission control problem in communication networks. The admission control problem is modelled as a multi-constrained Markov decision process. To overcome the problems of the standard approaches to the solution of constrained Markov decision processes, based on the linear programming formulation or on a Lagrangian approach, a multi-constraint lexicographic approach is defined, and an online implementation based on reinforcement learning techniques is proposed. Simulations validate the proposed approach.

  14. Constrained modes in control theory - Transmission zeros of uniform beams

    Science.gov (United States)

    Williams, T.

    1992-01-01

    Mathematical arguments are presented demonstrating that the well-established control system concept of the transmission zero is very closely related to the structural concept of the constrained mode. It is shown that the transmission zeros of a flexible structure form a set of constrained natural frequencies for it, with the constraints depending explicitly on the locations and the types of sensors and actuators used for control. Based on this formulation, an algorithm is derived and used to produce dimensionless plots of the zero of a uniform beam with a compatible sensor/actuator pair.

  15. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.

  16. Constraining Stochastic Inversion with Frequency Domain Seismic Signature for Seismically Thin-bed Interpretation

    International Nuclear Information System (INIS)

    An alternative technique for interpreting thin-bed structure has been developed. The technique involves stochastic inversions that use a frequency domain energy spectral attribute as a constraint instead of the time domain seismic amplitude. Maximum Amplitude Weighed Integrated Energy Spectra is the proposed energy spectral attribute used to constrain the stochastic process. Amplitude Weighed Integrated Energy Spectra is a seismic attribute obtained by multiplying the integrated energy spectra with the maximum amplitude of a seismic trace. It is shown that Amplitude Weighed Integrated Energy Spectra provides a more separable signature in response to bed thickness changes than the seismic amplitude. The lower ambiguity of Amplitude Weighed Integrated Energy Spectra in sensing thin beds offers a potential means of reducing thin-bed interpretation uncertainty. Qualitatively, Amplitude Weighed Integrated Energy Spectra is capable of showing one of the reported very thin meandered channel complexes of the gas reservoir of the Stratton field, which is difficult to see in the seismic amplitude. In this research, Amplitude Weighed Integrated Energy Spectra is incorporated in a stochastic seismic inversion to improve both the accuracy and the precision (certainty) of thin-bed interpretation. The signature of Amplitude Weighed Integrated Energy Spectra is used to constrain the degree of match (likelihood) between the seismic model and the data. Synthetic data testing shows that the proposed method significantly improves both the accuracy and the precision of a single wedge model seismic inversion. The thickness and reflection coefficient are estimated more accurately, although limited information is used. The proposed method was tested by inverting a structurally subtle gas production zone of the Stratton field. Confirmed by well log data, a cross section of inverted impedance showed that some channel complex structures of the gas reservoirs can be imaged.

  17. Asymptotics for maximum score method under general conditions

    OpenAIRE

    Taisuke Otsu; Myung Hwan Seo

    2014-01-01

    Abstract. Since Manski's (1975) seminal work, the maximum score method for discrete choice models has been applied to various econometric problems. Kim and Pollard (1990) established the cube root asymptotics for the maximum score estimator. Since then, however, econometricians posed several open questions and conjectures in the course of generalizing the maximum score approach, such as (a) asymptotic distribution of the conditional maximum score estimator for a panel data dynamic discrete ch...

  18. Solar Panel Maximum Power Point Tracker for Power Utilities

    OpenAIRE

    Sandeep Banik,; Dr P.K.Saha

    2014-01-01

    "Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, this is a photovoltaic system that uses a photovoltaic array as the source of electrical power supply. Every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage. A maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis...
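    The tracking logic behind a basic MPPT can be illustrated with the common perturb-and-observe scheme on a toy PV power curve (both the curve and its parameters are invented for illustration; a real tracker operates on measured array voltage and current):

```python
import math

def pv_power(v, v_oc=20.0, i_sc=5.0):
    """Toy PV curve: roughly constant current that collapses near the
    open-circuit voltage v_oc (invented shape, for illustration only)."""
    i = i_sc * (1.0 - math.exp((v - v_oc) / 2.0))
    return v * max(i, 0.0)

def perturb_and_observe(v0=10.0, dv=0.1, steps=200):
    """Perturb & observe: keep stepping the operating voltage in the same
    direction while power rises; reverse the step when power falls."""
    v, p_prev, step = v0, pv_power(v0), dv
    for _ in range(steps):
        v += step
        p = pv_power(v)
        if p < p_prev:
            step = -step        # overshot the maximum power point
        p_prev = p
    return v

v = perturb_and_observe()
print(v, pv_power(v))   # settles into a small oscillation around the peak
```

    The steady-state oscillation around the peak is the classic trade-off of perturb-and-observe: a smaller `dv` reduces the ripple but slows tracking when insolation changes.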

  19. The maximum of Brownian motion with parabolic drift

    OpenAIRE

    Janson, Svante; Louchard, Guy; Martin-Löf, Anders

    2010-01-01

    We study the maximum of a Brownian motion with a parabolic drift; this is a random variable that often occurs as a limit of the maximum of discrete processes whose expectations have a maximum at an interior point. We give new series expansions and integral formulas for the distribution and the first two moments, together with numerical values to high precision.
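    The random variable in question can also be explored by simulation; a crude Monte Carlo sketch (a discretized illustration only, unrelated to the authors' series expansions and exact formulas):

```python
import random

def max_bm_parabolic(t_max=2.5, n=1000, rng=random):
    """One Monte Carlo sample of max_{0 <= t <= t_max} (W(t) - t^2),
    with W a standard Brownian motion simulated as a Gaussian random
    walk.  The parabolic drift makes large t irrelevant, so a modest
    t_max suffices; the grid maximum slightly underestimates the true one."""
    dt = t_max / n
    w, t, best = 0.0, 0.0, 0.0   # the max is at least W(0) - 0 = 0
    for _ in range(n):
        t += dt
        w += rng.gauss(0.0, dt ** 0.5)
        best = max(best, w - t * t)
    return best

rng = random.Random(1)
samples = [max_bm_parabolic(rng=rng) for _ in range(500)]
print(sum(samples) / len(samples))   # crude estimate of the mean
```

    Such estimates converge slowly; the paper's series expansions and integral formulas give the distribution and moments to high precision instead.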

  20. 40 CFR 94.107 - Determination of maximum test speed.

    Science.gov (United States)

    2010-07-01

    ... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...

  1. 14 CFR 25.1505 - Maximum operating limit speed.

    Science.gov (United States)

    2010-01-01

    ... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (VMO/MMO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...

  2. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  3. 7 CFR 4290.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... Financing of Enterprises by RBICs Structuring RBIC Financing of Eligible Enterprises-Types of Financings § 4290.840 Maximum term of Financing. The maximum term of any Debt Security must be no longer than 20...

  4. Maximum Performance Tests in Children with Developmental Spastic Dysarthria.

    Science.gov (United States)

    Wit, J.; And Others

    1993-01-01

    Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…

  5. The Power and Robustness of Maximum LOD Score Statistics

    OpenAIRE

    YOO, Y. J.; MENDELL, N.R.

    2008-01-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value.

  6. Geographic variation of surface energy partitioning in the climatic mean predicted from the maximum power limit

    CERN Document Server

    Dhara, Chirag; Kleidon, Axel

    2015-01-01

    Convective and radiative cooling are the two principal mechanisms by which the Earth's surface transfers heat into the atmosphere and that shape surface temperature. However, this partitioning is not sufficiently constrained by energy and mass balances alone. We use a simple energy balance model in which convective fluxes and surface temperatures are determined with the additional thermodynamic limit of maximum convective power. We then show that the broad geographic variation of heat fluxes and surface temperatures in the climatological mean compares very well with the ERA-Interim reanalysis over land and ocean. We also show that the estimates depend considerably on the formulation of longwave radiative transfer and that a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates.

  7. Maximum demand power control system design status of PEFP

    Energy Technology Data Exchange (ETDEWEB)

    Mun, Kyeong Jun; Jung, Hoi Won; Park, Sung Sik; Song, In Teak; Kim, Jun Yeon [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The purpose of a maximum demand power control system is to keep demand from exceeding a predefined limit. To limit the maximum demand, non-critical loads are controlled or disconnected when the limit is about to be exceeded. Maximum demand generally occurs during summer, especially during the cooling period in each building (from 10 am to 4 pm). In this period, eligible electric loads should be controlled or disconnected to save electric energy while not affecting the main processes. In this paper, we describe the design status of the maximum demand power control system of PEFP.
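    The core control rule can be sketched as follows; the load names, sizes, and priorities are hypothetical, and a real system would act on live demand measurements and forecasts:

```python
def loads_to_shed(loads, limit):
    """Choose non-critical loads to disconnect so total demand stays under
    the predefined limit.  `loads` maps name -> (kW, is_critical)."""
    total = sum(kw for kw, _ in loads.values())
    shed = []
    if total <= limit:
        return shed
    # Disconnect the largest non-critical loads first.
    noncritical = sorted(((kw, name) for name, (kw, crit) in loads.items()
                          if not crit), reverse=True)
    for kw, name in noncritical:
        if total <= limit:
            break
        shed.append(name)
        total -= kw
    return shed

loads = {"rf_amplifier":    (500.0, True),    # critical: never shed
         "hvac_building_a": (120.0, False),
         "hvac_building_b": ( 80.0, False),
         "chiller":         (200.0, False)}
print(loads_to_shed(loads, limit=600.0))   # ['chiller', 'hvac_building_a']
```

    Shedding the largest non-critical loads first minimizes the number of disconnections; other policies (e.g. by priority class) drop into the same structure by changing the sort key.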

  8. Maximum demand power control system design status of PEFP

    International Nuclear Information System (INIS)

    The purpose of a maximum demand power control system is to keep demand from exceeding a predefined limit. To limit the maximum demand, non-critical loads are controlled or disconnected when the limit is about to be exceeded. Maximum demand generally occurs during summer, especially during the cooling period in each building (from 10 am to 4 pm). In this period, eligible electric loads should be controlled or disconnected to save electric energy while not affecting the main processes. In this paper, we describe the design status of the maximum demand power control system of PEFP.

  9. Testing a Constrained MPC Controller in a Process Control Laboratory

    Science.gov (United States)

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by the fourth year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time.…

  10. Bayesian item selection in constrained adaptive testing using shadow tests

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  11. Constrained Local UniversE Simulations: A Local Group Factory

    CERN Document Server

    Carlesi, Edoardo; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I; Pilipenko, Sergey V; Knebe, Alexander; Courtois, Helene; Tully, R Brent; Steinmetz, Matthias

    2016-01-01

    Near field cosmology is practiced by studying the Local Group (LG) and its neighbourhood. The present paper describes a framework for simulating the near field on the computer. Assuming the LCDM model as a prior and applying the Bayesian tools of the Wiener filter (WF) and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the LCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of halos must obey specific isolation, mass and separation criteria. At the second level the orbital angular momentum and energy are constrained and on the third one the phase of the orbit is constrained. Out of the 300 constrai...

  12. Reflections on How Color Term Acquisition Is Constrained

    Science.gov (United States)

    Pitchford, Nicola J.

    2006-01-01

    Compared with object word learning, young children typically find learning color terms to be a difficult linguistic task. In this reflections article, I consider two questions that are fundamental to investigations into the developmental acquisition of color terms. First, I consider what constrains color term acquisition and how stable these…

  13. Revenue Prediction in Budget-constrained Sequential Auctions with Complementarities

    NARCIS (Netherlands)

    S. Verwer (Sicco); Y. Zhang (Yingqian)

    2011-01-01

    When multiple items are auctioned sequentially, the ordering of the auctions plays an important role in the total revenue collected by the auctioneer. This is especially true with budget-constrained bidders and the presence of complementarities among items. In such sequential auction settings

  14. Robust stability in constrained predictive control through the Youla parameterisations

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2011-01-01

    In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in face of norm-bounded uncertainties. Under special conditions guarantees are also given f...

  15. On the Integrated Job Scheduling and Constrained Network Routing Problem

    DEFF Research Database (Denmark)

    Gamst, Mette

    This paper examines the NP-hard problem of scheduling a number of jobs on a finite set of machines such that the overall profit of executed jobs is maximized. Each job demands a number of resources, which must be sent to the executing machine via constrained paths. Furthermore, two resource demand...

  16. Gauge transformations in Dirac theory of constrained systems

    International Nuclear Information System (INIS)

    An analysis of some aspects of gauge transformations in the context of Dirac's theory of constrained systems is made. A generator for gauge transformations is constructed by comparing phase space trajectories with the same initial data but different choices of the arbitrary functions. An application to the Yang-Mills theory is performed. (L.C.)

  17. Multiply-Constrained Semantic Search in the Remote Associates Test

    Science.gov (United States)

    Smith, Kevin A.; Huber, David E.; Vul, Edward

    2013-01-01

    Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…

  18. Constrained Superfields and Standard Realization of Nonlinear Supersymmetry

    OpenAIRE

    LUO, HUI; Luo, Mingxing; Zheng, Sibo

    2009-01-01

    A constrained superfield formalism has been proposed recently to analyze the low energy physics related to Goldstinos. We prove that this formalism can be reformulated in the language of standard realization of nonlinear supersymmetry. New relations have been uncovered in the standard realization of nonlinear supersymmetry.

  19. Bounds on the capacity of constrained two-dimensional codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2000-01-01

    Bounds on the capacity of constrained two-dimensional (2-D) codes are presented. The bounds of Calkin and Wilf apply to first-order symmetric constraints. The bounds are generalized in a weaker form to higher order and nonsymmetric constraints. Results are given for constraints specified by run...

  20. Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests

    Science.gov (United States)

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  1. Adaptive double chain quantum genetic algorithm for constrained optimization problems

    Institute of Scientific and Technical Information of China (English)

    Kong Haipeng; Li Ni; Shen Yuzhong

    2015-01-01

    Optimization problems are often highly constrained, and evolutionary algorithms (EAs) are effective methods for tackling this kind of problem. To further improve the search efficiency and convergence rate of EAs, this paper presents an adaptive double chain quantum genetic algorithm (ADCQGA) for solving constrained optimization problems. ADCQGA makes use of double individuals to represent solutions that are classified as feasible and infeasible. Fitness (or evaluation) functions are defined for both types of solutions. Based on the fitness function, three types of step evolution (SE) are defined and utilized for judging evolutionary individuals. An adaptive rotation is proposed and used to facilitate updating individuals in different solutions. To further improve the search capability and convergence rate, ADCQGA utilizes an adaptive evolution process (AEP) together with adaptive mutation and replacement techniques. ADCQGA was first tested on a widely used benchmark function to illustrate the relationship between initial parameter values and the convergence rate/search capability. The proposed ADCQGA was then successfully applied to twelve other benchmark functions and five well-known constrained engineering design problems. The multi-aircraft cooperative target allocation problem is a typical constrained optimization problem that requires efficient solution methods; finally, ADCQGA is successfully applied to this target allocation problem.

  2. Applications of a Constrained Mechanics Methodology in Economics

    Science.gov (United States)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the…

  3. Reserve-constrained economic dispatch: Cost and payment allocations

    Energy Technology Data Exchange (ETDEWEB)

    Misraji, Jaime [Sistema Electrico Nacional Interconectado de la Republica Dominicana, Calle 3, No. 3, Arroyo Hondo 1, Santo Domingo, Distrito Nacional (Dominican Republic); Conejo, Antonio J.; Morales, Juan M. [Department of Electrical Engineering, Universidad de Castilla-La Mancha, Campus Universitario s/n, 13071 Ciudad Real (Spain)

    2008-05-15

    This paper extends basic economic dispatch analytical results to the reserve-constrained case. For this extended problem, a cost and payment allocation analysis is carried out and a detailed economic interpretation of the results is provided. Sensitivity values (Lagrange multipliers) are also analyzed. A case study is considered to illustrate the proposed analysis. Conclusions are duly drawn. (author)
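    The flavor of a reserve-constrained dispatch can be illustrated on a hypothetical two-unit system (the cost curves, capacities, and ramp-based reserve limits are all invented; the paper treats the general analytical problem and its cost/payment allocations):

```python
def dispatch(demand, reserve, pmax=(60.0, 80.0), ramp=(10.0, 20.0)):
    """Reserve-constrained economic dispatch of two units by grid search:
    meet demand at least cost while the spinning reserve the units can
    offer (capped by ramp rate and by spare capacity) covers `reserve`."""
    cost = (lambda p: 10.0 * p + 0.10 * p * p,   # marginal cost 10 + 0.2 p
            lambda p: 12.0 * p + 0.05 * p * p)   # marginal cost 12 + 0.1 p
    best, steps = None, 600
    for k in range(steps + 1):
        p1 = pmax[0] * k / steps
        p2 = demand - p1
        if not 0.0 <= p2 <= pmax[1]:
            continue
        avail = min(ramp[0], pmax[0] - p1) + min(ramp[1], pmax[1] - p2)
        if avail < reserve:
            continue                              # reserve constraint
        c = cost[0](p1) + cost[1](p2)
        if best is None or c < best[0]:
            best = (c, p1, p2)
    return best   # (total cost, p1, p2), or None if infeasible

print(dispatch(100.0, 10.0))  # reserve slack: equal-marginal-cost dispatch
print(dispatch(120.0, 20.0))  # reserve binds: load shifts to free headroom
```

    When the reserve constraint binds, the optimal split deviates from the equal-marginal-cost solution; the shadow price of that deviation is exactly the kind of Lagrange multiplier the paper analyzes.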

  4. How Well Can Future CMB Missions Constrain Cosmic Inflation?

    CERN Document Server

    Martin, Jerome; Vennin, Vincent

    2014-01-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from $10^{-1}$ down to $10^{-7}$. We then compute the Bayesian evidences and complexities of all Encyclopaedia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a...

  5. Constraining dark matter properties with Cosmic Microwave Background observations

    CERN Document Server

    Thomas, Daniel B; Skordis, Constantinos

    2016-01-01

    We examine how the properties of dark matter, parameterised by an equation of state parameter $w$ and two perturbative Generalised Dark Matter (GDM) parameters $c^2_s$ (the sound speed) and $c^2_\\text{vis}$ (the viscosity), are constrained by existing cosmological data, particularly the Planck 2015 data release. We find that the GDM parameters are consistent with zero, and are strongly constrained, showing no evidence for extending the dark matter model beyond the Cold Dark Matter (CDM) paradigm. The dark matter equation of state is constrained to a narrow range about zero (with a $3\\sigma$ lower bound of $-0.000896$), and the sound speed and viscosity are constrained to be less than $3.21\\times10^{-6}$ and $6.06\\times10^{-6}$ respectively at the $3\\sigma$ level. The inclusion of the GDM parameters does significantly affect the error bars on several $\\Lambda$CDM parameters, notably the dimensionless dark matter density $\\omega_g$...

  6. Constrained Quantum Mechanics: Chaos in Non-Planar Billiards

    Science.gov (United States)

    Salazar, R.; Tellez, G.

    2012-01-01

    We illustrate some of the techniques to identify chaos signatures at the quantum level using as guiding examples some systems where a particle is constrained to move on a radial symmetric, but non-planar, surface. In particular, two systems are studied: the case of a cone with an arbitrary contour or "dunce hat billiard" and the rectangular…

  7. Computational Data Modeling for Network-Constrained Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.

    2003-01-01

    users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph...

  8. Constrained variational calculus: the second variation (part I)

    CERN Document Server

    Massa, Enrico; Pagani, Enrico; Luria, Gianvittorio

    2010-01-01

    This paper is a direct continuation of arXiv:0705.2362 . The Hamiltonian aspects of the theory are further developed. Within the framework provided by the first paper, the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A necessary and sufficient condition for minimality is proved.

  9. Constrained control of a once-through boiler with recirculation

    DEFF Research Database (Denmark)

    Trangbæk, K

    2008-01-01

    There is an increasing need to operate power plants at low load for longer periods of time. When a once-through boiler operates at a sufficiently low load, recirculation is introduced, significantly altering the control structure. This paper illustrates the possibilities for using constrained con...

  10. Inferring meaningful communities from topology-constrained correlation networks.

    Directory of Open Access Journals (Sweden)

    Jose Sergio Hleap

    Full Text Available Community structure detection is an important tool in graph analysis. This can be done, among other ways, by solving for the partition set which optimizes the modularity scores [Formula: see text]. Here it is shown that topological constraints in correlation graphs induce over-fragmentation of community structures. A refinement step to this optimization based on Linear Discriminant Analysis (LDA) and a statistical test for significance are proposed. In structured simulation constrained by topology, this novel approach performs better than the optimization of modularity alone. This method was also tested with two empirical datasets: the Roll-Call voting in the 110th US Senate constrained by geographic adjacency, and a biological dataset of 135 protein structures constrained by inter-residue contacts. The former dataset showed sub-structures in the communities that revealed a regional bias in the votes which transcend party affiliations. This is an interesting pattern given that the 110th Legislature was assumed to be a highly polarized government. The [Formula: see text]-amylase catalytic domain dataset (biological dataset) was analyzed with and without topological constraints (inter-residue contacts). The results without topological constraints showed differences from the topology-constrained one, but the LDA filtering did not change the outcome of the latter. This suggests that the LDA filtering is a robust way to solve the possible over-fragmentation when present, and that this method will not affect the results where there is no evidence of over-fragmentation.
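    A minimal sketch of the modularity score Q that this refinement builds on; the toy graph and the brute-force search over bipartitions are illustrative, and the paper's LDA refinement and significance test are not shown:

```python
from itertools import combinations

# Toy graph: two triangles joined by a single bridge edge (2, 3).
EDGES = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
NODES = list(range(6))

def modularity(comm_of, edges):
    """Newman-Girvan modularity Q = sum_c [e_c/m - (d_c/(2m))^2], where e_c
    is the number of intra-community edges and d_c the total degree of
    community c; comm_of maps node -> community label."""
    m = len(edges)
    communities = set(comm_of.values())
    e = {c: 0 for c in communities}  # intra-community edge counts
    d = {c: 0 for c in communities}  # total degree per community
    for u, v in edges:
        d[comm_of[u]] += 1
        d[comm_of[v]] += 1
        if comm_of[u] == comm_of[v]:
            e[comm_of[u]] += 1
    return sum(e[c] / m - (d[c] / (2 * m)) ** 2 for c in communities)

# Brute-force the best split into two communities (fine for a toy graph).
best_q, best_split = -1.0, None
for r in range(1, len(NODES)):
    for group in combinations(NODES, r):
        comm_of = {v: (0 if v in group else 1) for v in NODES}
        q = modularity(comm_of, EDGES)
        if q > best_q:
            best_q, best_split = q, set(group)

print(best_q, best_split)
```

    The optimum recovers the two triangles, as expected; real community detection replaces the brute-force loop with heuristics such as greedy agglomeration, which is where the over-fragmentation discussed in the abstract can arise.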

  11. Evaluating potentialities and constraints of Problem Based Learning curriculum

    DEFF Research Database (Denmark)

    Guerra, Aida

    2013-01-01

    This paper presents a research design to evaluate Problem Based Learning (PBL) curriculum potentialities and constraints for future changes. PBL literature lacks examples of how to evaluate and analyse established PBL learning environments to address new challenges posed. The research design...

  12. Linear Programming Relaxations of Quadratically Constrained Quadratic Programs

    OpenAIRE

    Qualizza, Andrea; Belotti, Pietro; Margot, Francois

    2012-01-01

    We investigate the use of linear programming tools for solving semidefinite programming relaxations of quadratically constrained quadratic problems. Classes of valid linear inequalities are presented, including sparse PSD cuts, and principal minors PSD cuts. Computational results based on instances from the literature are presented.

  13. Nonmonotonic Skeptical Consequence Relation in Constrained Default Logic

    Directory of Open Access Journals (Sweden)

    Mihaiela Lupea

    2010-12-01

    Full Text Available This paper presents a study of the nonmonotonic consequence relation which models the skeptical reasoning formalised by constrained default logic. The nonmonotonic skeptical consequence relation is defined using the sequent calculus axiomatic system. We study the formal properties desirable for a good nonmonotonic relation: supraclassicality, cut, cautious monotony, cumulativity, absorption, distribution. 

  14. Constrained Transport vs. Divergence Cleanser Options in Astrophysical MHD Simulations

    Science.gov (United States)

    Lindner, Christopher C.; Fragile, P.

    2009-01-01

    In previous work, we presented results from global numerical simulations of the evolution of black hole accretion disks using the Cosmos++ GRMHD code. In those simulations we solved the magnetic induction equation using an advection-split form, which is known not to satisfy the divergence-free constraint. To minimize the build-up of divergence error, we used a hyperbolic cleanser function that simultaneously damped the error and propagated it off the grid. We have since found that this method produces qualitatively and quantitatively different behavior in high magnetic field regions than results published by other research groups, particularly in the evacuated funnels of black-hole accretion disks where Poynting-flux jets are reported to form. The main difference between our earlier work and that of our competitors is their use of constrained-transport schemes to preserve a divergence-free magnetic field. Therefore, to study these differences directly, we have implemented a constrained transport scheme into Cosmos++. Because Cosmos++ uses a zone-centered, finite-volume method, we cannot use the traditional staggered-mesh constrained transport scheme of Evans & Hawley. Instead we must implement a more general scheme; we chose the Flux-CT scheme as described by Toth. Here we present comparisons of results using the divergence-cleanser and constrained transport options in Cosmos++.

  15. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    mimicking a SAR/MAR attachment. We used this construct as a torsionally constrained template for transcription of the beta-lactamase gene by Escherichia coli RNAP and found that RNA synthesis displays similar characteristics in terms of rate of elongation whether or not the template is torsionally...

  16. A generic statistical methodology to predict the maximum pit depth of a localized corrosion process

    International Nuclear Information System (INIS)

    Highlights: → We propose a methodology to predict the maximum pit depth in a corrosion process. → Generalized Lambda Distribution and the Computer Based Bootstrap Method are combined. → GLD fits a large variety of distributions both in their central and tail regions. → Minimum thickness preventing perforation can be estimated with a safety margin. → Considering its applications, this new approach can help to size industrial pieces. - Abstract: This paper outlines a new methodology to predict accurately the maximum pit depth related to a localized corrosion process. It combines two statistical methods: the Generalized Lambda Distribution (GLD), to determine a model of distribution fitting with the experimental frequency distribution of depths, and the Computer Based Bootstrap Method (CBBM), to generate simulated distributions equivalent to the experimental one. In comparison with conventionally established statistical methods that are restricted to the use of inferred distributions constrained by specific mathematical assumptions, the major advantage of the methodology presented in this paper is that both the GLD and the CBBM enable a statistical treatment of the experimental data without making any preconceived choice about either the unknown theoretical parent distribution of pit depth which characterizes the global corrosion phenomenon or the unknown associated theoretical extreme value distribution which characterizes the deepest pits. Considering an experimental distribution of depths of pits produced on an aluminium sample, estimates of maximum pit depth using a GLD model are compared to similar estimates based on usual Gumbel and Generalized Extreme Value (GEV) methods proposed in the corrosion engineering literature. The GLD approach is shown to have smaller bias and dispersion in the estimation of the maximum pit depth than the Gumbel approach both for its realization and mean. This leads to comparing the GLD approach to the GEV one
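    The bootstrap half of the methodology (the CBBM) can be illustrated with a nonparametric resampling of the sample maximum; the depth measurements below are invented, and the GLD-fitting step of the paper is omitted:

```python
import random
import statistics

random.seed(42)

# Hypothetical pit-depth measurements (micrometres) from one corrosion coupon.
depths = [52, 61, 48, 70, 55, 66, 59, 73, 50, 64, 58, 69, 62, 75, 57]

def bootstrap_max(data, n_boot=10000):
    """Computer-based bootstrap: resample with replacement and record the
    maximum of each pseudo-sample, approximating the sampling distribution
    of the deepest pit."""
    maxima = []
    for _ in range(n_boot):
        resample = random.choices(data, k=len(data))
        maxima.append(max(resample))
    return maxima

maxima = bootstrap_max(depths)
mean_max = statistics.mean(maxima)
# An upper percentile serves as a conservative design estimate with a
# safety margin, in the spirit of the minimum-thickness application.
p95 = sorted(maxima)[int(0.95 * len(maxima))]
print(mean_max, p95)
```

    A nonparametric resample can never exceed the observed deepest pit, which is exactly the limitation that fitting a flexible parent distribution such as the GLD, and sampling from its tail, is meant to overcome.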

  17. Present and Last Glacial Maximum climates as states of maximum entropy production

    CERN Document Server

    Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere

    2011-01-01

    The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...

  18. Constrained correlation dynamics of SU(N) gauge theories in canonical form. Pt.2. Gauge constrained conditions

    International Nuclear Information System (INIS)

    Gauge constrained conditions and quantization of SU(N) gauge theories are analysed by means of Dirac's formalism. In the framework of algebraic dynamics, gauge invariance, Gauss law and Ward identities are discussed. With use of the version of conservation law in correlation dynamics, the conserved Gauss law and Ward identities related to residual gauge invariance can be transformed into initial value problems

  19. CONSTRAINING TYPE Ia SUPERNOVA MODELS: SN 2011fe AS A TEST CASE

    International Nuclear Information System (INIS)

    The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs (WDs)—realizations of explosion models appropriate for two of the most widely discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is still not justified. Observations at late epochs, however, hold promise for discriminating the explosion scenarios in a straightforward way, as a nucleosynthesis effect leads to differences in the $^{55}$Co production. SN 2011fe is close enough to be followed sufficiently long to study this effect.

  20. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    Science.gov (United States)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  1. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    International Nuclear Information System (INIS)

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent

  2. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)

    2009-02-19

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic

  3. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  4. The maximum of Brownian motion with parabolic drift (Extended abstract)

    OpenAIRE

    Janson, Svante; Louchard, Guy; Martin-Löf, Anders

    2010-01-01

    We study the maximum of a Brownian motion with a parabolic drift; this is a random variable that often occurs as a limit of the maximum of discrete processes whose expectations have a maximum at an interior point. This has some applications in algorithmic and data structures analysis. We give series expansions and integral formulas for the distribution and the first two moments, together with numerical values to high precision.
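    A Monte Carlo sketch of the quantity studied, the running maximum of $W(t)-t^2$, which can serve as a sanity check on the series expansions; the discretization step, time horizon, and path count below are arbitrary choices:

```python
import math
import random

random.seed(1)

def simulate_max(T=3.0, n_steps=1500):
    """One path of X(t) = W(t) - t^2 on [0, T] via Gaussian increments;
    returns the running maximum (a crude stand-in for the analytic
    distribution studied in the paper)."""
    dt = T / n_steps
    w, t, running_max = 0.0, 0.0, 0.0  # X(0) = 0, so the max starts at 0
    for _ in range(n_steps):
        w += random.gauss(0.0, math.sqrt(dt))
        t += dt
        running_max = max(running_max, w - t * t)
    return running_max

estimates = [simulate_max() for _ in range(1000)]
mean_max = sum(estimates) / len(estimates)
print(f"E[max(W(t) - t^2)] ~ {mean_max:.3f}")
```

    The parabolic drift pulls the process down so strongly that extending the horizon beyond a few time units barely changes the maximum, which is why a finite T approximates the limit quantity well.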

  5. The maximum clique enumeration problem: algorithms, applications, and implementations

    OpenAIRE

    Eblen John D; Phillips Charles A; Rogers Gary L; Langston Michael A

    2012-01-01

    Abstract Background The maximum clique enumeration (MCE) problem asks that we identify all maximum cliques in a finite, simple graph. MCE is closely related to two other well-known and widely-studied problems: the maximum clique optimization problem, which asks us to determine the size of a largest clique, and the maximal clique enumeration problem, which asks that we compile a listing of all maximal cliques. Naturally, these three problems are NP-hard, given that they subsume the classic ver...
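    The maximal-clique half of MCE can be sketched with the classic Bron-Kerbosch recursion (shown here without pivoting, on an invented toy graph), followed by the size filter that keeps only maximum cliques:

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """Classic Bron-Kerbosch recursion: emits every maximal clique.
    r = current clique, p = candidate extensions, x = already-processed."""
    if not p and not x:
        cliques.append(frozenset(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p.remove(v)
        x.add(v)

# Toy graph: a 4-clique {0,1,2,3} plus a triangle {3,4,5} hanging off it.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

maximal = []
bron_kerbosch(set(), set(adj), set(), adj, maximal)

# MCE keeps only the *maximum* cliques: the maximal cliques of largest size.
omega = max(len(c) for c in maximal)
maximum_cliques = [c for c in maximal if len(c) == omega]
print(omega, maximum_cliques)
```

    On this graph the recursion finds two maximal cliques, of which only the 4-clique is maximum; production MCE solvers add pivoting and degeneracy ordering to tame the exponential worst case noted in the abstract.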

  6. Entropy Bounds for Constrained Two-Dimensional Fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Justesen, Jørn

    1999-01-01

    The maximum entropy and thereby the capacity of 2-D fields given by certain constraints on configurations are considered. Upper and lower bounds are derived.
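    For the 1-D analogue of such constrained fields the capacity is exactly computable as the log of the transfer matrix's Perron root; a sketch for the "no two adjacent 1s" constraint (the 2-D case treated in the paper admits only the bounds mentioned above):

```python
import math

# Transfer matrix for the 1-D "no two adjacent 1s" constraint:
# the state is the previous symbol, and a 1 may only follow a 0.
T = [[1, 1],
     [1, 0]]

def largest_eigenvalue(m, iters=200):
    """Power iteration: for a nonnegative transfer matrix the Perron root
    gives the growth rate of the number of admissible sequences."""
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

lam = largest_eigenvalue(T)
capacity = math.log2(lam)  # bits per symbol
print(f"capacity = {capacity:.4f}")
```

    The Perron root here is the golden ratio, giving a capacity of about 0.6942 bits per symbol; in two dimensions no such closed form is known, which is why upper and lower bounds are the objects of study.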

  7. 3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum

    CERN Document Server

    Kramar, Maxim; Lin, Haosheng

    2016-01-01

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from $1.5$ to $4\\ \\mathrm{R}_\\odot$ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in 195 \\AA \\ band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below $\\sim 2.5 \\ \\mathrm{R}_\\odot$. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the a...

  8. Broad climatological variation of surface energy balance partitioning across land and ocean predicted from the maximum power limit

    Science.gov (United States)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2016-07-01

    Longwave radiation and turbulent heat fluxes are the mechanisms by which the Earth's surface transfers heat into the atmosphere, thus affecting the surface temperature. However, the energy partitioning between the radiative and turbulent components is poorly constrained by energy and mass balances alone. We use a simple energy balance model with the thermodynamic limit of maximum power as an additional constraint to determine this partitioning. Despite discrepancies over tropical oceans, we find that the broad variation of heat fluxes and surface temperatures in the ERA-Interim reanalyzed observations can be recovered from this approach. The estimates depend considerably on the formulation of longwave radiative transfer, and a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates. Our results suggest that the steady state surface energy partitioning may reflect the maximum power constraint.

  9. Constraining the Charm Yukawa and Higgs-quark Universality

    CERN Document Server

    Perez, Gilad; Stamou, Emmanuel; Tobioka, Kohsaku

    2015-01-01

    We introduce four different types of data-driven analyses with different level of robustness that constrain the size of the Higgs-charm Yukawa coupling: (i) recasting the vector-boson associated, Vh, analyses that search for bottom-pair final state. We use this mode to directly and model independently constrain the Higgs to charm coupling, y_c/y_c^{SM} J/\\psi\\gamma, y_c/y_c^{SM} < 220; (iv) a global fit to the Higgs signal strengths, y_c/y_c^{SM} < 6.2. A comparison with t\\bar{t}h data allows us to show that current data eliminates the possibility that the Higgs couples to quarks in a universal way, as is consistent with the Standard Model (SM) prediction. Finally, we demonstrate how the experimental collaborations can further improve our direct bound by roughly an order of magnitude by charm-tagging, as already used in new physics searches.

  10. Functional coupling constrains craniofacial diversification in Lake Tanganyika cichlids.

    Science.gov (United States)

    Tsuboi, Masahito; Gonzalez-Voyer, Alejandro; Kolm, Niclas

    2015-05-01

    Functional coupling, where a single morphological trait performs multiple functions, is a universal feature of organismal design. Theory suggests that functional coupling may constrain the rate of phenotypic evolution, yet empirical tests of this hypothesis are rare. In fish, the evolutionary transition from guarding the eggs on a sandy/rocky substrate (i.e. substrate guarding) to mouthbrooding introduces a novel function to the craniofacial system and offers an ideal opportunity to test the functional coupling hypothesis. Using a combination of geometric morphometrics and a recently developed phylogenetic comparative method, we found that head morphology evolution was 43% faster in substrate guarding species than in mouthbrooding species. Furthermore, for species in which females were solely responsible for mouthbrooding the males had a higher rate of head morphology evolution than in those with bi-parental mouthbrooding. Our results support the hypothesis that adaptations resulting in functional coupling constrain phenotypic evolution. PMID:25948565

  11. Applications of a constrained mechanics methodology in economics

    CERN Document Server

    Janová, Jitka

    2011-01-01

    The paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for the undergraduate physics education. The aim of the paper is: 1. to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even on the undergraduate level and 2. to enable the students to understand more deeply the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of business cycle is presented as a system of forced oscillations and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economical growth is solved as a variational problem with a velocity dependent constraint using the vakonomic approa...

  12. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set...... of Jørnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time and capacity constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed...... of customers. In the VRPTW customers must be serviced within a given time period - a so called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization...

  13. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM) where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\\mu$-term. This causes a mass splitting between Dirac states in the fermion sector and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  14. Matter coupling in partially constrained vielbein formulation of massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Felice, Antonio De [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Gümrükçüoğlu, A. Emir [School of Mathematical Sciences, University of Nottingham, University Park, Nottingham, NG7 2RD (United Kingdom); Heisenberg, Lavinia [Institute for Theoretical Studies, ETH Zurich,Clausiusstrasse 47, 8092 Zurich (Switzerland); Mukohyama, Shinji [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Kavli Institute for the Physics and Mathematics of the Universe,Todai Institutes for Advanced Study, University of Tokyo (WPI),5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583 (Japan)

    2016-01-04

    We consider a linear effective vielbein matter coupling that does not introduce the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of the ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even though the background evolution agrees with that of the metric formulation, the perturbations display markedly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of the absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  15. Origin of Constrained Maximal CP Violation in Flavor Symmetry

    CERN Document Server

    He, Hong-Jian; Xu, Xun-Jie

    2015-01-01

    Current data from neutrino oscillation experiments are in good agreement with $\\delta=-\\pi/2$ and $\\theta_{23} = \\pi/4$. We define the notion of "constrained maximal CP violation" for these features and study their origin in flavor symmetry models. We give various parametrization-independent definitions of constrained maximal CP violation and present a theorem on how it can be generated. This theorem takes advantage of residual symmetries in the neutrino and charged lepton mass matrices, and states that, up to a few exceptions, $\\delta=\\pm\\pi/2$ and $\\theta_{23} = \\pi/4$ are generated when those symmetries are real. The often considered $\\mu$-$\\tau$ reflection symmetry, as well as specific discrete subgroups of $O(3)$, are special cases of our theorem.

  16. A second-generation constrained reaction volume shock tube.

    Science.gov (United States)

    Campbell, M F; Tulgestke, A M; Davidson, D F; Hanson, R K

    2014-05-01

    We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance. PMID:24880416

  17. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.;

    2014-01-01

    A main characteristic of the Internet of Things networks is the large number of resource-constrained nodes, which, however, are required to perform reliable and fast data exchange, often of a critical nature, over highly unpredictable and dynamic connections and network topologies. Reducing the...... number of message exchanges and retransmission of data, while guaranteeing the lifetime of the data session duration as per service requirements, is vital for enabling scenarios such as smart home, intelligent transportation systems, eHealth, etc. This paper proposes a novel theoretical model for the...... discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions in...

  18. Fast Subspace Tracking Algorithm Based on the Constrained Projection Approximation

    Directory of Open Access Journals (Sweden)

    Amir Valizadeh

    2009-01-01

    We present a new algorithm for tracking the signal subspace recursively. It is based on an interpretation of the signal subspace as the solution of a constrained minimization task. This algorithm, referred to as the constrained projection approximation subspace tracking (CPAST) algorithm, guarantees the orthonormality of the estimated signal subspace basis at each iteration. Thus, the proposed algorithm avoids the orthonormalization step after each update that postprocessing algorithms requiring an orthonormal basis for the signal subspace would otherwise need. To reduce the computational complexity, the fast CPAST algorithm is introduced, which has O(nr) complexity. In addition, for tracking signal sources with abrupt changes in their parameters, an alternative implementation of the algorithm with a truncated window is proposed. Furthermore, a signal subspace rank estimator is employed to track the number of sources. Various simulation results show good performance of the proposed algorithms.
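
As a generic illustration of the orthonormality-preserving idea, the sketch below uses plain subspace (orthogonal) iteration in pure Python; it is not the CPAST recursion itself, and the matrix `C` and starting basis are invented for the example. The key shared property is that the tracked basis is re-orthonormalized at every step:

```python
def orthonormalize(vectors):
    """Modified Gram-Schmidt: return an orthonormal basis for `vectors`."""
    basis = []
    for v in vectors:
        w = v[:]
        for q in basis:
            d = sum(qi * wi for qi, wi in zip(q, w))
            w = [wi - d * qi for qi, wi in zip(q, w)]
        norm = sum(wi * wi for wi in w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

def subspace_iterate(C, V, steps=100):
    """Track the dominant subspace of the matrix C by repeated
    multiply-then-orthonormalize; the basis V (a list of column
    vectors) is orthonormal after every iteration."""
    n = len(C)
    for _ in range(steps):
        V = [[sum(C[i][k] * v[k] for k in range(n)) for i in range(n)]
             for v in V]
        V = orthonormalize(V)
    return V

# The dominant 2-D subspace of diag(3, 2, 1) is span{e1, e2}:
C = [[3.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
V = subspace_iterate(C, [[1.0, 1.0, 1.0], [1.0, -1.0, 0.0]])
```

After convergence the two basis vectors align with e1 and e2 while remaining orthonormal by construction at every intermediate step.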

  19. Lilith: a tool for constraining new physics from Higgs measurements

    Science.gov (United States)

    Bernon, Jérémy; Dumont, Béranger

    2015-09-01

    The properties of the observed Higgs boson with mass around 125 GeV can be affected in a variety of ways by new physics beyond the Standard Model (SM). The wealth of experimental results, targeting the different combinations for the production and decay of a Higgs boson, makes it a non-trivial task to assess the compatibility of a non-SM-like Higgs boson with all available results. In this paper we present Lilith, a new public tool for constraining new physics from signal strength measurements performed at the LHC and the Tevatron. Lilith is a Python library that can also be used in C and C++/ROOT programs. The Higgs likelihood is based on experimental results stored in an easily extensible XML database, and is evaluated from the user input, given in XML format in terms of reduced couplings or signal strengths. The results of Lilith can be used to constrain a wide class of new physics scenarios.

  20. ConStrains identifies microbial strains in metagenomic datasets.

    Science.gov (United States)

    Luo, Chengwei; Knight, Rob; Siljander, Heli; Knip, Mikael; Xavier, Ramnik J; Gevers, Dirk

    2015-10-01

    An important fraction of microbial diversity is harbored in strain individuality, so identification of conspecific bacterial strains is imperative for improved understanding of microbial community functions. Limitations in bioinformatics and sequencing technologies have to date precluded strain identification owing to difficulties in phasing short reads to faithfully recover the original strain-level genotypes, which have highly similar sequences. We present ConStrains, an open-source algorithm that identifies conspecific strains from metagenomic sequence data and reconstructs the phylogeny of these strains in microbial communities. The algorithm uses single-nucleotide polymorphism (SNP) patterns in a set of universal genes to infer within-species structures that represent strains. Applying ConStrains to simulated and host-derived datasets provides insights into microbial community dynamics. PMID:26344404

  1. Solving the constrained shortest path problem using random search strategy

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we propose an improved walk search strategy to solve the constrained shortest path problem. The proposed search strategy is a local search algorithm which explores a network by means of a walker navigating through it. In order to analyze and evaluate the proposed search strategy, we present the results of three computational studies in which the proposed search algorithm is tested. Moreover, we compare the proposed algorithm with the ant colony algorithm and the k shortest paths algorithm. The analysis and comparison results demonstrate that the proposed algorithm is an effective tool for solving the constrained shortest path problem. It can not only be used to solve the optimization problem on larger networks, but is also superior to the ant colony algorithm in terms of solution time and the optimality of the paths found.
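
For concreteness, a resource-constrained shortest path can be found exactly on small instances with a simple label-setting search. This is a hedged pure-Python sketch for illustration only; it is not the walk-search strategy proposed in the paper, and the example graph is invented:

```python
import heapq

def constrained_shortest_path(graph, source, target, limit):
    """Min-cost path with total resource use <= limit, via label-setting
    search over (node, resource) states. `graph[u]` maps each neighbor v
    to a (cost, resource) tuple. Returns (cost, path) or None.
    The simple dominance check keyed on the exact resource level is
    enough for an illustration, not an optimized implementation."""
    pq = [(0, 0, source, [source])]   # labels: (cost, resource, node, path)
    best = {}                         # (node, resource) -> best cost seen
    while pq:
        cost, res, u, path = heapq.heappop(pq)
        if u == target:
            return cost, path
        for v, (c, r) in graph.get(u, {}).items():
            nc, nr = cost + c, res + r
            if nr <= limit and nc < best.get((v, nr), float("inf")):
                best[(v, nr)] = nc
                heapq.heappush(pq, (nc, nr, v, path + [v]))
    return None

# Invented example: the cheapest path a-b-d uses too much resource,
# so with limit 3 the feasible optimum is a-c-d.
graph = {"a": {"b": (1, 3), "c": (2, 1)},
         "b": {"d": (1, 3)},
         "c": {"d": (2, 1)}}
result = constrained_shortest_path(graph, "a", "d", 3)
```

Relaxing the limit to 6 recovers the unconstrained shortest path a-b-d, and tightening it to 1 leaves no feasible path at all.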

  2. How inhibitory cues can both constrain and promote cell migration.

    Science.gov (United States)

    Bronner, Marianne E

    2016-06-01

    Collective cell migration is a common feature in both embryogenesis and metastasis. By coupling studies of neural crest migration in vivo and in vitro with mathematical modeling, Szabó et al. (2016, J. Cell Biol., http://dx.doi.org/10.1083/jcb.201602083) demonstrate that the proteoglycan versican forms a physical boundary that constrains neural crest cells to discrete streams, in turn facilitating their migration. PMID:27269064

  3. From global fits of neutrino data to constrained sequential dominance

    CERN Document Server

    Björkeroth, Fredrik

    2014-01-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We perform a global analysis on a class of CSD($n$) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  4. Homework and performance for time-constrained students

    OpenAIRE

    William Neilson

    2005-01-01

    Most studies of homework effectiveness relate time spent on homework to test performance, and find a nonmonotonic relationship. A theoretical model shows that this can occur even when additional homework helps all students because of the way in which variables are defined. However, some students are time-constrained, limiting the amount of homework they can complete. In the presence of time constraints, additional homework can increase the spread between the performance of the best and worst ...

  5. Modularity-Based Clustering for Network-Constrained Trajectories

    OpenAIRE

    EL MAHRSI, Mohamed Khalil; Rossi, Fabrice

    2012-01-01

    We present a novel clustering approach for moving object trajectories that are constrained by an underlying road network. The approach builds a similarity graph based on these trajectories, then uses modularity-optimization hierarchical graph clustering to regroup trajectories with similar profiles. Our experimental study shows the superiority of the proposed approach over classic hierarchical clustering and gives a brief insight into the visualization of the clustering results.

  6. Functional coupling constrains craniofacial diversification in Lake Tanganyika cichlids

    OpenAIRE

    Tsuboi, Masahito; Gonzalez-Voyer, Alejandro; Kolm, Niclas

    2015-01-01

    Functional coupling, where a single morphological trait performs multiple functions, is a universal feature of organismal design. Theory suggests that functional coupling may constrain the rate of phenotypic evolution, yet empirical tests of this hypothesis are rare. In fish, the evolutionary transition from guarding the eggs on a sandy/rocky substrate (i.e. substrate guarding) to mouthbrooding introduces a novel function to the craniofacial system and offers an ideal opportunity to test the ...

  7. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    A new parallel variable distribution algorithm based on an interior-point SSLE algorithm is proposed for solving inequality constrained optimization problems whose constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  8. Single Machine Scheduling Problem of Resource Constrained Under Chains Constraints

    OpenAIRE

    Jin, Ji

    2010-01-01

    In the paper, we discuss the following resource constrained scheduling problem 1|chains, *, Whether the chains are nonpreemptive or not, two heuristic algorithms are given. For a given permutation and a resource allocation of this permutation, by calculating the priority factor of every chain, we can find another new permutation according to the increasing priority factors. If the new permutation is different from the old one, for the new permutation, calculating resource allocation and priority facto...

  9. Constraining a halo model for cosmological neutral hydrogen

    OpenAIRE

    Padmanabhan, Hamsa; Refregier, Alexandre

    2016-01-01

    We describe a combined halo model to constrain the distribution of neutral hydrogen (HI) in the post-reionization universe. We combine constraints from the various probes of HI at different redshifts: the low-redshift 21-cm emission line surveys, intensity mapping experiments at intermediate redshifts, and the Damped Lyman-Alpha (DLA) observations at higher redshifts. We use a Markov Chain Monte Carlo (MCMC) approach to combine the observations and place constraints on the free parameters in ...

  10. Energy Constrained Wireless Sensor Networks : Communication Principles and Sensing Aspects

    OpenAIRE

    Björnemo, Erik

    2009-01-01

    Wireless sensor networks are attractive largely because they need no wired infrastructure. But precisely this feature makes them energy constrained, and the consequences of this hard energy constraint are the overall topic of this thesis. We are in particular concerned with principles for energy efficient wireless communication and the energy-wise trade-off between sensing and radio communication. Radio transmission between sensors incurs both a fixed energy cost from radio circuit processing...

  11. Revenue Prediction in Budget-constrained Sequential Auctions with Complementarities

    OpenAIRE

    Verwer, Sicco; Zhang, Yingqian

    2011-01-01

    When multiple items are auctioned sequentially, the ordering of auctions plays an important role in the total revenue collected by the auctioneer. This is true especially with budget constrained bidders and the presence of complementarities among items. In such sequential auction settings, it is difficult to develop efficient algorithms for finding an optimal sequence of items that optimizes the revenue of the auctioneer. However, when historical data are available, it is possible...

  12. Nucleosome breathing and remodeling constrain CRISPR-Cas9 function.

    OpenAIRE

    Isaac, RS; Jiang, F; Doudna, JA; Lim, WA; Narlikar, GJ; De Almeida, R

    2016-01-01

    The CRISPR-Cas9 bacterial surveillance system has become a versatile tool for genome editing and gene regulation in eukaryotic cells, yet how CRISPR-Cas9 contends with the barriers presented by eukaryotic chromatin is poorly understood. Here we investigate how the smallest unit of chromatin, a nucleosome, constrains the activity of the CRISPR-Cas9 system. We find that nucleosomes assembled on native DNA sequences are permissive to Cas9 action. However, the accessibility of nucleosomal DNA to ...

  13. A Note on Optimal Care by Wealth-Constrained Injurers

    OpenAIRE

    Thomas J. Miceli; Kathleen Segerson

    2001-01-01

    This paper clarifies the relationship between an injurer's wealth level and his care choice by highlighting the distinction between monetary and non-monetary care. When care is non-monetary, wealth-constrained injurers generally take less than optimal care, and care is increasing in their wealth level under both strict liability and negligence. In contrast, when care is monetary, injurers may take too much or too little care under strict liability, and care is not strictly increasing in injur...

  14. Distributionally Robust Joint Chance Constrained Problem under Moment Uncertainty

    Directory of Open Access Journals (Sweden)

    Ke-wei Ding

    2014-01-01

    We discuss and develop the convex approximation for robust joint chance constraints under uncertainty in the first- and second-order moments. Robust chance constraints are approximated by worst-case CVaR constraints, which can be reformulated as a semidefinite program. The chance constrained problem can then be presented as a semidefinite program. We also find that the approximation for robust joint chance constraints has an equivalent individual quadratic approximation form.
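
For reference, the CVaR approximation alluded to here follows the standard Rockafellar-Uryasev construction (the notation below is assumed for illustration, not taken from the paper): a chance constraint is conservatively implied by a CVaR constraint,

```latex
% CVaR constraint => chance constraint (conservative approximation)
\Pr\{\xi^{\top}x \le b\} \ge 1-\varepsilon
\quad\Longleftarrow\quad
\mathrm{CVaR}_{1-\varepsilon}\!\left(\xi^{\top}x - b\right)
  = \inf_{t\in\mathbb{R}}\Big\{\, t + \tfrac{1}{\varepsilon}\,
    \mathbb{E}\big[(\xi^{\top}x - b - t)_{+}\big] \Big\} \le 0 .
```

Taking the worst case of the expectation over all distributions matching the given first- and second-order moments then yields the semidefinite reformulation described in the abstract.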

  15. On a constrained 2-D Navier-Stokes equation

    OpenAIRE

    Caglioti, E.; Pulvirenti, M.; F. Rousset

    2008-01-01

    The planar Navier-Stokes equation exhibits, in the absence of external forces, a trivial asymptotics in time. Nevertheless, the appearance of coherent structures suggests non-trivial intermediate asymptotics which should be explained in terms of the equation itself. Motivated by the separation of the different time scales observed in the dynamics of the Navier-Stokes equation, we study the well-posedness and asymptotic behaviour of a constrained equation which neglects the variation of the energy ...

  16. Constraining neutron star tidal Love numbers with gravitational wave detectors

    OpenAIRE

    Flanagan, Eanna E.; Hinderer, Tanja

    2007-01-01

    Ground-based gravitational wave detectors may be able to constrain the nuclear equation of state using the early, low frequency portion of the signal of detected neutron star - neutron star inspirals. In this early adiabatic regime, the influence of a neutron star's internal structure on the phase of the waveform depends only on a single parameter lambda of the star related to its tidal Love number, namely the ratio of the induced quadrupole moment to the perturbing tidal gravitational field....

  17. Quantum cosmology of a classically constrained nonsingular Universe

    OpenAIRE

    Sanyal, Abhik Kumar

    2009-01-01

    The quantum cosmological version of the nonsingular Universe presented by Mukhanov and Brandenberger in the early nineties has been developed, and the Hamilton-Jacobi equation has been found under the semiclassical (WKB) approximation. It has been pointed out that parameterization of classical trajectories with a semiclassical time parameter, for such a classically constrained system, is a nontrivial task and requires the Lagrangian formulation rather than the Hamiltonian formalism.

  18. Reduced order constrained optimization (ROCO): Clinical application to lung IMRT

    OpenAIRE

    Stabenau, Hans; Rivera, Linda; Yorke, Ellen; Yang, Jie; Lu, Renzhi; Richard J. Radke; Jackson, Andrew

    2011-01-01

    Purpose: The authors use reduced-order constrained optimization (ROCO) to create clinically acceptable IMRT plans quickly and automatically for advanced lung cancer patients. Their new ROCO implementation works with the treatment planning system and full dose calculation used at Memorial Sloan-Kettering Cancer Center (MSKCC). The authors have implemented mean dose hard constraints, along with the point-dose and dose-volume constraints that they used for their previous work on the prostat...

  19. EXIT-constrained BICM-ID Design using Extended Mapping

    OpenAIRE

    Fukawa, Kisho; Ormsub, Soulisak; Tölli, Antti; Anwar, Khoirul; Matsumoto, Tad

    2012-01-01

    This article proposes a novel design framework, the EXIT-constrained binary switching algorithm (EBSA), for achieving near Shannon limit performance with single parity check and irregular repetition coded bit-interleaved coded modulation and iterative detection with extended mapping (SI-BICM-ID-EM). EBSA is composed of node degree allocation optimization using linear programming (LP) and labeling optimization based on an adaptive binary switching algorithm, applied jointly. This technique achieves exact match...

  20. An Unsplit Godunov Method for Ideal MHD via Constrained Transport

    OpenAIRE

    Gardiner, Thomas A.; Stone, James M

    2005-01-01

    We describe a single step, second-order accurate Godunov scheme for ideal MHD based on combining the piecewise parabolic method (PPM) for performing spatial reconstruction, the corner transport upwind (CTU) method of Colella for multidimensional integration, and the constrained transport (CT) algorithm for preserving the divergence-free constraint on the magnetic field. We adopt the most compact form of CT, which requires the field be represented by area-averages at cell faces. We demonstrate...
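
The divergence-preserving mechanism of constrained transport can be demonstrated on a 2-D staggered grid in a few lines. This is a toy sketch under assumed conventions (face-averaged Bx, By and a corner EMF Ez), not the CTU+PPM scheme of the paper:

```python
import random

def ct_update(Bx, By, Ez, dt, dx, dy):
    """One constrained-transport induction step, dB/dt = -curl(E), with
    Bx on x-faces (shape (nx+1) x ny), By on y-faces (nx x (ny+1)) and
    the EMF Ez on cell corners ((nx+1) x (ny+1))."""
    nx, ny = len(By), len(Bx[0])
    for i in range(nx + 1):
        for j in range(ny):
            Bx[i][j] -= dt * (Ez[i][j + 1] - Ez[i][j]) / dy  # -dEz/dy
    for i in range(nx):
        for j in range(ny + 1):
            By[i][j] += dt * (Ez[i + 1][j] - Ez[i][j]) / dx  # +dEz/dx

def max_div(Bx, By, dx, dy):
    """Largest cell-wise finite-volume divergence of B."""
    nx, ny = len(By), len(Bx[0])
    return max(abs((Bx[i + 1][j] - Bx[i][j]) / dx
                   + (By[i][j + 1] - By[i][j]) / dy)
               for i in range(nx) for j in range(ny))

# Starting from div B = 0, arbitrary corner EMFs keep it zero to round-off,
# because each corner's contribution to a cell's divergence cancels exactly.
random.seed(1)
N = 4
Bx = [[0.0] * N for _ in range(N + 1)]
By = [[0.0] * (N + 1) for _ in range(N)]
for _ in range(25):
    Ez = [[random.uniform(-1.0, 1.0) for _ in range(N + 1)]
          for _ in range(N + 1)]
    ct_update(Bx, By, Ez, 0.1, 1.0, 1.0)
residual = max_div(Bx, By, 1.0, 1.0)
```

The cancellation is algebraic, so the divergence-free constraint holds for any EMF values, which is exactly why CT schemes store the field as area averages at cell faces.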

  1. Towards Dynamic Camera Calibration for Constrained Flexible Mirror Imaging

    OpenAIRE

    Dunne, Aubrey K.; Mallon, John; Whelan, Paul F.

    2008-01-01

    Flexible mirror imaging systems consisting of a perspective camera viewing a scene reflected in a flexible mirror can provide direct control over image field-of-view and resolution. However, calibration of such systems is difficult due to the vast range of possible mirror shapes and the flexible nature of the system. This paper proposes the fundamentals of a dynamic calibration approach for flexible mirror imaging systems by examining the constrained case of single dimensional flexing. ...

  2. Conduit flow experiments help constraining the regime of explosive eruptions

    OpenAIRE

    Dellino, P.; Università di Bari; Dioguardi, F.; Università di Bari; Zimanowski, B.; University of Wurzburg; Buttner, R.; University of Wurzburg; Mele, D.; Università di Bari; La Volpe, L.; Università di Bari; Sulpizio, R.; Università di Bari, Centro Interdipartimentale per il Rischio Sismico e Vulcanico, c/o Dip.to Geomineralogico; Doronzo, D. M.; Università di Bari; Sonder, I.; University of Wurzburg; Bonasia, R.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione OV, Napoli, Italia; Calvari, S.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Catania, Catania, Italia; Marotta, E.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione OV, Napoli, Italia

    2009-01-01

    It is currently impractical to measure what happens in a volcano during an explosive eruption, and up to now much of our knowledge depends on theoretical models. Here we show, by means of large-scale experiments, that the regime of explosive events can be constrained based on the characteristics of magma at the point of fragmentation and conduit geometry. Our model, whose results are consistent with the literature, is a simple tool for defining the conditions at conduit exit th...

  3. Throughput constrained parallelism reduction in cyclo-static dataflow applications

    OpenAIRE

    Carpov, Sergiu; Cudennec, Loïc; Sirdey, Renaud

    2013-01-01

    This paper deals with semantics-preserving parallelism reduction methods for cyclo-static dataflow applications. Parallelism reduction is the process of fusing equivalent actors. The principal objectives of parallelism reduction are to decrease the memory footprint of an application and to increase its execution performance. We focus on parallelism reduction methodologies constrained by application throughput. A generic parallelism reduction methodology is introduced. Experimental results ...

  4. Risk-Constrained Microgrid Reconfiguration Using Group Sparsity

    OpenAIRE

    Dall'Anese, Emiliano; Giannakis, Georgios B.

    2013-01-01

    The system reconfiguration task is considered for existing power distribution systems and microgrids in the presence of renewable-based generation and load forecasting errors. The system topology is obtained by solving a chance-constrained optimization problem, where loss-of-load (LOL) constraints and ampacity limits of the distribution lines are enforced. Similar to various distribution system reconfiguration renditions, solving the resultant problem is computationally prohibitive due to the ...

  5. Juno radio science observations to constrain Jupiter's moment of inertia

    Science.gov (United States)

    Le Maistre, S.; Folkner, W. M.; Jacobson, R. A.

    2015-10-01

    Through detailed and realistic numerical simulations, the present study assesses the precision with which Juno can measure the normalized polar moment of inertia (MOI) of Jupiter. Based on Ka-band Doppler and range data, this analysis shows that the determination of the precession rate of Jupiter is by far more efficient than the previously proposed Lense-Thirring effect to determine the moment of inertia and therefore to constrain the internal structure of the giant planet with Juno.

  6. LINEAR SYSTEMS ASSOCIATED WITH NUMERICAL METHODS FOR CONSTRAINED OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    Y. Yuan

    2003-01-01

    Linear systems associated with numerical methods for constrained optimization are discussed in this paper. It is shown that the corresponding subproblems arising in most well-known methods, whether line search methods or trust region methods for constrained optimization, can be expressed as similar systems of linear equations. All these linear systems can be viewed as some kind of approximation to the linear system derived by the Lagrange-Newton method. Some properties of these linear systems are analyzed.
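
To make the Lagrange-Newton system concrete: for an equality-constrained quadratic program, min (1/2) x'Qx + c'x subject to Ax = b, the KKT conditions form one linear system in (x, lambda). The sketch below (the data and the tiny dense solver are illustrative, not from the paper) builds and solves it:

```python
def solve(M, rhs):
    """Tiny dense linear solver: Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back-substitution
        x[i] = (A[i][n] - sum(A[i][j] * x[j]
                              for j in range(i + 1, n))) / A[i][i]
    return x

# min (1/2)(x1^2 + x2^2) - x1 - x2  subject to  x1 + x2 = 1
Q = [[1.0, 0.0], [0.0, 1.0]]
c = [-1.0, -1.0]
A_eq, b_eq = [1.0, 1.0], 1.0
# Lagrange-Newton (KKT) system:  [[Q, A'], [A, 0]] [x; lam] = [-c; b]
KKT = [[Q[0][0], Q[0][1], A_eq[0]],
       [Q[1][0], Q[1][1], A_eq[1]],
       [A_eq[0], A_eq[1], 0.0]]
x1, x2, lam = solve(KKT, [-c[0], -c[1], b_eq])
```

Here the solution is x = (0.5, 0.5) with multiplier 0.5; SQP-type methods solve a system of exactly this block structure at every iteration, which is the family of linear systems the paper analyzes.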

  7. Constraining the MOdified Newtonian Dynamics from spherically symmetrical hydrodynamic accretion

    OpenAIRE

    Roy, Nirupam

    2011-01-01

    The MOdified Newtonian Dynamics (MOND) is an alternative to the dark matter assumption that can explain the observed flat rotation curves of galaxies. Here, hydrodynamic accretion is considered to critically check the consistency and to constrain the physical interpretation of this theory. It is found that, in the case of spherically symmetrical hydrodynamic accretion, the modified Euler equation has a real solution if the interpretation is assumed to be a modification of the law of dynamics. There...

  8. Constrained basin stability for studying transient phenomena in dynamical systems

    OpenAIRE

    van Kan, Adrian; Jegminat, Jannes; Donges, Jonathan; Kurths, Jürgen

    2016-01-01

    Transient dynamics are of large interest in many areas of science. Here, a generalization of basin stability (BS) is presented: constrained basin stability (CBS) that is sensitive to various different types of transients arising from finite size perturbations. CBS is applied to the paradigmatic Lorenz system for uncovering nonlinear precursory phenomena of a boundary crisis bifurcation. Further, CBS is used in a model of the Earth's carbon cycle as a return time-dependent stability measure of...

  9. Control of the constrained planar simple inverted pendulum

    Science.gov (United States)

    Bavarian, B.; Wyman, B. F.; Hemami, H.

    1983-01-01

    Control of a constrained planar inverted pendulum by eigenstructure assignment is considered. Linear feedback is used to stabilize and decouple the system in such a way that specified subspaces of the state space are invariant for the closed-loop system. The effectiveness of the feedback law is tested by digital computer simulation. Pre-compensation by an inverse plant is used to improve performance.

  10. Optimal Constrained Resource Allocation Strategies under Low Risk Circumstances

    OpenAIRE

    Andreica, Mugurel Ionut; Andreica, Madalina; Visan, Costel

    2009-01-01

    The computational geometry problems studied in this paper were inspired by tasks from the International Olympiad in Informatics (some of which were personally attended by the authors). The attached archive contains the task descriptions, the authors' solutions, as well as some official solutions of the tasks. In this paper we consider multiple constrained resource allocation problems, where the constraints can be specified by formulating activity dependency restrictions o...

  11. Effect of FIR Fluxes on Constraining Properties of YSOs

    OpenAIRE

    Ha, Ji-Sung; Lee, Jeong-Eun; Jeong, Woong-Seob

    2010-01-01

    Young Stellar Objects (YSOs) in the early evolutionary stages are very embedded, and thus they emit most of their energy at long wavelengths such as far-infrared (FIR) and submillimeter (Submm). Therefore, the FIR observational data are very important to classify the accurate evolutionary stages of these embedded YSOs, and to better constrain their physical parameters in the dust continuum modeling. We selected 28 YSOs, which were detected in the AKARI Far-Infrared Surveyor (FIS), from the Sp...

  12. Experimentally Constrained Molecular Relaxation: The case of hydrogenated amorphous silicon

    OpenAIRE

    Biswas, Parthapratim; Atta-Fynn, Raymond; Drabold, David A.

    2007-01-01

    We have extended our experimentally constrained molecular relaxation technique (P. Biswas {\\it et al}, Phys. Rev. B {\\bf 71} 54204 (2005)) to hydrogenated amorphous silicon: a 540-atom model with 7.4 % hydrogen and a 611-atom model with 22 % hydrogen were constructed. Starting from a random configuration, using physically relevant constraints, {\\it ab initio} interactions and the experimental static structure factor, we construct realistic models of hydrogenated amorphous silicon. Our models ...

  13. Dynamical spacetimes and gravitational radiation in a Fully Constrained Formulation

    Energy Technology Data Exchange (ETDEWEB)

    Cordero-Carrion, Isabel; Ibanez, Jose MarIa [Departamento de Astronomia y Astrofisica, Universidad de Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Valencia (Spain); Cerda-Duran, Pablo, E-mail: isabel.cordero@uv.e, E-mail: cerda@mpa-garching.mpg.d, E-mail: jose.m.ibanez@uv.e [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Strasse 1, D-85741 Garching (Germany)

    2010-05-01

    This contribution summarizes the recent work carried out to analyze the behavior of the hyperbolic sector of the Fully Constrained Formulation (FCF) derived in Bonazzola et al. 2004. The numerical experiments presented here allow one to be confident in the performance of the upgraded version of CoCoNuT's code, obtained by replacing the Conformally Flat Condition (CFC) approximation of the Einstein equations with the FCF.

  14. School locations and vacancies: a constrained logit equilibrium model

    OpenAIRE

    Martínez, Francisco J.; Loreto Tamblay; Andrés Weintraub

    2011-01-01

    A partial static competitive equilibrium theory is presented and the corresponding constrained logit model specified for a given scenario of policies, which yields the expected equilibrium locations, prices of schools, and students’ school choices. Rational students differentiated by socioeconomic cluster demand vacancies at different schools after assessing the school quality, price, and transport costs. Students also interact among them by their valuation of who attends each school (a consu...

  15. Multi-sensory integration by constrained self-organization

    OpenAIRE

    Lefort, Mathieu; Boniface, Yann; Girau, Bernard

    2010-01-01

    We develop a model of multi-sensory integration for performing sensorimotor tasks. The aim of the model is to provide missing-modality recall and generalization using cortico-inspired mechanisms. The architecture consists of several multilevel cortical maps with a generic structure. Each map has to self-organize through continuous, decentralized and unsupervised learning, which provides robustness and adaptability. These self-organizations are constrained by the multimodal context to obtain mu...

  16. Learning Nonrigid Deformations for Constrained Multi-modal Image Registration

    OpenAIRE

    Onofrey, John A.; Staib, Lawrence H.; Papademetris, Xenophon

    2013-01-01

    We present a new strategy to constrain nonrigid registrations of multi-modal images using a low-dimensional statistical deformation model and test this in registering pre-operative and post-operative images from epilepsy patients. For those patients who may undergo surgical resection for treatment, the current gold-standard to identify regions of seizure involves craniotomy and implantation of intracranial electrodes. To guide surgical resection, surgeons utilize pre-op anat...

  17. A Model-Driven Engineering Framework for Constrained Model Search

    OpenAIRE

    Kleiner, Mathias

    2009-01-01

    This document describes a formalization, a solver-independent methodology, and implementation alternatives for realizing constrained model search in a model-driven engineering framework. The proposed approach combines model-driven engineering tools ((meta)model transformations, models to text, text to models) and constraint programming techniques. Based on previous research, motivations for model search are first introduced, together with objectives and background context. A theory of model sear...

  18. Do solar neutrinos constrain the electromagnetic properties of the neutrino?

    OpenAIRE

    Friedland, Alexander

    2005-01-01

    It is of great interest whether the recent KamLAND bound on the flux of electron antineutrinos from the Sun constrains the electromagnetic properties of the neutrino. We examine the efficiency of the electron antineutrino production in the solar magnetic fields, assuming the neutrinos are Majorana particles with a relatively large transition moment. We consider fields both in the radiative and convective zones of the Sun, with physically plausible strengths, and take into account the recently...

  19. Accumulation of stress in constrained assemblies: novel Satoh test configuration

    OpenAIRE

    Shirzadi, A. A.; Bhadeshia, H. K. D. H.

    2010-01-01

    A common test used to study the response of a transforming material to external constraint is due to Satoh and involves the cooling of a rigidly constrained tensile specimen while monitoring the stress that accumulates. Such tests are currently common in the invention of welding alloys which on phase transformation lead to a reduction in residual stresses in the final assembly. The test suffers from the fact that the whole of the tensile specimen is not maintained at a uniform temperature, ma...

  20. Constraining cosmological ultra-large scale structure using numerical relativity

    OpenAIRE

    Braden, Jonathan; Johnson, Matthew C.; Peiris, Hiranya V.; Aguirre, Anthony

    2016-01-01

    Cosmic inflation, a period of accelerated expansion in the early universe, can give rise to large amplitude ultra-large scale inhomogeneities on distance scales comparable to or larger than the observable universe. The cosmic microwave background (CMB) anisotropy on the largest angular scales is sensitive to such inhomogeneities and can be used to constrain the presence of ultra-large scale structure (ULSS). We numerically evolve nonlinear inhomogeneities present at the beginning of inflation...