WorldWideScience

Sample records for articulatorily constrained maximum

  1. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
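    The HMM baseline that the abstract contrasts with articulatorily constrained models can be made concrete. Below is a minimal forward-algorithm likelihood computation for a discrete-observation HMM; the two states, two symbols, and all probabilities are invented for illustration and are not from the paper:

```python
# Toy discrete HMM: forward algorithm for P(observations | model).
# States, symbols, and probabilities are illustrative, not from the paper.

def forward_likelihood(obs, pi, A, B):
    """pi[i]: initial state probabilities, A[i][j]: transition
    probabilities, B[i][k]: probability of emitting symbol k in state i."""
    n = len(pi)
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Propagate forward through the remaining observations.
    for t in range(1, len(obs)):
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                 for j in range(n)]
    return sum(alpha)

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
likelihood = forward_likelihood([0, 1, 0], pi, A, B)
print(likelihood)
```

    A convenient sanity check is that summing this likelihood over all observation sequences of a fixed length gives exactly 1.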

  2. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  3. Resource-constrained maximum network throughput on space networks

    Institute of Scientific and Technical Information of China (English)

    Yanling Xing; Ning Ge; Youzheng Wang

    2015-01-01

    This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear programming (MILP) model is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
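    The paper's MILP computes throughput for a store-and-forward space network. For a single task, a common simplification is a maximum-flow computation on a time-expanded graph: each contact window becomes a capacity arc, and on-board storage becomes an arc between successive time slots of the same node. The sketch below uses a standard Edmonds-Karp max-flow on an invented two-slot contact plan (all node names and capacities are illustrative, not from the paper):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity map."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck, update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap.setdefault(v, {})[u] = cap.setdefault(v, {}).get(u, 0) + b
        flow += b

# Time-expanded network: satellite S, relay R, ground G over two slots.
# Contact arcs carry data between nodes; storage arcs (S0->S1, R0->R1)
# model limited on-board buffers.  All numbers are invented.
cap = {
    "src": {"S0": 100},
    "S0":  {"R0": 20, "S1": 10},  # contact S->R in slot 0; S buffer = 10
    "R0":  {"R1": 15},            # R on-board storage limit = 15
    "S1":  {"R1": 20},            # contact S->R in slot 1
    "R1":  {"G1": 25},            # downlink contact R->G in slot 1
    "G1":  {"sink": 100},
}
total = max_flow(cap, "src", "sink")
print(total)  # maximum deliverable volume under the storage limits
```

    Delay constraints fit the same picture: a deadline simply truncates how far the time expansion extends.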

  4. Exploring the Constrained Maximum Edge-weight Connected Graph Problem

    Institute of Scientific and Technical Information of China (English)

    Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen

    2009-01-01

    Given an edge-weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and the maximal weight sum. Here we study a special case, the Constrained Maximum Edge-Weight Connected Graph problem (CMECG), which is an MECG whose candidate subgraphs must include a given set of k edges, also denoted the k-CMECG. We formulate the k-CMECG as an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose an exact algorithm and a heuristic algorithm respectively. We also propose a heuristic algorithm for the k-CMECG problem. Simulations have been carried out to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem can lead to a solution of the general MECG problem.
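    The paper's exact algorithms are ILP-based; for intuition, the k-CMECG objective can be checked by brute force on a small instance: enumerate all edge subsets of the required size that contain the mandatory edges, keep the connected ones, and take the maximum weight. The graph and weights below are invented for the sketch:

```python
from itertools import combinations

def is_connected(edges):
    """Check that the subgraph induced by `edges` is connected."""
    nodes = {v for e in edges for v in e}
    if not nodes:
        return False
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack += [w for e in edges for w in e if u in e and w != u]
    return seen == nodes

def k_cmecg(weights, num_edges, required):
    """Max-weight connected subgraph with exactly `num_edges` edges that
    contains every edge in `required` (brute-force sketch, not the ILP)."""
    best = None
    for subset in combinations(weights, num_edges):
        if not set(required) <= set(subset) or not is_connected(subset):
            continue
        w = sum(weights[e] for e in subset)
        if best is None or w > best[0]:
            best = (w, subset)
    return best

# Invented example graph; edges are keyed by frozensets of endpoints.
E = lambda a, b: frozenset((a, b))
weights = {E("a", "b"): 5, E("b", "c"): 4, E("c", "d"): 3,
           E("a", "c"): 2, E("b", "d"): 6}
w, sub = k_cmecg(weights, 3, [E("c", "d")])
print(w)  # best 3-edge connected subgraph forced to contain edge c-d
```

    This enumeration is exponential in the edge count, which is exactly why the NP-hardness result motivates the ILP and heuristic algorithms in the paper.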

  5. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays.

    Science.gov (United States)

    Trucco, Andrea; Traverso, Federico; Crocco, Marco

    2015-01-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.
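    The premise that end-fire steering beats broadside for spacings below half a wavelength is easy to verify numerically for a conventionally steered, uniformly weighted line array of isotropic sensors. This is a simplified setting (the paper's white-noise-gain-constrained optimization and oversteering are not reproduced); all array parameters are illustrative:

```python
import cmath, math

def directivity(n, d_over_lambda, steer_deg, samples=1000):
    """Directivity of an n-element uniform line array of isotropic
    sensors with conventional (delay-and-sum) steering.  The array axis
    is at theta = 0, so steer_deg = 0 is end-fire and 90 is broadside."""
    kd = 2.0 * math.pi * d_over_lambda
    c0 = math.cos(math.radians(steer_deg))
    def power(theta):
        af = sum(cmath.exp(1j * kd * m * (math.cos(theta) - c0))
                 for m in range(n))
        return abs(af) ** 2
    # Midpoint-rule average of the radiated power over the sphere.
    h = math.pi / samples
    avg = sum(power((i + 0.5) * h) * math.sin((i + 0.5) * h)
              for i in range(samples)) * h / 2.0
    return power(math.acos(c0)) / avg

n, d = 8, 0.25  # eight sensors at quarter-wavelength spacing
print("end-fire :", round(directivity(n, d, 0), 2))
print("broadside:", round(directivity(n, d, 90), 2))
```

    For uniform weighting the classical approximations are roughly 2Nd/lambda for broadside and 4Nd/lambda for ordinary end-fire, so the end-fire figure should come out close to twice the broadside one here.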

  6. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    Full Text Available For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.

  7. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    Science.gov (United States)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method is presented that directly establishes modal models of structural dynamic systems satisfying two physically motivated constraints. The constraints imposed on the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped; normal (real) modes are therefore needed for comparison with these analytical models. The work in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that makes it possible to establish a modal model satisfying such constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars, a type of data that is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  8. A practical computational framework for the multidimensional moment-constrained maximum entropy principle

    Science.gov (United States)

    Abramov, Rafail

    2006-01-01

    The maximum entropy principle is a versatile tool for evaluating smooth approximations of probability density functions with the least bias beyond given constraints. In particular, moment-based constraints are a common form of prior information about a statistical state in various areas of science, including that of a forecast ensemble or a climate in atmospheric science. With that in mind, here we present a unified computational framework for an arbitrary number of phase space dimensions and moment constraints, for both Shannon and relative entropies, together with a practical, usable convex optimization algorithm based on the Newton method with additional preconditioning and a robust numerical integration routine. This optimization algorithm has already been used in three studies of predictability, and so far has been found capable of producing reliable results in one- and two-dimensional phase spaces with moment constraints of up to order 4. The current work extensively references those earlier studies as practical examples of the applicability of the algorithm developed below.
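    The cited solver combines Newton iteration with preconditioning and careful numerical integration. The sketch below solves a deliberately simplified 1D instance, with mean and second-moment constraints, by plain gradient descent on the convex dual; the domain, grid, and step size are illustrative choices, not the paper's:

```python
import math

def maxent_1d(targets, lo=-6.0, hi=6.0, grid=400, lr=0.01, iters=2000):
    """Fit p(x) proportional to exp(l1*x + l2*x**2) on [lo, hi] so that
    E[x] and E[x**2] match `targets`, by gradient descent on the convex
    dual (the cited work uses a preconditioned Newton method instead)."""
    h = (hi - lo) / grid
    xs = [lo + (i + 0.5) * h for i in range(grid)]  # midpoint grid
    l1 = l2 = 0.0
    for _ in range(iters):
        w = [math.exp(l1 * x + l2 * x * x) for x in xs]
        z = sum(w)
        m1 = sum(wi * x for wi, x in zip(w, xs)) / z
        m2 = sum(wi * x * x for wi, x in zip(w, xs)) / z
        # Dual gradient is (E[T] - target); step downhill.
        l1 -= lr * (m1 - targets[0])
        l2 -= lr * (m2 - targets[1])
    return l1, l2, m1, m2

# Matching mean 0 and second moment 1 should recover a standard
# Gaussian, i.e. l1 -> 0 and l2 -> -0.5.
l1, l2, m1, m2 = maxent_1d((0.0, 1.0))
print(round(l2, 3), round(m1, 4), round(m2, 4))
```

    The dual is convex, so this first-order iteration converges; Newton with preconditioning, as in the paper, does the same job in far fewer iterations and scales to higher-order moments and more dimensions.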

  9. Improving prediction of hydraulic conductivity by constraining capillary bundle models to a maximum pore size

    Science.gov (United States)

    Iden, Sascha C.; Peters, Andre; Durner, Wolfgang

    2015-11-01

    The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e., continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g., that of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
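    For reference, the unmodified van Genuchten-Mualem model that the paper starts from can be written in a few lines. The parameter values below are invented; a small shape parameter n corresponds to the wide pore-size distribution where the near-saturation drop appears:

```python
def vg_saturation(h, alpha, n):
    """van Genuchten effective saturation Se(h) for suction head h >= 0."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def mualem_conductivity(se, n, Ks=1.0, L=0.5):
    """Mualem relative conductivity K(Se) predicted from the retention
    curve (classic closed form, without the paper's maximum-pore-radius
    modification)."""
    m = 1.0 - 1.0 / n
    return Ks * se ** L * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Small n = wide pore-size distribution: the regime where the
# unmodified model shows the sharp conductivity drop near saturation.
alpha, n = 0.5, 1.2  # illustrative values (1/cm, dimensionless)
for h in (0.0, 1.0, 10.0, 100.0):
    se = vg_saturation(h, alpha, n)
    print(h, round(se, 4), mualem_conductivity(se, n))
```

    The paper's modification keeps the retention function above unchanged and imposes the maximum pore radius only inside the pore-bundle integral, which is what removes the steep drop just below Se = 1.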

  10. Constrained wormholes

    International Nuclear Information System (INIS)

    The large wormhole problem in Coleman's theory of the cosmological constant is presented in the framework of constrained wormholes. We use semi-classical methods, similar to those used to study constrained instantons in quantum field theory. A scalar field theory serves as a toy model to analyze the problems associated with large constrained instantons. In particular, these large instantons are found to suffer from large quantum fluctuations. In gravity we find the same situation: large quantum fluctuations around large wormholes. In both cases we expect that these large fluctuations are a signal that large constrained solutions are not important in the path integral. Thus, we argue that only small wormholes are important in Coleman's theory. (orig.)

  11. Constrained Appropriations

    DEFF Research Database (Denmark)

    Wildermuth, Norbert

    2008-01-01

    their practices of meaning making and consumption are realised ‘under conditions which are not of their own choosing', but constrained by the wider relations of economic and political power which shape their lives. Based, primarily, on the results of my qualitative empirical field research in Recife, I will show...

  12. Maximum Fidelity

    CERN Document Server

    Kinkhabwala, Ali

    2013-01-01

    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

  13. The inverse maximum dynamic flow problem

    Institute of Scientific and Technical Information of China (English)

    BAGHERIAN; Mehri

    2010-01-01

    We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm which uses two maximum dynamic flow algorithms is then proposed to solve the problem.

  14. Constrained Stochastic Extended Redundancy Analysis.

    Science.gov (United States)

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA). PMID:24327066

  15. Power-constrained supercomputing

    Science.gov (United States)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound
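    When the power bound is hardware-enforced per configuration rather than shared as a budget across nodes, the schedule optimization described above decomposes phase by phase, which makes the structure of the LP easy to see on a toy instance. All timings, wattages, and configuration names below are invented:

```python
def best_schedule(phases, power_cap):
    """Pick, for each phase, the (dvfs, threads) configuration with the
    smallest runtime whose power stays under the cap.  With a per-config
    cap the choice decomposes phase by phase; the LP/ILP formulations in
    the dissertation handle coupled, budget-style constraints instead."""
    schedule, total_time = [], 0.0
    for configs in phases:
        feasible = [c for c in configs if c["watts"] <= power_cap]
        if not feasible:
            raise ValueError("no configuration fits the power cap")
        pick = min(feasible, key=lambda c: c["seconds"])
        schedule.append((pick["dvfs"], pick["threads"]))
        total_time += pick["seconds"]
    return schedule, total_time

# Invented per-phase measurements: each dict is one measured
# (DVFS state, thread count) operating point.
phases = [
    [{"dvfs": "hi", "threads": 8, "seconds": 1.0, "watts": 90},
     {"dvfs": "lo", "threads": 8, "seconds": 1.6, "watts": 60},
     {"dvfs": "lo", "threads": 4, "seconds": 2.0, "watts": 45}],
    [{"dvfs": "hi", "threads": 4, "seconds": 0.8, "watts": 70},
     {"dvfs": "lo", "threads": 4, "seconds": 1.1, "watts": 50}],
]
sched, t = best_schedule(phases, power_cap=65)
print(sched, t)  # the 90 W and 70 W configurations are capped out
```

    A shared power budget breaks this independence, since spending watts in one phase or on one node takes them from another; that coupling is exactly what the LP and adaptive power-balancing runtime address.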

  16. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete, and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its ability to bridge the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  17. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  18. Constrained Jastrow calculations

    International Nuclear Information System (INIS)

    An alternative to Pandharipande's lowest order constrained variational prescription for dense Fermi fluids is presented which is justified on both physical and strict variational grounds. Excellent results are obtained when applied to the 'homework problem' of Bethe, in sharp contrast to those obtained from the Pandharipande prescription. (Auth.)

  19. Constrained superfields in Supergravity

    CERN Document Server

    Dall'Agata, Gianguido

    2015-01-01

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  20. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C.;

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes..., the results are compatible with the data and, at the same time, favor sharp transitions. The focusing strategy can also be used to constrain the 1D solutions laterally, guaranteeing that lateral sharp transitions are retrieved without losing resolution. By means of real and synthetic datasets, sharp...

  1. Constraining entropic cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Koivisto, Tomi S. [Institute for Theoretical Physics and the Spinoza Institute, Utrecht University, Leuvenlaan 4, Postbus 80.195, 3508 TD Utrecht (Netherlands); Mota, David F. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zumalacárregui, Miguel, E-mail: t.s.koivisto@uu.nl, E-mail: d.f.mota@astro.uio.no, E-mail: miguelzuma@icc.ub.edu [Institute of Cosmos Sciences (ICC-IEEC), University of Barcelona, Marti i Franques 1, E-08028 Barcelona (Spain)

    2011-02-01

    It has been recently proposed that the interpretation of gravity as an emergent, entropic phenomenon might have nontrivial implications for cosmology. Here several such approaches are investigated, and the underlying assumptions that must be made in order to constrain them by the BBN, SneIa, BAO and CMB data are clarified. Present models of inflation or dark energy are ruled out by the data. Constraints are derived on phenomenological parameterizations of modified Friedmann equations, and some features of entropic scenarios regarding the growth of perturbations, the no-go theorem for entropic inflation, and the possible violation of the Bekenstein bound for the entropy of the Universe are discussed and clarified.

  2. Lectures on Constrained Systems

    CERN Document Server

    Date, Ghanashyam

    2010-01-01

    These lecture notes were prepared as a basic introduction to the theory of constrained systems, which is how the fundamental forces of nature appear in their Hamiltonian formulation. Only a working knowledge of the Lagrangian and Hamiltonian formulations of mechanics is assumed. The notes are based on the set of eight lectures given at the Refresher Course for College Teachers held at IMSc during May-June 2005. They are submitted to the arXiv for easy access by a wider body of students.

  3. Symmetrically Constrained Compositions

    CERN Document Server

    Beck, Matthias; Lee, Sunyoung; Savage, Carla D

    2009-01-01

    Given integers $a_1, a_2, ..., a_n$, with $a_1 + a_2 + ... + a_n \geq 1$, a symmetrically constrained composition $\lambda_1 + \lambda_2 + ... + \lambda_n = M$ of $M$ into $n$ nonnegative parts is one that satisfies each of the $n!$ constraints $\{\sum_{i=1}^n a_i \lambda_{\pi(i)} \geq 0 : \pi \in S_n\}$. We show how to compute the generating function of these compositions, combining methods from partition theory, permutation statistics, and lattice-point enumeration.
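    The paper computes generating functions; for small $n$ and $M$ the definition can be checked directly by enumeration. The weight vector below is an invented example satisfying the hypothesis $a_1 + ... + a_n \geq 1$:

```python
from itertools import permutations

def count_scc(M, a):
    """Count compositions of M into n = len(a) nonnegative parts lam
    satisfying sum_i a[i] * lam[pi[i]] >= 0 for every permutation pi."""
    n = len(a)
    def parts(m, k):
        # All k-tuples of nonnegative integers summing to m.
        if k == 1:
            yield (m,)
            return
        for first in range(m + 1):
            for rest in parts(m - first, k - 1):
                yield (first,) + rest
    return sum(
        1
        for lam in parts(M, n)
        if all(sum(a[i] * lam[p[i]] for i in range(n)) >= 0
               for p in permutations(range(n)))
    )

# With a = (2, -1) the n! = 2 constraints are 2*lam1 - lam2 >= 0 and
# 2*lam2 - lam1 >= 0, i.e. neither part may exceed twice the other.
print([count_scc(M, (2, -1)) for M in range(6)])
```

    The generating-function machinery in the paper recovers this whole counting sequence at once, rather than one $M$ at a time.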

  4. Maximum Autocorrelation Factorial Kriging

    OpenAIRE

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete

    2000-01-01

    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax-rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...

  5. Maximum power demand cost

    International Nuclear Information System (INIS)

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some

  6. Space Constrained Dynamic Covering

    CERN Document Server

    Antonellis, Ioannis; Dughmi, Shaddin

    2009-01-01

    In this paper, we identify a fundamental algorithmic problem that we term space-constrained dynamic covering (SCDC), arising in many modern-day web applications, including ad-serving and online recommendation systems in eBay and Netflix. Roughly speaking, SCDC applies two restrictions to the well-studied Max-Coverage problem: Given an integer k, X={1,2,...,n} and I={S_1, ..., S_m}, S_i a subset of X, find a subset J of I, such that |J| <= k and the union of S in J is as large as possible. The two restrictions applied by SCDC are: (1) Dynamic: At query-time, we are given a query Q, a subset of X, and our goal is to find J such that the intersection of Q with the union of S in J is as large as possible; (2) Space-constrained: We don't have enough space to store (and process) the entire input; specifically, we have o(mn), sometimes even as little as O((m+n)polylog(mn)) space. The goal of SCDC is to maintain a small data structure so as to answer most dynamic queries with high accuracy. We present algorithms a...
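    Without the space constraint, the dynamic variant reduces to Max-Coverage restricted to the query, for which the classic greedy (1 - 1/e)-approximation is the natural offline baseline. The sets and query below are invented; the streaming, sublinear-space aspect, which is SCDC's actual difficulty, is deliberately not addressed here:

```python
def greedy_cover(sets, k, query):
    """Pick up to k sets greedily to maximize coverage of `query`
    (classic (1 - 1/e)-approximation to Max-Coverage, applied to the
    dynamic variant by intersecting with the query up front)."""
    remaining = set(query)
    chosen = []
    for _ in range(k):
        # Take the set covering the most still-uncovered query elements.
        name = max(sets, key=lambda s: len(sets[s] & remaining), default=None)
        if name is None or not sets[name] & remaining:
            break
        chosen.append(name)
        remaining -= sets[name]
    return chosen, set(query) - remaining

sets = {"S1": {1, 2, 3}, "S2": {3, 4}, "S3": {4, 5, 6}, "S4": {1, 6}}
picked, covered = greedy_cover(sets, k=2, query={1, 2, 4, 5, 6})
print(picked, sorted(covered))
```

    This baseline needs all of `sets` in memory, i.e. O(mn) space; SCDC asks for comparable answer quality from a data structure of roughly O((m+n) polylog(mn)) size.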

  7. Density constrained TDHF

    CERN Document Server

    Oberacker, V E

    2015-01-01

    In this manuscript we provide an outline of the numerical methods used in implementing the density constrained time-dependent Hartree-Fock (DC-TDHF) method and provide a few examples of its application to nuclear fusion. In this approach, dynamic microscopic calculations are carried out on a three-dimensional lattice and there are no adjustable parameters, the only input is the Skyrme effective NN interaction. After a review of the DC-TDHF theory and the numerical methods, we present results for heavy-ion potentials $V(R)$, coordinate-dependent mass parameters $M(R)$, and precompound excitation energies $E^{*}(R)$ for a variety of heavy-ion reactions. Using fusion barrier penetrabilities, we calculate total fusion cross sections $\\sigma(E_\\mathrm{c.m.})$ for reactions between both stable and neutron-rich nuclei. We also determine capture cross sections for hot fusion reactions leading to the formation of superheavy elements.

  8. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly intended to be lowered through a hole into a tank, a borehole, or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  9. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    \\verb+~+\\$\\backslash\\$cite{ramsay97} to functional maximum autocorrelation factors (MAF)\\verb+~+\\$\\backslash\\$cite{switzer85,larsen2001d}. We apply the method to biological shapes as well as reflectance spectra. {\\$\\backslash\\$bf Methods}. MAF seeks linear combination of the original variables that maximize autocorrelation between...

  10. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...

  11. Early Cosmology Constrained

    CERN Document Server

    Verde, Licia; Pigozzo, Cassio; Heavens, Alan F; Jimenez, Raul

    2016-01-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the $\Lambda$CDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter $\Omega_{\rm MR} < 0.006$ and extra radiation parameterised as extra effective neutrino species $2.3 < N_{\rm eff} < 3.2$ when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond $\Lambda$CDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way ...

  12. Maximum information photoelectron metrology

    CERN Document Server

    Hockett, P; Wollenhaupt, M; Baumert, T

    2015-01-01

    Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

  13. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  14. Shrinkage Effect in Ancestral Maximum Likelihood

    CERN Document Server

    Mossel, Elchanan; Steel, Mike

    2008-01-01

    Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...

  15. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...
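
The Mean Energy Model referred to above admits the standard closed-form solution via Lagrange multipliers; as a reminder, this is the textbook result rather than a derivation specific to the paper:

```latex
\max_{p}\; H(p) = -\sum_i p_i \ln p_i
\quad \text{s.t.} \quad \sum_i p_i E_i = \bar{E}, \;\; \sum_i p_i = 1
\;\Longrightarrow\;
p_i = \frac{e^{-\lambda E_i}}{Z(\lambda)}, \qquad
Z(\lambda) = \sum_i e^{-\lambda E_i},
```

where the multiplier $\lambda$ is chosen so that the mean-energy constraint is satisfied.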

  16. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
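
A minimal sketch of the correntropy objective the abstract describes, assuming a Gaussian kernel of width `sigma` and a linear predictor; the function names and toy data are illustrative, not the paper's implementation:

```python
import numpy as np

def correntropy(y_pred, y_true, sigma=1.0):
    """Empirical correntropy between predictions and labels
    using a Gaussian kernel (sigma is an assumed kernel width)."""
    diff = np.asarray(y_pred) - np.asarray(y_true)
    return np.mean(np.exp(-diff**2 / (2.0 * sigma**2)))

def mcc_objective(w, X, y, lam=0.1, sigma=1.0):
    """Regularized MCC objective for a linear predictor f(x) = X @ w:
    maximize correntropy minus an L2 penalty on the parameters."""
    return correntropy(X @ w, y, sigma) - lam * np.dot(w, w)

# Toy check: correntropy is 1 for a perfect fit and decays with error.
y = np.array([1.0, -1.0, 1.0])
print(correntropy(y, y))              # -> 1.0
print(correntropy(y + 5.0, y) < 0.1)  # -> True
```

Unlike a squared loss, each sample's contribution is bounded by the kernel, which is what makes the criterion robust to outlying labels.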

  17. Antiprotons at Solar Maximum

    CERN Document Server

    Bieber, J W; Engel, R; Gaisser, T K; Roesler, S; Stanev, T; Bieber, John W.; Engel, Ralph; Gaisser, Thomas K.; Roesler, Stefan; Stanev, Todor

    1999-01-01

    New measurements with good statistics will make it possible to observe the time variation of cosmic antiprotons at 1 AU through the approaching peak of solar activity. We report a new computation of the interstellar antiproton spectrum expected from collisions between cosmic protons and the interstellar gas. This spectrum is then used as input to a steady-state drift model of solar modulation, in order to provide predictions for the antiproton spectrum as well as the antiproton/proton ratio at 1 AU. Our model predicts a surprisingly large, rapid increase in the antiproton/proton ratio through the next solar maximum, followed by a large excursion in the ratio during the following decade.

  18. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource-constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.

  19. Turbulence as a constrained system

    CERN Document Server

    Mendes, A C R; Takakura, F I

    2000-01-01

    Hydrodynamic turbulence is studied as a constrained system from the point of view of metafluid dynamics. We present a Lagrangian description for this new theory of turbulence, inspired by the analogy with electromagnetism; consequently it is a gauge theory. This new approach to the study of turbulence renews optimism about solving the difficult problem of turbulence. As a constrained system, turbulence is studied in the Dirac and Faddeev-Jackiw formalisms, giving the Dirac brackets. An important result is that these brackets are the same in and out of the inertial range, opening the way to quantizing turbulence.

  20. Constrained Graph Optimization: Interdiction and Preservation Problems

    Energy Technology Data Exchange (ETDEWEB)

    Schild, Aaron V [Los Alamos National Laboratory

    2012-07-30

    The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices such that the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is smallest. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special cases of graphs, including planar graphs. The graphs in many of the listed applications are planar, so these algorithms have important practical implications.

  1. Maximum Entropy Production vs. Kolmogorov-Sinai Entropy in a Constrained ASEP Model

    Directory of Open Access Journals (Sweden)

    Martin Mihelich

    2014-02-01

    The asymmetric simple exclusion process (ASEP) has become a paradigmatic toy model of a non-equilibrium system, and much effort has been made in the past decades to compute its statistics exactly for given dynamical rules. Here, a different approach is developed; analogously to the equilibrium situation, we consider that the dynamical rules are not exactly known. Allowing the transition rate to vary, we show that the dynamical rules that maximize the entropy production and those that maximize the rate of variation of the dynamical entropy, known as the Kolmogorov-Sinai entropy, coincide to good accuracy. We study the dependence of this agreement on the size of the system and the couplings with the reservoirs, for the original ASEP and a variant with Langmuir kinetics.

  2. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino;

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite... The constraint considered requires that no 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
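
The "no uniform 2 × 2 squares" constraint can be explored numerically. Below is a rough strip/transfer-matrix approximation of my own, not the Pickard random field construction of the paper: it counts admissible configurations of an m-row strip column by column and reads off the entropy per symbol from the dominant eigenvalue.

```python
import numpy as np

def strip_entropy(m):
    """Entropy (bits/symbol) of binary m-row strips with no 2x2 block
    of all 0s or all 1s, via a column-to-column transfer matrix."""
    n = 1 << m
    cols = [[(c >> i) & 1 for i in range(m)] for c in range(n)]
    T = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            # Forbid any row pair i, i+1 forming a uniform 2x2 square
            ok = all(not (cols[a][i] == cols[a][i + 1]
                          == cols[b][i] == cols[b][i + 1])
                     for i in range(m - 1))
            T[a, b] = 1.0 if ok else 0.0
    lam = max(abs(np.linalg.eigvals(T)))  # Perron root
    return np.log2(lam) / m               # bits per symbol in the strip

print(round(strip_entropy(4), 3))
```

As m grows, the strip entropy approaches the capacity of the 2-D constraint from above; the 0.839 bits/symbol quoted in the abstract is the maximum over the more restrictive PRF family, so it lies below these strip estimates.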

  3. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  4. Impulsive differential inclusions with constraints

    Directory of Open Access Journals (Sweden)

    Tzanko Donchev

    2006-05-01

    In the paper, we study weak invariance of differential inclusions with non-fixed time impulses under compactness type assumptions. When the right-hand side is one-sided Lipschitz, an extension of the well-known relaxation theorem is proved. In this case, necessary and sufficient conditions for strong invariance of upper semicontinuous systems are also obtained. Some properties of the solution set of the impulsive system (without constraints) in an appropriate topology are investigated.

  5. Constrained ballistics and geometrical optics

    OpenAIRE

    Epstein, Marcelo

    2014-01-01

    The problem of constant-speed ballistics is studied under the umbrella of non-linear non-holonomic constrained systems. The Newtonian approach is shown to be equivalent to the use of Chetaev's rule to incorporate the constraint within the initially unconstrained formulation. Although the resulting equations are not, in principle, obtained from a variational statement, it is shown that the trajectories coincide with those of geometrical optics in a medium with a suitably chosen refractive inde...

  6. Enumeration of Maximum Acyclic Hypergraphs

    Institute of Scientific and Technical Information of China (English)

    Jian-fang Wang; Hai-zhu Li

    2002-01-01

    Acyclic hypergraphs are analogues of forests in graphs. They are very useful in the design of databases. In this article, the maximum size of an acyclic hypergraph is determined and the number of maximum r-uniform acyclic hypergraphs of order n is shown to be $\binom{n}{r-1}\left(n(r-1)-r^{2}+2r\right)^{n-r-1}$.

  7. Constraining Lorentz violation with cosmology.

    Science.gov (United States)

    Zuntz, J A; Ferreira, P G; Zlosnik, T G

    2008-12-31

    The Einstein-aether theory provides a simple, dynamical mechanism for breaking Lorentz invariance. It does so within a generally covariant context and may emerge from quantum effects in more fundamental theories. The theory leads to a preferred frame and can have distinct experimental signatures. In this Letter, we perform a comprehensive study of the cosmological effects of the Einstein-aether theory and use observational data to constrain it. Allied to previously determined consistency and experimental constraints, we find that an Einstein-aether universe can fit experimental data over a wide range of its parameter space, but requires a specific rescaling of the other cosmological densities. PMID:19113765

  8. Maximum-entropy probability distributions under Lp-norm constraints

    Science.gov (United States)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
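
The straight-line relationship mentioned in the abstract can be made concrete. For an unconstrained continuous random variable with a fixed $L_p$ norm, the entropy-maximizing density is the generalized Gaussian; this is the standard result, restated here as a reminder:

```latex
f^{*}(x) = \frac{e^{-\lambda \lvert x \rvert^{p}}}
                {\int_{-\infty}^{\infty} e^{-\lambda \lvert t \rvert^{p}}\,dt},
\qquad
h_{\max} = \log \lVert X \rVert_{p} + C_{p},
```

where $C_p$ depends only on $p$. For $p = 2$ this reduces to the familiar Gaussian case, $h_{\max} = \tfrac{1}{2}\log(2\pi e\,\sigma^{2})$, which is linear in $\log \sigma = \log \lVert X \rVert_{2}$.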

  9. Quantum Annealing for Constrained Optimization

    Science.gov (United States)

    Hen, Itay; Spedalieri, Federico M.

    2016-03-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealers that promise to solve certain combinatorial optimization problems of practical relevance faster than their classical analogues. The applicability of such devices for many theoretical and real-world optimization problems, which are often constrained, is severely limited by the sparse, rigid layout of the devices' quantum bits. Traditionally, constraints are addressed by the addition of penalty terms to the Hamiltonian of the problem, which, in turn, requires prohibitively increasing physical resources while also restricting the dynamical range of the interactions. Here, we propose a method for encoding constrained optimization problems on quantum annealers that eliminates the need for penalty terms and thereby reduces the number of required couplers and removes the need for minor embedding, greatly reducing the number of required physical qubits. We argue the advantages of the proposed technique and illustrate its effectiveness. We conclude by discussing the experimental feasibility of the suggested method as well as its potential to appreciably reduce the resource requirements for implementing optimization problems on quantum annealers and its significance in the field of quantum computing.

  10. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds on the...... capacity for 2-D channel models based on occurrences of neighboring 1s are considered....

  11. Constraining Cosmic Evolution of Type Ia Supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ≈0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ≈3% in the optical, growing toward the ultraviolet. The difference between the maximum-light low- and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  12. Time efficient spacecraft maneuver using constrained torque distribution

    Science.gov (United States)

    Cao, Xibin; Yue, Chengfei; Liu, Ming; Wu, Baolin

    2016-06-01

    This paper investigates the time efficient maneuver of rigid satellites with inertia uncertainty and bounded external disturbance. A redundant cluster of four reaction wheels is used to control the spacecraft. To make full use of the controllability and avoid frequent unload for reaction wheels, a maximum output torque and maximum angular momentum constrained torque distribution method is developed. Based on this distribution approach, the maximum allowable acceleration and velocity of the satellite are optimized during the maneuvering. A novel braking curve is designed on the basis of the optimization strategy of the control torque distribution. A quaternion-based sliding mode control law is proposed to render the state to track the braking curve strictly. The designed controller provides smooth control torque, time efficiency and high control precision. Finally, practical numerical examples are illustrated to show the effectiveness of the developed torque distribution strategy and control methodology.

  13. Constraining CO emission estimates using atmospheric observations

    Science.gov (United States)

    Hooghiemstra, P. B.

    2012-06-01

    (mainly CO from oxidation of NMVOCs) that are 185 Tg CO/yr higher compared to the stations-only inversion. Second, MOPITT-only derived biomass burning emissions are reduced with respect to the prior, which is in contrast to previous (inverse) modeling studies. Finally, MOPITT-derived total emissions are significantly higher for South America and Africa compared to the stations-only inversion. This is likely due to a positive bias in the MOPITT V4 product. This bias is also apparent from validation with surface stations and ground-truth FTIR columns. In the final study we present the first inverse modeling study to estimate CO emissions constrained by both surface (NOAA) and satellite (MOPITT) observations using a bias correction scheme. This approach leads to the identification of a positive bias of at most 5 ppb in MOPITT column-averaged CO mixing ratios in the remote Southern Hemisphere (SH). The 4D-Var system is used to estimate CO emissions over South America in the period 2006-2010 and to analyze the interannual variability (IAV) of these emissions. We infer robust, high spatial resolution CO emission estimates that show slightly smaller IAV due to fires compared to the Global Fire Emissions Database (GFED3) prior emissions. Moreover, CO emissions probably associated with pre-harvest burning of sugar cane plantations are underestimated in current inventories by 50-100%.

  14. iBGP and Constrained Connectivity

    CERN Document Server

    Dinitz, Michael

    2011-01-01

    We initiate the theoretical study of the problem of minimizing the size of an iBGP overlay in an Autonomous System (AS) in the Internet subject to a natural notion of correctness derived from the standard "hot-potato" routing rules. For both natural versions of the problem (where we measure the size of an overlay by either the number of edges or the maximum degree) we prove that it is NP-hard to approximate to a factor better than $\Omega(\log n)$ and provide approximation algorithms with ratio $\tilde{O}(\sqrt{n})$. In addition, we give a slightly worse $\tilde{O}(n^{2/3})$-approximation based on primal-dual techniques that has the virtue of being both fast and good in practice, which we show via simulations on the actual topologies of five large Autonomous Systems. The main technique we use is a reduction to a new connectivity-based network design problem that we call Constrained Connectivity. In this problem we are given a graph $G=(V,E)$, and for every pair of vertices $u,v \in V$ we are given a set $S(u,...

  15. Generalized Maximum Entropy Estimation of Discrete Sequential Move Games of Perfect Information

    OpenAIRE

    Wang, Yafeng; Graham, Brett

    2013-01-01

    We propose a data-constrained generalized maximum entropy (GME) estimator for discrete sequential move games of perfect information which can be easily implemented in optimization software with high-level interfaces such as GAMS. Unlike most other work on the estimation of complete information games, the method we propose is data-constrained and does not require simulation or a normal distribution of random preference shocks. We formulate the GME estimation as a (convex) mixed-integer nonline...

  16. Constrained Allocation Flux Balance Analysis

    CERN Document Server

    Mori, Matteo; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to studying cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis (CAFBA), in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing us to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferr...

  17. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
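
The polynomial-time regular-language case described above lends itself to a compact sketch: run Dijkstra on the product of the labeled graph and a DFA for the mode language. The graph, edge labels, and automaton below are invented for illustration, not taken from the paper:

```python
import heapq

def constrained_shortest_path(graph, src, dst, dfa_delta, q0, accept):
    """Dijkstra over (vertex, DFA state) pairs.
    graph: {u: [(v, label, weight), ...]}
    dfa_delta: {(state, label): next_state}; missing keys = rejected."""
    pq = [(0, src, q0)]
    done = set()
    while pq:
        d, u, q = heapq.heappop(pq)
        if (u, q) in done:
            continue
        done.add((u, q))
        if u == dst and q in accept:
            return d  # shortest distance whose label word is accepted
        for v, lab, w in graph.get(u, []):
            nq = dfa_delta.get((q, lab))
            if nq is not None and (v, nq) not in done:
                heapq.heappush(pq, (d + w, v, nq))
    return None  # no path with an accepted label word

# Mode language walk* bus+: walk first, then at least one bus leg.
graph = {'A': [('B', 'walk', 1), ('C', 'bus', 4)],
         'B': [('C', 'bus', 2), ('C', 'walk', 5)],
         'C': []}
delta = {(0, 'walk'): 0, (0, 'bus'): 1, (1, 'bus'): 1}
print(constrained_shortest_path(graph, 'A', 'C', delta, 0, {1}))  # -> 3
```

The unconstrained shortest A-to-C path (walk, walk, cost 6 or bus, cost 4) differs from the constrained optimum (walk then bus, cost 3 is found because the pure-walk path never reaches the accepting state).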

  18. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. End of the measurement is shown by a lamp switch out. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with cell test and calibration system

  19. Decomposition using Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2002-01-01

    , normally we have an ordering of landmarks (variables) along the contour of the objects. For the case with observation ordering the maximum autocorrelation factor (MAF) transform was proposed for multivariate imagery in \cite{switzer85}. This corresponds to an R-mode analysis of the data...

  20. Constraining Modified Gravity Theories With Cosmology

    OpenAIRE

    Martinelli, Matteo

    2012-01-01

    We study and constrain the Hu and Sawicki f(R) model using CMB and weak lensing forecasted data. We also use the same data to constrain extended theories of gravity and the subclass of f(R) theories using a general parameterization describing departures from General Relativity. Moreover, we also study and constrain a Dark Coupling model where Dark Energy and Dark Matter are coupled together.

  1. Space-Constrained Interval Selection

    OpenAIRE

    Emek, Yuval; Halldorsson, Magnus M.; Rosen, Adi

    2012-01-01

    We study streaming algorithms for the interval selection problem: finding a maximum cardinality subset of disjoint intervals on the line. A deterministic 2-approximation streaming algorithm for this problem is developed, together with an algorithm for the special case of proper intervals, achieving an improved approximation ratio of 3/2. We complement these upper bounds by proving that they are essentially best possible in the streaming setting: it is shown that an approximation ratio of $2 - \\e...
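    For contrast with the streaming bounds above, the offline version of interval selection is solved exactly by the classic earliest-finish-time greedy; a brief sketch (assuming closed intervals given as (start, end) pairs — this is the textbook offline algorithm, not the paper's streaming one):

```python
def max_disjoint_intervals(intervals):
    """Classic offline greedy: sort by right endpoint, then take every
    interval that does not overlap the last one taken. Returns a
    maximum-cardinality set of pairwise disjoint intervals."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start > last_end:  # closed intervals: endpoints may not touch
            chosen.append((start, end))
            last_end = end
    return chosen
```

    The streaming algorithm of the paper must commit to a bounded-memory summary instead of sorting the whole input, which is where the factor-2 loss arises.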

  2. Constraining the Europa Neutral Torus

    Science.gov (United States)

    Smith, Howard T.; Mitchell, Donald; mauk, Barry; Johnson, Robert E.; clark, george

    2016-10-01

    "Neutral tori" consist of neutral particles that usually co-orbit along with their source, forming a toroidal (or partial toroidal) feature around the planet. The distribution and composition of these features can often provide important, if not unique, insight into magnetospheric particle sources, mechanisms and dynamics. However, these features can often be difficult to detect directly. One innovative method for detecting neutral tori is by observing Energetic Neutral Atoms (ENAs), which are generally considered to be produced as a result of charge exchange interactions between charged and neutral particles. Mauk et al. (2003) reported the detection of a Europa neutral particle torus using ENA observations. The presence of a Europa torus has extremely large implications for upcoming missions to Jupiter as well as for understanding possible activity at this moon, and provides critical insight into what lies beneath the surface of this icy ocean world. However, ENAs can also be produced as a result of charge exchange interactions between two ionized particles, and in that case cannot be used to infer the presence of a neutral particle population. Thus, a detailed examination of all possible source interactions must be considered before one can confirm that the likely original source population of these ENA images is actually a Europa neutral particle torus. For this talk, we examine the viability that the Mauk et al. (2003) observations were actually generated from a neutral torus emanating from Europa, as opposed to charged particle interactions with plasma originating from Io. These results help constrain such a torus as well as Europa source processes.

  3. Gyrification from constrained cortical expansion

    CERN Document Server

    Tallinen, Tuomas; Biggins, John S; Mahadevan, L

    2015-01-01

    The exterior of the mammalian brain - the cerebral cortex - has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highl...

  4. Constrained Allocation Flux Balance Analysis

    Science.gov (United States)

    Mori, Matteo; Hwa, Terence; Martin, Olivier C.

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325

  5. Maximum Power Point Regulator System

    Science.gov (United States)

    Simola, J.; Savela, K.; Stenberg, J.; Tonicello, F.

    2011-10-01

    The target of the study done under ESA contract No. 17830/04/NL/EC (GSTP4) for the Maximum Power Point Regulator System (MPPRS) was to investigate, design and test a modular power system (a core PCU) fulfilling the requirement for maximum power transfer even after a single failure in the power system, by utilising a power concept without any potential and credible single point failure. The studied MPPRS concept is of a modular construction, able to track the MPP individually on each SA section, maintaining its functionality and full power capability after the loss of a complete MPPR module (by utilizing an N+1 module). Various add-on DC/DC converter topology candidates were investigated, and redundancy, failure mechanisms and protection aspects were studied.

  6. Maximum Genus of Strong Embeddings

    Institute of Scientific and Technical Information of China (English)

    Er-ling Wei; Yan-pei Liu; Han Ren

    2003-01-01

    The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.

  7. D(Maximum)=P(Argmaximum)

    CERN Document Server

    Remizov, Ivan D

    2009-01-01

    In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
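    The characterization described above can be written compactly (our transcription of the standard result, not a quotation from the note):

```latex
% For a metric compact K and the maximum functional M(f) = \max_{x \in K} f(x)
% on C(K), the subdifferential at f is the set of probability measures
% supported on the maximizers of f:
\partial M(f) \;=\; \bigl\{\, \mu \in \mathcal{P}(K) \;:\; \mu\bigl(\operatorname{Argmax}_{K} f\bigr) = 1 \,\bigr\}
```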

  8. Maximum-likelihood absorption tomography

    International Nuclear Information System (INIS)

    Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)

  9. Homogeneous determination of maximum magnitude

    OpenAIRE

    Meletti, C.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; D'Amico, V.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; Martinelli, F.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia

    2010-01-01

    This deliverable represents the result of the activities performed by a working group at INGV. The main objective of Task 3.5 is defined in the Description of Work. This task will produce a homogeneous assessment (possibly multiple models) of the distribution of the expected Maximum Magnitude for earthquakes expected in various tectonic provinces of Europe, to serve as input for the computation and validation of seismic hazard. This goal will be achieved by combining input from earthqu...

  10. Maximum matching on random graphs

    OpenAIRE

    Zhou, Haijun; Ou-Yang, Zhong-Can

    2003-01-01

    The maximum matching problem on random graphs is studied analytically by the cavity method of statistical physics. When the average vertex degree $c$ is larger than $2.7183$, groups of max-matching patterns which differ greatly from each other gradually emerge. An analytical expression for the max-matching size is also obtained, which agrees well with computer simulations. Discussion is made on this continuous glassy phase transition and the absence of such a glassy phase ...
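    For comparison with the analytic cavity-method predictions, the matching size of a concrete (here bipartite) graph can be computed exactly with the classic augmenting-path algorithm; a small sketch (our illustration, not the method of the paper):

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for bipartite graphs.
    adj[u] lists the right-side neighbors of left vertex u.
    Returns the size of a maximum matching."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        # Try to match u, recursively re-routing previously matched vertices.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```

    Averaging such exact sizes over sampled random graphs is one way to check the analytical max-matching expression numerically.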

  11. Indistinguishability, symmetrisation and maximum entropy

    International Nuclear Information System (INIS)

    It is demonstrated that the distributions over single-particle states for Boltzmann, Bose-Einstein and Fermi-Dirac statistics describing N non-interacting identical particles follow directly from the principle of maximum entropy. It is seen that the notions of indistinguishability and coarse graining are secondary, if not irrelevant. A detailed examination of the structure of the Boltzmann limit is provided. (author)

  12. The Testability of Maximum Magnitude

    Science.gov (United States)

    Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.

    2012-12-01

    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or else find a way to decrease its influence on the estimated hazard.
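    The extreme-value setup referred to above can be made concrete under standard (purely illustrative) assumptions: Poisson earthquake occurrence plus a Gutenberg-Richter magnitude distribution gives a closed form for the probability that no event exceeds magnitude m in a time window. All parameter values and names below are hypothetical:

```python
import math

def prob_max_below(m, rate, years, b=1.0, m0=4.0):
    """P(no event with magnitude > m during `years`), assuming a Poisson
    process with `rate` events/yr above the threshold magnitude m0 and
    Gutenberg-Richter magnitudes with b-value `b` (illustrative values)."""
    # Rate of events exceeding m, by the Gutenberg-Richter relation.
    exceed_rate = rate * 10.0 ** (-b * (m - m0))
    # Poisson probability of zero such events in the window.
    return math.exp(-exceed_rate * years)
```

    The steep dependence of this probability on m and on the observation window is one way to see why rare, large-M estimates are so hard to test against data.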

  13. Solar maximum: solar array degradation

    International Nuclear Information System (INIS)

    The 5-year in-orbit power degradation of the silicon solar array aboard the Solar Maximum Satellite was evaluated. This was the first spacecraft to use Teflon(R) FEP as a coverglass adhesive, thus avoiding the necessity of an ultraviolet filter. The peak power tracking mode of the power regulator unit was employed to ensure consistent maximum power comparisons. Telemetry was normalized to account for the effects of illumination intensity, charged particle irradiation dosage, and solar array temperature. Reference conditions of 1.0 solar constant at air mass zero and 301 K (28 C) were used as a basis for normalization. Beginning-of-life array power was 2230 watts. Currently, the array output is 1830 watts. This corresponds to a 16 percent loss in array performance over 5 years. Comparison of Solar Maximum telemetry and predicted power levels indicates that array output is 2 percent less than predictions based on an annual 1.0 MeV equivalent electron fluence of 2.34 x 10^13 per square centimeter in the space environment

  14. The cost-constrained traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
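    The heuristic idea described above — weighing a node's value against the detour needed to include it — can be sketched as a value-per-added-cost insertion heuristic (a simplified illustration under our own assumptions of a symmetric, nonnegative distance matrix; the thesis's algorithms with repetition strategies are more elaborate):

```python
def greedy_cctsp(dist, value, budget, depot=0):
    """Greedy sketch for the cost-constrained TSP: repeatedly insert the
    node with the best value-to-added-cost ratio into the cheapest
    position of the current subtour, while total cost stays within
    `budget`. The subtour starts and ends at `depot`."""
    tour = [depot, depot]
    cost = 0.0
    remaining = set(range(len(dist))) - {depot}
    while True:
        best = None  # (ratio, node, insert position, added cost)
        for node in remaining:
            for i in range(len(tour) - 1):
                a, b = tour[i], tour[i + 1]
                delta = dist[a][node] + dist[node][b] - dist[a][b]
                if cost + delta <= budget:
                    ratio = value[node] / (delta + 1e-9)
                    if best is None or ratio > best[0]:
                        best = (ratio, node, i + 1, delta)
        if best is None:
            break  # no affordable insertion remains
        _, node, pos, delta = best
        tour.insert(pos, node)
        cost += delta
        remaining.remove(node)
    return tour, cost, sum(value[n] for n in set(tour))
```

    A high-value but isolated node is skipped when its insertion cost would blow the budget, matching the "neighborhood" intuition in the abstract.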

  15. Determination of optimal gains for constrained controllers

    Energy Technology Data Exchange (ETDEWEB)

    Kwan, C.M.; Mestha, L.K.

    1993-08-01

    In this report, we consider the determination of optimal gains, with respect to a certain performance index, for state feedback controllers where some elements in the gain matrix are constrained to be zero. Two iterative schemes for systematically finding the constrained gain matrix are presented. An example is included to demonstrate the procedures.

  16. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  17. Economics and Maximum Entropy Production

    Science.gov (United States)

    Lorenz, R. D.

    2003-04-01

    Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

  18. Constrained Deformable-Layer Tomography

    Science.gov (United States)

    Zhou, H.

    2006-12-01

    Improvement in traveltime tomography depends on improving data coverage and tomographic methodology. The data coverage depends on the spatial distribution of sources and stations, as well as the extent of lateral velocity variation that may alter the raypaths locally. A reliable tomographic image requires a large enough ray hit count and a wide enough angular range between traversing rays over the targeted anomalies. Recent years have witnessed the advancement of traveltime tomography in two aspects. One is the use of finite frequency kernels, and the other is the improvement on model parameterization, particularly that which allows the use of a priori constraints. A new way of model parameterization is the deformable-layer tomography (DLT), which directly inverts for the geometry of velocity interfaces by varying the depths of grid points to achieve a best traveltime fit. In contrast, conventional grid or cell tomography seeks to determine velocity values of a mesh of fixed-in-space grids or cells. In this study, the DLT is used to map crustal P-wave velocities with first arrival data from local earthquakes and two LARSE active surveys in southern California. The DLT solutions along three profiles are constrained using known depth ranges of the Moho discontinuity at 21 sites from a previous receiver function study. The DLT solutions are generally well resolved according to restoration resolution tests. The patterns of 2D DLT models of different profiles match well at their intersection locations. In comparison with existing 3D cell tomography models in southern California, the new DLT models significantly improve the fit to the data. In comparison with the multi-scale cell tomography conducted for the same data, while the data fitting levels of the DLT and the multi-scale cell tomography models are comparable, the DLT provides much higher vertical resolution and more realistic description of the undulation of velocity discontinuities. The constraints on the Moho depth

  19. A Nonsmooth Maximum Principle for Optimal Control Problems with State and Mixed Constraints-Convex Case

    OpenAIRE

    Biswas, Md. Haider Ali; de Pinho, Maria do Rosario

    2013-01-01

    Here we derive a nonsmooth maximum principle for optimal control problems with both state and mixed constraints. Crucial to our development is a convexity assumption on the "velocity set". The approach consists of applying known penalization techniques for state constraints together with recent results for mixed constrained problems.

  20. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies the martingale method. More precisely, we construct the non-separable value function by formalizing the optimal constrained terminal wealth to be a (conjectured) contingent claim on the optimal non-constrained terminal wealth. This is relevant by itself, but also opens up the opportunity to derive new solutions...

  1. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
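    The regularizer's central quantity — the mutual information between classification responses and true labels — can be estimated from counts; a plug-in sketch for discrete responses (our illustration of the quantity itself; the paper embeds an entropy-estimation model inside a gradient-based learner):

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Empirical mutual information I(R; Y) in nats between discrete
    classifier responses and true class labels, using plug-in
    (maximum-likelihood) probability estimates from the sample."""
    n = len(responses)
    joint = Counter(zip(responses, labels))
    pr = Counter(responses)
    py = Counter(labels)
    mi = 0.0
    for (r, y), c in joint.items():
        p_ry = c / n
        # log of p(r, y) / (p(r) * p(y)); counts cancel the 1/n factors.
        mi += p_ry * math.log(p_ry * n * n / (pr[r] * py[y]))
    return mi
```

    A classifier whose responses perfectly track the labels attains I(R; Y) = H(Y), while independent responses give zero — the quantity the proposed regularizer pushes upward.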

  2. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  3. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and 2 of these are constrained and correlated.

  4. Constrained school choice : an experimental study

    OpenAIRE

    Calsamiglia, Caterina; Haeringer, Guillaume; Klijn, Flip

    2008-01-01

    The literature on school choice assumes that families can submit a preference list over all the schools they want to be assigned to. However, in many real-life instances families are only allowed to submit a list containing a limited number of schools. Subjects' incentives are drastically affected, as more individuals manipulate their preferences. Including a safety school in the constrained list explains most manipulations. Competitiveness across schools plays an important role. Constraining...

  5. Maximum Matchings via Glauber Dynamics

    CERN Document Server

    Jindal, Anant; Pal, Manjish

    2011-01-01

    In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
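    The sampling ingredient referred to above can be sketched as heat-bath Glauber dynamics on matchings (the monomer-dimer analogue of the hard-core model) with fugacity parameter lambda; a toy illustration of the chain, not the paper's full algorithm:

```python
import random

def glauber_matching(edges, n_steps, lam=2.0, seed=0):
    """Heat-bath Glauber dynamics on matchings with fugacity `lam` per
    matched edge. At each step, pick a uniformly random edge and resample
    its state: remove it with prob. 1/(1+lam) if present, add it with
    prob. lam/(1+lam) if both endpoints are free. Returns the largest
    matching size observed and the final matching."""
    rng = random.Random(seed)
    matching = set()
    matched = set()  # vertices covered by the current matching
    best = 0
    for _ in range(n_steps):
        e = rng.choice(edges)
        u, v = e
        if e in matching:
            if rng.random() < 1.0 / (1.0 + lam):
                matching.remove(e)
                matched -= {u, v}
        elif u not in matched and v not in matched:
            if rng.random() < lam / (1.0 + lam):
                matching.add(e)
                matched |= {u, v}
        best = max(best, len(matching))
    return best, matching
```

    Larger fugacities bias the stationary distribution toward larger matchings, which is the mechanism such MCMC-based matching algorithms exploit.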

  6. The maximum drag reduction asymptote

    Science.gov (United States)

    Choueiri, George H.; Hof, Bjorn

    2015-11-01

    Addition of long chain polymers is one of the most efficient ways to reduce the drag of turbulent flows. Already a very low concentration of polymers can lead to substantial drag reduction, and upon further increase of the concentration the drag reduces until it reaches an empirically found limit, the so-called maximum drag reduction (MDR) asymptote, which is independent of the type of polymer used. Here we carry out a detailed experimental study of the approach to this asymptote for pipe flow. Particular attention is paid to the recently observed state of elasto-inertial turbulence (EIT), which has been reported to occur in polymer solutions at sufficiently high shear. Our results show that upon the approach to MDR, Newtonian turbulence becomes marginalized (hibernation) and eventually completely disappears and is replaced by EIT. In particular, spectra of high Reynolds number MDR flows are compared to flows at high shear rates in small diameter tubes where EIT is found at Re < 100. The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n° [291734].

  8. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    F W Giacobbe

    2003-03-01

    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.

  9. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously.
By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  10. A constrained two-layer compression technique for ECG waves.

    Science.gov (United States)

    Byun, Kyungguen; Song, Eunwoo; Shim, Hwan; Lim, Hyungjoon; Kang, Hong-Goo

    2015-08-01

This paper proposes a constrained two-layer compression technique for electrocardiogram (ECG) waves, whose encoded parameters can be used directly for the diagnosis of arrhythmia. In the first layer, a single ECG beat is represented by one of the registered templates in the codebook. Since the only coding parameter required in this layer is the codebook index of the selected template, its compression ratio (CR) is very high. Note that the distribution of registered templates is also related to the characteristics of ECG waves, so it can be used as a metric to detect various types of arrhythmia. The residual error between the input and the selected template is encoded by wavelet-based transform coding in the second layer. The number of wavelet coefficients is constrained by a pre-defined maximum allowable distortion. The MIT-BIH arrhythmia database is used to evaluate the performance of the proposed algorithm. The proposed algorithm achieves a CR of about 7.18 when the reference value of the percentage root-mean-square difference (PRD) is set to ten. PMID:26737691
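The two-layer idea above can be sketched in a few lines: layer one transmits only the index of the nearest registered template, layer two carries the residual (wavelet-coded in the paper), and reconstruction quality is scored by the PRD. The codebook and beat below are toy data, not MIT-BIH records, and the residual is left uncoded for brevity.

```python
import numpy as np

def prd(x, x_hat):
    """Percentage root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def two_layer_encode(beat, codebook):
    """Layer 1: index of the closest registered template.
    Layer 2: the residual, which the paper then transform-codes with wavelets."""
    errors = [np.sum((beat - t) ** 2) for t in codebook]
    idx = int(np.argmin(errors))
    return idx, beat - codebook[idx]

codebook = [np.zeros(8), np.ones(8)]   # toy templates
beat = 0.9 * np.ones(8)                # toy ECG beat
idx, res = two_layer_encode(beat, codebook)
```

Dropping the residual entirely gives the layer-one-only reconstruction, whose PRD for this toy beat is exactly 100/9 ≈ 11.1.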

  11. Hybrid Biogeography Based Optimization for Constrained Numerical and Engineering Optimization

    Directory of Open Access Journals (Sweden)

    Zengqiang Mi

    2015-01-01

Full Text Available Biogeography based optimization (BBO) is a new competitive population-based algorithm inspired by biogeography. It simulates the migration of species in nature to share information. A new hybrid BBO (HBBO) is presented in this paper for constrained optimization. By reasonably combining the differential evolution (DE) mutation operator with the simulated binary crossover (SBX) of genetic algorithms (GAs), a new mutation operator is proposed to generate promising solutions instead of the random mutation in basic BBO. In addition, DE mutation is still integrated to update one half of the population to further lead the evolution towards the global optimum, and chaotic search is introduced to improve the diversity of the population. HBBO is tested on twelve benchmark functions and four engineering optimization problems. Experimental results demonstrate that HBBO is effective and efficient for constrained optimization; in contrast with other state-of-the-art evolutionary algorithms (EAs), the performance of HBBO is better, or at least comparable, in terms of the quality of the final solutions and computational cost. Furthermore, the influence of the maximum mutation rate is also investigated.
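The two operators being hybridized, DE/rand/1 mutation and simulated binary crossover, are standard and can be sketched as below. The values of F and the SBX distribution index eta are illustrative choices, and this is a sketch of the ingredients, not the authors' exact HBBO.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1 mutation: a random base vector plus a scaled difference of two others."""
    choices = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(choices, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def sbx(p1, p2, eta=2.0):
    """Simulated binary crossover: spread factor beta drawn per-gene from a
    polynomial distribution; the two children straddle the parents."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

pop = rng.normal(size=(10, 4))          # toy population, 10 candidates in 4-D
child1, child2 = sbx(pop[0], pop[1])
mutant = de_rand_1(pop, 0)
```

A useful sanity property of SBX visible in the sketch: the two children always average to the parents' mean.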

  12. Constraining Ceres' interior from its Rotational Motion

    CERN Document Server

    Rambaux, Nicolas; Dehant, Véronique; Kuchynka, Petr

    2011-01-01

Context. Ceres is the most massive body of the asteroid belt and contains about 25 wt.% (weight percent) of water. Understanding its thermal evolution and assessing its current state are major goals of the Dawn Mission. Constraints on internal structure can be inferred from various observations. In particular, detailed knowledge of the rotational motion can help constrain the mass distribution inside the body, which in turn can lead to information on its geophysical history. Aims. We investigate the signature of the interior on the rotational motion of Ceres and discuss possible future measurements performed by the spacecraft Dawn that will help to constrain Ceres' internal structure. Methods. We compute the polar motion, precession-nutation, and length-of-day variations. We estimate the amplitudes of the rigid and non-rigid response for these various motions for models of Ceres interior constrained by recent shape data and surface properties. Results. As a general result, the amplitudes of oscillations in the r...

  13. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki;

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of...... constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  14. Towards weakly constrained double field theory

    Science.gov (United States)

    Lee, Kanghoon

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  15. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  16. Towards Weakly Constrained Double Field Theory

    CERN Document Server

    Lee, Kanghoon

    2015-01-01

We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  17. Constrained optimization of gradient waveforms for generalized diffusion encoding

    Science.gov (United States)

    Sjölund, Jens; Szczepankiewicz, Filip; Nilsson, Markus; Topgaard, Daniel; Westin, Carl-Fredrik; Knutsson, Hans

    2015-12-01

    Diffusion MRI is a useful probe of tissue microstructure. The conventional diffusion encoding sequence, the single pulsed field gradient, has recently been challenged as more general gradient waveforms have been introduced. Out of these, we focus on q-space trajectory imaging, which generalizes the scalar b-value to a tensor valued entity. To take full advantage of its capabilities, it is imperative to respect the constraints imposed by the hardware, while at the same time maximizing the diffusion encoding strength. We provide a tool that achieves this by solving a constrained optimization problem that accommodates constraints on maximum gradient amplitude, slew rate, coil heating and positioning of radio frequency pulses. The method's efficacy and flexibility is demonstrated both experimentally and by comparison with previous work on optimization of isotropic diffusion sequences.
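A minimal sketch of the quantities such an optimizer works with, under simplifying assumptions (a single gradient axis, rectangular discretization, invented hardware limits): the diffusion weighting b of a waveform, and the amplitude and slew-rate feasibility check that the constrained optimization must respect.

```python
import numpy as np

GAMMA = 2.6752e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def b_value(g, dt):
    """b = gamma^2 * sum |q(t)|^2 dt, where q(t) is the running integral of g(t)."""
    q = np.cumsum(g) * dt
    return GAMMA ** 2 * np.sum(q ** 2) * dt

def hardware_feasible(g, dt, g_max, slew_max):
    """Check the peak-amplitude and slew-rate limits on a discretized waveform."""
    slew = np.diff(g) / dt
    return np.max(np.abs(g)) <= g_max and np.max(np.abs(slew)) <= slew_max

dt = 1e-5  # 10 us raster time (illustrative)
g = np.concatenate([np.full(100, 40e-3), np.full(100, -40e-3)])  # bipolar 40 mT/m
```

Note the quadratic scaling of b with gradient amplitude: doubling the waveform quadruples its diffusion weighting, which is why maximizing encoding strength pushes directly against the hardware limits.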

  18. Geometric constrained variational calculus. III: The second variation (Part II)

    Science.gov (United States)

    Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico

    2016-03-01

    The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.

  19. Constrained optimization of gradient waveforms for generalized diffusion encoding.

    Science.gov (United States)

    Sjölund, Jens; Szczepankiewicz, Filip; Nilsson, Markus; Topgaard, Daniel; Westin, Carl-Fredrik; Knutsson, Hans

    2015-12-01

    Diffusion MRI is a useful probe of tissue microstructure. The conventional diffusion encoding sequence, the single pulsed field gradient, has recently been challenged as more general gradient waveforms have been introduced. Out of these, we focus on q-space trajectory imaging, which generalizes the scalar b-value to a tensor valued entity. To take full advantage of its capabilities, it is imperative to respect the constraints imposed by the hardware, while at the same time maximizing the diffusion encoding strength. We provide a tool that achieves this by solving a constrained optimization problem that accommodates constraints on maximum gradient amplitude, slew rate, coil heating and positioning of radio frequency pulses. The method's efficacy and flexibility is demonstrated both experimentally and by comparison with previous work on optimization of isotropic diffusion sequences. PMID:26583528

  20. The Distance Field Model and Distance Constrained MAP Adaptation Algorithm

    Institute of Scientific and Technical Information of China (English)

    YUPeng; WANGZuoying

    2003-01-01

Spatial structure information, i.e., the relative position information of phonetic states in the feature space, has yet to be carefully researched. In this paper, a new model named "Distance Field" is proposed to describe the spatial structure information. Based on this model, a modified MAP adaptation algorithm named distance constrained maximum a posteriori (DCMAP) is introduced. The distance field model gives a large penalty when the spatial structure is destroyed. As a result, DCMAP preserves the spatial structure information in the adaptation process. Experiments show the Distance Field Model improves the performance of MAP adaptation. Further results show DCMAP has strong cross-state estimation ability, which is used to train a well-performed speaker-dependent model by data from only part of pho-

  1. Constraining Initial Vacuum by CMB Data

    CERN Document Server

    Chandra, Debabrata

    2016-01-01

    We demonstrate how one can possibly constrain the initial vacuum using CMB data. Using a generic vacuum without any particular choice a priori, thereby keeping both the Bogolyubov coefficients in the analysis, we compute observable parameters from two- and three-point correlation functions. We are thus left with constraining four model parameters from the two complex Bogolyubov coefficients. We also demonstrate a method of finding out the constraint relations between the Bogolyubov coefficients using the theoretical normalization condition and observational data of power spectrum and bispectrum from CMB. We also discuss the possible pros and cons of the analysis.

  2. Constrained instanton and black hole creation

    Institute of Scientific and Technical Information of China (English)

    WU; Zhongchao; XU; Donghui

    2004-01-01

    A gravitational instanton is considered as the seed for the creation of a universe. However, there exist too few instantons. To include many interesting phenomena in the framework of quantum cosmology, the concept of constrained gravitational instanton is inevitable. In this paper we show how a primordial black hole is created from a constrained instanton. The quantum creation of a generic black hole in the closed or open background is completely resolved. The relation of the creation scenario with gravitational thermodynamics and topology is discussed.

  3. Library Support for Resource Constrained Accelerators

    DEFF Research Database (Denmark)

    Brock-Nannestad, Laust; Karlsson, Sven

    2014-01-01

Accelerators, and other resource constrained systems, are increasingly being used in computer systems. Accelerators provide power-efficient performance and often provide a shared memory model. However, it is a challenge to map feature-rich APIs, such as OpenMP, to resource constrained systems....... In this paper, we present a lightweight system where an accelerator can remotely execute library functions on a host processor. The implementation takes up 750 bytes but can replace arbitrary library calls leading to significant savings in memory footprint. We evaluate with a set of SPLASH-2 applications...

  4. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.)
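As a worked instance of the constrained entropy maximization described above: fixing only the mean of a discrete distribution and maximizing entropy yields the Gibbs form p_i ∝ exp(-λE_i), with the multiplier λ determined numerically from the constraint. The four-level toy spectrum below is illustrative, not the fission or R-matrix case from the abstract.

```python
import numpy as np

def gibbs(energies, lam):
    """The Gibbs form that entropy maximization under a mean constraint produces."""
    w = np.exp(-lam * energies)
    return w / w.sum()

def maxent_distribution(energies, target_mean, lo=-50.0, hi=50.0):
    """Bisect on the Lagrange multiplier lambda: the mean energy of the Gibbs
    distribution decreases monotonically as lambda increases."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gibbs(energies, mid) @ energies > target_mean:
            lo = mid
        else:
            hi = mid
    return gibbs(energies, 0.5 * (lo + hi))

energies = np.array([0.0, 1.0, 2.0, 3.0])
p_uniform = maxent_distribution(energies, target_mean=1.5)  # symmetric: lambda = 0
p_cold = maxent_distribution(energies, target_mean=0.5)     # lambda > 0, low levels favoured
```

When the target mean sits at the centre of the spectrum, the multiplier vanishes and the maximum-entropy answer is the uniform distribution, as expected.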

  5. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
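The Toeplitz/Levinson machinery referred to here is the standard Levinson-Durbin recursion; a sketch (not the authors' code) that solves the autocorrelation normal equations for the prediction-error filter and exposes the reflection coefficient whose magnitude staying below 1 is the stability condition mentioned in the abstract:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error filter a,
    given the autocorrelation sequence r[0..order]. Returns (a, final error)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err                     # reflection coefficient, |k| < 1 if stable
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        err *= (1.0 - k * k)               # prediction error shrinks each order
    return a, err

# AR(1) toy case: r[k] = 0.5**k, so the order-2 filter should be [1, -0.5, 0]
r = np.array([1.0, 0.5, 0.25])
a, err = levinson_durbin(r, 2)
```

For this AR(1) autocorrelation the second reflection coefficient is exactly zero, so increasing the model order past 1 leaves the prediction error unchanged at 0.75.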

  6. General Relativity as a constrained Gauge Theory

    OpenAIRE

    Cianci, R.; Vignolo, S.; Bruno, D

    2006-01-01

The formulation of General Relativity presented in math-ph/0506077 and the Hamiltonian formulation of Gauge theories described in math-ph/0507001 are made to interact. The resulting scheme allows one to see General Relativity as a constrained Gauge theory.

  7. Neutron Powder Diffraction and Constrained Refinement

    DEFF Research Database (Denmark)

    Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.

    1977-01-01

    The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement...

  8. PRICING AND HEDGING OPTION UNDER PORTFOLIO CONSTRAINED

    Institute of Scientific and Technical Information of China (English)

    魏刚; 陈世平

    2001-01-01

    The authors employ convex analysis and stochastic control approach to study the question of hedging contingent claims with portfolio constrained to take values in a given closed, convex subset of RK, and extend the results of Gianmario Tessitore and Jerzy Zabczyk[6] on pricing options in multiasset and multinominal model.

  9. Constrained Optimization in Simulation : A Novel Approach

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; van Beers, W.C.M.; van Nieuwenhuyse, I.

    2008-01-01

This paper presents a novel heuristic for constrained optimization of random computer simulation models, in which one of the simulation outputs is selected as the objective to be minimized while the other outputs need to satisfy prespecified target values. Besides the simulation outputs, the simulati

  10. Nonlinear wave equations and constrained harmonic motion

    OpenAIRE

    Deift, Percy; Lund, Fernando; Trubowitz, Eugene

    1980-01-01

    The study of the Korteweg-deVries, nonlinear Schrödinger, Sine-Gordon, and Toda lattice equations is simply the study of constrained oscillators. This is likely to be true for any nonlinear wave equation associated with a second-order linear problem.

  11. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (the voltage at maximum power, the current at maximum power, and the maximum power itself) is plotted as a function of the time of day.
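The calculation described, maximizing P(V) = V·I(V), can be reproduced with a hypothetical single-diode I-V curve. All parameter values below are invented for illustration, and the maximum (where dP/dV = 0) is located numerically on a dense grid rather than by symbolic differentiation.

```python
import numpy as np

def panel_current(v, i_sc=5.0, i_0=1e-9, v_t=1.3):
    """Hypothetical single-diode model: I(V) = I_sc - I_0 * (exp(V/V_t) - 1)."""
    return i_sc - i_0 * (np.exp(v / v_t) - 1.0)

def max_power_point(v_max=29.0, n=200001):
    """Locate the maximum of P(V) = V * I(V); at that voltage dP/dV = 0."""
    v = np.linspace(0.0, v_max, n)
    p = v * panel_current(v)
    j = int(np.argmax(p))
    return v[j], panel_current(v[j]), p[j]

v_mp, i_mp, p_mp = max_power_point()
```

For these made-up parameters the maximum-power voltage lands a little below the open-circuit voltage, the familiar "knee" of a panel's I-V curve.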

  12. Double-sided fuzzy chance-constrained linear fractional programming approach for water resources management

    Science.gov (United States)

    Cui, Liang; Li, Yongping; Huang, Guohe

    2016-06-01

A double-sided fuzzy chance-constrained fractional programming (DFCFP) method is developed for planning water resources management under uncertainty. In DFCFP the system's marginal benefit per unit of input under uncertainty can also be balanced. The DFCFP is applied to a real case of water resources management in the Zhangweinan River Basin, China. The results show that the amounts of water allocated to the two cities (Anyang and Handan) would differ under the minimum and maximum reliability degrees. It was found that the marginal benefit of the system solved by DFCFP is greater than the system benefit under both the minimum and maximum reliability degrees, which not only improves overall economic efficiency but also remedies water deficits. Compared with the traditional double-sided fuzzy chance-constrained programming (DFCP) method, the solutions obtained from DFCFP are significantly higher, and DFCFP has advantages in water conservation.

  13. Positivity-Preserving Finite Difference WENO Schemes with Constrained Transport for Ideal Magnetohydrodynamic Equations

    OpenAIRE

    Christlieb, Andrew J.; Liu, Yuan; Tang, Qi; Xu, Zhengfu

    2014-01-01

    In this paper, we utilize the maximum-principle-preserving flux limiting technique, originally designed for high order weighted essentially non-oscillatory (WENO) methods for scalar hyperbolic conservation laws, to develop a class of high order positivity-preserving finite difference WENO methods for the ideal magnetohydrodynamic (MHD) equations. Our schemes, under the constrained transport (CT) framework, can achieve high order accuracy, a discrete divergence-free condition and positivity of...
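At its core, the maximum-principle-preserving flux limiting the authors build on is a linear rescaling of nodal values toward the cell average (the Zhang-Shu scaling limiter). A minimal sketch of that one step, assuming the cell average itself is positive, as it is for an admissible density; this is illustrative, not the full WENO/constrained-transport scheme.

```python
import numpy as np

def positivity_scale(u_nodes, eps=1e-13):
    """Shrink nodal values toward the cell mean just enough that the minimum
    stays >= eps. The affine rescaling preserves the cell mean exactly."""
    u_mean = float(np.mean(u_nodes))
    m = float(np.min(u_nodes))
    if m >= eps:
        return u_nodes                      # already positive, leave untouched
    theta = min(1.0, (u_mean - eps) / (u_mean - m))
    return theta * (u_nodes - u_mean) + u_mean

u = np.array([-0.5, 1.0, 2.0])              # one node has overshot below zero
u_lim = positivity_scale(u)
```

Because only the deviation from the mean is scaled, the limiter removes the negative overshoot without touching the conserved cell average, which is what keeps the scheme conservative and (as the paper shows) high-order.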

  14. Estimation of Maximum Wind Speeds in Tornadoes

    OpenAIRE

    Dergarabedian, Paul; Fendell, Francis

    2011-01-01

    A method is proposed for rapidly estimating the maximum value of the azimuthal velocity component (maximum swirling speed) in tornadoes and waterspouts. The method requires knowledge of the cloud-deck height and a photograph of the funnel cloud—data usually available. Calculations based on this data confirm that the lower maximum wind speeds suggested by recent workers (roughly one-quarter of the sonic speed for sea-level air) are more plausible for tornadoes than the sonic speed sometimes ci...

  15. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  16. Maximum mass, moment of inertia and compactness of relativistic stars

    Science.gov (United States)

    Breu, Cosima; Rezzolla, Luciano

    2016-06-01

A number of recent works have highlighted that it is possible to express the properties of general-relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities. These functions are normally referred to as `universal relations' and have been found to apply, within limits, both to static or stationary isolated stars, as well as to fully dynamical and merging binary systems. Further extending the idea that universal relations can be valid also away from stability, we show that a universal relation is exhibited also by equilibrium solutions that are not stable. In particular, the mass of rotating configurations on the turning-point line shows a universal behaviour when expressed in terms of the normalized Keplerian angular momentum. In turn, this allows us to compute the maximum mass allowed by uniform rotation, M_max, simply in terms of the maximum mass of the non-rotating configuration, M_TOV, finding that M_max ≃ (1.203 ± 0.022) M_TOV for all the equations of state we have considered. We further introduce an improvement to previously published universal relations by Lattimer & Schutz between the dimensionless moment of inertia and the stellar compactness, which could provide an accurate tool to constrain the equation of state of nuclear matter when measurements of the moment of inertia become available.
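A direct numerical use of the quoted relation, as a sketch: masses in solar units, coefficient and uncertainty taken from the abstract, and the 2.0 M_sun input chosen purely for illustration.

```python
def max_rotating_mass(m_tov, coeff=1.203, d_coeff=0.022):
    """Universal relation M_max ≈ (1.203 ± 0.022) * M_TOV: returns the central
    value and the uncertainty band, both in the units of m_tov."""
    return coeff * m_tov, d_coeff * m_tov

# a star with a 2.0 M_sun non-rotating maximum mass
m_max, dm = max_rotating_mass(2.0)
```

So uniform rotation raises the supportable mass by roughly 20%, with the equation-of-state dependence absorbed into the quoted ±0.022 spread.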

  17. Cosmogenic photons strongly constrain UHECR source models

    CERN Document Server

    van Vliet, Arjen

    2016-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  18. Global marine primary production constrains fisheries catches.

    Science.gov (United States)

    Chassot, Emmanuel; Bonhommeau, Sylvain; Dulvy, Nicholas K; Mélin, Frédéric; Watson, Reg; Gascuel, Didier; Le Pape, Olivier

    2010-04-01

Primary production must constrain the amount of fish and invertebrates available to expanding fisheries; however, the degree of limitation has only been demonstrated at regional scales to date. Here we show that phytoplanktonic primary production, estimated from an ocean-colour satellite (SeaWiFS), is related to global fisheries catches at the scale of Large Marine Ecosystems, while accounting for temperature and ecological factors such as ecosystem size and type, species richness, animal body size, and the degree and nature of fisheries exploitation. Indeed we show that global fisheries catches since 1950 have been increasingly constrained by the amount of primary production. The primary production appropriated by current global fisheries is 17-112% higher than that appropriated by sustainable fisheries. Global primary production appears to be declining, in part due to climate variability and change, with consequences for near-future fisheries catches.

  19. Doubly Constrained Robust Blind Beamforming Algorithm

    Directory of Open Access Journals (Sweden)

    Xin Song

    2013-01-01

Full Text Available We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, which is based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and the Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields higher signal capture performance, achieves greater array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.
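The baseline the paper robustifies, the least-squares CMA, alternates two steps: hard-limit the array output to unit modulus, then refit the weights by least squares. A bare-bones sketch on synthetic data; the array geometry, source and noise level are invented, and the paper's Bayesian/worst-case machinery is not included.

```python
import numpy as np

rng = np.random.default_rng(1)

def lscma(X, w0, iters=30):
    """Least-squares constant modulus algorithm: y = w^H X, project y onto the
    unit circle, then solve the least-squares problem for w; repeat."""
    w = w0.astype(complex)
    R = X @ X.conj().T                     # sample covariance (unnormalized)
    for _ in range(iters):
        y = w.conj() @ X
        d = y / np.abs(y)                  # hard-limited, unit-modulus reference
        w = np.linalg.solve(R, X @ d.conj())
    return w

# synthetic scene: one unit-modulus (QPSK-like) source plus noise on 4 sensors
M, N = 4, 400
steer = np.exp(1j * np.pi * np.arange(M) * 0.3)     # hypothetical steering vector
s = np.exp(1j * rng.choice([0.25, 0.75, 1.25, 1.75], N) * np.pi)
X = np.outer(steer, s) + 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
w = lscma(X, np.ones(M))
y = w.conj() @ X
```

After convergence the beamformer output modulus is nearly constant, which is exactly the property the constant-modulus criterion enforces without needing a training sequence.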

  20. Efficient caching for constrained skyline queries

    DEFF Research Database (Denmark)

    Mortensen, Michael Lind; Chester, Sean; Assent, Ira;

    2015-01-01

    Constrained skyline queries retrieve all points that optimize some user’s preferences subject to orthogonal range constraints, but at significant computational cost. This paper is the first to propose caching to improve constrained skyline query response time. Because arbitrary range constraints...... are unlikely to match a cached query exactly, our proposed method identifies and exploits similar cached queries to reduce the computational overhead of subsequent ones. We consider interactive users posing a string of similar queries and show how these can be classified into four cases based on how...... they overlap cached queries. For each we present a specialized solution. For the general case of independent users, we introduce the Missing Points Region (MPR), that minimizes disk reads, and an approximation of the MPR. An extensive experimental evaluation reveals that the querying for an (approximate) MPR...
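The query itself is compact enough to state in code: keep the points inside the user's range constraints that no other in-range point dominates (minimization in every dimension, as is conventional). This brute-force version only pins down the definition; the caching scheme above exists precisely because this computation is expensive at scale.

```python
def constrained_skyline(points, lo, hi):
    """Brute-force constrained skyline: range-filter, then dominance-filter."""
    inside = [p for p in points
              if all(l <= x <= h for x, l, h in zip(p, lo, hi))]

    def dominates(a, b):
        # a dominates b: no worse in any dimension, strictly better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    return [p for p in inside if not any(dominates(q, p) for q in inside if q != p)]

pts = [(1, 5), (2, 2), (5, 1), (4, 4)]
sky = constrained_skyline(pts, lo=(0, 0), hi=(10, 10))      # (4,4) is dominated
narrowed = constrained_skyline(pts, lo=(2, 0), hi=(10, 10)) # (1,5) falls out of range
```

Shifting the range constraint changes both which points are eligible and which dominate, which is why a cached result for one range rarely answers a neighbouring query exactly.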

  1. Constraining the braneworld with gravitational wave observations.

    Science.gov (United States)

    McWilliams, Sean T

    2010-04-01

Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the ≈1 μm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 μm. PMID:20481929

  2. Constraining RRc candidates using SDSS colours

    CERN Document Server

    Bányai, E; Molnár, L; Dobos, L; Szabó, R

    2016-01-01

    The light variations of first-overtone RR Lyrae stars and contact eclipsing binaries can be difficult to distinguish. The Catalina Periodic Variable Star catalog contains several misclassified objects, despite the classification efforts by Drake et al. (2014). They used metallicity and surface gravity derived from spectroscopic data (from the SDSS database) to rule out binaries. Our aim is to further constrain the catalog using SDSS colours to estimate physical parameters for stars that did not have spectroscopic data.

  3. NEW SIMULATED ANNEALING ALGORITHMS FOR CONSTRAINED OPTIMIZATION

    OpenAIRE

    LINET ÖZDAMAR; CHANDRA SEKHAR PEDAMALLU

    2010-01-01

    We propose a Population based dual-sequence Non-Penalty Annealing algorithm (PNPA) for solving the general nonlinear constrained optimization problem. The PNPA maintains a population of solutions that are intermixed by crossover to supply a new starting solution for simulated annealing throughout the search. Every time the search gets stuck at a local optimum, this crossover procedure is triggered and simulated annealing search re-starts from a new subspace. In both the crossover and simulate...
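The restart-by-crossover idea described above can be sketched in a few lines. This is a toy illustration of the general pattern, not the authors' PNPA: the objective, cooling schedule, restart threshold, and all names are assumptions.

```python
import math
import random

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def crossover(a, b):
    """Uniform crossover: mix two stored solutions into a new start point."""
    return [random.choice(pair) for pair in zip(a, b)]

def pnpa_like_sa(f, dim=2, iters=3000, seed=1):
    """Simulated annealing that re-starts from a crossover of previously
    accepted solutions whenever the search stalls."""
    random.seed(seed)
    x = [random.uniform(-5, 5) for _ in range(dim)]
    best, best_val = x[:], f(x)
    population = [x[:]]            # accepted solutions kept for crossover
    stuck = 0
    for k in range(iters):
        t = max(1e-3, 0.995 ** k)  # geometric cooling
        cand = [v + random.gauss(0, t) for v in x]
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            population.append(x[:])
            if f(x) < best_val:
                best, best_val = x[:], f(x)
                stuck = 0
                continue
        stuck += 1
        if stuck > 200 and len(population) >= 2:
            # stalled: restart from a recombination of stored solutions
            x = crossover(random.choice(population), random.choice(population))
            stuck = 0
    return best, best_val
```

The restart mirrors the trigger described in the abstract: every time no improvement has been seen for a while, a crossover of accepted solutions supplies a new starting point for the annealing search.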

  4. NTRU software implementation for constrained devices

    OpenAIRE

    Monteverde Giacomino, Mariano

    2008-01-01

    The NTRUEncrypt is a public-key cryptosystem based on the shortest vector problem. Its main characteristics are the low memory and computational requirements while providing a high security level. This document presents an implementation and optimization of the NTRU public-key cryptosystem for constrained devices. Specifically, the NTRU cryptosystem has been implemented on the ATMega128 and the ATMega163 microcontrollers. This has entailed a major effort in order to reduce t...

  5. Performance Characteristics of Active Constrained Layer Damping

    OpenAIRE

    A. Baz; J. Ro

    1995-01-01

    Theoretical and experimental performance characteristics of the new class of actively controlled constrained layer damping (ACLD) are presented. The ACLD consists of a viscoelastic damping layer sandwiched between two layers of piezoelectric sensor and actuator. The composite ACLD when bonded to a vibrating structure acts as a “smart” treatment whose shear deformation can be controlled and tuned to the structural response in order to enhance the energy dissipation mechanism and improve the vi...

  6. Murder and Self-constrained Modernity

    DEFF Research Database (Denmark)

    Hansen, Kim Toft

    with metaphysical assumptions. These trends are as well in evidence in the writings of the Swedish author Henning Mankell where he opens up discussions of inapproachable violence – a certain type of violence that he designates ‘the Swedish uneasiness’ – especially the brief short story “Sprickan” (“The Fracture......-constrained. The meeting ground between modernity and religion is, then, a metaphysics of uncertainty....

  7. Constrained simulation of the Bullet Cluster

    International Nuclear Information System (INIS)

    In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full two-dimensional (2D) observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with Navarro-Frenk-White dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocities. The study gives insight into the astrophysical processes at play during a galaxy cluster merger, and constrains the strength and coherence length of the magnetic fields. The techniques developed here to create realistic, stable, triaxial clusters, and to utilize the totality of the 2D image data, will be applicable to future simulation studies of other merging clusters. This approach of constrained simulation, when applied to well-measured systems, should be a powerful complement to present tools for understanding X-ray clusters and their magnetic fields, and the processes governing their formation.
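The fitting strategy described above, minimizing a chi-squared figure of merit over full 2-D maps rather than over a few centroid positions, reduces per pixel to a standard weighted sum of squared residuals. A generic sketch, not the authors' pipeline; the nested-list map layout and function name are assumptions:

```python
def chi_squared(observed, model, sigma):
    """Chi-squared figure of merit between two 2-D maps.

    observed, model, sigma are equally sized nested lists (rows of pixels);
    sigma holds the per-pixel measurement uncertainty.
    """
    chi2 = 0.0
    for obs_row, mod_row, sig_row in zip(observed, model, sigma):
        for o, m, s in zip(obs_row, mod_row, sig_row):
            chi2 += ((o - m) / s) ** 2  # squared residual in units of sigma
    return chi2
```

Minimizing this quantity over the simulation's initial conditions is what ties the model to the totality of the 2-D image data.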

  8. Cosmicflows Constrained Local UniversE Simulations

    Science.gov (United States)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.

  9. Cosmicflows Constrained Local UniversE Simulations

    CERN Document Server

    Sorce, Jenny G; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M; Steinmetz, Matthias; Tully, R Brent; Pomarede, Daniel; Carlesi, Edoardo

    2015-01-01

    This paper combines observational datasets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighborhood. With measurements of radial peculiar velocities in the Local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 Mpc/h scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observatio...

  10. An English language interface for constrained domains

    Science.gov (United States)

    Page, Brenda J.

    1989-01-01

    The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.

  11. Constrained Multi-View Video Face Clustering.

    Science.gov (United States)

    Cao, Xiaochun; Zhang, Changqing; Zhou, Chengju; Fu, Huazhu; Foroosh, Hassan

    2015-11-01

    In this paper, we focus on face clustering in videos. To promote the performance of video clustering by multiple intrinsic cues, i.e., pairwise constraints and multiple views, we propose a constrained multi-view video face clustering method under a unified graph-based model. First, unlike most existing video face clustering methods, which only employ these constraints in the clustering step, we enforce the pairwise constraints throughout the whole video face clustering framework, both in sparse subspace representation and in spectral clustering. In the constrained sparse subspace representation, the sparse representation is forced to explore unknown relationships. In the constrained spectral clustering, the constraints are used to guide the learning of more reasonable new representations. Second, our method considers both the video face pairwise constraints and the multi-view consistency simultaneously. In particular, the graph regularization enforces the pairwise constraints to be respected, and the co-regularization penalizes the disagreement among the graphs of the multiple views. Experiments on three real-world video benchmark data sets demonstrate the significant improvements of our method over state-of-the-art methods. PMID:26259245

  12. 20 CFR 229.48 - Family maximum.

    Science.gov (United States)

    2010-04-01

    ... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...

  13. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu;

    2009-01-01

    boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  14. The maximum rotation of a galactic disc

    NARCIS (Netherlands)

    Bottema, R

    1997-01-01

    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously

  15. 13 CFR 130.440 - Maximum grant.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum grant. 130.440 Section 130.440 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS DEVELOPMENT CENTERS § 130.440 Maximum grant. No recipient shall receive an SBDC grant exceeding the greater of the minimum statutory amount, or its pro rata share of...

  16. Duality of Maximum Entropy and Minimum Divergence

    Directory of Open Access Journals (Sweden)

    Shinto Eguchi

    2014-06-01

    Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.

  17. Comparison of selection schemes for evolutionary constrained optimization

    NARCIS (Netherlands)

    Kemenade, C.H.M. van

    1996-01-01

    Evolutionary algorithms simulate the process of evolution in order to evolve solutions to optimization problems. An interesting domain of application is to solve numerical constrained optimization problems. We introduce a simple constrained optimization problem with scalable dimension, adjustable co

  18. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes. Numerical results for the capacities are presented.

  19. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint polymer constrained prosthesis. 888.3780 Section 888.3780 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  20. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint polymer constrained prosthesis. 888... constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device intended... generic type of device includes prostheses that consist of a single flexible across-the-joint...

  1. The Maximum Likelihood Threshold of a Graph

    OpenAIRE

    Gross, Elizabeth; Sullivant, Seth

    2014-01-01

    The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $(n-1)$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...

  2. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  3. Can Neutron stars constrain Dark Matter?

    DEFF Research Database (Denmark)

    Kouvaris, Christoforos; Tinyakov, Peter

    2010-01-01

    We argue that observations of old neutron stars can impose constraints on dark matter candidates even with very small elastic or inelastic cross section, and self-annihilation cross section. We find that old neutron stars close to the galactic center or in globular clusters can maintain a surface temperature that could in principle be detected. Due to their compactness, neutron stars can accrete WIMPs efficiently even if the WIMP-to-nucleon cross section obeys the current limits from direct dark matter searches, and therefore they could constrain a wide range of dark matter candidates.

  4. Constraining Milky Way mass with Hypervelocity Stars

    CERN Document Server

    Fragione, Giacomo

    2016-01-01

    We show that hypervelocity stars (HVSs) ejected from the center of the Milky Way galaxy can be used to constrain the mass of its halo. The asymmetry in the radial velocity distribution of halo stars due to escaping HVSs depends on the halo potential (escape speed) as long as the round trip orbital time is shorter than the stellar lifetime. Adopting a characteristic HVS travel time of $300$ Myr, which corresponds to the average mass of main sequence HVSs ($3.2$ M$_{\\odot}$), we find that current data favors a mass for the Milky Way in the range $(1.2$-$1.7)\\times 10^{12} \\mathrm{M}_\\odot$.

  5. Incomplete Dirac reduction of constrained Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Chandre, C., E-mail: chandre@cpt.univ-mrs.fr

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.

  6. QCD strings as constrained grassmannian sigma model

    CERN Document Server

    Viswanathan, K S; Viswanathan, K S; Parthasarathy, R

    1995-01-01

    We present calculations for the effective action of the string world sheet in R^3 and R^4, utilizing its correspondence with the constrained Grassmannian sigma model. Minimal surfaces describe the dynamics of open strings while harmonic surfaces describe that of closed strings. The one-loop effective action for these is calculated with instanton and anti-instanton background, representing N-string interactions at the tree level. The effective action is found to be the partition function of a classical modified Coulomb gas in the confining phase, with a dynamically generated mass gap.

  7. Incomplete Dirac reduction of constrained Hamiltonian systems

    International Nuclear Information System (INIS)

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified

  8. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number of geographically distributed resources connected through an optical network work together for solving large problems. A number of heuristics are proposed along with an exact solution approach based on Dantzig-Wolfe decomposition. The latter has some performance difficulties while the heuristics solve all instances...

  9. Constrained inflaton due to a complex scalar

    Energy Technology Data Exchange (ETDEWEB)

    Budhi, Romy H. S. [Physics Department, Gadjah Mada University,Yogyakarta 55281 (Indonesia); Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Kashiwase, Shoichi; Suematsu, Daijiro [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan)

    2015-09-14

    We reexamine inflation due to a constrained inflaton in the model of a complex scalar. The inflaton evolves along a spiral-like valley of a special scalar potential in the scalar field space, just like single-field inflation. A sub-Planckian inflaton can induce sufficient e-foldings because of a long slow-roll path. In a special limit, the scalar spectral index and the tensor-to-scalar ratio have expressions equivalent to those of inflation with a monomial potential φ^n. Favorable values for them can be obtained by varying parameters in the potential. This model can be embedded in a certain radiative neutrino mass model.

  10. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them, all brackets of the dynamical variables of the system can be deduced in a straightforward way.
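As a worked illustration of this approach (a standard textbook example, not taken from the paper), take the unit harmonic oscillator with $H = \tfrac{1}{2}(p^2 + x^2)$:

```latex
% Exact solution and its constants of integration
x(t) = A\cos t + B\sin t, \qquad
p(t) = \dot{x}(t) = -A\sin t + B\cos t
\;\Longrightarrow\;
A = x\cos t - p\sin t, \qquad B = x\sin t + p\cos t.

% Bracket of the constants, computed from \{x, p\} = 1:
\{A, B\} = \frac{\partial A}{\partial x}\frac{\partial B}{\partial p}
         - \frac{\partial A}{\partial p}\frac{\partial B}{\partial x}
         = \cos^2 t + \sin^2 t = 1.
```

Since $\{A, B\} = 1$ for all $t$, the brackets of the dynamical variables $x$ and $p$ can be recovered from the brackets of the constants of integration, which is the content of the approach.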

  11. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  12. Lepton Flavour Violation in the Constrained MSSM with Constrained Sequential Dominance

    CERN Document Server

    Antusch, Stefan

    2008-01-01

    We consider charged Lepton Flavour Violation (LFV) in the Constrained Minimal Supersymmetric Standard Model, extended to include the see-saw mechanism with Constrained Sequential Dominance (CSD), where CSD provides a natural see-saw explanation of tri-bimaximal neutrino mixing. When charged lepton corrections to tri-bimaximal neutrino mixing are included, we discover characteristic correlations among the LFV branching ratios, depending on the mass ordering of the right-handed neutrinos, with a pronounced dependence on the leptonic mixing angle $\\theta_{13}$ (and in some cases also on the Dirac CP phase $\\delta$).

  13. The maximum entropy technique. System's statistical description

    CERN Document Server

    Belashev, B Z

    2002-01-01

    The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the connection conditions. MENT may be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
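For a concrete instance of the technique: on a finite support with a single mean constraint, the maximum-entropy distribution is the Gibbs form p_i ∝ exp(β·x_i), with the multiplier β fixed by the constraint. A minimal sketch (the support, constraint, and names are assumptions, not from the record):

```python
import math

def maxent_distribution(support, target_mean, tol=1e-10):
    """Maximum-entropy distribution on a finite support subject to a fixed
    mean: p_i ∝ exp(beta * x_i), with beta found by bisection (the mapping
    beta -> mean is strictly increasing)."""
    def mean_for(beta):
        w = [math.exp(beta * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(beta * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]
```

With the target mean equal to the unconstrained average, β = 0 and the result is the uniform distribution, the least-informative choice consistent with the constraint.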

  14. Probabilistic logic programming under maximum entropy

    OpenAIRE

    Lukasiewicz, Thomas; Kern-Isberner, Gabriele

    1999-01-01

    In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming characterization for the problem of deciding whether a probabilistic logic program is satisfiable. Finally, and as a main result of this paper, we introduce an efficient technique for approximative p...

  15. Remarks on the maximum correlation coefficient

    OpenAIRE

    Dembo, Amir; Kagan, Abram; Shepp, Lawrence A.

    2001-01-01

    The maximum correlation coefficient between partial sums of independent and identically distributed random variables with finite second moment equals the classical (Pearson) correlation coefficient between the sums, and thus does not depend on the distribution of the random variables. This result is proved, and relations between the linearity of regression of each of two random variables on the other and the maximum correlation coefficient are discussed.

  16. MAXIMUM GENUS, INDEPENDENCE NUMBER AND GIRTH

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    It is known (see, for example, [2]) that the maximum genus of a graph is mainly determined by the Betti deficiency of the graph. In this paper, the authors establish an upper bound on the Betti deficiency in terms of the independence number as well as the girth of a graph, and thus use the formulation in [2] to translate this result into a lower bound on the maximum genus. Meanwhile it is shown that both of the bounds are best possible.

  17. Which quantile is the most informative? Maximum likelihood, maximum entropy and quantile regression

    OpenAIRE

    Bera, A. K.; Galvao Jr, A. F.; Montes-Rojas, G.; Park, S. Y.

    2010-01-01

    This paper studies the connections among quantile regression, the asymmetric Laplace distribution, maximum likelihood and maximum entropy. We show that the maximum likelihood problem is equivalent to the solution of a maximum entropy problem where we impose moment constraints given by the joint consideration of the mean and median. Using the resulting score functions we propose an estimator based on the joint estimating equations. This approach delivers estimates for the slope parameters toge...
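The equivalence sketched in this abstract rests on a classical fact: maximizing the asymmetric Laplace likelihood in its location parameter is the same as minimizing the summed Koenker-Bassett check loss, whose minimizer is the sample τ-quantile. A minimal sketch of that fact (function names are assumptions; the minimizer is attained at a data point, so it suffices to search over them):

```python
def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def quantile_by_check_loss(data, tau):
    """The tau-th sample quantile minimizes the summed check loss over
    location candidates; search over the data points themselves."""
    return min(data, key=lambda c: sum(check_loss(x - c, tau) for x in data))
```

For τ = 0.5 the check loss reduces to half the absolute loss, so the minimizer is the sample median, matching the joint mean/median moment view taken in the paper.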

  18. SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH

    Directory of Open Access Journals (Sweden)

    Pandya A M

    2011-04-01

    Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
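The demarking-point rule reported in this record can be written directly as a small classifier. The cutoff values are copied from the abstract; the function names and the "indeterminate" label for the overlap zone are assumptions:

```python
# Demarking points from the abstract above, by side (lengths in the
# same units as the reported means).
DP = {
    "right": {"male_above": 476.70, "female_below": 379.99},
    "left":  {"male_above": 484.49, "female_below": 385.73},
}

def classify_femur(max_length, side):
    """Sex determination by demarking-point analysis: only values beyond
    the demarking points are assigned a sex; the overlap zone between
    them is left indeterminate."""
    dp = DP[side]
    if max_length > dp["male_above"]:
        return "male"
    if max_length < dp["female_below"]:
        return "female"
    return "indeterminate"
```

The low identification percentages quoted in the abstract reflect exactly this design: most specimens fall in the indeterminate overlap zone between the two demarking points.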

  19. The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms

    Institute of Scientific and Technical Information of China (English)

    HE Zhong-qiu; LI Dao-ben

    2003-01-01

    This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1~3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are deduced subject to a new constraint condition. Both are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms. Simulations compare the performance of the four algorithms.
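The two unconstrained algorithms named in this record are standard; on a quadratic cost f(x) = 0.5 x^T A x - b^T x with symmetric positive definite A, they can be sketched as follows (a generic illustration, not the paper's transform-domain equalizers):

```python
def matvec(A, x):
    """Matrix-vector product for a matrix stored as a list of rows."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def steepest_descent(A, b, x, iters=200):
    """Minimize 0.5 x^T A x - b^T x by exact line search along -gradient."""
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # residual = -grad
        rr = dot(r, r)
        if rr < 1e-20:
            break
        alpha = rr / dot(r, matvec(A, r))                 # exact step size
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x

def conjugate_gradient(A, b, x):
    """CG minimizes the same quadratic in at most n A-conjugate steps."""
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
    p = r[:]
    for _ in range(len(b)):
        rr = dot(r, r)
        if rr < 1e-20:
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r, r) / rr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x
```

The difference the paper exploits is visible even here: SD needs many gradient steps, while CG terminates (in exact arithmetic) after n search directions.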

  20. Constraining dark matter through 21-cm observations

    Science.gov (United States)

    Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.

    2007-05-01

    Beyond the reionization epoch, cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and future facilities as Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.

  1. Constraining the Braking Indices of Magnetars

    CERN Document Server

    Gao, Z F; Wang, N; Yuan, J P; Peng, Q H; Du, Y J

    2015-01-01

    Due to the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index $n$ of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of $n$ of nine magnetars with SNRs, and find that they cluster in a range of $1\sim 41$. Six magnetars have smaller braking indices of $n < 3$; the larger braking indices of $n > 3$ for the other three magnetars are attributed to the decay of the external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with $n < 3$ within the updated magneto-thermal evolution models. We point out that there could be some connections between a magnetar's anti-glitch event and its braking index, and that the magnitude of $n$ should be taken into account when explaining the event. Although the constrained range of the magnetars' braking indices is tentative, our method provides an effective way to constrain the magnetars' braking indices if th...

  2. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    The concept of pole placement plays an important role in linear, multi-variable control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical engineering constraints, such as gain limitation and controller structure, to be introduced right into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function which must be minimized in closed loop. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated by designing controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure on the overall performance of a control system.

  3. Constraining the braneworld with gravitational wave observations

    CERN Document Server

    McWilliams, Sean T

    2009-01-01

    Braneworld models containing large extra dimensions may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model, the asymptotic AdS radius of curvature of the extra dimension supports a single bound state of the massless graviton on the brane, thereby avoiding gross violations of Newton's law. However, one possible consequence of this model is an enormous increase in the amount of Hawking radiation emitted by black holes. This consequence has been employed by other authors to attempt to constrain the AdS radius of curvature through the observation of black holes. I present two novel methods for constraining the AdS curvature. The first method results from the effect of this enhanced mass loss on the event rate for extreme mass ratio inspirals (EMRIs) detected by the space-based LISA interferometer. The second method results from the observation of an individually resolvable galactic black hole binary with LISA. I show that the ...

  4. Nonstationary sparsity-constrained seismic deconvolution

    Science.gov (United States)

    Sun, Xue-Kai; Sam, Zandong Sun; Xie, Hui-Wen

    2014-12-01

    The Robinson convolution model is mainly restricted by three inappropriate assumptions, i.e., statistically white reflectivity, minimum-phase wavelet, and stationarity. Modern reflectivity inversion methods (e.g., sparsity-constrained deconvolution) generally attempt to suppress the problems associated with the first two assumptions but often ignore that seismic traces are nonstationary signals, which undermines the basic assumption of an unchanging wavelet in reflectivity inversion. Through tests on reflectivity series, we confirm the effects of nonstationarity on reflectivity estimation and the loss of significant information, especially in deep layers. To overcome the problems caused by nonstationarity, we propose a nonstationary convolutional model, and then use the attenuation curve in log spectra to detect and correct the influences of nonstationarity. We use Gabor deconvolution to handle nonstationarity and sparsity-constrained deconvolution to separate reflectivity and wavelet. The combination of the two deconvolution methods effectively handles nonstationarity and greatly reduces the problems associated with the unreasonable assumptions regarding reflectivity and wavelet. Using marine seismic data, we show that correcting nonstationarity helps recover subtle reflectivity information and enhances the characterization of details with respect to the geological record.

  5. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Full Text Available Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science, three main approaches to framework change were detected in the humanist tradition: 1. In both the pre-theoretical and theoretical domains, changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton). 2. Changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard). 3. Between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, and provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern, neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.

  6. Constraining the halo mass function with observations

    Science.gov (United States)

    Castro, Tiago; Marra, Valerio; Quartin, Miguel

    2016-08-01

    The abundances of dark matter halos in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behavior through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can directly constrain the HMF. The observables we consider are galaxy cluster number counts, the galaxy cluster power spectrum and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the halo mass function. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.

  7. Maximum permissible voltage of YBCO coated conductors

    Science.gov (United States)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.

    2014-06-01

    Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.

  8. A maximum power point tracking algorithm for photovoltaic applications

    Science.gov (United States)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has long been a challenge. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP, but hitherto an exact closed-form solution for the MPP has not been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations that is difficult to solve directly; however, a recursive algorithm yields a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It changes with incident irradiation and temperature, so an algorithm that attempts to maintain the MPP should be adaptive, with fast convergence and minimal misadjustment. The implementation has two parts: first, estimate the MPP; second, use a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits makes it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance, we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
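    As a sketch of the MPP idea, one can maximize P(V) = V·I(V) numerically. The exponential PV model and the golden-section search below are illustrative assumptions (parameters loosely MSX-60-like), not the paper's Lagrange-based recursive algorithm:

```python
import math

def pv_current(v, isc=3.8, voc=21.1, c2=0.0776):
    """Simplified exponential PV model I(V). isc, voc, c2 are illustrative,
    MSX-60-like values; c1 is chosen so that I(voc) = 0."""
    c1 = 1.0 / (math.exp(1.0 / c2) - 1.0)
    return isc * (1.0 - c1 * (math.exp(v / (c2 * voc)) - 1.0))

def mpp_voltage(f=pv_current, lo=0.0, hi=21.1, tol=1e-6):
    """Golden-section search for the voltage maximizing P(v) = v * f(v);
    valid because P(v) is unimodal on [0, voc] for this model."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        x1 = b - g * (b - a)
        x2 = a + g * (b - a)
        if x1 * f(x1) < x2 * f(x2):
            a = x1      # maximum lies in [x1, b]
        else:
            b = x2      # maximum lies in [a, x2]
    return 0.5 * (a + b)
```

    With these assumed parameters the search lands near 17 V and about 59 W, in the right range for an MSX-60-class panel; a real tracker would repeat the estimate (or run a perturb-and-observe loop) as irradiance and temperature change.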

  9. Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.

    Science.gov (United States)

    Chor, Benny; Snir, Sagi

    2004-12-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system; therefore, specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed-form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.

  10. A constrained-transport magnetohydrodynamics algorithm with near-spectral resolution

    CERN Document Server

    Maron, Jason; Oishi, Jeffrey

    2007-01-01

    Numerical simulations including magnetic fields have become important in many fields of astrophysics. Evolution of magnetic fields by the constrained transport algorithm preserves magnetic divergence to machine precision, and thus represents one preferred method for the inclusion of magnetic fields in simulations. We show that constrained transport can be implemented with volume-centered fields and hyperresistivity on a high-order finite difference stencil. Additionally, the finite-difference coefficients can be tuned to enhance high-wavenumber resolution. Similar techniques can be used for the interpolations required for dealiasing corrections at high wavenumber. Together, these measures yield an algorithm with a wavenumber resolution that approaches the theoretical maximum achieved by spectral algorithms. Because this algorithm uses finite differences instead of fast Fourier transforms, it runs faster and isn't restricted to periodic boundary conditions. Also, since the finite differences are spatially loca...

  11. Maximum-Bandwidth Node-Disjoint Paths

    Directory of Open Access Journals (Sweden)

    Mostafa H. Dahshan

    2012-03-01

    Full Text Available This paper presents a new method for finding the node-disjoint paths with maximum combined bandwidth in communication networks. This problem is NP-complete and can be solved optimally in exponential time using integer linear programming (ILP). The presented method uses a maximum-cost variant of the Dijkstra algorithm and a virtual-node representation to obtain the maximum-bandwidth node-disjoint paths. Through several simulations, we compare the performance of our method to a modern heuristic technique and to the ILP solution. We show that our method runs in polynomial time and produces results almost identical to those of ILP at a significantly lower execution time.
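    The maximum-cost Dijkstra variant can be illustrated for a single widest path (the paper's virtual-node construction for the node-disjoint case is omitted here): instead of minimizing summed cost, each label is the bottleneck bandwidth min over the edges traversed, and we keep the larger label. This is a generic sketch, not the authors' implementation:

```python
import heapq

def widest_path(graph, src, dst):
    """Modified Dijkstra maximizing the bottleneck (minimum edge bandwidth).
    graph: {u: [(v, bandwidth), ...]}, undirected edges listed both ways.
    Returns (bandwidth, path) or (0, None) if dst is unreachable."""
    best = {src: float('inf')}   # best known bottleneck bandwidth to node
    prev = {}
    heap = [(-float('inf'), src)]  # max-heap via negated bandwidths
    while heap:
        neg_bw, u = heapq.heappop(heap)
        bw = -neg_bw
        if bw < best.get(u, 0):
            continue               # stale heap entry
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return bw, path[::-1]
        for v, w in graph.get(u, []):
            cand = min(bw, w)      # bottleneck through u
            if cand > best.get(v, 0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    return 0, None
```

    The only change from ordinary Dijkstra is the relaxation rule (min along the path, keep the maximum), so the polynomial running time is preserved.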

  12. Maximum likelihood based classification of electron tomographic data.

    Science.gov (United States)

    Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan

    2011-01-01

    Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.

  13. Reconstructing the history of dark energy using maximum entropy

    CERN Document Server

    Zunckel, C

    2007-01-01

    We present a Bayesian technique based on a maximum entropy method to reconstruct the dark energy equation of state $w(z)$ in a non-parametric way. This MaxEnt technique allows us to incorporate relevant prior information while adjusting the degree of smoothing of the reconstruction in response to the structure present in the data. After demonstrating the method on synthetic data, we apply it to current cosmological data, separately analysing type Ia supernovae measurements from the HST/GOODS program and the first-year Supernova Legacy Survey (SNLS), complemented by cosmic microwave background and baryon acoustic oscillation data. We find that the SNLS data are compatible with $w(z) = -1$ at all redshifts $0 \leq z \lesssim 1100$, with error bars of order 20% for the most constraining choice of priors and model. The HST/GOODS data exhibit a slight (about $1\sigma$ significance) preference for $w>-1$ at $z\sim 0.5$ and a drift towards $w>-1$ at larger redshifts, which however is not robust with respect to changes ...

  14. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-01-01

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc.

  15. A Constrained Tectonics Model for Coronal Heating

    CERN Document Server

    Ng, C S; 10.1086/525518

    2011-01-01

    An analytical and numerical treatment is given of a constrained version of the tectonics model developed by Priest, Heyvaerts, & Title [2002]. We begin with an initial uniform magnetic field ${\bf B} = B_0 \hat{\bf z}$ that is line-tied at the surfaces $z = 0$ and $z = L$. This initial configuration is twisted by photospheric footpoint motion that is assumed to depend on only one coordinate ($x$) transverse to the initial magnetic field. The geometric constraints imposed by our assumption preclude the occurrence of reconnection and secondary instabilities, but enable us to follow for long times the dissipation of energy due to the effects of resistivity and viscosity. In this limit, we demonstrate that when the coherence time of random photospheric footpoint motion is smaller by several orders of magnitude than the resistive diffusion time, the heating due to Ohmic and viscous dissipation becomes independent of the resistivity of the plasma. Furthermore, we obtain scaling relations that su...

  16. Constraining decaying dark matter with neutron stars

    CERN Document Server

    Perez-Garcia, M Angeles

    2015-01-01

    We propose that the existing population of neutron stars in the galaxy can help constrain the nature of decaying dark matter. The amount of decaying dark matter accumulated in the central regions of neutron stars, and the energy deposition rate from decays, may set a limit on the neutron star survival rate against transitions to more compact stars and, correspondingly, on the dark matter particle decay time, $\tau_{\chi}$. We find that for lifetimes $\tau_{\chi} \lesssim 6.3\times 10^{15}$ s, we can exclude particle masses $(m_{\chi}/\rm TeV) \gtrsim 50$ or $(m_{\chi}/\rm TeV) \gtrsim 8 \times 10^2$ in the bosonic and fermionic cases, respectively. In addition, we also compare our findings with the present status of allowed phase space regions using kinematical variables for decaying dark matter, obtaining complementary results.

  17. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals...... pertaining to the communication model when the resources that can be reordered have binary values. The capacity result is valid under arbitrary error model in which errors in each resource (packet) occur independently. Inspired by the information—theoretic analysis, we have shown how to design practical......This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial...

  18. Capacity constrained assignment in spatial databases

    DEFF Research Database (Denmark)

    U, Leong Hou; Yiu, Man Lung; Mouratidis, Kyriakos;

    2008-01-01

    Given a point set P of customers (e.g., WiFi receivers) and a point set Q of service providers (e.g., wireless access points), where each q ∈ Q has a capacity q.k, the capacity constrained assignment (CCA) is a matching M ⊆ Q × P such that (i) each point q ∈ Q (p ∈ P) appears at most k times (at most......, the quality of q's service to p in a given (q, p) pair is anti-proportional to their distance. Although max-flow algorithms are applicable to this problem, they require the complete distance-based bipartite graph between Q and P. For large spatial datasets, this graph is expensive to compute and it may be too...
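    A hypothetical greedy baseline for distance-based capacity-constrained assignment (not the algorithm of this paper, which uses spatial indexing rather than the full bipartite graph) processes (provider, customer) pairs in increasing distance order and accepts a pair while the provider has spare capacity:

```python
import math

def greedy_cca(providers, customers):
    """Greedy capacity-constrained assignment (illustrative sketch).
    providers: [(x, y, capacity)]; customers: [(x, y)].
    Returns {customer_index: provider_index}."""
    # All (provider, customer) pairs, sorted by Euclidean distance.
    pairs = sorted(
        (math.dist(p[:2], c), i, j)
        for i, p in enumerate(providers)
        for j, c in enumerate(customers)
    )
    remaining = [p[2] for p in providers]  # spare capacity per provider
    assigned = {}
    for d, i, j in pairs:
        if j not in assigned and remaining[i] > 0:
            assigned[j] = i
            remaining[i] -= 1
    return assigned
```

    This baseline materializes the full distance list, which is exactly the cost the paper argues is prohibitive for large spatial datasets; it only serves to make the problem statement concrete.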

  19. Scheduling of resource-constrained projects

    CERN Document Server

    Klein, Robert

    2000-01-01

    Project management has become a widespread instrument enabling organizations to efficiently master the challenges of steadily shortening product life cycles, global markets and decreasing profit margins. With projects increasing in size and complexity, their planning and control represents one of the most crucial management tasks. This is especially true for scheduling, which is concerned with establishing execution dates for the sub-activities to be performed in order to complete the project. The ability to manage projects where resources must be allocated between concurrent projects or even sub-activities of a single project requires the use of commercial project management software packages. However, the results yielded by the solution procedures included are often rather unsatisfactory. Scheduling of Resource-Constrained Projects develops more efficient procedures, which can easily be integrated into software packages by incorporated programming languages, and thus should be of great interest for practiti...

  20. Remote gaming on resource-constrained devices

    Science.gov (United States)

    Reza, Waazim; Kalva, Hari; Kaufman, Richard

    2010-08-01

    Games have become important applications on mobile devices. A mobile gaming approach known as remote gaming is being developed to support games on low cost mobile devices. In the remote gaming approach, the responsibility of rendering a game and advancing the game play is put on remote servers instead of the resource constrained mobile devices. The games rendered on the servers are encoded as video and streamed to mobile devices. Mobile devices gather user input and stream the commands back to the servers to advance game play. With this solution, mobile devices with video playback and network connectivity can become game consoles. In this paper we present the design and development of such a system and evaluate the performance and design considerations to maximize the end user gaming experience.

  1. Multiple Clustering Views via Constrained Projections

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Assent, Ira; Bailey, James

    2012-01-01

    Clustering, the grouping of data based on mutual similarity, is often used as one of principal tools to analyze and understand data. Unfortunately, most conventional techniques aim at finding only a single clustering over the data. For many practical applications, especially those being described...... in high dimensional data, it is common to see that the data can be grouped into different yet meaningful ways. This gives rise to the recently emerging research area of discovering alternative clusterings. In this preliminary work, we propose a novel framework to generate multiple clustering views....... The framework relies on a constrained data projection approach by which we ensure that a novel alternative clustering being found is not only qualitatively strong but also distinctively different from a reference clustering solution. We demonstrate the potential of the proposed framework using both...

  2. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  3. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.;

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used or...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  4. Maximum Entropy Learning with Deep Belief Networks

    Directory of Open Access Journals (Sweden)

    Payton Lin

    2016-07-01

    Full Text Available Conventionally, the maximum likelihood (ML) criterion is applied to train a deep belief network (DBN). We present a maximum entropy (ME) learning algorithm for DBNs, designed specifically to handle limited training data. Maximizing only the entropy of parameters in the DBN allows more effective generalization capability, less bias towards data distributions, and robustness to over-fitting compared to ML learning. Results of text classification and object recognition tasks demonstrate that the ME-trained DBN outperforms the ML-trained DBN when training data is limited.

  5. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  6. Maximum confidence measurements via probabilistic quantum cloning

    Institute of Scientific and Technical Information of China (English)

    Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu

    2013-01-01

    Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.

  7. Maximum Possible Transverse Velocity in Special Relativity.

    Science.gov (United States)

    Medhekar, Sarang

    1991-01-01

    Using a physical picture, an expression for the maximum possible transverse velocity and orientation required for that by a linear emitter in special theory of relativity has been derived. A differential calculus method is also used to derive the expression. (Author/KR)

  8. Instance Optimality of the Adaptive Maximum Strategy

    NARCIS (Netherlands)

    L. Diening; C. Kreuzer; R. Stevenson

    2016-01-01

    In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e

  9. Maximum phonation time: variability and reliability.

    Science.gov (United States)

    Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W

    2010-05-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
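    The trial- and day-aggregation reliabilities reported above are consistent with the Spearman-Brown prophecy formula for the reliability of an average of k parallel measurements. The sketch below is our illustration of that consistency, not a method stated in the paper.

```python
# Spearman-Brown prophecy: reliability of the mean of k parallel trials,
# given single-trial reliability r. (Illustrative check against the
# numbers quoted in the abstract; not code from the study itself.)

def spearman_brown(r: float, k: int) -> float:
    """Predicted reliability of the average of k parallel measurements."""
    return k * r / (1 + (k - 1) * r)

# Single-trial reliability per day was 0.939; five trials per day:
print(round(spearman_brown(0.939, 5), 3))  # -> 0.987, as reported
# Single-day reliability over five trials was 0.836; two days:
print(round(spearman_brown(0.836, 2), 3))  # -> 0.911, as reported
```

The three-day figure predicted this way (about 0.939) is close to, though not identical with, the reported 0.935, as expected when the trials are only approximately parallel.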

  10. Maximum Phonation Time: Variability and Reliability

    NARCIS (Netherlands)

    R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings

    2010-01-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v

  11. Maximum likelihood estimation of search costs

    NARCIS (Netherlands)

    Moraga González, José; Wildenbeest, Matthijs R.

    2008-01-01

    In a recent paper Hong and Shum [2006. Using price distributions to estimate search costs. Rand Journal of Economics 37, 257-275] present a structural method to estimate search cost distributions. We extend their approach to the case of oligopoly and present a new maximum likelihood method to estima

  12. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  13. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  14. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.

  15. Results of the maximum genus of graphs

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we provide a new class of up-embeddable graphs, and obtain a tight lower bound on the maximum genus of a class of 2-connected pseudographs of diameter 2 and of a class of diameter 4 multi-graphs. This extends a result of Škoviera.

  16. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the...

  17. Comparing maximum pressures in internal combustion engines

    Science.gov (United States)

    Sparrow, Stanwood W; Lee, Stephen M

    1922-01-01

    Thin metal diaphragms form a satisfactory means for comparing maximum pressures in internal combustion engines. The diaphragm is clamped between two metal washers in a spark plug shell and its thickness is chosen such that, when subjected to explosion pressure, the exposed portion will be sheared from the rim in a short time.

  18. The 2011 Northern Hemisphere Solar Maximum

    Science.gov (United States)

    Altrock, Richard C.

    2013-01-01

    Altrock (1997, Solar Phys. 170, 411) discusses a process in which Fe XIV 530.3 nm emission features appear at high latitudes and gradually migrate towards the equator, merging with the sunspot "butterfly diagram". In cycles 21 - 23 solar maximum occurred when the number of Fe XIV emission regions per day > 0.19 (averaged over 365 days and both hemispheres) first reached latitudes 18°, 21° and 21°, for an average of 20° ± 1.7°. Another high-latitude process is the "Rush to the Poles" of polar crown prominences and their associated coronal emission, including Fe XIV. The Rush is a harbinger of solar maximum (cf. Altrock, 2003, Solar Phys. 216, 343). Solar maximum in cycles 21 - 23 occurred when the center line of the Rush reached a critical latitude. These latitudes were 76°, 74° and 78°, respectively, for an average of 76° ± 2°. Cycle 24 displays an intermittent Rush that is only well-defined in the northern hemisphere. In 2009 an initial slope of 4.6°/yr was found in the north, compared to an average of 9.4 ± 1.7 °/yr in the previous three cycles. However, in 2010 the slope increased to 7.5°/yr. Extending that rate to 76° ± 2° indicates that the solar maximum smoothed sunspot number in the northern hemisphere already occurred at 2011.6 ± 0.3. In the southern hemisphere the Rush is very poorly defined. A linear fit to several maxima would reach 76° in the south at 2014.2. In 1999, persistent Fe XIV coronal emission connected with the ESC appeared near 70° in the north and began migrating towards the equator at a rate 40% slower than the previous two solar cycles. A fit to the early ESC would not reach 20° until 2019.8. However, in 2009 and 2010 an acceleration occurred. Currently the greatest number of emission regions is at 21° in the north and 24° in the south. This indicates that solar maximum is occurring now in the north but not yet in the south. The latest global smoothed sunspot numbers show an inflection point in late 2011, which

  19. Maximum entropy distribution of stock price fluctuations

    Science.gov (United States)

    Bartiromo, Rosario

    2013-04-01

    In this paper we propose to use the principle of absence of arbitrage opportunities in its entropic interpretation to obtain the distribution of stock price fluctuations by maximizing its information entropy. We show that this approach leads to a physical description of the underlying dynamics as a random walk characterized by a stochastic diffusion coefficient and constrained to a given value of the expected volatility, in this way taking into account the information provided by the existence of an option market. The model is validated by a comprehensive comparison with observed distributions of both price return and diffusion coefficient. Expected volatility is the only parameter in the model and can be obtained by analysing option prices. We give an analytic formulation of the probability density function for price returns which can be used to extract expected volatility from stock option data.

  20. Hard Instances of the Constrained Discrete Logarithm Problem

    OpenAIRE

    Mironov, Ilya; Mityagin, Anton; Nissim, Kobbi

    2006-01-01

    The discrete logarithm problem (DLP) generalizes to the constrained DLP, where the secret exponent $x$ belongs to a set known to the attacker. The complexity of generic algorithms for solving the constrained DLP depends on the choice of the set. Motivated by cryptographic applications, we study sets with succinct representation for which the constrained DLP is hard. We draw on earlier results due to Erdős et al. and Schnorr, develop geometric tools such as generalized Menelaus' theorem for ...
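    The setting above is easy to state concretely: since the attacker knows the secret exponent lies in a set S, a generic attack only has to search S rather than the whole exponent range. The toy sketch below illustrates this; the group parameters and the set S are illustrative values of ours, not from the paper.

```python
# Toy constrained-DLP search (illustration only, tiny parameters):
# the attacker knows x is in S, so a brute-force scan of S suffices.

def constrained_dlog(g, h, p, S):
    """Return x in S with g**x mod p == h, or None if no element matches."""
    return next((x for x in S if pow(g, x, p) == h), None)

p, g = 101, 2          # small prime modulus and base (illustrative)
S = {3, 17, 29, 54}    # set of admissible exponents known to the attacker
h = pow(g, 29, p)      # public value produced from the secret exponent 29
print(constrained_dlog(g, h, p, S))  # -> 29
```

The hardness question the paper studies is which succinctly described sets S defeat even attacks that exploit the structure of S, rather than this naive |S|-step scan.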

  1. Transient stability-constrained optimal power flow

    OpenAIRE

    Bettiol, Arlan; Ruiz-Vega, Daniel; Ernst, Damien; Wehenkel, Louis; Pavella, Mania

    1999-01-01

    This paper proposes a new approach able to maximize the interface flow limits in power systems and to find a new operating state that is secure with respect to both dynamic (transient stability) and static security constraints. It combines the Maximum Allowable Transfer (MAT) method, recently developed for the simultaneous control of a set of contingencies, and an Optimal Power Flow (OPF) method for maximizing the interface power flow. The approach and its performances are illustrated by ...

  2. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.

  3. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  4. Zipf's law, power laws, and maximum entropy

    CERN Document Server

    Visser, Matt

    2012-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.

  5. Zipf's law, power laws and maximum entropy

    Science.gov (United States)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
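    The single-constraint maximization described in this abstract can be carried out explicitly. The following derivation is our reconstruction of the standard calculation, not text quoted from the paper:

```latex
% Maximize the Shannon entropy
%   S[p] = -\sum_x p(x)\,\ln p(x)
% subject to normalization and a fixed mean logarithm of the observable x:
%   \sum_x p(x) = 1, \qquad \langle \ln x \rangle = \sum_x p(x)\,\ln x = \chi .
% Setting the variation of the Lagrangian to zero,
%   \frac{\partial}{\partial p(x)}
%   \Big[ S[p] - \mu\big(\textstyle\sum_x p(x) - 1\big)
%              - \alpha\big(\textstyle\sum_x p(x)\ln x - \chi\big) \Big]
%   = -\ln p(x) - 1 - \mu - \alpha \ln x = 0 ,
% gives
%   p(x) \propto e^{-\alpha \ln x} = x^{-\alpha} ,
% a pure power law, with the exponent \alpha fixed by the constraint value \chi.
```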

  6. Model Fit after Pairwise Maximum Likelihood.

    Science.gov (United States)

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  7. Nonparametric Maximum Entropy Estimation on Information Diagrams

    CERN Document Server

    Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn

    2016-01-01

    Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...

  8. Maximum speed of dewetting on a fiber

    CERN Document Server

    Chan, Tak Shing; Snoeijer, Jacco H

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed of dewetting. For all radii we find the maximum speed occurs at vanishing apparent contact angle. To further investigate the transition we numerically determine the bifurcation diagram for steady menisci. It is found that the meniscus profiles on thick fibers are smooth, even when there is a film deposited between the bath and the contact line, while profiles on thin fibers exhibit strong oscillations. We discuss how this could lead to different experimental scenarios of film deposition.

  9. Maximum Profit Configurations of Commercial Engines

    OpenAIRE

    Yiran Chen

    2011-01-01

    An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...

  10. Nonlocal maximum principles for active scalars

    CERN Document Server

    Kiselev, Alexander

    2010-01-01

    Active scalars appear in many problems of fluid dynamics. The most common examples of active scalar equations are 2D Euler, Burgers, and 2D surface quasi-geostrophic equations. Many questions about regularity and properties of solutions of these equations remain open. We develop the idea of nonlocal maximum principle, formulating a more general criterion and providing new applications. The most interesting application is finite time regularization of weak solutions in the supercritical regime.

  11. Maximum Estrada Index of Bicyclic Graphs

    CERN Document Server

    Wang, Long; Wang, Yi

    2012-01-01

    Let $G$ be a simple graph of order $n$, let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
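    The Estrada index defined above is straightforward to compute from the adjacency spectrum. The sketch below is ours (the example graph is the triangle K3, whose eigenvalues are 2, -1, -1), not code from the paper:

```python
# Estrada index EE(G) = sum_i exp(lambda_i), with lambda_i the eigenvalues
# of the (symmetric) adjacency matrix of G.
import numpy as np

def estrada_index(adj: np.ndarray) -> float:
    eigvals = np.linalg.eigvalsh(adj)  # eigvalsh: for symmetric matrices
    return float(np.sum(np.exp(eigvals)))

# Triangle K3: eigenvalues 2, -1, -1, so EE = e^2 + 2e^-1.
adj_k3 = np.array([[0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]], dtype=float)
print(estrada_index(adj_k3))  # e^2 + 2/e, approximately 8.125
```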

  12. A Convnet for Non-maximum Suppression

    OpenAIRE

    Hosang, J.; Benenson, R.; Schiele, B.

    2015-01-01

    Non-maximum suppression (NMS) is used in virtually all state-of-the-art object detection pipelines. While essential object detection ingredients such as features, classifiers, and proposal methods have been extensively researched surprisingly little work has aimed to systematically address NMS. The de-facto standard for NMS is based on greedy clustering with a fixed distance threshold, which forces to trade-off recall versus precision. We propose a convnet designed to perform NMS of a given s...
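    The "de-facto standard" greedy NMS that the abstract contrasts with its convnet can be sketched in a few lines: keep the highest-scoring box, suppress all boxes overlapping it beyond a fixed IoU threshold, and repeat. The box format [x1, y1, x2, y2] and the threshold value are our assumptions for illustration:

```python
# Greedy non-maximum suppression with a fixed IoU threshold (the baseline
# the paper refers to; this is a generic sketch, not the paper's convnet).

def iou(a, b):
    """Intersection-over-union of two boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def greedy_nms(boxes, scores, thresh=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)          # best remaining detection
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
print(greedy_nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 too much
```

The fixed threshold is exactly the recall/precision trade-off the abstract criticizes: raising it keeps more overlapping detections, lowering it suppresses nearby true positives.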

  13. Maximum privacy without coherence, zero-error

    Science.gov (United States)

    Leung, Debbie; Yu, Nengkun

    2016-09-01

    We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.

  14. Dynamic Programming, Maximum Principle and Vintage Capital

    OpenAIRE

    Fabbri, Giorgio; Iacopetta, Maurizio

    2007-01-01

    We present an application of the Dynamic Programming (DP) and of the Maximum Principle (MP) to solve an optimization over time when the production function is linear in the stock of capital (Ak model). Two views of capital are considered. In one, which is embraced by the great majority of macroeconomic models, capital is homogeneous and depreciates at a constant exogenous rate. In the other view each piece of capital has its own finite productive life cycle (vintage capital). The interpretatio...

  15. The Maximum Principle for Replicator Equations

    OpenAIRE

    K. Sigmund

    1984-01-01

    By introducing a non-Euclidean metric on the unit simplex, it is possible to identify an interesting class of gradient systems within the ubiquitous "replicator equations" of evolutionary biomathematics. In the case of homogeneous potentials, this leads to maximum principles governing the increase of the average fitness, both in population genetics and in chemical kinetics. This research was carried out as part of the Dynamics of Macrosystems Feasibility Study in the System and Decision ...

  16. Dynamic Nuclear Polarization as Kinetically Constrained Diffusion

    Science.gov (United States)

    Karabanov, A.; Wiśniewski, D.; Lesanovsky, I.; Köckenberger, W.

    2015-07-01

    Dynamic nuclear polarization (DNP) is a promising strategy for generating a significantly increased nonthermal spin polarization in nuclear magnetic resonance (NMR) and its applications that range from medicine diagnostics to material science. Being a genuine nonequilibrium effect, DNP circumvents the need for strong magnetic fields. However, despite intense research, a detailed theoretical understanding of the precise mechanism behind DNP is currently lacking. We address this issue by focusing on a simple instance of DNP—so-called solid effect DNP—which is formulated in terms of a quantum central spin model where a single electron is coupled to an ensemble of interacting nuclei. We show analytically that the nonequilibrium buildup of polarization heavily relies on a mechanism which can be interpreted as kinetically constrained diffusion. Beyond revealing this insight, our approach furthermore permits numerical studies of ensembles containing thousands of spins that are typically intractable when formulated in terms of a quantum master equation. We believe that this represents an important step forward in the quest of harnessing nonequilibrium many-body quantum physics for technological applications.

  17. Constraining the Oblateness of Kepler Planets

    CERN Document Server

    Zhu, Wei; Zhou, George; Lin, D N C

    2014-01-01

    We use Kepler short cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside of the solar system, measuring an oblateness of $0.22 \pm 0.11$ for the 18 $M_J$ mass brown dwarf Kepler 39b (KOI-423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI-686.01, and KOI-197.01 to be < 0.067, < 0.251, and < 0.186, respectively. Using the Q'-values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI-686.01 and KOI-197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI-423.01 (Kepler 39b) suggests that the Q'-value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants ...

  18. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu

    2010-05-07

    This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip spectrum corner wave numbers (k_c) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of k_c by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency domain inversion method. These tests reveal that due to smoothing constraints used to stabilize the inversion process, k_c tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between k_c and the peak ground acceleration (PGA), using a k^(-2) kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that k_c follows a lognormal distribution, with similar standard deviations for both methods.

  19. Constrained length minimum inductance gradient coil design.

    Science.gov (United States)

    Chronik, B A; Rutt, B K

    1998-02-01

    A gradient coil design algorithm capable of controlling the position of the homogeneous region of interest (ROI) with respect to the current-carrying wires is required for many advanced imaging and spectroscopy applications. A modified minimum inductance target field method that allows the placement of a set of constraints on the final current density is presented. This constrained current minimum inductance method is derived in the context of previous target field methods. Complete details are shown and all equations required for implementation of the algorithm are given. The method has been implemented on computer and applied to the design of both a 1:1 aspect ratio (length:diameter) central ROI and a 2:1 aspect ratio edge ROI gradient coil. The 1:1 design demonstrates that a general analytic method can be used to easily obtain very short gradient coil designs for use with specialized magnet systems. The edge gradient design demonstrates that designs that allow imaging of the neck region with a head sized gradient coil can be obtained, as well as other applications requiring edge-of-cylinder regions of uniformity.

  20. Joint Chance-Constrained Dynamic Programming

    Science.gov (United States)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.

  1. Testing constrained sequential dominance models of neutrinos

    Science.gov (United States)

    Björkeroth, Fredrik; King, Stephen F.

    2015-12-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We analyze a class of CSD(n) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the ‘atmospheric’ and ‘solar’ neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n-2), respectively, where n is a positive integer. These coupling patterns may arise in indirect family symmetry models based on A_4. With two right-handed neutrinos, using a χ² test, we find a good agreement with data for CSD(3) and CSD(4) where the entire Pontecorvo-Maki-Nakagawa-Sakata mixing matrix is controlled by a single phase η, which takes simple values, leading to accurate predictions for mixing angles and the magnitude of the oscillation phase |δ_CP|. We carefully study the perturbing effect of a third ‘decoupled’ right-handed neutrino, leading to a bound on the lightest physical neutrino mass m_1 ≲ 1 meV for the viable cases, corresponding to a normal neutrino mass hierarchy. We also discuss a direct link between the oscillation phase δ_CP and leptogenesis in CSD(n) due to the same see-saw phase η appearing in both the neutrino mass matrix and leptogenesis.

  2. Dynamic Nuclear Polarization as Kinetically Constrained Diffusion.

    Science.gov (United States)

    Karabanov, A; Wiśniewski, D; Lesanovsky, I; Köckenberger, W

    2015-07-10

    Dynamic nuclear polarization (DNP) is a promising strategy for generating a significantly increased nonthermal spin polarization in nuclear magnetic resonance (NMR) and its applications that range from medicine diagnostics to material science. Being a genuine nonequilibrium effect, DNP circumvents the need for strong magnetic fields. However, despite intense research, a detailed theoretical understanding of the precise mechanism behind DNP is currently lacking. We address this issue by focusing on a simple instance of DNP, so-called solid effect DNP, which is formulated in terms of a quantum central spin model where a single electron is coupled to an ensemble of interacting nuclei. We show analytically that the nonequilibrium buildup of polarization heavily relies on a mechanism which can be interpreted as kinetically constrained diffusion. Beyond revealing this insight, our approach furthermore permits numerical studies of ensembles containing thousands of spins that are typically intractable when formulated in terms of a quantum master equation. We believe that this represents an important step forward in the quest of harnessing nonequilibrium many-body quantum physics for technological applications. PMID:26207453

  3. Constraining Binary Stellar Evolution With Pulsar Timing

    Science.gov (United States)

    Ferdman, Robert D.; Stairs, I. H.; Backer, D. C.; Burgay, M.; Camilo, F.; D'Amico, N.; Demorest, P.; Faulkner, A.; Hobbs, G.; Kramer, M.; Lorimer, D. R.; Lyne, A. G.; Manchester, R.; McLaughlin, M.; Nice, D. J.; Possenti, A.

    2006-06-01

    The Parkes Multibeam Pulsar Survey has yielded a significant number of very interesting binary and millisecond pulsars. Two of these objects are part of an ongoing timing study at the Green Bank Telescope (GBT). PSR J1756-2251 is a double-neutron star (DNS) binary system. It is similar to the original Hulse-Taylor binary pulsar system PSR B1913+16 in its orbital properties, thus providing another important opportunity to test the validity of General Relativity, as well as the evolutionary history of DNS systems through mass measurements. PSR J1802-2124 is part of the relatively new and unstudied "intermediate-mass" class of binary system, which typically have spin periods in the tens of milliseconds, and/or relatively massive (> 0.7 solar masses) white dwarf companions. With our GBT observations, we have detected the Shapiro delay in this system, allowing us to constrain the individual masses of the neutron star and white dwarf companion, and thus the mass-transfer history, in this unusual system.

  4. Electricity in a Climate-Constrained World

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    After experiencing a historic drop in 2009, electricity generation reached a record high in 2010, confirming the close linkage between economic growth and electricity usage. Unfortunately, CO2 emissions from electricity have also resumed their growth: Electricity remains the single-largest source of CO2 emissions from energy, with 11.7 billion tonnes of CO2 released in 2010. The imperative to 'decarbonise' electricity and improve end-use efficiency remains essential to the global fight against climate change. The IEA’s Electricity in a Climate-Constrained World provides an authoritative resource on progress to date in this area, including statistics related to CO2 and the electricity sector across ten regions of the world (supply, end-use and capacity additions). It also presents topical analyses on the challenge of rapidly curbing CO2 emissions from electricity. Looking at policy instruments, it focuses on emissions trading in China, using energy efficiency to manage electricity supply crises and combining policy instruments for effective CO2 reductions. On regulatory issues, it asks whether deregulation can deliver decarbonisation and assesses the role of state-owned enterprises in emerging economies. And from technology perspectives, it explores the rise of new end-uses, the role of electricity storage, biomass use in Brazil, and the potential of carbon capture and storage for ‘negative emissions’ electricity supply.

  5. Constraining New Physics with D meson decays

    Energy Technology Data Exchange (ETDEWEB)

    Barranco, J.; Delepine, D.; Gonzalez Macias, V. [Departamento de Física, División de Ciencias e Ingeniería, Universidad de Guanajuato, Campus León, León 37150 (Mexico); Lopez-Lozano, L. [Departamento de Física, División de Ciencias e Ingeniería, Universidad de Guanajuato, Campus León, León 37150 (Mexico); Área Académica de Matemáticas y Física, Universidad Autónoma del Estado de Hidalgo, Carr. Pachuca-Tulancingo Km. 4.5, C.P. 42184, Pachuca, HGO (Mexico)

    2014-04-04

    Latest lattice results on D-meson form factors evaluated from first principles show that the Standard Model (SM) predictions for the branching ratios of the leptonic D{sub s}→ℓν{sub ℓ} decays and for the semileptonic branching ratios of the D{sup 0} and D{sup +} meson decays are in good agreement with the world-average experimental measurements. This agreement makes it possible to test New Physics hypotheses and to derive bounds on several models beyond the SM. Using the observed leptonic and semileptonic branching ratios of the D meson decays, we performed a combined analysis to constrain non-standard interactions that mediate the cs{sup ¯}→lν{sup ¯} transition. This is done either in a model-independent way, through the corresponding Wilson coefficients, or in a model-dependent way, by finding the bounds on the relevant parameters of some models beyond the Standard Model. In particular, we obtain bounds for the Two Higgs Doublet Model Type-II and Type-III, the Left–Right model, the Minimal Supersymmetric Standard Model with explicit R-parity violation, and Leptoquarks. Finally, we estimate the transverse polarization of the lepton in the D{sup 0} decay and find that it can be as high as P{sub T}=0.23.

  6. Distributed Constrained Optimization with Semicoordinate Transformations

    Science.gov (United States)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow for bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
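
    The annealing-of-a-product-distribution idea summarized above can be sketched numerically. The example below is a hypothetical toy (binary variables, a made-up objective G, Monte Carlo estimates of conditional expectations), not the authors' algorithm or code: each agent keeps an independent distribution over its own variable, performs a Boltzmann update against the estimated expected objective, and the temperature is annealed so the product distribution concentrates on optimizers of G.

```python
import numpy as np

# Toy sketch of the bounded-rational-agents idea (hypothetical example, not
# the authors' code): each "agent" controls one binary variable through an
# independent distribution, and the product distribution is driven to
# minimize the maxent Lagrangian E_q[G] - T*H(q) while T is annealed to 0.

rng = np.random.default_rng(0)
n = 6                                    # number of agents / binary variables

def G(x):
    # hypothetical objective: squared deviation of the number of ones from 3
    return (int(x.sum()) - 3) ** 2

p = np.full(n, 0.5)                      # agent i's probability that x_i = 1
T = 2.0
for _ in range(150):
    for i in range(n):
        est = np.zeros(2)                # Monte Carlo estimate of E[G | x_i=v]
        for v in (0, 1):
            for _ in range(30):
                x = (rng.random(n) < p).astype(int)
                x[i] = v
                est[v] += G(x)
        est /= 30.0
        w = np.exp(-(est - est.min()) / T)   # Boltzmann update at temperature T
        p[i] = w[1] / (w[0] + w[1])
    T *= 0.97                            # annealing focuses q on optima of G

x_best = (p > 0.5).astype(int)
print(x_best, G(x_best))
```

    Annealing makes each agent's distribution progressively more deterministic, so the rounded joint move x_best ends near a minimizer of G.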

  7. String theory origin of constrained multiplets

    Science.gov (United States)

    Kallosh, Renata; Vercnocke, Bert; Wrase, Timm

    2016-09-01

    We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector A_μ, three complex scalars ϕ^i and four 4d fermions λ^0, λ^i. These transform, in addition to the more familiar N = 4 linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear N = 1 multiplets: four chiral multiplets S, Y^i that satisfy S^2 = SY^i = 0 and contain the worldvolume fermions λ^0 and λ^i; and four chiral multiplets W_α, H^i that satisfy SW_α = S D̄_α̇ H̄^ī = 0 and contain the vector A_μ and the scalars ϕ^i. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet Φ that satisfies S(Φ − Φ̄) = 0, which is particularly interesting for inflationary cosmology.

  8. String Theory Origin of Constrained Multiplets

    CERN Document Server

    Kallosh, Renata; Wrase, Timm

    2016-01-01

    We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector $A_\\mu$, three complex scalars $\\phi^i$ and four 4d fermions $\\lambda^0$, $\\lambda^i$. These transform, in addition to the more familiar N=4 linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear N=1 multiplets: four chiral multiplets $S$, $Y^i$ that satisfy $S^2=SY^i=0$ and contain the worldvolume fermions $\\lambda^0$ and $\\lambda^i$; and four chiral multiplets $W_\\alpha$, $H^i$ that satisfy $S W_\\alpha=0$ and $S \\bar D_{\\dot \\alpha} \\bar H^{\\bar \\imath}=0$ and contain the vector $A_\\mu$ and the scalars $\\phi^i$. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet $\\Phi$ that satisfies $S(\\Phi-\\bar \\Phi)=0$, which is particularly interesting for in...

  9. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John; /CERN /King's Coll. London; Mustafayev, Azar; /Minnesota U., Theor. Phys. Inst.; Olive, Keith A.; /Minnesota U., Theor. Phys. Inst. /Minnesota U. /Stanford U., Phys. Dept. /SLAC

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M{sub in}, above the GUT scale, M{sub GUT}. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino {chi} and the lighter stau {tilde {tau}}{sub 1} is sensitive to M{sub in}, as is the relationship between m{sub {chi}} and the masses of the heavier Higgs bosons A,H. For these reasons, prominent features in generic (m{sub 1/2}, m{sub 0}) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M{sub in}, as we illustrate for several cases with tan {beta} = 10 and 55. However, these features do not necessarily disappear at large M{sub in}, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  10. Acoustic characteristics of listener-constrained speech

    Science.gov (United States)

    Ashby, Simone; Cummins, Fred

    2003-04-01

    Relatively little is known about the acoustical modifications speakers employ to meet the various constraints (auditory, linguistic and otherwise) of their listeners. Similarly, the manner by which perceived listener constraints interact with speakers' adoption of specialized speech registers is poorly understood. Hyper and Hypo (H&H) theory offers a framework for examining the relationship between speech production and output-oriented goals for communication, suggesting that under certain circumstances speakers may attempt to minimize phonetic ambiguity by employing a ``hyperarticulated'' speaking style (Lindblom, 1990). It remains unclear, however, what the acoustic correlates of hyperarticulated speech are, and how, if at all, we might expect phonetic properties to change respective to different listener-constrained conditions. This paper is part of a preliminary investigation concerned with comparing the prosodic characteristics of speech produced across a range of listener constraints. Analyses are drawn from a corpus of read hyperarticulated speech data comprising eight adult, female speakers of English. Specialized registers include speech to foreigners, infant-directed speech, speech produced under noisy conditions, and human-machine interaction. The authors gratefully acknowledge financial support of the Irish Higher Education Authority, allocated to Fred Cummins for collaborative work with Media Lab Europe.

  11. Optimization of constrained density functional theory

    Science.gov (United States)

    O'Regan, David D.; Teobaldi, Gilberto

    2016-07-01

    Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.
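
    The claim that stable stationary points of a linear density constraint occur at energy maxima with respect to the Lagrange multiplier can be illustrated on a toy quadratic model. This is a hypothetical stand-in for the cDFT energy functional, not an electronic-structure calculation: the "energy" is quadratic, the constraint is linear, and the dual function W(V) is concave in the multiplier V, so the constrained solution sits at the maximum of W.

```python
import numpy as np
from scipy.optimize import brentq

# Toy numeric sketch (not cDFT itself): for a quadratic "energy"
# E(x) = 0.5 x.A x - b.x with a linear constraint w.x = N, the functional
# W(V) = min_x [E(x) + V*(w.x - N)] is concave in the Lagrange multiplier V,
# so the constrained ground state sits at a *maximum* of W(V).

A = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive definite
b = np.array([1.0, -0.5])
w = np.array([1.0, 1.0])
N = 0.7

def x_of_V(V):
    # inner minimizer of E(x) + V*(w.x - N)
    return np.linalg.solve(A, b - V * w)

def W(V):
    x = x_of_V(V)
    return 0.5 * x @ A @ x - b @ x + V * (w @ x - N)

# Maximize W(V): by the envelope theorem dW/dV = w.x(V) - N, so a scalar
# root-find on the constraint residual locates the stationary multiplier.
V_star = brentq(lambda V: w @ x_of_V(V) - N, -10.0, 10.0)
print(V_star, W(V_star))
```

    Evaluating W at V_star and at nearby multipliers confirms that the constraint-satisfying point is a maximum of W, mirroring the nonperturbative result stated in the abstract.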

  12. Constraining the oblateness of Kepler planets

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Wei [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Huang, Chelsea X. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Zhou, George [Research School of Astronomy and Astrophysics, Australian National University, Cotter Road, Weston Creek, ACT 2611 (Australia); Lin, D. N. C., E-mail: weizhu@astronomy.ohio-state.edu [UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States)

    2014-11-20

    We use Kepler short-cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside the solar system, measuring an oblateness of 0.22{sub −0.11}{sup +0.11} for the 18 M{sub J} brown dwarf Kepler 39b (KOI 423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI 686.01, and KOI 197.01 to be <0.067, <0.251, and <0.186, respectively. Using the Q' values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI 686.01, and KOI 197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI 423.01 (Kepler 39b) suggests that the Q' value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants to avoid being tidally spun down.

  13. Optimal performance of constrained control systems

    Science.gov (United States)

    Harvey, P. Scott, Jr.; Gavin, Henri P.; Scruggs, Jeffrey T.

    2012-08-01

    This paper presents a method to compute optimal open-loop trajectories for systems subject to state and control inequality constraints in which the cost function is quadratic and the state dynamics are linear. For the case in which inequality constraints are decentralized with respect to the controls, optimal Lagrange multipliers enforcing the inequality constraints may be found at any time through Pontryagin’s minimum principle. In so doing, the set of differential algebraic Euler-Lagrange equations is transformed into a nonlinear two-point boundary-value problem for states and costates whose solution meets the necessary conditions for optimality. The optimal performance of inequality constrained control systems is calculable, allowing for comparison to previous, sub-optimal solutions. The method is applied to the control of damping forces in a vibration isolation system subjected to constraints imposed by the physical implementation of a particular controllable damper. An outcome of this study is the best performance achievable given a particular objective, isolation system, and semi-active damper constraints.

  14. Constraining the halo mass function with observations

    CERN Document Server

    Castro, Tiago; Quartin, Miguel

    2016-01-01

    The abundances of matter halos in the universe are described by the so-called halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behavior through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can constrain directly the HMF. The observables we consider are galaxy cluster number counts, galaxy cluster power spectrum and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints, while both Euclid and J-PAS can give constraints on the HMF parameters which are comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even mo...

  15. Should we still believe in constrained supersymmetry?

    CERN Document Server

    Balázs, Csaba; Carter, Daniel; Farmer, Benjamin; White, Martin

    2012-01-01

    We calculate Bayes factors to quantify how the feasibility of the constrained minimal supersymmetric standard model (CMSSM) has changed in the light of a series of observations. This is done in the Bayesian spirit where probability reflects a degree of belief in a proposition and Bayes' theorem tells us how to update it after acquiring new information. Our experimental baseline is the approximate knowledge that was available before LEP, and our comparison model is the Standard Model with a simple dark matter candidate. To quantify the amount by which experiments have altered our relative belief in the CMSSM since the baseline data we compute the Bayes factors that arise from learning in sequence the LEP Higgs constraints, the XENON100 dark matter constraints, the 2011 LHC supersymmetry search results, and the early 2012 LHC Higgs search results. We find that LEP and the LHC strongly shatter our trust in the CMSSM (with $M_0$ and $M_{1/2}$ below 2 TeV), reducing its posterior odds by a factor of approximately ...

  16. How peer-review constrains cognition

    DEFF Research Database (Denmark)

    Cowley, Stephen

    2015-01-01

    Peer-review is neither reliable, fair, nor a valid basis for predicting ‘impact’: as quality control, peer-review is not fit for purpose. Endorsing the consensus, I offer a reframing: while a normative social process, peer-review also shapes the writing of a scientific paper. In so far as ‘cognition’ describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as ‘symbolizations’, replicable patterns that use technologically enabled activity. On this bio-cognitive ... came to be re-aggregated: agonistic review drove reformatting of argument structure, changes in rhetorical ploys and careful choice of wordings. For this reason, the paper’s knowledge-claims can be traced to human activity that occurs in distributed cognitive systems. Peer-review is on the frontline...

  17. Constraining inflation with future galaxy redshift surveys

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhiqi; Vernizzi, Filippo [CEA, Institut de Physique Théorique, 91191 Gif-sur-Yvette cédex (France); Verde, Licia, E-mail: zhiqi.huang@cea.fr, E-mail: liciaverde@icc.ub.edu, E-mail: filippo.vernizzi@cea.fr [Institute of Sciences of the Cosmos (ICCUB), University of Barcelona, Marti i Franques 1, Barcelona 08024 (Spain)

    2012-04-01

    With future galaxy surveys, a huge number of Fourier modes of the distribution of the large scale structures in the Universe will become available. These modes are complementary to those of the CMB and can be used to set constraints on models of the early universe, such as inflation. Using a MCMC analysis, we compare the power of the CMB with that of the combination of CMB and galaxy survey data, to constrain the power spectrum of primordial fluctuations generated during inflation. We base our analysis on the Planck satellite and a spectroscopic redshift survey with configuration parameters close to those of the Euclid mission as examples. We first consider models of slow-roll inflation, and show that the inclusion of large scale structure data improves the constraints by nearly halving the error bars on the scalar spectral index and its running. If we attempt to reconstruct the inflationary single-field potential, a similar conclusion can be reached on the parameters characterizing the potential. We then study models with features in the power spectrum. In particular, we consider ringing features produced by a break in the potential and oscillations such as in axion monodromy. Adding large scale structures improves the constraints on features by more than a factor of two. In axion monodromy we show that there are oscillations with small amplitude and frequency in momentum space that are undetected by CMB alone but can be measured by including galaxy surveys in the analysis.

  18. Constraining the Properties of Cold Interstellar Clouds

    Science.gov (United States)

    Spraggs, Mary Elizabeth; Gibson, Steven J.

    2016-01-01

    Since the interstellar medium (ISM) plays an integral role in star formation and galactic structure, it is important to understand the evolution of clouds over time, including the processes of cooling and condensation that lead to the formation of new stars. This work aims to constrain and better understand the physical properties of the cold ISM by utilizing large surveys of neutral atomic hydrogen (HI) 21cm spectral line emission and absorption, carbon monoxide (CO) 2.6mm line emission, and multi-band infrared dust thermal continuum emission. We identify areas where the gas may be cooling and forming molecules using HI self-absorption (HISA), in which cold foreground HI absorbs radiation from warmer background HI emission. We are developing an algorithm that uses total gas column densities inferred from Planck and other FIR/sub-mm data in parallel with CO and HISA spectral line data to determine the gas temperature, density, molecular abundance, and other properties as functions of position. We can then map these properties to study their variation throughout an individual cloud as well as any dependencies on location or environment within the Galaxy. Funding for this work was provided by the National Science Foundation, the NASA Kentucky Space Grant Consortium, the WKU Ogden College of Science and Engineering, and the Carol Martin Gatton Academy for Mathematics and Science in Kentucky.

  19. Maximum-biomass prediction of homofermentative Lactobacillus.

    Science.gov (United States)

    Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei

    2016-07-01

    Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus strains and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated, and the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus differed from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for the different homofermentative Lactobacillus strains. Based on the experimental data, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02)·Y_X/P·C. PMID:26896862
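
    As a minimal sketch, the reported prediction equation can be applied directly. The yield, MIC, and inoculum values below are hypothetical illustration numbers, not data from the study:

```python
# Minimal sketch of the reported prediction equation
#   X_max - X_0 = (0.59 ± 0.02) · Y_X/P · C,
# where Y_X/P is the biomass yield per unit lactate produced and C is the
# MIC of lactate for the strain. All numeric inputs here are hypothetical.

def predict_max_biomass(x0, yield_per_lactate, mic_lactate, k=0.59):
    """Predicted maximum biomass concentration (same units as x0)."""
    return x0 + k * yield_per_lactate * mic_lactate

# e.g. inoculum 0.1 g/L, yield 0.15 g biomass per g lactate, MIC 60 g/L
print(round(predict_max_biomass(0.1, 0.15, 60.0), 2))   # → 5.41 (g/L)
```

    The ±0.02 uncertainty on the fitted coefficient propagates linearly, so the prediction here carries roughly a ±0.18 g/L spread.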

  20. The maximum rate of mammal evolution

    Science.gov (United States)

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-01-01

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization, so the speed at which it can occur has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, the clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. The corresponding values for whales were as little as half as long (1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
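
    A back-of-envelope calculation makes the quoted rates concrete: spreading a 100-fold mass increase over 1.6 million generations corresponds to a minuscule per-generation factor. The helper below is an illustrative sketch using only the numbers quoted in the abstract.

```python
import math

# Back-of-envelope sketch of the rates quoted above: an N-fold mass increase
# over G generations implies a per-generation multiplicative factor of
# N**(1/G), which is extremely close to 1 even at "maximum" rates.

def per_generation_factor(fold_change, generations):
    return fold_change ** (1.0 / generations)

for fold, gens in [(100, 1.6e6), (1_000, 5.1e6), (5_000, 10e6)]:
    f = per_generation_factor(fold, gens)
    print(f"{fold}-fold in {gens:.1e} generations: x{f:.8f} per generation")
```

    Even the fastest terrestrial case amounts to a fractional mass change of only a few parts per million per generation, which is why these maximum macroevolutionary rates still look slow on microevolutionary timescales.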

  1. Performance of tuned liquid column dampers considering maximum liquid motion in seismic vibration control of structures

    Science.gov (United States)

    Chakraborty, Subrata; Debbarma, Rama; Marano, Giuseppe Carlo

    2012-03-01

    The optimum design of a tuned liquid column damper (TLCD) is usually performed by minimizing the maximum response of the structure subjected to stochastic earthquake load, without imposing any restriction on the possible maximum oscillation of the liquid within the vertical column. However, during strong earthquake motion, the oscillation of the liquid in the vertical column may equal or exceed the height of the container, so that the physical behavior of the hydraulic system changes and its efficiency is greatly reduced. The present study deals with the optimization of TLCD parameters to minimize the vibration of structures while addressing this limitation on excessive liquid displacement. This leads to an optimum TLCD design that not only assures the maximum possible performance in terms of vibration mitigation, but also respects the natural constraint against excessive lowering of the liquid in the vertical column. The constraint is imposed by limiting the maximum displacement of the liquid to the vertical height of the container. A numerical study elucidates the effect of the constraint condition on the optimum parameters and the overall performance of the TLCD protection system.

  2. Logical consistency and sum-constrained linear models

    NARCIS (Netherlands)

    van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.

    2006-01-01

    A topic that has received quite some attention in the seventies and eighties is logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a

  3. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_{M/B}(N/B)) I/Os for triangulating N points in the plane, where M...

  4. Solving constrained minimax problem via nonsmooth equations method

    Institute of Scientific and Technical Information of China (English)

    GUO Xiu-xia(郭修霞)

    2004-01-01

    A new nonsmooth equations model of the constrained minimax problem was derived, and the generalized Newton method was applied to solve this system of nonsmooth equations. This yields a new algorithm for solving the constrained minimax problem; the local superlinear and quadratic convergence of the algorithm is discussed.
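
    For orientation, a constrained minimax problem min_x max_i f_i(x) subject to g(x) ≤ 0 is often handled by the standard epigraph reformulation, minimizing an auxiliary variable t subject to f_i(x) ≤ t. The sketch below applies this baseline (not the paper's nonsmooth-Newton algorithm) to a hypothetical two-function instance:

```python
import numpy as np
from scipy.optimize import minimize

# Standard smooth (epigraph) reformulation of a constrained minimax problem:
#   min_x max_i f_i(x)  s.t.  x_2 >= 0.5
# becomes: min t  s.t.  f_i(x) <= t,  x_2 >= 0.5.
# Toy instance, not the paper's generalized-Newton method.

def fs(x):
    return np.array([(x[0] - 1.0) ** 2 + x[1] ** 2,
                     (x[0] + 1.0) ** 2 + x[1] ** 2])

cons = [
    {"type": "ineq", "fun": lambda z: z[2] - fs(z[:2])[0]},  # f_1(x) <= t
    {"type": "ineq", "fun": lambda z: z[2] - fs(z[:2])[1]},  # f_2(x) <= t
    {"type": "ineq", "fun": lambda z: z[1] - 0.5},           # x_2 >= 0.5
]
z0 = np.array([0.3, 1.0, 5.0])                               # (x_1, x_2, t)
res = minimize(lambda z: z[2], z0, constraints=cons, method="SLSQP")
x_opt, t_opt = res.x[:2], res.x[2]
print(x_opt, t_opt)        # analytic optimum: x = (0, 0.5), t = 1.25
```

    By symmetry the optimum balances the two objectives at x_1 = 0, with the inequality constraint active at x_2 = 0.5, so the minimax value is 1 + 0.25 = 1.25.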

  5. The Pendulum: From Constrained Fall to the Concept of Potential

    Science.gov (United States)

    Bevilacqua, Fabio; Falomo, Lidia; Fregonese, Lucio; Giannetto, Enrico; Giudice, Franco; Mascheretti, Paolo

    2006-01-01

    Kuhn underlined the relevance of Galileo's gestalt switch in the interpretation of a swinging body from constrained fall to time metre. But the new interpretation did not eliminate the older one. The constrained fall, both in the motion of pendulums and along inclined planes, led Galileo to the law of free fall. Experimenting with physical…

  6. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Toe joint polymer constrained prosthesis. 888.3720 Section 888.3720 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  7. Surface damping effect of anchored constrained viscoelastic layers on the flexural response of simply supported structures

    Science.gov (United States)

    Karim, K. R.; Chen, G. D.

    2012-02-01

    Viscoelastic (VE) materials are commonly used to control vibration-induced fatigue in airframes and to suppress general vibration in various structures. This study investigates the effects of anchored constrained VE layers on the flexural response of simply supported Euler beams or plate strips under base excitations. Emphasis is placed on the development of two surface damping treatments: one VE layer anchored at one end, and two VE layers anchored at their different ends. Each anchorage is realized with a thin stiff layer in tension, such as a fiber reinforced polymer sheet, bonded to the surface of a VE layer and anchored to one end of the beam for maximum shear deformation in the constrained VE layer. Non-uniform shear deformation in VE layers is taken into account in the new solution formulation. Sensitivity analyses are performed to understand and quantify the effects of various parameters on flexural responses of the structures. The minimum thickness of VE layers is mainly bounded by the relative stiffness between the VE layers and the constraining face layer. The performances of various configurations are compared and the two-end anchored configuration is found most effective in vibration suppression.

  8. Laterally constrained inversion for CSAMT data interpretation

    Science.gov (United States)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix: a weighting matrix is applied to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversion is to some extent insensitive to noise. We then re-invert two CSAMT datasets collected in a watershed and in a coal mine area in Northern China, respectively, and compare our results with those from previous inversions. The comparison in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that both methods deliver similarly good results but that the LCI algorithm runs much faster. The inversion results for the coal mine CSAMT survey reveal a conductive water-bearing zone that the previous inversions had missed, further demonstrating that the method presented in this paper works for CSAMT data inversion.
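
    The "laterally constrained" coupling itself can be sketched with a linear toy problem (hypothetical synthetic data, not CSAMT modeling): each station has its own model vector, all stations share one linearized kernel, and a first-difference penalty ties neighboring stations together so that all stations are inverted in a single least-squares system.

```python
import numpy as np

# Conceptual sketch of laterally constrained inversion (LCI), not the CSAMT
# code from the paper: per-station linear(ized) forward models d_i = G m_i
# plus a lateral roughness penalty coupling neighboring station models.

rng = np.random.default_rng(1)
n_sta, n_par = 6, 3
G = rng.standard_normal((5, n_par))           # shared (linearized) kernel
# smoothly varying true models across stations
m_true = np.cumsum(rng.standard_normal((n_sta, n_par)) * 0.1, axis=0) + 1.0
d = np.array([G @ m + 0.01 * rng.standard_normal(5) for m in m_true])

lam = 1.0                                     # lateral constraint weight
A_blocks = np.kron(np.eye(n_sta), G)          # block-diagonal forward operator
# first-difference operator tying parameter j at station i to station i+1
D = np.kron(np.diff(np.eye(n_sta), axis=0), np.eye(n_par))

A = np.vstack([A_blocks, lam * D])
rhs = np.concatenate([d.ravel(), np.zeros(D.shape[0])])
m_est = np.linalg.lstsq(A, rhs, rcond=None)[0].reshape(n_sta, n_par)
print(np.abs(m_est - m_true).max())
```

    Increasing lam trades data fit for lateral smoothness; in the paper's setting the Jacobian rows would additionally be rescaled by a weighting matrix to balance parameter sensitivities.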

  9. Constraining the Evolution of Poor Clusters

    Science.gov (United States)

    Broming, Emma J.; Fuse, C. R.

    2012-01-01

    There currently exists no method by which to quantify the evolutionary state of poor clusters (PCs). Research by Broming & Fuse (2010) demonstrated that the evolution of Hickson compact groups (HCGs) is constrained by the correlation between the X-ray luminosities of point sources and diffuse gas. The current investigation adopts an analogous approach to understanding PCs. Plionis et al. (2009) proposed a theory to define the evolution of poor clusters, asserting that cannibalism of galaxies causes a cluster to become more spherical and to develop increased velocity dispersion, X-ray temperature and gas luminosity. Data used to quantify the evolution of the poor clusters were compiled across multiple wavelengths. The sample includes 162 objects from the WBL catalogue (White et al. 1999), 30 poor clusters in the Chandra X-ray Observatory archive, and 15 Abell poor clusters observed with BAX (Sadat et al. 2004). Preliminary results indicate a weak correlation between the cluster velocity dispersion and the X-ray gas and point-source luminosities. An evolutionary trend was observed for multiple correlations detailed herein. The current study continues the work of Broming & Fuse, examining point sources and their properties to determine the evolutionary stage of compact groups, poor clusters, and their proposed remnants, isolated ellipticals and fossil groups. Preliminary data suggest that compact groups and their high-mass counterparts, poor clusters, evolve along tracks identified in the X-ray gas - X-ray point source relation. While compact groups likely evolve into isolated elliptical galaxies, fossil groups display properties that suggest they are the remains of fully coalesced poor clusters.

  10. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  11. COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT

    Directory of Open Access Journals (Sweden)

    PETRU SERGIU SERBAN

    2016-06-01

    Full Text Available Ship squat is a combined effect of the increase of a ship’s draft and trim due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between these squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
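    As an illustration of the kind of formula being compared, a commonly quoted simplified Barrass estimate can be sketched as follows. This is an assumption on our part, not a formula reproduced from this record; the paper compares several variants whose exact forms are not given here. Block coefficient C_b is dimensionless and speed is in knots; the result is in metres.

    ```python
    # Hedged sketch: simplified Barrass-style squat estimates (assumed
    # forms, not taken from this record). c_b is the block coefficient,
    # v_knots the ship speed through the water in knots; result in metres.
    def squat_open_water(c_b, v_knots):
        return c_b * v_knots**2 / 100.0

    def squat_confined(c_b, v_knots):
        # Confined channels are commonly quoted as roughly twice the
        # open-water value in this simplified form.
        return 2.0 * c_b * v_knots**2 / 100.0

    print(squat_confined(0.8, 6))  # 2 * 0.8 * 36 / 100 = 0.576 m
    ```

    Such simplified forms ignore the blockage factor of the canal cross-section, which the full formulas account for.
    
    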

  12. A maximum entropy framework for nonexponential distributions

    OpenAIRE

    Peterson, Jack; Dixit, Purushottam D.; Dill, Ken A.

    2013-01-01

    Many statistical distributions, particularly among social and biological systems, have “heavy tails,” which are situations where rare events are not as improbable as would have been guessed from more traditional statistics. Heavy-tailed distributions are the basis for the phrase “the rich get richer.” Here, we propose a basic principle underlying systems with heavy-tailed distributions. We show that it is the same principle (maximum entropy) used in statistical physics and statistics to estim...

  13. On the maximum drawdown during speculative bubbles

    CERN Document Server

    Rotundo, G; Navarra, Mauro; Rotundo, Giulia

    2006-01-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes within the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed; it is the core of the risk measure estimated here.
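    The drawdown quantities examined in this record can be computed directly from a price series. A minimal sketch (illustrative code and prices, not from the paper):

    ```python
    # Drawdown: relative fall from the most recent running peak.
    # Maximum drawdown: the largest such fall over the whole series.
    def drawdowns(prices):
        peak = prices[0]
        out = []
        for p in prices:
            peak = max(peak, p)           # running peak so far
            out.append((peak - p) / peak)  # relative fall from that peak
        return out

    def max_drawdown(prices):
        return max(drawdowns(prices))

    prices = [100, 120, 90, 95, 130, 80]
    print(max_drawdown(prices))  # (130 - 80) / 130 ≈ 0.3846
    ```

    Drawdown duration, also analyzed in the paper, would additionally track how long the series stays below each peak.
    
    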

  14. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model.

  15. Dynamical maximum entropy approach to flocking

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  16. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  17. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  18. A new unfolding code combining maximum entropy and maximum likelihood for neutron spectrum measurement

    International Nuclear Information System (INIS)

    We present a new spectrum unfolding code, the Maximum Entropy and Maximum Likelihood Unfolding Code (MEALU), based on the maximum likelihood method combined with the maximum entropy method, which can determine a neutron spectrum without requiring an initial guess spectrum. The Normal or Poisson distributions can be used for the statistical distribution. MEALU can treat full covariance data for a measured detector response and response function. The algorithm was verified through an analysis of mock-up data and its performance was checked by applying it to measured data. The results for measured data from the Joyo experimental fast reactor were also compared with those obtained by the conventional J-log method for neutron spectrum adjustment. It was found that MEALU has potential advantages over conventional methods with regard to preparation of a priori information and uncertainty estimation. (author)

  19. Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean

    Science.gov (United States)

    Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.

    2016-04-01

    Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty of this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with monthly peaks up to ~17 Gg/month released into the Southern Oceans during the Last Glacial Maximum (LGM).

  20. POST-MAXIMUM NEAR-INFRARED SPECTRA OF SN 2014J

    DEFF Research Database (Denmark)

    Sand, D. J.; Hsiao, E. Y.; Banerjee, D. P. K.;

    2016-01-01

    We present near-infrared (NIR) spectroscopic and photometric observations of the nearby Type Ia SN 2014J. The 17 NIR spectra span epochs from +15.3 to +92.5 days after B-band maximum light, while the JHK(s) photometry include epochs from -10 to +71 days. These data are used to constrain the progenitor system of SN 2014J. We find no evidence for Paβ emission lines in our post-maximum spectra, with a rough hydrogen mass limit of ≲0.1 M⊙, which is consistent with previous limits in SN 2014J from late-time optical spectra of the Hα line. Nonetheless, the growing data set of high-quality NIR spectra holds the promise of very useful hydrogen constraints.

  1. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
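    The final step described in this record, PCA of an estimated correlation matrix, reduces to an eigendecomposition. A generic sketch (the matrix below is illustrative, not data from the paper):

    ```python
    import numpy as np

    # PCA of a correlation matrix: the eigenvectors are the modes of
    # correlated variation, the eigenvalues their relative weights.
    corr = np.array([[1.0, 0.8, 0.1],
                     [0.8, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])

    eigvals, eigvecs = np.linalg.eigh(corr)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # reorder to descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # eigvecs[:, 0] is the dominant mode of correlation; color-coding its
    # components onto a structure is the idea behind the paper's "PCA plots".
    print(eigvals)
    ```

    The paper's contribution lies in how the correlation matrix itself is estimated (by maximum likelihood over a structural superposition), not in the PCA step sketched here.
    
    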

  2. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ -->e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ -->e+ ν, π+ -->μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  3. Maximum Flux Transition Paths of Conformational Change

    CERN Document Server

    Zhao, Ruijun; Skeel, Robert D

    2009-01-01

    Given two metastable states A and B of a biomolecular system, the problem is to calculate the likely paths of the transition from A to B. Such a calculation is more informative and more manageable if done for a reduced set of collective variables chosen so that paths cluster in collective variable space. The computational task becomes that of computing the "center" of such a cluster. A good way to define the center employs the concept of a committor, whose value at a point in collective variable space is the probability that a trajectory at that point will reach B before A. The committor "foliates" the transition region into a collection of isocommittors. The maximum flux transition path is defined as a path that crosses each isocommittor at a point which (locally) has the highest crossing rate of distinct reactive trajectories. (This path is different from that of the MaxFlux method of Huo and Straub.) To make the calculation tractable, three approximations are introduced. It is shown that a maximum flux tra...

  4. Maximum neighborhood margin criterion in face recognition

    Science.gov (United States)

    Han, Pang Ying; Teoh, Andrew Beng Jin

    2009-04-01

    Feature extraction is a data analysis technique devoted to removing redundancy and extracting the most discriminative information. In face recognition, feature extractors are normally plagued with small sample size problems, in which the total number of training images is much smaller than the image dimensionality. Recently, an optimized facial feature extractor, maximum marginal criterion (MMC), was proposed. MMC computes an optimized projection by solving the generalized eigenvalue problem in a standard form that is free from inverse matrix operation, and thus it does not suffer from the small sample size problem. However, MMC is essentially a linear projection technique that relies on facial image pixel intensity to compute within- and between-class scatters. The nonlinear nature of faces restricts the discrimination of MMC. Hence, we propose an improved MMC, namely maximum neighborhood margin criterion (MNMC). Unlike MMC, which preserves global geometric structures that do not perfectly describe the underlying face manifold, MNMC seeks a projection that preserves local geometric structures via neighborhood preservation. This objective function leads to the enhancement of classification capability, and this is testified by experimental results. MNMC shows its performance superiority compared to MMC, especially in pose, illumination, and expression (PIE) and face recognition grand challenge (FRGC) databases.

  5. Constraining the margins of Neoproterozoic ice masses: depositional signature, palaeoflow and glaciodynamics

    Science.gov (United States)

    Busfield, Marie; Le Heron, Daniel

    2016-04-01

    The scale and distribution of Neoproterozoic ice masses remains poorly understood. The classic Snowball Earth hypothesis argues for globally extensive ice sheets, separated by small ocean refugia, yet the positions of palaeo-ice sheet margins and the extent of these open water regions are unknown. Abundant evidence worldwide for multiple cycles of ice advance and recession is suggestive of much more dynamic mass balance changes than previously predicted. Sedimentological analysis enables an understanding of the changing ice margin position to be gained through time, in some cases allowing it to be mapped. Where the maximum extent of ice advance varies within a given study area, predictions can also be made on the morphology of the ice margin, and the underlying controls on this morphology e.g. basin configuration. This can be illustrated using examples from the Neoproterozoic Kingston Peak Formation in the Death Valley region of western USA. Throughout the Sperry Wash, northern Kingston Range and southern Kingston Range study sites the successions show evidence of multiple cycles of ice advance and retreat, but the extent of maximum ice advance is extremely variable, reaching ice-contact conditions at Sperry Wash but only ice-proximal settings in the most distal southern Kingston Range. The overall advance is also much more pronounced at Sperry Wash, from ice-distal to ice-contact settings, as compared to ice-distal to ice-proximal settings in the southern Kingston Range. Therefore, the position of the ice margin can be located at the Sperry Wash study site, where the more pronounced progradation is used to argue for topographically constrained ice, feeding the unconstrained shelf through the northern into the southern Kingston Range. This raises the question as to whether Neoproterozoic ice masses could be defined as topographically constrained ice caps, or larger ice sheets feeding topographically constrained outlet glaciers.

  6. How We Can Constrain Aerosol Type Globally

    Science.gov (United States)

    Kahn, Ralph

    2016-01-01

    In addition to aerosol number concentration, aerosol size and composition are essential attributes needed to adequately represent aerosol-cloud interactions (ACI) in models. As the nature of ACI varies enormously with environmental conditions, global-scale constraints on particle properties are indicated. And although advanced satellite remote-sensing instruments can provide categorical aerosol-type classification globally, detailed particle microphysical properties are unobtainable from space with currently available or planned technologies. For the foreseeable future, only in situ measurements can constrain particle properties at the level-of-detail required for ACI, as well as to reduce uncertainties in regional-to-global-scale direct aerosol radiative forcing (DARF). The limitation of in situ measurements for this application is sampling. However, there is a simplifying factor: for a given aerosol source, in a given season, particle microphysical properties tend to be repeatable, even if the amount varies from day-to-day and year-to-year, because the physical nature of the particles is determined primarily by the regional environment. So, if the PDFs of particle properties from major aerosol sources can be adequately characterized, they can be used to add the missing microphysical detail to the better-sampled satellite aerosol-type maps. This calls for Systematic Aircraft Measurements to Characterize Aerosol Air Masses (SAM-CAAM). We are defining a relatively modest and readily deployable, operational aircraft payload capable of measuring key aerosol absorption, scattering, and chemical properties in situ, and a program for characterizing statistically these properties for the major aerosol air mass types, at a level-of-detail unobtainable from space. It is aimed at: (1) enhancing satellite aerosol-type retrieval products with better aerosol climatology assumptions, and (2) improving the translation between satellite-retrieved aerosol optical properties and

  7. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum passenger seating configuration. 23... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum passenger seating configuration must be established....

  8. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake...

  9. Warming, euxinia and sea level rise during the Paleocene–Eocene Thermal Maximum on the Gulf Coastal Plain: implications for ocean oxygenation and nutrient cycling

    NARCIS (Netherlands)

    Sluijs, A.; van Roij, L.; Harrington, G.J.; Schouten, S.; Sessa, J.A.; LeVay, L.J.; Reichart, G.-J.; Slomp, C.P.

    2014-01-01

    The Paleocene–Eocene Thermal Maximum (PETM, ~56 Ma) was a ~200 kyr episode of global warming, associated with massive injections of 13C-depleted carbon into the ocean–atmosphere system. Although climate change during the PETM is relatively well constrained, effects on marine oxygen concentrations and nutrient cycling...

  10. Warming, euxinia and sea level rise during the Paleocene–Eocene Thermal Maximum on the Gulf Coastal Plain: implications for ocean oxygenation and nutrient cycling

    NARCIS (Netherlands)

    Sluijs, A.; van Roij, L.; Harrington, G.J.; Schouten, S.; Sessa, J.A.; LeVay, L.J.; Reichart, G.-J.; Slomp, C.P.

    2014-01-01

    The Paleocene–Eocene Thermal Maximum (PETM, ~ 56 Ma) was a ~ 200 kyr episode of global warming, associated with massive injections of 13C-depleted carbon into the ocean–atmosphere system. Although climate change during the PETM is relatively well constrained, effects on marine oxygen concentrations

  11. Generation of Granulites Constrained by Thermal Modeling

    Science.gov (United States)

    Depine, G. V.; Andronicos, C. L.; Phipps-Morgan, J.

    2006-12-01

    The heat source needed to generate granulite facies metamorphism is still an unsolved problem in geology. There is a close spatial relationship between granulite terrains and extensive silicic plutonism, suggesting that heat advection by melts is critical to their formation. To investigate the role of heat advection by melt in the generation of granulites, we use numerical 1-D models which include the movement of melt from the base of the crust to the middle crust. The model is in part constrained by petrological observations from the Coast Plutonic Complex (CPC) in British Columbia, Canada at ~54° N, where migmatite and granulite are widespread. The model takes into account time-dependent heat conduction and advection of melts generated at the base of the crust. The model starts with a crust of 55 km, consistent with petrologic and geochemical data from the CPC. The lower crust is assumed to be amphibolite in composition, consistent with seismologic and geochemical constraints for the CPC. An initial geothermal gradient estimated from metamorphic P-T-t paths in this region is ~37°C/km, hotter than normal geothermal gradients. The parameters used for the model are a thermal conductivity of 2.5 W/m°C, a crustal density of 2700 kg/m3 and a heat capacity of 1170 J/kg°C. Using the above starting conditions, a temperature of 1250°C is assumed for the mantle below 55 km, equivalent to placing asthenosphere in contact with the base of the crust to simulate delamination, basaltic underplating and/or asthenospheric exposure by a sudden steepening of the slab. This condition at 55 km results in melting of the amphibolite in the lower crust. Once a melt fraction of 10% is reached, the melt is allowed to migrate to a depth of 13 km, while material at 13 km is displaced downwards to replace the ascending melts. The steady-state profile has a very steep geothermal gradient of more than 50°C/km from the surface to 13 km, consistent with the generation of andalusite

  12. Post-maximum near infrared spectra of SN 2014J: A search for interaction signatures

    CERN Document Server

    Sand, D J; Banerjee, D P K; Marion, G H; Diamond, T R; Joshi, V; Parrent, J T; Phillips, M M; Stritzinger, M D; Venkataraman, V

    2016-01-01

    We present near infrared (NIR) spectroscopic and photometric observations of the nearby Type Ia SN 2014J. The seventeen NIR spectra span epochs from +15.3 to +92.5 days after $B$-band maximum light, while the $JHK_s$ photometry include epochs from $-$10 to +71 days. These data are used to constrain the progenitor system of SN 2014J utilizing the Pa$\beta$ line, following recent suggestions that this phase period and the NIR in particular are excellent for constraining the amount of swept-up hydrogen-rich material associated with a non-degenerate companion star. We find no evidence for Pa$\beta$ emission lines in our post-maximum spectra, with a rough hydrogen mass limit of $\lesssim$0.1 $M_{\odot}$, which is consistent with previous limits in SN 2014J from late-time optical spectra of the H$\alpha$ line. Nonetheless, the growing dataset of high-quality NIR spectra holds the promise of very useful hydrogen constraints.

  13. Diffusivity Maximum in a Reentrant Nematic Phase

    Directory of Open Access Journals (Sweden)

    Martin Schoen

    2012-06-01

    Full Text Available We report molecular dynamics simulations of confined liquid crystals using the Gay–Berne–Kihara model. Upon isobaric cooling, the standard sequence of isotropic–nematic–smectic A phase transitions is found. Upon further cooling a reentrant nematic phase occurs. We investigate the temperature dependence of the self-diffusion coefficient of the fluid in the nematic, smectic and reentrant nematic phases. We find a maximum in diffusivity upon isobaric cooling. Diffusion increases dramatically in the reentrant phase due to the high orientational molecular order. As the temperature is lowered, the diffusion coefficient follows an Arrhenius behavior. The activation energy of the reentrant phase is found in reasonable agreement with the reported experimental data. We discuss how repulsive interactions may be the underlying mechanism that could explain the occurrence of reentrant nematic behavior for polar and non-polar molecules.

  14. Video segmentation using Maximum Entropy Model

    Institute of Scientific and Technical Information of China (English)

    QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei

    2005-01-01

    Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.

  15. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  16. Shape Modelling Using Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2001-01-01

    This paper addresses the problem of generating a low dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set; this situation occurs when the shapes... Both these types of knowledge may be used to define Shape Maximum Autocorrelation Factors. The resulting point distribution models are compared to ordinary principal components analysis using leave-one-out validation.

  17. Co-Clustering under the Maximum Norm

    Directory of Open Access Journals (Sweden)

    Laurent Bulteau

    2016-02-01

    Full Text Available Co-clustering, that is, partitioning a numerical matrix into “homogeneous” submatrices, has many applications ranging from bioinformatics to election analysis. Many interesting variants of co-clustering are NP-hard. We focus on the basic variant of co-clustering where the homogeneity of a submatrix is defined in terms of minimizing the maximum distance between two entries. In this context, we spot several NP-hard cases, as well as a number of relevant polynomial-time solvable special cases, thus charting the border of tractability for this challenging data clustering problem. For instance, we provide polynomial-time solvability when having to partition the rows and columns into two subsets each (meaning that one obtains four submatrices). When partitioning rows and columns into three subsets each, however, we encounter NP-hardness, even for input matrices containing only values from {0, 1, 2}.
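    The homogeneity measure described in this record, the maximum distance between any two entries of a submatrix, reduces to the range (max minus min) of its values. A small illustrative sketch (the matrix and index sets are made up, not from the paper):

    ```python
    # Cost of a submatrix under the maximum norm: the largest pairwise
    # distance between entries, i.e. max(entries) - min(entries).
    # Co-clustering seeks row/column partitions minimizing the worst cost.
    def submatrix_cost(matrix, rows, cols):
        vals = [matrix[r][c] for r in rows for c in cols]
        return max(vals) - min(vals)

    M = [[0, 2, 1],
         [1, 2, 0]]
    # Rows {0,1} with columns {0,2} select entries {0, 1, 1, 0}.
    print(submatrix_cost(M, [0, 1], [0, 2]))  # 1 - 0 = 1
    ```

    A full co-cluster solver would search over partitions of rows and columns to minimize the maximum of this cost over all resulting submatrices.
    
    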

  18. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed... any influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the cointegration degree but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegration does...

  19. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Full Text Available Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method utilizing Second Order Cone Programming (SOCP) with the maximum correction weight constraint, the maximum residual response constraint as well as the weight splitting constraint has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.

  20. Spacecraft Maximum Allowable Concentrations for Airborne Contaminants

    Science.gov (United States)

    James, John T.

    2008-01-01

    The enclosed table lists official spacecraft maximum allowable concentrations (SMACs), which are guideline values set by the NASA/JSC Toxicology Group in cooperation with the National Research Council Committee on Toxicology (NRCCOT). These values should not be used for situations other than human space flight without careful consideration of the criteria used to set each value. The SMACs take into account a number of unique factors such as the effect of space-flight stress on human physiology, the uniform good health of the astronauts, and the absence of pregnant or very young individuals. Documentation of the values is given in a five-volume series of books entitled "Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants" published by the National Academy Press, Washington, D.C. These books can be viewed electronically at http://books.nap.edu/openbook.php?record_id=9786&page=3. Short-term (1 and 24 hour) SMACs are set to manage accidental releases aboard a spacecraft and permit risk of minor, reversible effects such as mild mucosal irritation. In contrast, the long-term SMACs are set to fully protect healthy crewmembers from adverse effects resulting from continuous exposure to specific air pollutants for up to 1000 days. Crewmembers with allergies or unusual sensitivity to trace pollutants may not be afforded complete protection, even when long-term SMACs are not exceeded. Crewmember exposures involve a mixture of contaminants, each at a specific concentration (C(sub n)). These contaminants could interact to elicit symptoms of toxicity even though individual contaminants do not exceed their respective SMACs. The air quality is considered acceptable when the toxicity index (T(sub grp)) for each toxicological group of compounds is less than 1, where T(sub grp) is calculated as follows: T(sub grp) = C(sub 1)/SMAC(sub 1) + C(sub 2)/SMAC(sub 2) + ... + C(sub n)/SMAC(sub n).
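    The group toxicity index defined at the end of the abstract is a simple normalized sum, which can be written directly (function and variable names are illustrative):

    ```python
    def toxicity_index(concentrations, smacs):
        """Group toxicity index T_grp = sum(C_i / SMAC_i).

        Air quality for the group is considered acceptable when T_grp < 1.
        """
        return sum(c / s for c, s in zip(concentrations, smacs))

    # Two contaminants, each at half its SMAC: the group index is 1.0, so
    # the mixture sits at the acceptability limit even though neither
    # contaminant individually exceeds its SMAC.
    t = toxicity_index([0.5, 1.0], [1.0, 2.0])
    ```
    
    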

  1. A connection theory for a nonlinear differential constrained system

    Institute of Scientific and Technical Information of China (English)

    许志新; 郭永新; 吴炜

    2002-01-01

    An Ehresmann connection on a constrained state bundle defined by nonlinear differential constraints is constructed for nonlinear nonholonomic systems. A set of differential constraints is integrable if and only if the curvature of the Ehresmann connection vanishes. Based on a geometric interpretation of the d-δ commutation relations in constrained dynamics given in this paper, the complete integrability conditions for the differential constraints are proven to be equivalent to three requirements on the conditional variation in mechanics: (1) the variations belong to the constrained manifold; (2) the time derivative commutes with the variational operator; (3) the variations satisfy Chetaev's conditions.

  2. Solving constrained traveling salesman problems by genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    WU Chunguo; LIANG Yanchun; LEE Heowpueh; LU Chun; LIN Wuzhong

    2004-01-01

    Three kinds of constrained traveling salesman problems (TSP) arising from application problems, namely the open route TSP, the end-fixed TSP, and the path-constrained TSP, are proposed. The corresponding approaches based on modified genetic algorithms (GA) for solving these constrained TSPs are presented. Numerical experiments demonstrate that the algorithm for the open route TSP shows its advantages when an open route is required, the algorithm for the end-fixed TSP can deal with route optimization under the constraint of fixed ends effectively, and the algorithm for the path-constrained TSP can benefit traffic problems in which travel between certain pairs of cities is not possible.
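    The open-route variant can be illustrated with a compact GA sketch: the tour is scored as a path (no closing edge back to the start). The city coordinates, operators, and parameters below are illustrative choices, not the paper's exact modified-GA design.

    ```python
    import random

    # Hypothetical city layout for an *open route* TSP (path, not cycle).
    CITIES = [(0, 0), (1, 5), (2, 3), (5, 1), (6, 4), (8, 0)]

    def path_length(route):
        # Open route: sum consecutive legs only; no edge back to the start.
        return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                    (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
                   for a, b in zip(route, route[1:]))

    def order_crossover(p1, p2):
        # OX: copy a slice from p1, fill the remaining cities in p2's order.
        i, j = sorted(random.sample(range(len(p1)), 2))
        hole = set(p1[i:j])
        fill = [c for c in p2 if c not in hole]
        return fill[:i] + p1[i:j] + fill[i:]

    def evolve(pop_size=40, generations=200, seed=1):
        random.seed(seed)
        n = len(CITIES)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=path_length)
            elite = pop[: pop_size // 2]          # elitist selection
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = random.sample(elite, 2)
                child = order_crossover(a, b)
                if random.random() < 0.3:          # swap mutation
                    i, j = random.sample(range(n), 2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = elite + children
        return min(pop, key=path_length)

    best = evolve()
    ```

    The end-fixed and path-constrained variants would only change the fitness function (pinning the route's endpoints, or penalizing forbidden city pairs), which is the appeal of the GA encoding.
    
    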

  3. Processing Constrained K Closest Pairs Query in Spatial Databases

    Institute of Scientific and Technical Information of China (English)

    LIU Xiaofeng; LIU Yunsheng; XIAO Yingyuan

    2006-01-01

    In this paper, the constrained K closest pairs query is introduced, which retrieves the K closest pairs satisfying a given spatial constraint from two datasets. For datasets indexed by R-trees in spatial databases, three algorithms are presented for answering this kind of query. Among them, the two-phase Range+Join and Join+Range algorithms adopt the strategy of changing the execution order of the range and closest-pairs queries, while the constrained heap-based algorithm utilizes extended distance functions to prune the search space and minimize the pruning distance. Experimental results show that the constrained heap-based algorithm has better applicability and performance than the two-phase algorithms.
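    The query semantics can be made concrete with a naive reference implementation: return the K closest pairs (p, q) from two datasets such that both points satisfy a spatial constraint, here a query rectangle. This exhaustive scan with a heap is only a stand-in; the paper's algorithms prune via R-tree indexes. All names and data are illustrative.

    ```python
    import heapq

    def constrained_k_closest_pairs(P, Q, k, rect):
        """K closest pairs (p, q), p in P, q in Q, both inside rect."""
        (x0, y0), (x1, y1) = rect

        def inside(p):
            return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

        def dist2(p, q):
            return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

        candidates = ((dist2(p, q), p, q)
                      for p in P if inside(p)
                      for q in Q if inside(q))
        return heapq.nsmallest(k, candidates)

    P = [(0, 0), (2, 2), (9, 9)]
    Q = [(1, 0), (2, 3), (9, 8)]
    # The pair near (9, 9) is excluded by the constraint rectangle.
    pairs = constrained_k_closest_pairs(P, Q, 2, ((0, 0), (5, 5)))
    ```
    
    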

  4. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since such characters often have a nearly constant stroke width. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.

  5. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    CERN Document Server

    Hall, Alex

    2016-01-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...

  6. Mantle Convection Models Constrained by Seismic Tomography

    Science.gov (United States)

    Durbin, C. J.; Shahnas, M.; Peltier, W. R.; Woodhouse, J. H.

    2011-12-01

    Perovskite-post-Perovskite transition (Murakami et al., 2004, Science) that appears to define the D" layer at the base of the mantle. In this initial phase of what will be a longer term project we are assuming that the internal mantle viscosity structure is spherically symmetric and compatible with the recent inferences of Peltier and Drummond (2010, Geophys. Res. Lett.) based upon glacial isostatic adjustment and Earth rotation constraints. The internal density structure inferred from the tomography model is assimilated into the convection model by continuously "nudging" the modification to the input density structure predicted by the convection model back towards the tomographic constraint at the long wavelengths that the tomography specifically resolves, leaving the shorter wavelength structure free to evolve, essentially "slaved" to the large scale structure. We focus upon the ability of the nudged model to explain observed plate velocities, including both their poloidal (divergence related) and toroidal (strike slip fault related) components. The true plate velocity field is then used as an additional field towards which the tomographically constrained solution is nudged.

  7. Carbon-constrained scenarios. Final report

    International Nuclear Information System (INIS)

    This report provides the results of the study entitled 'Carbon-Constrained Scenarios' that was funded by FONDDRI from 2004 to 2008. The study was achieved in four steps: (i) Investigating the stakes of a strong carbon constraint for the industries participating in the study, not only looking at the internal decarbonization potential of each industry but also exploring the potential shifts of the demand for industrial products. (ii) Developing a hybrid modelling platform based on a tight dialog between the sectoral energy model POLES and the macro-economic model IMACLIM-R, in order to achieve a consistent assessment of the consequences of an economy-wide carbon constraint on energy-intensive industrial sectors, while taking into account technical constraints, barriers to the deployment of new technologies and general economic equilibrium effects. (iii) Producing several scenarios up to 2050 with different sets of hypotheses concerning the driving factors for emissions - in particular the development styles. (iv) Establishing an iterative dialog between researchers and industry representatives on the results of the scenarios so as to improve them, but also to facilitate the understanding and the appropriate use of these results by the industrial partners. This report provides the results of the different scenarios computed in the course of the project. It is a partial synthesis of the work that has been accomplished and of the numerous exchanges that this study has induced between modellers and stakeholders. The first part was written in April 2007 and describes the first reference scenario and the first mitigation scenario designed to achieve stabilization at 450 ppm CO2 at the end of the 21st century. This scenario has been called 'mimetic' because it has been built on the assumption that the ambitious climate policy would coexist with a progressive convergence of development paths toward the current paradigm of industrialized countries: urban sprawl, general

  8. Free and constrained symplectic integrators for numerical general relativity

    CERN Document Server

    Richter, Ronny

    2008-01-01

    We consider symplectic time integrators in numerical General Relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively 1+1-dimensional versions of Einstein's equations, which allow one to investigate a perturbed Minkowski problem and the Schwarzschild space-time. We compare symplectic and non-symplectic integrators for free evolution, showing very diffe...
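    The Stoermer-Verlet scheme named above can be sketched on a toy Hamiltonian, H = p²/2 + q²/2 (a harmonic oscillator standing in for the ADM-like equations): symplecticity shows up as a bounded energy error over long integrations rather than a secular drift.

    ```python
    # Explicit Stoermer-Verlet (velocity Verlet) on H = p^2/2 + q^2/2.
    # Toy stand-in for the paper's setting; dt and step count are arbitrary.
    def stoermer_verlet(q, p, dt, steps, force=lambda q: -q):
        for _ in range(steps):
            p_half = p + 0.5 * dt * force(q)   # half kick
            q = q + dt * p_half                # drift
            p = p_half + 0.5 * dt * force(q)   # half kick
        return q, p

    q0, p0 = 1.0, 0.0
    energy0 = 0.5 * (p0 ** 2 + q0 ** 2)
    q, p = stoermer_verlet(q0, p0, dt=0.01, steps=100_000)
    energy = 0.5 * (p ** 2 + q ** 2)
    drift = abs(energy - energy0)              # stays O(dt^2), not growing
    ```

    A non-symplectic scheme such as explicit Euler would show the energy growing without bound over the same 100,000 steps, which is the comparison the abstract alludes to.
    
    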

  9. FXR agonist activity of conformationally constrained analogs of GW 4064

    Energy Technology Data Exchange (ETDEWEB)

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y.; Caldwell, Richard D.; Caravella, Justin A.; Chen, Lihong; Creech, Katrina L.; Deaton, David N.; Madauss, Kevin P.; Marr, Harry B.; McFadyen, Robert B.; Miller, Aaron B.; Navas, III, Frank; Parks, Derek J.; Spearing, Paul K.; Todd, Dan; Williams, Shawn P.; Wisely, G. Bruce; (GSKNC)

    2010-09-27

    Two series of conformationally constrained analogs of the FXR agonist GW 4064 1 were prepared. Replacement of the metabolically labile stilbene with either benzothiophene or naphthalene rings led to the identification of potent full agonists 2a and 2g.

  10. Time-dependent constrained Hamiltonian systems and Dirac brackets

    Energy Technology Data Exchange (ETDEWEB)

    Leon, Manuel de [Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Marrero, Juan C. [Departamento de Matematica Fundamental, Facultad de Matematicas, Universidad de La Laguna, La Laguna, Tenerife, Canary Islands (Spain); Martin de Diego, David [Departamento de Economia Aplicada Cuantitativa, Facultad de Ciencias Economicas y Empresariales, UNED, Madrid (Spain)

    1996-11-07

    In this paper the canonical Dirac formalism for time-dependent constrained Hamiltonian systems is globalized. A time-dependent Dirac bracket which reduces to the usual one for time-independent systems is introduced. (author)
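    For reference, the usual time-independent Dirac bracket to which the paper's time-dependent bracket reduces has the standard form (standard notation for second-class constraints φ_a, not taken from the abstract):

    ```latex
    \{f,g\}_{D} \;=\; \{f,g\} \;-\; \{f,\phi_a\}\,(C^{-1})^{ab}\,\{\phi_b,g\},
    \qquad C_{ab} = \{\phi_a,\phi_b\},
    ```

    where {·,·} denotes the canonical Poisson bracket and C_{ab} is the (invertible) matrix of Poisson brackets of the second-class constraints.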

  11. Time-dependent constrained Hamiltonian systems and Dirac brackets

    International Nuclear Information System (INIS)

    In this paper the canonical Dirac formalism for time-dependent constrained Hamiltonian systems is globalized. A time-dependent Dirac bracket which reduces to the usual one for time-independent systems is introduced. (author)

  12. Bayesian item selection in constrained adaptive testing using shadow tests

    OpenAIRE

    Bernard P. Veldkamp

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item selection process. The Shadow Test Approach is a general purpose algorithm for administering constrained CAT. In this paper it is shown how the approac...

  13. Kaon photoproduction on the nucleon with constrained parameters

    CERN Document Server

    Nelson, R

    2009-01-01

    The new experimental data on kaon photoproduction on the nucleon, gamma p -> K+ Lambda, have been analyzed by means of a multipole model. Unlike previous models, in this analysis the resonance decay widths are constrained to the values given by the Particle Data Group (PDG). The result indicates that constraining these parameters to the PDG values could dramatically change the conclusions about the important resonances in this reaction found in previous studies.

  14. Constrained multi-degree reduction with respect to Jacobi norms

    KAUST Repository

    Ait-Haddou, Rachid

    2015-12-31

    We show that a weighted least squares approximation of Bézier coefficients with factored Hahn weights provides the best constrained polynomial degree reduction with respect to the Jacobi L2-norm. This result affords generalizations to many previous findings in the field of polynomial degree reduction. A solution method to the constrained multi-degree reduction with respect to the Jacobi L2-norm is presented.

  15. Constraining the Initial Phase in Water-Fat Separation

    OpenAIRE

    Bydder, Mark; Yokoo, Takeshi; Yu, Huanzhou; Carl, Michael; Reeder, Scott B.; Sirlin, Claude B.

    2010-01-01

    An algorithm is described for use in chemical shift based water-fat separation to constrain the phase of both species to be equal at an echo-time of zero. This constraint is physically reasonable since the initial phase should be a property of the excitation pulse and receiver coil only. The advantages of phase-constrained water-fat separation, namely improved noise performance and/or reduced data requirements (fewer echoes), are demonstrated in simulations and experiments.

  16. Constrained minimization of smooth functions using a genetic algorithm

    Science.gov (United States)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
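    The conversion described above can be illustrated on a toy problem (not the paper's aerospace-plane application, and using a local optimizer in place of the GA): the necessary conditions for minimizing f(x) subject to g(x) = 0 are ∇f + λ∇g = 0 together with g(x) = 0, and minimizing the squared norm of their stacked residual is an unconstrained problem in (x, λ).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: minimize f = x0^2 + x1^2 subject to g = x0 + x1 - 1 = 0.
    # Known solution: x = (0.5, 0.5) with multiplier lam = -1.
    def residual_norm2(z):
        x0, x1, lam = z
        r = np.array([
            2 * x0 + lam,        # d/dx0 of f + lam * g
            2 * x1 + lam,        # d/dx1 of f + lam * g
            x0 + x1 - 1.0,       # the constraint itself
        ])
        return r @ r             # zero exactly at a KKT point

    res = minimize(residual_norm2, x0=np.zeros(3), method="BFGS")
    x_opt = res.x[:2]
    ```

    Any zero of this residual is a stationary point of the constrained problem, which is why the residual norm is a natural fitness function for a global method such as a GA.
    
    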

  17. Geometric constrained variational calculus. II: The second variation (Part I)

    Science.gov (United States)

    Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico

    2016-10-01

    Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.

  18. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.

  19. Canonical symmetry properties of the constrained singular generalized mechanical system

    Institute of Scientific and Technical Information of China (English)

    李爱民; 江金环; 李子平

    2003-01-01

    Based on the generalized Appell-Chetaev constraint conditions, and taking into account the inherent constraints of a singular Lagrangian, the generalized canonical equations for a general mechanical system with a singular higher-order Lagrangian and subsidiary constraints are formulated. The canonical symmetries in phase space for such a system are studied, and the Noether theorem and its inverse theorem in the generalized canonical formalism are established.

  20. Canonical symmetry properties of the constrained singular generalized mechanical system

    Institute of Scientific and Technical Information of China (English)

    Li Ai-Min; Jiang Jin-Huan; Li Zi-Ping

    2003-01-01

    Based on the generalized Appell-Chetaev constraint conditions, and taking into account the inherent constraints of a singular Lagrangian, the generalized canonical equations for a general mechanical system with a singular higher-order Lagrangian and subsidiary constraints are formulated. The canonical symmetries in phase space for such a system are studied, and the Noether theorem and its inverse theorem in the generalized canonical formalism are established.

  1. Algorithms for degree-constrained Euclidean Steiner minimal tree

    Institute of Scientific and Technical Information of China (English)

    Zhang Jin; Ma Liang; Zhang Liantang

    2008-01-01

    A new problem of the degree-constrained Euclidean Steiner minimal tree is discussed, which is quite useful in several fields. Although it is slightly different from the traditional degree-constrained minimal spanning tree, it is also NP-hard. Two intelligent algorithms are proposed in an attempt to solve this difficult problem. A series of numerical examples are tested, which demonstrate that the algorithms also work well in practice.
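    The role of the degree constraint can be illustrated on the *related* degree-constrained spanning tree (no Steiner points): a greedy Prim-style heuristic that refuses to attach a new vertex to any tree vertex that has already reached the degree cap. This is illustrative only; the paper's intelligent algorithms target the harder Euclidean Steiner variant.

    ```python
    import math

    def degree_constrained_tree(points, max_degree):
        """Greedy spanning tree in the plane with a per-vertex degree cap."""
        n = len(points)
        deg = [0] * n
        in_tree = {0}
        edges = []
        while len(in_tree) < n:
            best = None
            for u in in_tree:
                if deg[u] >= max_degree:      # u is saturated, skip it
                    continue
                for v in range(n):
                    if v in in_tree:
                        continue
                    d = math.dist(points[u], points[v])
                    if best is None or d < best[0]:
                        best = (d, u, v)
            d, u, v = best
            edges.append((u, v))
            deg[u] += 1
            deg[v] += 1
            in_tree.add(v)
        return edges, deg

    pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
    edges, deg = degree_constrained_tree(pts, max_degree=2)
    ```

    With a cap of 2 the result is a Hamiltonian path; a partially built tree always keeps at least two unsaturated endpoints, so the greedy step never gets stuck in this case.
    
    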

  2. Constraining the volatile fraction of planets from transit observations

    OpenAIRE

    Alibert, Yann

    2016-01-01

    The determination of the abundance of volatiles in extrasolar planets is very important as it can provide constraints on transport in protoplanetary disks and on the formation location of planets. However, constraining the internal structure of low-mass planets from transit measurements is known to be a degenerate problem. Using planetary structure and evolution models, we show how observations of transiting planets can be used to constrain their internal composition, in particular the amount...

  3. Optimal preliminary propeller design using nonlinear constrained mathematical programming technique

    OpenAIRE

    Radojčić, D.

    1985-01-01

    Presented is a nonlinear constrained optimization technique applied to optimal propeller design at the preliminary design stage. The optimization method used is Sequential Unconstrained Minimization Technique - SUMT, which can treat equality and inequality, or only inequality constraints. Both approaches are shown. Application is given for Wageningen B-series and Gawn series propellers. The problem is solved on an Apple II microcomputer. One of the advantages of treating the constrained ...
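    SUMT, the method named in the abstract, can be sketched with an interior log-barrier on a toy stand-in for the propeller problem (the objective, constraint, and schedule below are illustrative, not the paper's formulation): each outer round solves an unconstrained problem, then tightens the barrier parameter.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: minimize f(x) = (x0-2)^2 + (x1-2)^2
    # subject to g(x) = x0 + x1 - 2 <= 0; constrained optimum x = (1, 1).
    def sumt(x, r=1.0, shrink=0.1, rounds=6):
        f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
        g = lambda x: x[0] + x[1] - 2.0          # feasible iff g <= 0
        for _ in range(rounds):
            def barrier(x, r=r):
                if g(x) >= 0:                    # outside the interior
                    return np.inf
                return f(x) - r * np.log(-g(x))  # log-barrier penalty
            x = minimize(barrier, x, method="Nelder-Mead").x
            r *= shrink                          # tighten the barrier
        return x

    x_opt = sumt(np.array([0.0, 0.0]))           # start strictly feasible
    ```

    Each unconstrained subproblem warm-starts from the previous solution, so the iterates trace the barrier's central path toward the constrained optimum as r shrinks.
    
    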

  4. Constraining Gravity with LISA Detections of Binaries

    Science.gov (United States)

    Canizares, P.; Gair, J. R.; Sopuerta, C. F.

    2013-01-01

    General Relativity (GR) describes gravitation well at the energy scales which we have so far been able to achieve or detect. However, we do not know whether GR is behind the physics governing stronger gravitational field regimes, such as near neutron stars or massive black holes (MBHs). Gravitational-wave (GW) astronomy is a promising tool to test and validate GR and/or potential alternative theories of gravity. The information that a GW waveform carries will not only allow us to map the strong gravitational field of its source, but also to determine the theory of gravity ruling its dynamics. In this work, we explore the extent to which we could distinguish between GR and other theories of gravity through the detection of low-frequency GWs from extreme-mass-ratio inspirals (EMRIs) and, in particular, we focus on dynamical Chern-Simons modified gravity (DCSMG). To that end, we develop a framework that enables us, for the first time, to perform a parameter estimation analysis for EMRIs in DCSMG. Our model is described by a 15-dimensional parameter space, that includes the Chern-Simons (CS) parameter which characterises the deviation between the two theories, and our analysis is based on Fisher information matrix techniques together with a (maximum-mismatch) criterion to assess the validity of our results. In our analysis, we study a 5-dimensional parameter space, finding that a GW detector like the Laser Interferometer Space Antenna (LISA) or eLISA (evolved LISA) should be able to discriminate between GR and DCSMG with fractional errors below 5%, and hence place bounds four orders of magnitude better than current Solar System bounds.

  5. Theoretical Estimate of Maximum Possible Nuclear Explosion

    Science.gov (United States)

    Bethe, H. A.

    1950-01-31

    The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, plutonium- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)

  6. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments, where it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting in significant frequencies.
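    The baseline that the Maximum Likelihood window sharpens is plain cross-correlation time delay estimation, which can be sketched as follows (synthetic data; the lag, noise level, and signal are hypothetical, and no ML weighting is applied here):

    ```python
    import numpy as np

    # Two noisy copies of a leak-like broadband signal, offset by a known
    # lag in samples; the argmax of their cross-correlation recovers it.
    rng = np.random.default_rng(42)
    n, true_lag = 2048, 37
    source = rng.normal(size=n)

    s1 = source + 0.1 * rng.normal(size=n)
    s2 = np.roll(source, true_lag) + 0.1 * rng.normal(size=n)

    xcorr = np.correlate(s2, s1, mode="full")   # lags -(n-1) .. (n-1)
    lags = np.arange(-(n - 1), n)
    estimated_lag = lags[np.argmax(xcorr)]
    ```

    Dividing the estimated lag by the sampling rate and multiplying by the elastic wave speed gives the leak position offset between the two sensors, which is the quantity the abstract's 1% error figure refers to.
    
    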

  7. Mammographic image restoration using maximum entropy deconvolution

    CERN Document Server

    Jannetta, A; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution fe...

  8. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.
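    One common potential-temperature criterion of the kind the abstract alludes to can be sketched as follows: scan the profile upward and report the first height where theta exceeds the surface value by a small threshold. The threshold and profile below are illustrative, not the SRNL operational values.

    ```python
    def mixing_depth(heights_m, theta_k, threshold_k=0.5):
        """First height where potential temperature exceeds the surface
        value by threshold_k (a simple capping-inversion criterion)."""
        theta_sfc = theta_k[0]
        for z, th in zip(heights_m, theta_k):
            if th > theta_sfc + threshold_k:
                return z
        return heights_m[-1]      # well-mixed through the whole profile

    # Idealized afternoon profile: near-constant theta in the mixed layer,
    # then a capping inversion near 1500 m.
    heights = [10, 250, 500, 750, 1000, 1250, 1500, 1750]
    theta = [300.0, 300.1, 300.1, 300.2, 300.2, 300.3, 301.5, 303.0]
    depth = mixing_depth(heights, theta)
    ```
    
    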

  9. Constraining global methane emissions and uptake by ecosystems

    Directory of Open Access Journals (Sweden)

    R. Spahni

    2011-06-01

    Full Text Available Natural methane (CH4 emissions from wet ecosystems are an important part of today's global CH4 budget. Climate affects the exchange of CH4 between ecosystems and the atmosphere by influencing CH4 production, oxidation, and transport in the soil. The net CH4 exchange depends on ecosystem hydrology, soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH4 emissions for different ecosystems: northern peatlands (45°–90° N, naturally inundated wetlands (60° S–45° N, rice agriculture and wet mineral soils. Mineral soils are a potential CH4 sink, but can also be a source with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003–2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH4 observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe the inversions result in a significant reduction in the emissions from northern peatlands and suggest that LPJ-WHyMe maximum annual emissions peak about one month late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we diagnose model parameters in LPJ-WHyMe and simulate the surface exchange of CH4 over the period 1990–2008. Over the whole period we infer an increase of global ecosystem CH4 emissions of +1.11 Tg CH4 yr−1, not considering potential additional changes in wetland extent. The increase in simulated CH4 emissions is attributed to enhanced soil respiration resulting from the observed rise in land

  10. Constraining global methane emissions and uptake by ecosystems

    Directory of Open Access Journals (Sweden)

    R. Spahni

    2011-01-01

    Full Text Available Natural methane (CH4 emissions from wet ecosystems are an important part of today's global CH4 budget. Climate affects the exchange of CH4 between ecosystems and the atmosphere by influencing CH4 production, oxidation, and transport in the soil. The net CH4 exchange depends on ecosystem hydrology, soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH4 emissions for different ecosystems: northern peatlands (45°–90° N, naturally inundated wetlands (60° S–45° N, rice agriculture and wet mineral soils. Mineral soils are a potential CH4 sink, but can also be a source with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003–2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH4 observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe the inversions result in a significant reduction in the emissions from northern peatlands and suggest that LPJ-WHyMe maximum annual emissions peak about one month late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we adapt model parameters in LPJ-WHyMe and simulate the surface exchange of CH4 over the period 1990–2008. Over the whole period we infer an increase of global ecosystem CH4 emissions of +1.11 Tg CH4 yr−1, not considering potential additional changes in wetland extent. The increase in simulated CH4 emissions is attributed to enhanced soil respiration resulting from the observed rise in land temperature

  11. 20 CFR 211.14 - Maximum creditable compensation.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...

  12. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ... civil monetary penalties per the Inflation Act. See 74 FR 68701 (December 29, 2009). FRA's maximum and... Transportation's (DOT) Civil Penalties Inflation Adjustment,'' dated July 10, 2003; (2) policy paper entitled... determine if the minimum civil monetary penalty (CMP) should be updated according to the Inflation...

  13. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  14. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.).

  15. Evidence that the maximum electron energy in hotspots of FR II galaxies is not determined by synchrotron cooling

    CERN Document Server

    Araudo, Anabella T; Crilly, Aidan; Blundell, Katherine M

    2016-01-01

    It has been suggested that relativistic shocks in extragalactic sources may accelerate the highest energy cosmic rays. The maximum energy to which cosmic rays can be accelerated depends on the structure of magnetic turbulence near the shock but recent theoretical advances indicate that relativistic shocks are probably unable to accelerate particles to energies much larger than a PeV. We study the hotspots of powerful radiogalaxies, where electrons accelerated at the termination shock emit synchrotron radiation. The turnover of the synchrotron spectrum is typically observed between infrared and optical frequencies, indicating that the maximum energy of non-thermal electrons accelerated at the shock is < TeV for a canonical magnetic field of ~100 micro Gauss. Based on theoretical considerations we show that this maximum energy cannot be constrained by synchrotron losses as usually assumed, unless the jet density is unreasonably large and most of the jet upstream energy goes to non-thermal particles. We test ...

  16. The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.

  17. Constrained quantities in uncertainty quantification. Ambiguity and tips to follow

    International Nuclear Information System (INIS)

    The nuclear community relies heavily on computer codes and numerical tools. The results of such computations can only be trusted if they are augmented by proper sensitivity and uncertainty (S and U) studies. This paper presents some aspects of S and U analysis when constrained quantities are involved, such as the fission spectrum or the isotopic distribution of elements. A consistent theory is given for the derivation and interpretation of constrained sensitivities as well as the corresponding covariance matrix normalization procedures. It is shown that if the covariance matrix violates the “generic zero column and row sum” condition, normalizing it is equivalent to constraining the sensitivities, but since both can be done in many ways different sensitivity coefficients and uncertainties can be derived. This makes results ambiguous, underlining the need for proper covariance data. It is also highlighted that the use of constrained sensitivity coefficients derived with a constraining procedure that is not idempotent can lead to biased results in uncertainty propagation. The presented theory is demonstrated on an analytical case and a numerical example involving the fission spectrum, both confirming the main conclusions of this research. (author)
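The equivalence between constraining the sensitivities and normalizing the covariance matrix can be illustrated numerically. The sketch below uses a hypothetical 3-group "spectrum" and covariance (illustrative values, not data from the paper): it builds an idempotent constraining operator for a sum-normalized quantity, checks the zero row/column sum condition, and verifies that both routes give the same propagated uncertainty under the sandwich rule.

```python
import numpy as np

# Toy 3-group "fission spectrum", normalized to sum to 1 (illustrative values)
chi = np.array([0.5, 0.3, 0.2])
one = np.ones(3)

# Constraining operator W = I - chi * 1^T: it removes the component of a
# perturbation that would violate the normalization sum(d_chi) = 0.
W = np.eye(3) - np.outer(chi, one)

# W is idempotent because 1^T chi = 1, so repeated constraining changes nothing
assert np.allclose(W @ W, W)

# Unconstrained sensitivities of some response to the spectrum (hypothetical)
S = np.array([1.2, -0.4, 0.7])
S_c = W.T @ S                       # constrained sensitivities

# A raw covariance matrix and its normalized version with zero row/column sums
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
C_n = W @ C @ W.T
assert np.allclose(C_n.sum(axis=0), 0, atol=1e-12)   # "generic zero sum" holds

# Sandwich rule: constraining S or normalizing C gives the same variance
var_constrained_S = S_c @ C @ S_c
var_normalized_C = S @ C_n @ S
print(np.isclose(var_constrained_S, var_normalized_C))  # → True
```

Because W here is idempotent, the two procedures agree; the paper's warning concerns constraining procedures that are not idempotent, where repeated application keeps changing the coefficients and biases the propagated uncertainty.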

  18. Constrained Local UniversE Simulations: a Local Group factory

    Science.gov (United States)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.
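The multi-level selection of simulated LG candidates amounts to filtering halo pairs through successive cuts. A minimal sketch of a base-level cut, with a mock catalogue and hypothetical thresholds standing in for the paper's actual criteria:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock halo-pair catalogue (illustrative values only): masses in 1e12 Msun,
# pair separation in Mpc, distance to the nearest massive neighbour in Mpc
pairs = [dict(m1=rng.uniform(0.3, 3.0), m2=rng.uniform(0.3, 3.0),
              sep=rng.uniform(0.3, 2.0), iso=rng.uniform(0.5, 6.0))
         for _ in range(300)]

def base_level(p):
    """Base-level criteria (thresholds are hypothetical stand-ins): LG-like
    masses, sub-Mpc separation, no massive neighbour within 3 Mpc."""
    return (0.5 < p["m1"] < 2.5 and 0.5 < p["m2"] < 2.5
            and p["sep"] < 1.0 and p["iso"] > 3.0)

candidates = [p for p in pairs if base_level(p)]
print(f"{len(candidates)} of {len(pairs)} mock pairs pass the base-level cuts")
```

Second- and third-level cuts (orbital angular momentum, energy, orbital phase) would simply chain further predicates onto the surviving candidates.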

  19. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  20. CONCOLOR: Constrained Non-Convex Low-Rank Model for Image Deblocking.

    Science.gov (United States)

    Zhang, Jian; Xiong, Ruiqin; Zhao, Chen; Zhang, Yongbing; Ma, Siwei; Gao, Wen

    2016-03-01

    Due to independent and coarse quantization of transform coefficients in each block, block-based transform coding usually introduces visually annoying blocking artifacts at low bitrates, which greatly prevents further bit reduction. To alleviate the conflict between bit reduction and quality preservation, deblocking as a post-processing strategy is an attractive and promising solution without changing the existing codec. In this paper, in order to reduce blocking artifacts and obtain a high-quality image, image deblocking is formulated as an optimization problem within a maximum a posteriori framework, and a novel algorithm for image deblocking using a constrained non-convex low-rank model is proposed. The ℓp (0 < p < 1) penalty is imposed on the singular values of a matrix to characterize the low-rank prior model rather than the nuclear norm, while the quantization constraint is explicitly transformed into the feasible solution space to constrain the non-convex low-rank optimization. Moreover, a new quantization noise model is developed, and an alternating minimization strategy with adaptive parameter adjustment is developed to solve the proposed optimization problem. This parameter-free advantage makes the whole algorithm more attractive and practical. Experiments demonstrate that the proposed image deblocking algorithm outperforms the current state-of-the-art methods in both objective quality and perceptual quality. PMID:26761774
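The low-rank prior can be sketched as shrinkage of the singular values of a patch-group matrix, with weights derived from a surrogate of the non-convex ℓp penalty. The code below is a generic one-step illustration of this idea, not the paper's full alternating minimization with the quantization constraint:

```python
import numpy as np

def lp_singular_shrink(X, lam, p=0.5, eps=1e-8):
    """One weighted soft-thresholding step on the singular values of X.
    The weights w_i = p * (s_i + eps)**(p - 1) are a standard surrogate for
    the gradient of the non-convex l_p penalty: small singular values get
    large weights and are suppressed, large ones are barely touched."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = p * (s + eps) ** (p - 1.0)
    s_new = np.maximum(s - lam * w, 0.0)
    return U @ np.diag(s_new) @ Vt

# Noisy rank-1 "patch group": shrinkage removes the noise singular values
rng = np.random.default_rng(1)
L = np.outer(rng.normal(size=8), rng.normal(size=8))   # rank-1 signal
noisy = L + 0.05 * rng.normal(size=(8, 8))
den = lp_singular_shrink(noisy, lam=0.5)
print(np.linalg.matrix_rank(den, tol=1e-3) <
      np.linalg.matrix_rank(noisy, tol=1e-3))
```

In the full algorithm this step would alternate with a projection onto the quantization-defined feasible set, which is what anchors the solution to the decoded coefficients.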

  1. Constraining regional greenhouse gas emissions using geostationary concentration measurements: a theoretical study

    Directory of Open Access Journals (Sweden)

    P. J. Rayner

    2014-02-01

    Full Text Available We investigate the ability of column-integrated trace gas measurements from a geostationary satellite to constrain surface fluxes at regional scale. The proposed geoCARB instrument measures CO2, CO and CH4 at a maximum resolution of 3 km east–west × 2.7 km north–south. Precisions are 3 ppm for CO2, 10 ppb for CO and 18 ppb for CH4. Sampling frequency is flexible. Here we sample a region at the location of Shanghai every 2 daylight hours for 6 days in June. We test the observing system by calculating the posterior uncertainty covariance of fluxes. We are able to constrain urban emissions at 3 km resolution including an isolated power-plant. The CO measurement plays the strongest role; without it our effective resolution falls to 5 km. Methane fluxes are similarly well-estimated at 5 km resolution. Estimating the errors for a full year suggests such an instrument would be a useful tool for both science and policy applications.
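Evaluating an observing system through the posterior uncertainty covariance requires no real measurements, only the Jacobian and the error statistics. A toy sketch with hypothetical dimensions and error values (the 3 ppm CO2 precision from the abstract enters only as a variance):

```python
import numpy as np

rng = np.random.default_rng(2)

nflux, nobs = 5, 40                       # 5 flux pixels, 40 column soundings
H = rng.uniform(0.0, 1.0, (nobs, nflux))  # toy Jacobian: obs response to fluxes
B = np.diag(np.full(nflux, 4.0))          # prior flux error covariance (toy units)
R = np.diag(np.full(nobs, 9.0))           # obs error covariance, e.g. (3 ppm)^2

# Posterior covariance of the fluxes: P = (B^-1 + H^T R^-1 H)^-1.
# Its diagonal tells how well each pixel is constrained before any data exist,
# which is exactly the kind of calculation a theoretical study relies on.
P = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

reduction = 1.0 - np.sqrt(np.diag(P)) / np.sqrt(np.diag(B))
print("fractional uncertainty reduction per pixel:", np.round(reduction, 2))
```

Because H^T R^-1 H is positive semi-definite, the posterior variances can never exceed the prior ones; the interesting question, as in the study above, is how much smaller they get at a given spatial resolution.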

  2. Residual flexibility test method for verification of constrained structural models

    Science.gov (United States)

    Admire, John R.; Tinker, Michael L.; Ivey, Edward W.

    1994-01-01

    A method is described for deriving constrained modes and frequencies from a reduced model based on a subset of the free-free modes plus the residual effects of neglected modes. The method involves a simple modification of the MacNeal and Rubin component mode representation to allow development of a verified constrained (fixed-base) structural model. Results for two spaceflight structures having translational boundary degrees of freedom show quick convergence of constrained modes using a measurable number of free-free modes plus the boundary partition of the residual flexibility matrix. This paper presents the free-free residual flexibility approach as an alternative test/analysis method when fixed-base testing proves impractical.

  3. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.;

    2014-01-01

    A main characteristic of the Internet of Things networks is the large number of resource-constrained nodes, which, however, are required to perform reliable and fast data exchange, often of critical nature, over highly unpredictable and dynamic connections and network topologies. Reducing ... for the discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation ...

  4. Constraining the Kerr parameters via X-ray reflection spectroscopy

    CERN Document Server

    Ghasemi-Nodehi, M

    2016-01-01

    In a recent paper [Ghasemi-Nodehi & Bambi, EPJC 76 (2016) 290], we have proposed a new parametrization for testing the Kerr nature of astrophysical black hole candidates. In the present work, we study the possibility of constraining the "Kerr parameters" of our proposal using X-ray reflection spectroscopy, the so-called iron line method. We simulate observations with the LAD instrument on board the future eXTP mission assuming an exposure time of 200 ks. We fit the simulated data to see if the Kerr parameters can be constrained. If we have the correct astrophysical model, 200 ks observations with LAD/eXTP can constrain all the Kerr parameters with the exception of $b_{11}$, whose impact on the iron line profile is extremely weak and its measurement looks very challenging.

  5. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
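The step from a linear to a polynomial transform of the model means can be illustrated with a one-dimensional toy. This is a sketch only: a single regression class and plain least squares stand in for the EM-based maximum likelihood estimation, and the "speaker transform" is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D model means and their adapted targets produced by a mildly
# nonlinear (hypothetical) speaker transform
mu = np.linspace(-2.0, 2.0, 50)
target = 0.9 * mu + 0.15 * mu**2 + 0.05

def fit_regression(mu, target, order):
    """Least-squares fit of the mean transform. order=1 mirrors the MLLR-style
    affine case; order>=2 is the polynomial extension."""
    X = np.vander(mu, order + 1)              # columns: mu**order, ..., mu, 1
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return X @ w

err_lin = np.mean((fit_regression(mu, target, 1) - target) ** 2)
err_poly = np.mean((fit_regression(mu, target, 2) - target) ** 2)
print(err_poly < err_lin)  # → True
```

Whenever the true mismatch between models and data is nonlinear, the affine transform leaves a residual bias that the polynomial terms absorb, which is the motivation stated in the abstract.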

  6. Asymptotics for maximum score method under general conditions

    OpenAIRE

    Taisuke Otsu; Myung Hwan Seo

    2014-01-01

    Abstract. Since Manski's (1975) seminal work, the maximum score method for discrete choice models has been applied to various econometric problems. Kim and Pollard (1990) established the cube root asymptotics for the maximum score estimator. Since then, however, econometricians posed several open questions and conjectures in the course of generalizing the maximum score approach, such as (a) asymptotic distribution of the conditional maximum score estimator for a panel data dynamic discrete ch...

  7. Statistical optimization for passive scalar transport: maximum entropy production vs. maximum Kolmogorov–Sinai entropy

    Directory of Open Access Journals (Sweden)

    M. Mihelich

    2014-11-01

    Full Text Available We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of the passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100, we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and of the optimal number of degrees of freedom (resolution) to describe the system.

  8. Performance Comparison of Constrained Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Soudeh Babaeizadeh

    2015-06-01

    Full Text Available This study aims to evaluate, analyze and compare the performances of the constrained Artificial Bee Colony (ABC) algorithms available in the literature. In recent decades, many different variants of the ABC algorithms have been suggested to solve Constrained Optimization Problems (COPs). However, to the best of the authors' knowledge, comparative studies of the numerical performance of those algorithms are rare. This study considers a set of well-known benchmark problems from the test problems of the Congress on Evolutionary Computation 2006 (CEC2006).

  9. Robust head pose estimation using locality-constrained sparse coding

    Science.gov (United States)

    Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu

    2015-12-01

    Sparse coding (SC) method has been shown to deliver successful result in a variety of computer vision applications. However, it does not consider the underlying structure of the data in the feature space. On the other hand, locality constrained linear coding (LLC) utilizes locality constraint to project each input data into its local-coordinate system. Based on the recent success of LLC, we propose a novel locality-constrained sparse coding (LSC) method to overcome the limitation of the SC. In experiments, the proposed algorithms were applied to head pose estimation applications. Experimental results demonstrated that the LSC method is better than state-of-the-art methods.
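The locality constraint is what gives LLC-style coding an analytic per-sample solution: restrict the code to the k nearest dictionary atoms and solve a small regularized linear system. The sketch below is a generic illustration of that closed form, not the authors' LSC implementation:

```python
import numpy as np

def llc_code(x, D, k=5, lam=1e-4):
    """Locality-constrained coding of sample x over dictionary D (atoms as
    rows). Locality: keep only the k nearest atoms; then solve the small
    regularized system (Z Z^T + lam I) w = 1 and normalize so codes sum to 1."""
    dist = np.linalg.norm(D - x, axis=1)
    idx = np.argsort(dist)[:k]            # indices of the k nearest atoms
    Z = D[idx] - x                        # shift selected atoms to the origin
    C = Z @ Z.T + lam * np.eye(k)         # local covariance, regularized
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                          # shift-invariance constraint
    code = np.zeros(len(D))
    code[idx] = w
    return code

rng = np.random.default_rng(4)
D = rng.normal(size=(64, 16))             # 64 atoms in a 16-D feature space
x = rng.normal(size=16)
c = llc_code(x, D)
print((c != 0).sum())  # → 5
```

Unlike generic sparse coding, no iterative l1 solver is needed, which is why locality-based coders are attractive for per-frame tasks such as head pose estimation.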

  10. Asymmetric biclustering with constrained von Mises-Fisher models

    Science.gov (United States)

    Watanabe, Kazuho; Wu, Hsiang-Yun; Takahashi, Shigeo; Fujishiro, Issei

    2016-03-01

    As a probability distribution on the high-dimensional sphere, the von Mises-Fisher (vMF) distribution is widely used for directional statistics and data analysis methods based on correlation. We consider a constrained vMF distribution for block modeling, which provides a probabilistic model of an asymmetric biclustering method that uses correlation as the similarity measure of data features. We derive the variational Bayesian inference algorithm for the mixture of the constrained vMF distributions. It is applied to a multivariate data visualization method implemented with enhanced parallel coordinate plots.

  11. Key Update Assistant for Resource-Constrained Networks

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2012-01-01

    Key update is a challenging task in resource-constrained networks where limitations in terms of computation, memory, and energy restrict the proper use of security mechanisms. We present an automated tool that computes the optimal key update strategy for any given resource-constrained network. We developed a push-button solution - powered by stochastic model checking - that network designers can easily benefit from, and it paves the way for consumers to set up key update related security parameters. Key Update Assistant, as we named it, runs necessary model checking operations and determines ...

  12. Constraining the Axion Portal with B -> K l+ l-

    OpenAIRE

    Freytsis, Marat; Ligeti, Zoltan; Thaler, Jesse

    2009-01-01

    We investigate the bounds on axionlike states from flavor-changing neutral current b->s decays, assuming the axion couples to the standard model through mixing with the Higgs sector. Such GeV-scale axions have received renewed attention in connection with observed cosmic ray excesses. We find that existing B->K l+ l- data impose stringent bounds on the axion decay constant in the multi-TeV range, relevant for constraining the "axion portal" model of dark matter. Such bounds also constrain lig...

  13. Constraining the Axion Portal with B -> K l+ l-

    CERN Document Server

    Freytsis, Marat; Thaler, Jesse

    2009-01-01

    We investigate the bounds on axion-like states from flavor-changing neutral current b->s decays, assuming the axion couples to the standard model through mixing with the Higgs sector. Such GeV-scale axions have received renewed attention in connection with observed cosmic ray excesses. We find that existing B->K l+ l- data impose stringent bounds on the axion decay constant in the multi-TeV range, relevant for constraining the "axion portal" model of dark matter. Such bounds also constrain light Higgs scenarios in the NMSSM. These bounds can be improved by dedicated searches in B-factory data and at LHCb.

  14. In vitro transcription of a torsionally constrained template

    DEFF Research Database (Denmark)

    Bentin, Thomas; Nielsen, Peter E

    2002-01-01

    by rotary locked boundaries. Furthermore, RNAPs may be located in factories or attached to matrix sites limiting or prohibiting rotation. Indeed, the nascent RNA alone has been implicated in rotary constraining RNAP. Here we have investigated the consequences of rotary constraints during transcription...... mimicking a SAR/MAR attachment. We used this construct as a torsionally constrained template for transcription of the beta-lactamase gene by Escherichia coli RNAP and found that RNA synthesis displays similar characteristics in terms of rate of elongation whether or not the template is torsionally...

  15. A lexicographic approach to constrained MDP admission control

    Science.gov (United States)

    Panfili, Martina; Pietrabissa, Antonio; Oddi, Guido; Suraci, Vincenzo

    2016-02-01

    This paper proposes a reinforcement learning-based lexicographic approach to the call admission control problem in communication networks. The admission control problem is modelled as a multi-constrained Markov decision process. To overcome the problems of the standard approaches to the solution of constrained Markov decision processes, based on the linear programming formulation or on a Lagrangian approach, a multi-constraint lexicographic approach is defined, and an online implementation based on reinforcement learning techniques is proposed. Simulations validate the proposed approach.
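A lexicographic ordering replaces the Lagrangian trade-off by ranking objectives: the constraint objective is optimized first, and the reward objective only breaks ties among (near-)optimal actions. A minimal action-selection sketch of that ordering (the tolerance and values are illustrative, not the paper's algorithm):

```python
def lexicographic_action(q_constraint, q_reward, tol=1e-6):
    """Pick an action lexicographically: first keep only the actions whose
    constraint-objective value is within tol of the best one, then choose
    the reward-maximizing action among them."""
    best_c = max(q_constraint)
    feasible = [a for a, qc in enumerate(q_constraint) if qc >= best_c - tol]
    return max(feasible, key=lambda a: q_reward[a])

# Actions 0 and 1 tie on the constraint objective (e.g. blocking probability);
# the reward objective then decides between them.
print(lexicographic_action([0.9, 0.9, 0.4], [1.0, 2.0, 5.0]))  # → 1
```

In the reinforcement learning setting, `q_constraint` and `q_reward` would be separate learned Q-tables updated online, with this ordering applied at every decision step.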

  16. Constrained caloric curves and phase transition for hot nuclei

    CERN Document Server

    Borderie, Bernard; Rivet, M F; Raduta, Ad R; Ademard, G; Bonnet, E; Bougault, R; Chbihi, A; Frankland, J D; Galichet, E; Gruyer, D; Guinet, D; Lautesse, P; Neindre, N Le; Lopez, O; Marini, P; Parlog, M; Pawlowski, P; Rosato, E; Roy, R; Vigilante, M

    2013-01-01

    Simulations based on experimental data obtained from multifragmenting quasi-fused nuclei produced in central $^{129}$Xe + $^{nat}$Sn collisions have been used to deduce event by event freeze-out properties in the thermal excitation energy range 4-12 AMeV [Nucl. Phys. A809 (2008) 111]. From these properties and the temperatures deduced from proton transverse momentum fluctuations, constrained caloric curves have been built. At constant average volumes caloric curves exhibit a monotonic behaviour whereas for constrained pressures a backbending is observed. Such results support the existence of a first order phase transition for hot nuclei.

  17. 40 CFR 94.107 - Determination of maximum test speed.

    Science.gov (United States)

    2010-07-01

    ... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...

  18. 14 CFR 25.1505 - Maximum operating limit speed.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...

  19. The maximum of Brownian motion with parabolic drift

    OpenAIRE

    Janson, Svante; Louchard, Guy; Martin-Löf, Anders

    2010-01-01

    We study the maximum of a Brownian motion with a parabolic drift; this is a random variable that often occurs as a limit of the maximum of discrete processes whose expectations have a maximum at an interior point. We give new series expansions and integral formulas for the distribution and the first two moments, together with numerical values to high precision.
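The distribution studied here is straightforward to approximate by simulation: discretize a Brownian path, subtract the parabolic drift, and record the running maximum. A Monte Carlo sketch (grid sizes and horizon are arbitrary choices, and the discretization slightly underestimates the true maximum):

```python
import numpy as np

rng = np.random.default_rng(5)

def max_bm_parabolic(T=4.0, n=2000, reps=1000):
    """Simulate M = max_t (W(t) - t^2) on [0, T] for `reps` Brownian paths
    sampled on an n-step grid; returns one maximum per path."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=(reps, n))
    W = np.concatenate([np.zeros((reps, 1)), np.cumsum(dW, axis=1)], axis=1)
    return (W - t**2).max(axis=1)

M = max_bm_parabolic()
print(M.min() >= 0.0)  # → True (the path starts at 0, so the max is never negative)
```

Such samples give a quick numerical check against the series expansions and integral formulas derived in the paper.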

  20. 7 CFR 4290.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... Financing of Enterprises by RBICs Structuring Rbic Financing of Eligible Enterprises-Types of Financings § 4290.840 Maximum term of Financing. The maximum term of any Debt Security must be no longer than 20... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum term of Financing. 4290.840 Section...

  1. The Power and Robustness of Maximum LOD Score Statistics

    OpenAIRE

    YOO, Y. J.; MENDELL, N.R.

    2008-01-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value.
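A small worked example of the statistic: compute textbook two-point LOD scores over a grid of recombination fractions and take the maximum (the meiosis counts are illustrative):

```python
import numpy as np

def lod(theta, rec, nonrec):
    """Two-point LOD score for recombination fraction theta, given counts of
    recombinant and non-recombinant meioses: log10 L(theta) - log10 L(0.5)."""
    L1 = rec * np.log10(theta) + nonrec * np.log10(1.0 - theta)
    L0 = (rec + nonrec) * np.log10(0.5)
    return L1 - L0

# Maximizing over a grid of parameter values yields the maximum LOD score
# statistic discussed above, which needs a higher critical value than a
# LOD score computed at a single fixed theta.
thetas = np.linspace(0.01, 0.5, 50)
scores = lod(thetas, rec=2, nonrec=18)
print(round(scores.max(), 2))  # → 3.2
```

With 2 recombinants out of 20, the grid maximum sits at theta = 0.1, the maximum likelihood estimate, which is why the maximized statistic is so powerful when the parameterization is right.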

  2. 78 FR 67465 - Loan Guaranty: Maximum Allowable Attorney Fees

    Science.gov (United States)

    2013-11-12

    ... AFFAIRS Loan Guaranty: Maximum Allowable Attorney Fees AGENCY: Department of Veterans Affairs (VA). ACTION... (VA) Home Loan Guaranty program concerning the maximum attorney fees allowable in calculating the... maximum attorney fees will be allowed for all loan terminations completed on or after December 12,...

  3. Geographic variation of surface energy partitioning in the climatic mean predicted from the maximum power limit

    CERN Document Server

    Dhara, Chirag; Kleidon, Axel

    2015-01-01

    Convective and radiative cooling are the two principal mechanisms by which the Earth's surface transfers heat into the atmosphere and that shape surface temperature. However, this partitioning is not sufficiently constrained by energy and mass balances alone. We use a simple energy balance model in which convective fluxes and surface temperatures are determined with the additional thermodynamic limit of maximum convective power. We then show that the broad geographic variation of heat fluxes and surface temperatures in the climatological mean compares very well with the ERA-Interim reanalysis over land and ocean. We also show that the estimates depend considerably on the formulation of longwave radiative transfer and that a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates.
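The maximum-power construction can be reproduced in a few lines: close a toy surface energy balance for each trial convective flux and maximize the Carnot-like power output. All parameter values below are illustrative stand-ins, not those of the paper:

```python
import numpy as np

# Minimal surface energy balance: absorbed solar radiation R is removed by the
# convective flux J and by linearized net longwave cooling k * (Ts - Ta).
R, k, Ta = 160.0, 2.0, 288.0     # W m^-2, W m^-2 K^-1, K (illustrative values)

def power(J):
    Ts = Ta + (R - J) / k        # surface temperature closing the balance
    return J * (Ts - Ta) / Ts    # Carnot-like power of the convective engine

# The trade-off: larger J carries more heat but cools the surface, shrinking
# the temperature difference that drives the engine; power peaks in between.
Js = np.linspace(0.0, R, 10001)
J_opt = Js[np.argmax(power(Js))]
print(round(J_opt / R, 2))  # → 0.53
```

The optimum partitions the absorbed radiation between convection and radiative cooling without any tunable turbulence parameter, which is what allows the climatological flux maps to be predicted from the limit alone.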

  4. Post-maximum Near-infrared Spectra of SN 2014J: A Search for Interaction Signatures

    Science.gov (United States)

    Sand, D. J.; Hsiao, E. Y.; Banerjee, D. P. K.; Marion, G. H.; Diamond, T. R.; Joshi, V.; Parrent, J. T.; Phillips, M. M.; Stritzinger, M. D.; Venkataraman, V.

    2016-05-01

    We present near-infrared (NIR) spectroscopic and photometric observations of the nearby Type Ia SN 2014J. The 17 NIR spectra span epochs from +15.3 to +92.5 days after B-band maximum light, while the {{JHK}}s photometry include epochs from -10 to +71 days. These data are used to constrain the progenitor system of SN 2014J utilizing the Paβ line, following recent suggestions that this phase period and the NIR in particular are excellent for constraining the amount of swept-up hydrogen-rich material associated with a non-degenerate companion star. We find no evidence for Paβ emission lines in our post-maximum spectra, with a rough hydrogen mass limit of ≲ 0.1 M ⊙, which is consistent with previous limits in SN 2014J from late-time optical spectra of the Hα line. Nonetheless, the growing data set of high-quality NIR spectra holds the promise of very useful hydrogen constraints. Based on observations obtained at the Gemini Observatory under program GN-2014A-Q-8 (PI: Sand). Gemini is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).

  5. Creation of Closed or Open Universe from Constrained Instanton

    OpenAIRE

    Wu, Z. C.

    1998-01-01

    In the no-boundary universe, the universe is created from an instanton. However, there does not exist any instanton for the ``realistic'' $FRW$ universe with a scalar field. The ``instanton'' leading to its quantum creation may be modified and reinterpreted as a constrained gravitational instanton.

  6. Testing a Constrained MPC Controller in a Process Control Laboratory

    Science.gov (United States)

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by the fourth year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time.…

  7. Bayesian item selection in constrained adaptive testing using shadow tests

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  8. Constrained Transport vs. Divergence Cleanser Options in Astrophysical MHD Simulations

    Science.gov (United States)

    Lindner, Christopher C.; Fragile, P.

    2009-01-01

    In previous work, we presented results from global numerical simulations of the evolution of black hole accretion disks using the Cosmos++ GRMHD code. In those simulations we solved the magnetic induction equation using an advection-split form, which is known not to satisfy the divergence-free constraint. To minimize the build-up of divergence error, we used a hyperbolic cleanser function that simultaneously damped the error and propagated it off the grid. We have since found that this method produces qualitatively and quantitatively different behavior in high magnetic field regions than results published by other research groups, particularly in the evacuated funnels of black-hole accretion disks where Poynting-flux jets are reported to form. The main difference between our earlier work and that of our competitors is their use of constrained-transport schemes to preserve a divergence-free magnetic field. Therefore, to study these differences directly, we have implemented a constrained transport scheme into Cosmos++. Because Cosmos++ uses a zone-centered, finite-volume method, we can not use the traditional staggered-mesh constrained transport scheme of Evans & Hawley. Instead we must implement a more general scheme; we chose the Flux-CT scheme as described by Toth. Here we present comparisons of results using the divergence-cleanser and constrained transport options in Cosmos++.

  9. Multiply-Constrained Semantic Search in the Remote Associates Test

    Science.gov (United States)

    Smith, Kevin A.; Huber, David E.; Vul, Edward

    2013-01-01

    Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…

  10. Constrained variational calculus: the second variation (part I)

    CERN Document Server

    Massa, Enrico; Pagani, Enrico; Luria, Gianvittorio

    2010-01-01

    This paper is a direct continuation of arXiv:0705.2362 . The Hamiltonian aspects of the theory are further developed. Within the framework provided by the first paper, the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A necessary and sufficient condition for minimality is proved.

  11. Revenue Prediction in Budget-constrained Sequential Auctions with Complementarities

    NARCIS (Netherlands)

    S. Verwer (Sicco); Y. Zhang (Yingqian)

    2011-01-01

    When multiple items are auctioned sequentially, the ordering of auctions plays an important role in the total revenue collected by the auctioneer. This is true especially with budget-constrained bidders and the presence of complementarities among items. In such sequential auction setting

  12. Reserve-constrained economic dispatch: Cost and payment allocations

    Energy Technology Data Exchange (ETDEWEB)

    Misraji, Jaime [Sistema Electrico Nacional Interconectado de la Republica Dominicana, Calle 3, No. 3, Arroyo Hondo 1, Santo Domingo, Distrito Nacional (Dominican Republic); Conejo, Antonio J.; Morales, Juan M. [Department of Electrical Engineering, Universidad de Castilla-La Mancha, Campus Universitario s/n, 13071 Ciudad Real (Spain)

    2008-05-15

    This paper extends basic economic dispatch analytical results to the reserve-constrained case. For this extended problem, a cost and payment allocation analysis is carried out and a detailed economic interpretation of the results is provided. Sensitivity values (Lagrange multipliers) are also analyzed. A case study is considered to illustrate the proposed analysis. Conclusions are duly drawn. (author)

  13. Adaptive double chain quantum genetic algorithm for constrained optimization problems

    Institute of Scientific and Technical Information of China (English)

    Kong Haipeng; Li Ni; Shen Yuzhong

    2015-01-01

    Optimization problems are often highly constrained, and evolutionary algorithms (EAs) are effective methods for tackling this kind of problem. To further improve the search efficiency and convergence rate of EAs, this paper presents an adaptive double chain quantum genetic algorithm (ADCQGA) for solving constrained optimization problems. ADCQGA makes use of double individuals to represent solutions that are classified as feasible and infeasible solutions. Fitness (or evaluation) functions are defined for both types of solutions. Based on the fitness function, three types of step evolution (SE) are defined and utilized for judging evolutionary individuals. An adaptive rotation is proposed and used to facilitate updating individuals in different solutions. To further improve the search capability and convergence rate, ADCQGA utilizes an adaptive evolution process (AEP), adaptive mutation and replacement techniques. ADCQGA was first tested on a widely used benchmark function to illustrate the relationship between initial parameter values and the convergence rate/search capability. The proposed ADCQGA was then successfully applied to solve twelve other benchmark functions and five well-known constrained engineering design problems. The multi-aircraft cooperative target allocation problem is a typical constrained optimization problem that requires efficient methods. Finally, ADCQGA was successfully applied to solving this target allocation problem.

  14. How Well Can Future CMB Missions Constrain Cosmic Inflation?

    CERN Document Server

    Martin, Jerome; Vennin, Vincent

    2014-01-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from $10^{-1}$ down to $10^{-7}$. We then compute the Bayesian evidences and complexities of all Encyclopaedia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a...

  15. How well can future CMB missions constrain cosmic inflation?

    Science.gov (United States)

    Martin, Jérôme; Ringeval, Christophe; Vennin, Vincent

    2014-10-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10-1 down to 10-7. We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.

  16. Inferring meaningful communities from topology-constrained correlation networks.

    Directory of Open Access Journals (Sweden)

    Jose Sergio Hleap

    Full Text Available Community structure detection is an important tool in graph analysis. This can be done, among other ways, by solving for the partition set which optimizes the modularity score Q. Here it is shown that topological constraints in correlation graphs induce over-fragmentation of community structures. A refinement step to this optimization, based on Linear Discriminant Analysis (LDA) and a statistical test for significance, is proposed. In structured simulations constrained by topology, this novel approach performs better than the optimization of modularity alone. This method was also tested with two empirical datasets: the Roll-Call voting in the 110th US Senate constrained by geographic adjacency, and a biological dataset of 135 protein structures constrained by inter-residue contacts. The former dataset showed sub-structures in the communities that revealed a regional bias in the votes which transcends party affiliations. This is an interesting pattern given that the 110th Legislature was assumed to be a highly polarized government. The α-amylase catalytic domain dataset (the biological dataset) was analyzed with and without topological constraints (inter-residue contacts). The results without topological constraints differed from the topology-constrained ones, but the LDA filtering did not change the outcome of the latter. This suggests that the LDA filtering is a robust way to resolve over-fragmentation when present, and that this method will not affect the results where there is no evidence of over-fragmentation.
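
    The modularity score being optimized has a compact per-community form, Q = sum over communities of L_c/m - (d_c/2m)^2. A minimal sketch on an illustrative toy graph (not the paper's datasets) also shows why over-fragmentation lowers Q:

```python
def modularity(edges, communities):
    """Newman modularity: Q = sum over communities c of L_c/m - (d_c/(2m))^2,
    where m is the edge count, L_c the intra-community edge count,
    and d_c the total degree of nodes in community c."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        nodes = set(comm)
        l_c = sum(1 for u, v in edges if u in nodes and v in nodes)
        d_c = sum(degree[n] for n in nodes)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a bridge: the natural two-community split...
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 4))  # → 0.3571
# ...scores far higher than an over-fragmented partition into singletons:
print(round(modularity(edges, [{n} for n in range(6)]), 4))  # → -0.1735
```

    The refinement step proposed in the record operates downstream of such an optimization, testing whether fragments of a community are statistically distinguishable before keeping them separate.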

  17. Robust stability in constrained predictive control through the Youla parameterisations

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2011-01-01

    In this article we take advantage of the primary and dual Youla parameterisations to set up a soft constrained model predictive control (MPC) scheme. In this framework it is possible to guarantee stability in face of norm-bounded uncertainties. Under special conditions guarantees are also given...

  18. Bounds on the capacity of constrained two-dimensional codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2000-01-01

    Bounds on the capacity of constrained two-dimensional (2-D) codes are presented. The bounds of Calkin and Wilf apply to first-order symmetric constraints. The bounds are generalized in a weaker form to higher order and nonsymmetric constraints. Results are given for constraints specified by run...

  19. Dynamically constrained pipeline for tracking neural progenitor cells

    DEFF Research Database (Denmark)

    Vestergaard, Jacob Schack; Dahl, Anders; Holm, Peter;

    2013-01-01

    tracking methods are fundamental building blocks of setting up multi purpose pipelines. Segmentation by discriminative dictionary learning and a graph formulated tracking method constraining the allowed topology changes are combined here to accommodate for highly irregular cell shapes and movement patterns...

  20. Steepest-Ascent Constrained Simultaneous Perturbation for Multiobjective Optimization

    DEFF Research Database (Denmark)

    McClary, Dan; Syrotiuk, Violet; Kulahci, Murat

    2011-01-01

    that leverages information about the known gradient to constrain the perturbations used to approximate the others. We apply SP(SA)(2) to the cross-layer optimization of throughput, packet loss, and end-to-end delay in a mobile ad hoc network (MANET), a self-organizing wireless network. The results show that SP...

  1. Constrained Local UniversE Simulations: A Local Group Factory

    CERN Document Server

    Carlesi, Edoardo; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I; Pilipenko, Sergey V; Knebe, Alexander; Courtois, Helene; Tully, R Brent; Steinmetz, Matthias

    2016-01-01

    Near field cosmology is practiced by studying the Local Group (LG) and its neighbourhood. The present paper describes a framework for simulating the near field on the computer. Assuming the LCDM model as a prior and applying the Bayesian tools of the Wiener filter (WF) and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the LCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of halos must obey specific isolation, mass and separation criteria. At the second level the orbital angular momentum and energy are constrained and on the third one the phase of the orbit is constrained. Out of the 300 constrai...

  2. Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests

    Science.gov (United States)

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  3. Applications of a Constrained Mechanics Methodology in Economics

    Science.gov (United States)

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the…

  4. Non-rigid registration by geometry-constrained diffusion

    DEFF Research Database (Denmark)

    Andresen, Per Rønsholt; Nielsen, Mads

    1999-01-01

    are not given. We will advocate the viewpoint that the aperture and the 3D interpolation problem may be solved simultaneously by finding the simplest displacement field. This is obtained by a geometry-constrained diffusion which yields the simplest displacement field in a precise sense. The point registration...

  5. Constrained control of a once-through boiler with recirculation

    DEFF Research Database (Denmark)

    Trangbæk, K

    2008-01-01

    There is an increasing need to operate power plants at low load for longer periods of time. When a once-through boiler operates at a sufficiently low load, recirculation is introduced, significantly altering the control structure. This paper illustrates the possibilities for using constrained con...

  6. Nonmonotonic Skeptical Consequence Relation in Constrained Default Logic

    Directory of Open Access Journals (Sweden)

    Mihaiela Lupea

    2010-12-01

    Full Text Available This paper presents a study of the nonmonotonic consequence relation which models the skeptical reasoning formalised by constrained default logic. The nonmonotonic skeptical consequence relation is defined using the sequent calculus axiomatic system. We study the formal properties desirable for a good nonmonotonic relation: supraclassicality, cut, cautious monotony, cumulativity, absorption, distribution. 

  7. Evaluating potentialities and constrains of Problem Based Learning curriculum

    DEFF Research Database (Denmark)

    Guerra, Aida

    2013-01-01

    This paper presents a research design to evaluate Problem Based Learning (PBL) curriculum potentialities and constrains for future changes. PBL literature lacks examples of how to evaluate and analyse established PBL learning environments to address new challenges posed. The research design......) in the curriculum and a means to choose cases for further case study (third phase)....

  8. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  9. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2012-07-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in some cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a single firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to allow quantification of the uncertainties.

  10. Present and Last Glacial Maximum climates as states of maximum entropy production

    CERN Document Server

    Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere

    2011-01-01

    The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...

  11. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    Science.gov (United States)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  12. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    International Nuclear Information System (INIS)

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  13. Constraining Type Ia supernova models: SN 2011fe as a test case

    CERN Document Server

    Roepke, F K; Seitenzahl, I R; Pakmor, R; Sim, S A; Taubenberger, S; Ciaraldi-Schoolmann, F; Hillebrandt, W; Aldering, G; Antilogus, P; Baltay, C; Benitez-Herrera, S; Bongard, S; Buton, C; Canto, A; Cellier-Holzem, F; Childress, M; Chotard, N; Copin, Y; Fakhouri, H K; Fink, M; Fouchez, D; Gangler, E; Guy, J; Hachinger, S; Hsiao, E Y; Juncheng, C; Kerschhaggl, M; Kowalski, M; Nugent, P; Paech, K; Pain, R; Pecontal, E; Pereira, R; Perlmutter, S; Rabinowitz, D; Rigault, M; Runge, K; Saunders, C; Smadja, G; Suzuki, N; Tao, C; Thomas, R C; Tilquin, A; Wu, C

    2012-01-01

    The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs-realizations of explosion models appropriate for two of the most widely-discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is s...

  14. A Note on k-Limited Maximum Base

    Institute of Scientific and Technical Information of China (English)

    Yang Ruishun; Yang Xiaowei

    2006-01-01

    The problem of the k-limited maximum base was specialized into two particular problems: the subset D was taken to be, respectively, an independent set and a circuit of the matroid. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid is thereby determined, and the k-limited maximum base problem is transformed into the problem of finding a maximum base of this new matroid. For these two special problems, two algorithms, in essence greedy algorithms on the original matroid, were presented. They were proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
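
    The greedy-on-a-matroid idea underlying such algorithms can be sketched with a generic independence oracle. This is a minimal illustration, not the authors' construction; the graphic-matroid oracle below (independence = acyclicity, so the greedy base is a maximum-weight spanning tree) is a stand-in example:

```python
def greedy_max_base(elements, weight, independent):
    """Matroid greedy: scan elements by decreasing weight, keeping each one
    that preserves independence. Returns a maximum-weight base."""
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(base + [e]):
            base.append(e)
    return base

def acyclic(edge_list):
    """Independence oracle for a graphic matroid: edge sets with no cycle,
    checked with a small union-find."""
    parent = {}
    def find(x):
        root = x
        while parent.get(root, root) != root:
            root = parent[root]
        parent[x] = root
        return root
    for u, v, _ in edge_list:
        ru, rv = find(u), find(v)
        if ru == rv:  # adding this edge would close a cycle
            return False
        parent[ru] = rv
    return True

edges = [("a", "b", 4), ("b", "c", 3), ("a", "c", 2), ("c", "d", 5)]
base = greedy_max_base(edges, weight=lambda e: e[2], independent=acyclic)
print(sorted(e[2] for e in base))  # → [3, 4, 5]
```

    The matroid base axioms are exactly what make this greedy scan optimal, which is why reducing the k-limited problem to a maximum base of a new matroid immediately yields a greedy algorithm.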

  15. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)

    2009-02-19

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic

  16. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of action of the chemicals, such as interference with the endocrine system. Achieving these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action not only from acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit specific adverse effects, but that such doses also present the potential for non-specific 'systemic toxicity'. They have therefore developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of an MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  17. The maximum of Brownian motion with parabolic drift (Extended abstract)

    OpenAIRE

    Janson, Svante; Louchard, Guy; Martin-Löf, Anders

    2010-01-01

    We study the maximum of a Brownian motion with a parabolic drift; this is a random variable that often occurs as a limit of the maximum of discrete processes whose expectations have a maximum at an interior point. This has some applications in algorithmic and data structures analysis. We give series expansions and integral formulas for the distribution and the first two moments, together with numerical values to high precision.
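Although the paper derives exact series expansions and integral formulas, the random variable itself is easy to explore numerically. A minimal Monte Carlo sketch of a one-sided variant (maximum over t >= 0, truncated horizon; all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_max_bm_parabolic(n_paths=4000, t_max=3.0, n_steps=600):
    """Monte Carlo samples of M = max_{0 <= t <= t_max} (W(t) - t^2) for a
    standard Brownian motion W. Since the -t^2 drift dominates sqrt(t)
    fluctuations, the maximum is a.s. attained at a finite time and the
    truncation at t_max introduces only a small bias."""
    dt = t_max / n_steps
    t = np.linspace(0.0, t_max, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
    return (W - t**2).max(axis=1)

samples = simulate_max_bm_parabolic()
# samples.mean() is a crude estimate of E[M]; each sample is >= 0
# because the process equals 0 at t = 0.
```

Increasing `n_steps` reduces discretization bias in the running maximum; a refinement would correct for the maximum between grid points.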

  18. An Interval Maximum Entropy Method for Quadratic Programming Problem

    Institute of Scientific and Technical Information of China (English)

    RUI Wen-juan; CAO De-xin; SONG Xie-wu

    2005-01-01

    Using the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide region deletion test rules and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both the theoretical and numerical results show that the method is reliable and efficient.
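The core device in such methods is replacing the nonsmooth constraint aggregate max_i g_i(x) by the smooth maximum entropy (log-sum-exp) function F_p(x) = (1/p) log sum_i exp(p g_i(x)), then penalizing violation. A minimal sketch on a toy QP, using plain gradient descent and no interval arithmetic (so this is the smoothing idea only, not the paper's interval algorithm; all parameter values are illustrative):

```python
import numpy as np

def maxent_penalty_qp(Q, c, A, b, p=50.0, sigma=100.0, lr=3e-3, iters=30000):
    """Approximately solve min 0.5 x'Qx + c'x  s.t.  Ax <= b by replacing
    max_i (a_i'x - b_i) with the maximum entropy aggregate
    F_p(x) = (1/p) log sum_i exp(p (a_i'x - b_i))
    and minimizing f(x) + sigma * max(0, F_p(x))^2 by gradient descent."""
    x = np.zeros(Q.shape[0])
    for _ in range(iters):
        g = A @ x - b                    # constraint values g_i(x)
        m = np.max(p * g)                # shift for numerical stability
        w = np.exp(p * g - m)
        F = (m + np.log(w.sum())) / p    # smooth approximation of max_i g_i
        w /= w.sum()                     # softmax weights = gradient of F_p
        grad = Q @ x + c
        if F > 0.0:                      # exterior quadratic penalty
            grad = grad + 2.0 * sigma * F * (A.T @ w)
        x = x - lr * grad
    return x

# Toy QP: min ||x - (2, 2)||^2  s.t.  x1 <= 1.5, x2 <= 1; exact optimum (1.5, 1).
Q = 2.0 * np.eye(2)
c = np.array([-4.0, -4.0])
A = np.eye(2)
b = np.array([1.5, 1.0])
x = maxent_penalty_qp(Q, c, A, b)
```

Larger `p` tightens the log-sum-exp approximation (the gap to the true max is at most log(m)/p for m constraints), at the cost of stiffer gradients.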

  19. 3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum

    Directory of Open Access Journals (Sweden)

    Maxim Kramar

    2016-08-01

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from $1.5$ to $4\,R_\odot$ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in the 195 Å band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below $\sim 2.5\,R_\odot$. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the accuracy of the magnetic field approximations for coronal modeling.

  20. 3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum

    Science.gov (United States)

    Kramar, Maxim; Airapetian, Vladimir; Lin, Haosheng

    2016-08-01

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from 1.5 to 4 R_⊙ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in 195 Å band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below ˜ 2.5 R_⊙. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the accuracy of the magnetic field approximations for coronal modeling.

  1. 3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum

    CERN Document Server

    Kramar, Maxim; Lin, Haosheng

    2016-01-01

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from $1.5$ to $4\\ \\mathrm{R}_\\odot$ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in 195 \\AA \\ band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below $\\sim 2.5 \\ \\mathrm{R}_\\odot$. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the a...

  2. Broad climatological variation of surface energy balance partitioning across land and ocean predicted from the maximum power limit

    Science.gov (United States)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2016-07-01

    Longwave radiation and turbulent heat fluxes are the mechanisms by which the Earth's surface transfers heat into the atmosphere, thus affecting the surface temperature. However, the energy partitioning between the radiative and turbulent components is poorly constrained by energy and mass balances alone. We use a simple energy balance model with the thermodynamic limit of maximum power as an additional constraint to determine this partitioning. Despite discrepancies over tropical oceans, we find that the broad variation of heat fluxes and surface temperatures in the ERA-Interim reanalyzed observations can be recovered from this approach. The estimates depend considerably on the formulation of longwave radiative transfer, and a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates. Our results suggest that the steady state surface energy partitioning may reflect the maximum power constraint.

  3. Entropy Bounds for Constrained Two-Dimensional Fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Justesen, Jørn

    1999-01-01

    The maximum entropy and thereby the capacity of 2-D fields given by certain constraints on configurations are considered. Upper and lower bounds are derived.
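For the classic hard-square constraint (no two adjacent 1s, horizontally or vertically), upper bounds of this kind are easy to compute from strip transfer matrices. The sketch below uses the standard technique, not the paper's specific bounds: since the logarithm of the configuration count is subadditive in the strip width, log2(lambda_max)/m for an m-column strip upper-bounds the 2-D capacity for every m.

```python
import numpy as np
from math import log2

def hard_square_strip_bound(m):
    """Upper bound log2(lambda_max)/m on the capacity of the 2-D
    hard-square constraint, from the transfer matrix of an m-column strip."""
    # Valid rows: m-bit patterns with no two adjacent 1s (horizontal constraint).
    rows = [r for r in range(1 << m) if r & (r << 1) == 0]
    # Two rows may be stacked iff they share no 1 in any column (vertical constraint).
    T = np.array([[1.0 if r & s == 0 else 0.0 for s in rows] for r in rows])
    lam = max(abs(np.linalg.eigvals(T)))   # Perron-Frobenius eigenvalue
    return log2(lam) / m

bounds = [hard_square_strip_bound(m) for m in range(1, 9)]
# The m = 1 strip reduces to the 1-D (1, infinity) runlength constraint,
# whose capacity is log2 of the golden ratio; wider strips tighten the
# bound toward the hard-square capacity ~ 0.5879.
```

Lower bounds can be obtained from related eigenvalue ratios of consecutive strip widths; only the upper-bound side is sketched here.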

  4. Matter coupling in partially constrained vielbein formulation of massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Felice, Antonio De [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Gümrükçüoğlu, A. Emir [School of Mathematical Sciences, University of Nottingham, University Park, Nottingham, NG7 2RD (United Kingdom); Heisenberg, Lavinia [Institute for Theoretical Studies, ETH Zurich,Clausiusstrasse 47, 8092 Zurich (Switzerland); Mukohyama, Shinji [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Kavli Institute for the Physics and Mathematics of the Universe,Todai Institutes for Advanced Study, University of Tokyo (WPI),5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583 (Japan)

    2016-01-04

    We consider a linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of the ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display important differences in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of the absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  5. Constraining the Charm Yukawa and Higgs-quark Universality

    CERN Document Server

    Perez, Gilad; Stamou, Emmanuel; Tobioka, Kohsaku

    2015-01-01

    We introduce four different types of data-driven analyses with different levels of robustness that constrain the size of the Higgs-charm Yukawa coupling: (i) recasting the vector-boson associated, Vh, analyses that search for a bottom-pair final state. We use this mode to directly and model independently constrain the Higgs to charm coupling; ... (iii) the search for h -> J/\psi\gamma, y_c/y_c^{SM} < 220; (iv) a global fit to the Higgs signal strengths, y_c/y_c^{SM} < 6.2. A comparison with t\bar{t}h data allows us to show that current data eliminates the possibility that the Higgs couples to quarks in a universal way, as is consistent with the Standard Model (SM) prediction. Finally, we demonstrate how the experimental collaborations can further improve our direct bound by roughly an order of magnitude by charm-tagging, as already used in new physics searches.

  6. A second-generation constrained reaction volume shock tube

    Science.gov (United States)

    Campbell, M. F.; Tulgestke, A. M.; Davidson, D. F.; Hanson, R. K.

    2014-05-01

    We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance.

  7. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set ... In addition, a number of generalizations of the VRPTW are considered. We discuss complex routing problems with different constraints as well as important real world scheduling problems arising in transportation companies, and show how these problems can be modelled in the same framework as the VRPTW. This means ... of Jørnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time and capacity constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed ...

  8. Constrained output feedback control of flexible rotor-bearing systems

    Science.gov (United States)

    Kim, Jong-Sun; Lee, Chong-Won

    1990-04-01

    The design of an optimal constrained output feedback controller for a rotor-bearing system is described, based on a reduced order model. The aims are to stabilize the unstable or marginally stable motion and to control the large build-up of periodic disturbances occurring during operation. The reduced order model is constructed on the basis of a modal model and singular perturbation, retaining the advantages of the two methods. The onset of instability due to spillover is prevented by the constrained optimization, and the robustness and pole assignability are improved by designing not merely a static output feedback but a dynamic compensator. The periodic disturbances, usually caused by rotation, are reduced by using the disturbance observer and feed-forward compensation. The efficiency of the proposed method is demonstrated through two simulation models, a rigid shaft supported by soft bearings at its ends and an overhung rotor system with a tip disk, under both transient vibration and sudden imbalance situations.

  9. Moving Forward to Constrain the Shear Viscosity of QCD Matter

    Science.gov (United States)

    Denicol, Gabriel; Monnai, Akihiko; Schenke, Björn

    2016-05-01

    We demonstrate that measurements of rapidity differential anisotropic flow in heavy-ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio η /s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from the RHIC, we find evidence for a small η /s ≈0.04 in the QCD crossover region and a strong temperature dependence in the hadronic phase. A temperature independent η /s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.

  10. Moving forward to constrain the shear viscosity of QCD matter

    CERN Document Server

    Denicol, Gabriel; Schenke, Bjoern

    2015-01-01

    We demonstrate that measurements of rapidity differential anisotropic flow in heavy ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio {\\eta}/s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from RHIC, we find evidence for a small {\\eta}/s $\\approx$ 0.04 in the QCD cross-over region and a strong temperature dependence in the hadronic phase. A temperature independent {\\eta}/s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.

  11. Applications of a constrained mechanics methodology in economics

    CERN Document Server

    Janová, Jitka

    2011-01-01

    The paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics, on a level appropriate for undergraduate physics education. The aim of the paper is: 1. to meet the demand for illustrative examples suitable for presenting the background of the rapidly expanding research field of econophysics even at the undergraduate level, and 2. to enable students to understand the principles and methods routinely used in mechanics more deeply by looking at the well known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations, and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity dependent constraint using the vakonomic approa...
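The first example above rests on the fact that the Phillips business-cycle model reduces to a damped, forced linear oscillator. A minimal integration sketch of that oscillator form, with illustrative (hypothetical) coefficients rather than values from the paper:

```python
import numpy as np

def phillips_cycle(a=0.6, b=1.0, amp=0.5, omega=1.2, dt=0.01, steps=5000):
    """Integrate y'' + a*y' + b*y = amp*cos(omega*t), the damped forced
    oscillator form of the Phillips business-cycle model (a, b, amp,
    omega are illustrative, not taken from the paper). Semi-implicit
    Euler keeps the integration stable for small dt."""
    y, v = 1.0, 0.0          # initial output gap and its rate of change
    ys = []
    for k in range(steps):
        t = k * dt
        v += dt * (amp * np.cos(omega * t) - a * v - b * y)
        y += dt * v
        ys.append(y)
    return np.array(ys)

y = phillips_cycle()
# After the transient decays, y oscillates at the forcing frequency with
# steady-state amplitude amp / sqrt((b - omega^2)^2 + (a*omega)^2).
```

The steady-state amplitude formula in the comment is the standard driven-oscillator result; for the parameters above it evaluates to about 0.59.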

  12. A Constrained Multibody System Dynamics Avoiding Kinematic Singularities

    Science.gov (United States)

    Huang, Chih-Fang; Yan, Chang-Dau; Jeng, Shyr-Long; Cheing, Wei-Hua

    In the analysis of constrained multibody systems, the constraint reaction forces are normally expressed in terms of the constraint equations and a vector of Lagrange multipliers. Because it fails to incorporate conservation of momentum, the Lagrange multiplier method is deficient when the constraint Jacobian matrix is singular. This paper presents an improved dynamic formulation for constrained multibody systems. In our formulation, the kinematic constraints are still formulated in terms of the joint constraint reaction forces and moments; however, the formulations are based on a second-order Taylor expansion so as to incorporate the rigid body velocities. Conservation of momentum is included explicitly in this method; hence the problems caused by kinematic singularities can be avoided. In addition, the dynamic formulation is general and applicable to most dynamic analyses. Finally, the 3-leg Stewart platform is used as an example of the analysis.

  13. A second-generation constrained reaction volume shock tube.

    Science.gov (United States)

    Campbell, M F; Tulgestke, A M; Davidson, D F; Hanson, R K

    2014-05-01

    We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance. PMID:24880416

  14. Origin of Constrained Maximal CP Violation in Flavor Symmetry

    CERN Document Server

    He, Hong-Jian; Xu, Xun-Jie

    2015-01-01

    Current data from neutrino oscillation experiments are in good agreement with $\delta=-\pi/2$ and $\theta_{23} = \pi/4$. We define the notion of "constrained maximal CP violation" for these features and study their origin in flavor symmetry models. We give various parametrization-independent definitions of constrained maximal CP violation and present a theorem on how it can be generated. This theorem takes advantage of residual symmetries in the neutrino and charged lepton mass matrices, and states that, up to a few exceptions, $\delta=\pm\pi/2$ and $\theta_{23} = \pi/4$ are generated when those symmetries are real. The often considered $\mu$-$\tau$ reflection symmetry, as well as specific discrete subgroups of $O(3)$, are special cases of our theorem.

  15. Functional coupling constrains craniofacial diversification in Lake Tanganyika cichlids.

    Science.gov (United States)

    Tsuboi, Masahito; Gonzalez-Voyer, Alejandro; Kolm, Niclas

    2015-05-01

    Functional coupling, where a single morphological trait performs multiple functions, is a universal feature of organismal design. Theory suggests that functional coupling may constrain the rate of phenotypic evolution, yet empirical tests of this hypothesis are rare. In fish, the evolutionary transition from guarding the eggs on a sandy/rocky substrate (i.e. substrate guarding) to mouthbrooding introduces a novel function to the craniofacial system and offers an ideal opportunity to test the functional coupling hypothesis. Using a combination of geometric morphometrics and a recently developed phylogenetic comparative method, we found that head morphology evolution was 43% faster in substrate guarding species than in mouthbrooding species. Furthermore, for species in which females were solely responsible for mouthbrooding the males had a higher rate of head morphology evolution than in those with bi-parental mouthbrooding. Our results support the hypothesis that adaptations resulting in functional coupling constrain phenotypic evolution. PMID:25948565

  16. Application of constrained aza-valine analogs for Smac mimicry.

    Science.gov (United States)

    Chingle, Ramesh; Ratni, Sara; Claing, Audrey; Lubell, William D

    2016-05-01

    Constrained azapeptides were designed based on the Ala-Val-Pro-Ile sequence from the second mitochondria-derived activator of caspases (Smac) protein and tested for their ability to induce apoptosis in cancer cells. Diels-Alder cyclizations and Alder-ene reactions on azopeptides enabled construction of a set of constrained aza-valine dipeptide building blocks, which were introduced into mimics using effective coupling conditions to acylate bulky semicarbazide residues. Evaluation of azapeptides 7-11 in MCF-7 breast cancer cells indicated that aza-cyclohexanylglycine analog 11 induced cell death more efficiently than the parent tetrapeptide, likely by a caspase-9-mediated apoptotic pathway. © 2016 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 106: 235-244, 2016.

  17. Matter coupling in partially constrained vielbein formulation of massive gravity

    CERN Document Server

    De Felice, Antonio; Heisenberg, Lavinia; Mukohyama, Shinji

    2015-01-01

    We consider a consistent linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of the ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display important differences in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of the absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  18. Solving the constrained shortest path problem using random search strategy

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we propose an improved walk search strategy to solve the constrained shortest path problem. The proposed search strategy is a local search algorithm which explores a network by a walker navigating through the network. In order to analyze and evaluate the proposed search strategy, we present the results of three computational studies in which the proposed search algorithm is tested. Moreover, we compare the proposed algorithm with the ant colony algorithm and the k shortest paths algorithm. The analysis and comparison results demonstrate that the proposed algorithm is an effective tool for solving the constrained shortest path problem. It can not only be used to solve the optimization problem on larger networks, but is also superior to the ant colony algorithm in terms of solution time and optimal paths.
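As a point of reference for the problem itself (not the paper's random walk strategy), the standard exact baseline is a labeling algorithm with dominance pruning: each edge carries a (cost, resource) pair, and the goal is the minimum-cost path whose total resource stays within a budget. A compact sketch on a hypothetical toy graph:

```python
import heapq

def constrained_shortest_path(graph, src, dst, max_resource):
    """Label-setting search for the resource-constrained shortest path.
    graph maps node -> list of (neighbor, cost, resource). Labels are
    expanded in order of cost; a label is pruned if another label at the
    same node has both cost <= and resource <= (dominance)."""
    labels = {n: [] for n in graph}        # non-dominated (cost, res) per node
    pq = [(0, 0, src, [src])]
    while pq:
        cost, res, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, res, path         # first dst label popped is optimal
        if any(c <= cost and r <= res for c, r in labels[node]):
            continue                        # dominated by an earlier label
        labels[node].append((cost, res))
        for nxt, c, r in graph[node]:
            if res + r <= max_resource:     # prune infeasible extensions
                heapq.heappush(pq, (cost + c, res + r, nxt, path + [nxt]))
    return None                             # no feasible path

# Hypothetical example: cheap path A-B-D uses more resource than A-C-D.
graph = {'A': [('B', 1, 3), ('C', 2, 1)],
         'B': [('D', 1, 3)],
         'C': [('D', 2, 1)],
         'D': []}
```

With a resource budget of 6 the cheap path A-B-D (cost 2, resource 6) is returned; tightening the budget to 4 forces the costlier but lighter A-C-D (cost 4, resource 2).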

  19. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexity: the number of state variables equals the dimension of the optimization problem. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
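A scalar caricature of projection-type dynamics for constrained minimax is projected gradient descent-ascent: step down the gradient in the minimizing variable, up in the maximizing one, and project each back onto its constraint set. The sketch below is not the paper's neural network, just a minimal discrete-time analogue on a toy box-constrained saddle problem:

```python
def projected_gda(grad_x, grad_y, proj_x, proj_y, x0, y0, lr=0.05, iters=5000):
    """Projected gradient descent-ascent for min_x max_y f(x, y):
    descend in x, ascend in y, projecting each iterate onto its
    feasible interval after every step."""
    x, y = x0, y0
    for _ in range(iters):
        x = proj_x(x - lr * grad_x(x, y))   # descent step on x, then project
        y = proj_y(y + lr * grad_y(x, y))   # ascent step on y, then project
    return x, y

# Toy saddle: f(x, y) = x^2 + 2xy - y^2 with x in [0.5, 2], y in [-1, 1].
# Maximizing over y gives y* = x, so f(x, y*) = 2x^2, minimized at the
# boundary x = 0.5; the constrained saddle point is (0.5, 0.5).
x, y = projected_gda(
    grad_x=lambda x, y: 2 * x + 2 * y,
    grad_y=lambda x, y: 2 * x - 2 * y,
    proj_x=lambda x: min(max(x, 0.5), 2.0),
    proj_y=lambda y: min(max(y, -1.0), 1.0),
    x0=2.0, y0=0.0,
)
```

The strongly convex-concave quadratic terms make this simple scheme converge for a small step size; for purely bilinear objectives it can cycle, which is one motivation for the more sophisticated projection dynamics studied in the paper.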

  20. Constraining the free parameter of the high parton density effects

    CERN Document Server

    Gay-Ducati, M B; Goncalves, Victor

    2000-01-01

    The high density parton effects are strongly dependent on the spatial gluon distribution within the proton, with radius $R$, which cannot be derived from perturbative QCD. In this paper we assume that the unitarity corrections are present in the HERA kinematical region and constrain the value of $R$ using the data for the proton structure function and its slope. We find that the gluons are not distributed uniformly over the whole proton disc, but are concentrated in smaller regions.

  1. From global fits of neutrino data to constrained sequential dominance

    CERN Document Server

    Björkeroth, Fredrik

    2014-01-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We perform a global analysis on a class of CSD($n$) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  2. Homework and performance for time-constrained students

    OpenAIRE

    William Neilson

    2005-01-01

    Most studies of homework effectiveness relate time spent on homework to test performance, and find a nonmonotonic relationship. A theoretical model shows that this can occur even when additional homework helps all students because of the way in which variables are defined. However, some students are time-constrained, limiting the amount of homework they can complete. In the presence of time constraints, additional homework can increase the spread between the performance of the best and worst ...

  3. Optimal Constrained Resource Allocation Strategies under Low Risk Circumstances

    OpenAIRE

    Andreica, Mugurel Ionut; Andreica, Madalina; Visan, Costel

    2009-01-01

    The computational geometry problems studied in this paper were inspired by tasks from the International Olympiad in Informatics (some of which were personally attended by the authors). The attached archive contains task descriptions, authors' solutions, as well as some official solutions of the tasks. In this paper we consider multiple constrained resource allocation problems, where the constraints can be specified by formulating activity dependency restrictions o...

  4. Energy Constrained Wireless Sensor Networks : Communication Principles and Sensing Aspects

    OpenAIRE

    Björnemo, Erik

    2009-01-01

    Wireless sensor networks are attractive largely because they need no wired infrastructure. But precisely this feature makes them energy constrained, and the consequences of this hard energy constraint are the overall topic of this thesis. We are in particular concerned with principles for energy efficient wireless communication and the energy-wise trade-off between sensing and radio communication. Radio transmission between sensors incurs both a fixed energy cost from radio circuit processing...

  5. Constraining neutron star tidal Love numbers with gravitational wave detectors

    OpenAIRE

    Flanagan, Eanna E.; Hinderer, Tanja

    2007-01-01

    Ground-based gravitational wave detectors may be able to constrain the nuclear equation of state using the early, low frequency portion of the signal of detected neutron star - neutron star inspirals. In this early adiabatic regime, the influence of a neutron star's internal structure on the phase of the waveform depends only on a single parameter lambda of the star related to its tidal Love number, namely the ratio of the induced quadrupole moment to the perturbing tidal gravitational field....

  6. EXIT-constrained BICM-ID Design using Extended Mapping

    OpenAIRE

    Fukawa, Kisho; Ormsub, Soulisak; Tölli, Antti; Anwar, Khoirul; Matsumoto, Tad

    2012-01-01

    This article proposes a novel design framework, the EXIT-constrained binary switching algorithm (EBSA), for achieving near Shannon limit performance with single parity check and irregular repetition coded bit-interleaved coded modulation and iterative detection with extended mapping (SI-BICM-ID-EM). EBSA jointly combines node degree allocation optimization using linear programming (LP) with labeling optimization based on an adaptive binary switching algorithm. This technique achieves exact match...

  7. Quantum cosmology of a classically constrained nonsingular Universe

    OpenAIRE

    Sanyal, Abhik Kumar

    2009-01-01

    The quantum cosmological version of the nonsingular Universe presented by Mukhanov and Brandenberger in the early nineties has been developed, and the Hamilton-Jacobi equation has been found under the semiclassical (WKB) approximation. It has been pointed out that parameterization of classical trajectories with a semiclassical time parameter, for such a classically constrained system, is a nontrivial task and requires the Lagrangian formulation rather than the Hamiltonian formalism.

  8. Reduced order constrained optimization (ROCO): Clinical application to lung IMRT

    OpenAIRE

    Stabenau, Hans; Rivera, Linda; Yorke, Ellen; Yang, Jie; Lu, Renzhi; Richard J. Radke; Jackson, Andrew

    2011-01-01

    Purpose: The authors use reduced-order constrained optimization (ROCO) to create clinically acceptable IMRT plans quickly and automatically for advanced lung cancer patients. Their new ROCO implementation works with the treatment planning system and full dose calculation used at Memorial Sloan-Kettering Cancer Center (MSKCC). The authors have implemented mean-dose hard constraints, along with the point-dose and dose-volume constraints that they used for their previous work on the prostat...

  9. Dynamical spacetimes and gravitational radiation in a Fully Constrained Formulation

    CERN Document Server

    Cordero-Carrión, Isabel; Ibáñez, José María

    2010-01-01

    This contribution summarizes the recent work carried out to analyze the behavior of the hyperbolic sector of the Fully Constrained Formulation (FCF) derived in Bonazzola et al. 2004. The numerical experiments presented here allow one to be confident in the performance of the upgraded version of the CoCoNuT code, in which the Conformally Flat Condition (CFC) approximation of the Einstein equations is replaced by the FCF.

  10. Constraining a halo model for cosmological neutral hydrogen

    OpenAIRE

    Padmanabhan, Hamsa; Refregier, Alexandre

    2016-01-01

    We describe a combined halo model to constrain the distribution of neutral hydrogen (HI) in the post-reionization universe. We combine constraints from the various probes of HI at different redshifts: the low-redshift 21-cm emission line surveys, intensity mapping experiments at intermediate redshifts, and the Damped Lyman-Alpha (DLA) observations at higher redshifts. We use a Markov Chain Monte Carlo (MCMC) approach to combine the observations and place constraints on the free parameters in ...
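    The MCMC approach mentioned in this record can be illustrated with a minimal Metropolis sampler. This is a generic sketch on a toy one-parameter posterior, not the authors' actual likelihood or halo-model parameterization; all names here are illustrative.

```python
import numpy as np

def metropolis(logpost, x0, steps=20000, scale=1.0, seed=1):
    """Minimal Metropolis sampler: a stand-in for the MCMC used to
    combine observations and constrain free model parameters."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(steps):
        xp = x + scale * rng.standard_normal()  # symmetric random-walk proposal
        lpp = logpost(xp)
        if np.log(rng.random()) < lpp - lp:     # Metropolis accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return np.array(chain)

# Toy target: a standard-normal "posterior" for a single parameter.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

A real application would replace the toy log-posterior with the log-likelihood of the combined 21-cm, intensity-mapping, and DLA data sets.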

  11. Characteristics of Constrained Handwritten Signatures: An Experimental Investigation

    OpenAIRE

    Donato, Impedovo; Pirlo, Giuseppe; Rizzi, Fabrizio

    2015-01-01

    Handwritten signatures are considered one of the most useful biometric traits for personal verification. In the networked society, in which a multitude of different devices can be used for signature acquisition, specific research is still needed to determine the extent to which features of an input signature depend on the characteristics of the signature apposition process. In this paper an experimental investigation was carried out on constrained signatures, which were acquired using writing...

  12. Search for passing-through-walls neutrons constrains hidden braneworlds

    Directory of Open Access Journals (Sweden)

    Michaël Sarrazin

    2016-07-01

    Full Text Available In many theoretical frameworks our visible world is a 3-brane, embedded in a multidimensional bulk, possibly coexisting with hidden braneworlds. Some works have also shown that matter swapping between braneworlds can occur. Here we report the results of an experiment – at the Institut Laue-Langevin (Grenoble, France) – designed to detect thermal neutron swapping to and from another braneworld, thus constraining the probability p² of such an event. The limit, p87 in Planck length units.

  13. Restricted Dynamic Programming Heuristic for Precedence Constrained Bottleneck Generalized TSP

    OpenAIRE

    Salii, Y.

    2015-01-01

    We develop a restricted dynamic programming heuristic for a complicated traveling salesman problem: a) cities are grouped into clusters, resp. Generalized TSP; b) precedence constraints are imposed on the order of visiting the clusters, resp. Precedence Constrained TSP; c) the costs of moving to the next cluster and doing the required job inside one are aggregated in a minimax manner, resp. Bottleneck TSP; d) all the costs may depend on the sequence of previously visited clusters, resp. Seq...
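    The restriction idea described in this record (keeping only a bounded number of DP states per layer) can be sketched for the plain bottleneck TSP; the clusters, precedence constraints, and sequence-dependent costs of the full problem are omitted, and the function name and beam width are illustrative.

```python
import math

def restricted_dp_bottleneck_tsp(dist, width=8):
    """Beam-restricted DP for the bottleneck TSP: keep only the `width`
    best (visited-set, last-city) states per layer. A heuristic, not exact."""
    n = len(dist)
    layer = {(frozenset([0]), 0): 0.0}      # cost = bottleneck of path so far
    for _ in range(n - 1):
        nxt = {}
        for (visited, last), cost in layer.items():
            for city in range(n):
                if city in visited:
                    continue
                c = max(cost, dist[last][city])        # minimax aggregation
                key = (visited | {city}, city)
                if c < nxt.get(key, math.inf):
                    nxt[key] = c
        # restriction step: keep only the most promising states
        layer = dict(sorted(nxt.items(), key=lambda kv: kv[1])[:width])
    # close the tour back to the start city 0
    return min(max(cost, dist[last][0]) for (_, last), cost in layer.items())

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
best = restricted_dp_bottleneck_tsp(dist)
```

With a width large enough to hold all states the recursion is exact; shrinking the width trades optimality for memory and time.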

  14. LINEAR SYSTEMS ASSOCIATED WITH NUMERICAL METHODS FOR CONSTRAINED OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    Y. Yuan

    2003-01-01

    Linear systems associated with numerical methods for constrained optimization are discussed in this paper. It is shown that the subproblems arising in most well-known methods for constrained optimization, whether line search methods or trust region methods, can be expressed as similar systems of linear equations. All these linear systems can be viewed as approximations to the linear system derived by the Lagrange-Newton method. Some properties of these linear systems are analyzed.

  15. Constrained basin stability for studying transient phenomena in dynamical systems

    OpenAIRE

    van Kan, Adrian; Jegminat, Jannes; Donges, Jonathan; Kurths, Jürgen

    2016-01-01

    Transient dynamics are of great interest in many areas of science. Here, a generalization of basin stability (BS) is presented: constrained basin stability (CBS), which is sensitive to different types of transients arising from finite-size perturbations. CBS is applied to the paradigmatic Lorenz system for uncovering nonlinear precursory phenomena of a boundary crisis bifurcation. Further, CBS is used in a model of the Earth's carbon cycle as a return-time-dependent stability measure of...

  16. Effect of FIR Fluxes on Constraining Properties of YSOs

    OpenAIRE

    Ha, Ji-Sung; Lee, Jeong-Eun; Jeong, Woong-Seob

    2010-01-01

    Young Stellar Objects (YSOs) in the early evolutionary stages are deeply embedded, and thus they emit most of their energy at long wavelengths such as the far-infrared (FIR) and submillimeter (Submm). Therefore, FIR observational data are very important for accurately classifying the evolutionary stages of these embedded YSOs and for better constraining their physical parameters in dust continuum modeling. We selected 28 YSOs, which were detected in the AKARI Far-Infrared Surveyor (FIS), from the Sp...

  17. Experimentally Constrained Molecular Relaxation: The case of hydrogenated amorphous silicon

    OpenAIRE

    Biswas, Parthapratim; Atta-Fynn, Raymond; Drabold, David A.

    2007-01-01

    We have extended our experimentally constrained molecular relaxation technique (P. Biswas et al., Phys. Rev. B 71, 54204 (2005)) to hydrogenated amorphous silicon: a 540-atom model with 7.4% hydrogen and a 611-atom model with 22% hydrogen were constructed. Starting from a random configuration, using physically relevant constraints, ab initio interactions and the experimental static structure factor, we construct realistic models of hydrogenated amorphous silicon. Our models ...

  18. Learning Nonrigid Deformations for Constrained Multi-modal Image Registration

    OpenAIRE

    Onofrey, John A.; Staib, Lawrence H.; Papademetris, Xenophon

    2013-01-01

    We present a new strategy to constrain nonrigid registrations of multi-modal images using a low-dimensional statistical deformation model and test this in registering pre-operative and post-operative images from epilepsy patients. For those patients who may undergo surgical resection for treatment, the current gold-standard to identify regions of seizure involves craniotomy and implantation of intracranial electrodes. To guide surgical resection, surgeons utilize pre-op anat...

  19. Tulczyjew triples in the constrained dynamics of strings

    Science.gov (United States)

    Grabowski, J.; Grabowska, K.; Urbański, P.

    2016-09-01

    We show that there exists a natural Tulczyjew triple in the dynamics of objects for which the standard (kinematic) configuration space TM is replaced with ∧n TM. In this completely covariant framework, we derive geometrically the phase equations as well as the Euler-Lagrange equations, including nonholonomic constraints in the picture. The dynamics of strings and a constrained Plateau problem in statics are particular cases of this framework.

  20. Revenue Prediction in Budget-constrained Sequential Auctions with Complementarities

    OpenAIRE

    Verwer, Sicco; Zhang, Yingqian

    2011-01-01

    When multiple items are auctioned sequentially, the ordering of auctions plays an important role in the total revenue collected by the auctioneer. This is especially true with budget-constrained bidders and the presence of complementarities among items. In such sequential auction settings, it is difficult to develop efficient algorithms for finding a sequence of items that optimizes the revenue of the auctioneer. However, when historical data are available, it is possible...

  1. Nucleosome breathing and remodeling constrain CRISPR-Cas9 function.

    OpenAIRE

    Isaac, RS; Jiang, F; Doudna, JA; Lim, WA; Narlikar, GJ; De Almeida, R

    2016-01-01

    The CRISPR-Cas9 bacterial surveillance system has become a versatile tool for genome editing and gene regulation in eukaryotic cells, yet how CRISPR-Cas9 contends with the barriers presented by eukaryotic chromatin is poorly understood. Here we investigate how the smallest unit of chromatin, a nucleosome, constrains the activity of the CRISPR-Cas9 system. We find that nucleosomes assembled on native DNA sequences are permissive to Cas9 action. However, the accessibility of nucleosomal DNA to ...

  2. A Note on Optimal Care by Wealth-Constrained Injurers

    OpenAIRE

    Thomas J. Miceli; Kathleen Segerson

    2001-01-01

    This paper clarifies the relationship between an injurer's wealth level and his care choice by highlighting the distinction between monetary and non-monetary care. When care is non-monetary, wealth-constrained injurers generally take less than optimal care, and care is increasing in their wealth level under both strict liability and negligence. In contrast, when care is monetary, injurers may take too much or too little care under strict liability, and care is not strictly increasing in injur...

  3. Receding horizon H∞ control for constrained time-delay systems

    Institute of Scientific and Technical Information of China (English)

    Lu Mei; Jin Chengbo; Shao Huihe

    2009-01-01

    A receding horizon H∞ control algorithm is presented for linear discrete time-delay systems in the presence of constrained inputs and disturbances. The disturbance attenuation level is optimized at each time instant, and the receding optimization problem includes several linear matrix inequality constraints. When the convex hull is applied to represent the saturating input, the algorithm achieves better performance. A numerical example verifies this result.

  4. A Riccati approach for constrained linear quadratic optimal control

    Science.gov (United States)

    Sideris, Athanasios; Rodriguez, Luis A.

    2011-02-01

    An active-set method is proposed for solving linear quadratic optimal control problems subject to general linear inequality path constraints including mixed state-control and state-only constraints. A Riccati-based approach is developed for efficiently solving the equality constrained optimal control subproblems generated during the procedure. The solution of each subproblem requires computations that scale linearly with the horizon length. The algorithm is illustrated with numerical examples.
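    The Riccati-based solver referred to above builds on the standard backward Riccati sweep for the unconstrained finite-horizon LQ problem, whose work per stage is constant and hence scales linearly with the horizon length. A minimal sketch, without the path constraints or active-set logic of the paper, might look like:

```python
import numpy as np

def lqr_riccati(A, B, Q, R, QN, N):
    """Backward Riccati sweep for the finite-horizon LQ problem."""
    P = QN.copy()
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # K_t = (R + B'PB)^{-1} B'PA
        P = Q + A.T @ P @ (A - B @ K)         # Riccati recursion
        gains.append(K)
    return gains[::-1]                        # gains ordered t = 0 .. N-1

# Double-integrator example: the closed loop drives the state to the origin.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
gains = lqr_riccati(A, B, np.eye(2), np.eye(1), np.eye(2), N=50)
x = np.array([[1.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
```

The equality-constrained subproblems of an active-set method add multiplier blocks to each stage, but the backward-then-forward O(N) structure is the same.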

  5. Effects of voluntary constraining of thoracic displacement during hypercapnia.

    Science.gov (United States)

    Chonan, T; Mulholland, M B; Cherniack, N S; Altose, M D

    1987-11-01

    The study evaluated the interrelationships between the extent of thoracic movements and respiratory chemical drive in shaping the intensity of the sensation of dyspnea. Normal subjects rated their sensations of dyspnea as PCO2 increased during free rebreathing and during rebreathing while ventilation was voluntarily maintained at a constant base-line level. Another trial evaluated the effects of voluntary reduction in the level of ventilation on the intensity of dyspnea while PCO2 was held constant. During rebreathing, there was a power function relationship between changes in PCO2 and the intensity of dyspnea. At a given PCO2, constraining tidal volume and breathing frequency to the prerebreathing base-line level resulted in an increase in dyspnea. The fractional differences in the intensity of dyspnea between free and constrained rebreathing were independent of PCO2. However, the absolute difference in the intensity of dyspnea between free and constrained rebreathing enlarged with increasing hypercapnia. At PCO2 of 50 Torr, this difference correlated significantly with the increase in both minute ventilation (r = 0.675) and tidal volume (r = 0.757) above the base line during free rebreathing. Similarly, during steady-state hypercapnia at 50 Torr PCO2, the intensity of dyspnea increased progressively as ventilation was voluntarily reduced from the spontaneously adopted free-breathing level. These results indicate that dyspnea increases with the level of respiratory chemical drive but that the intensity of the sensation is further accentuated when ventilation is constrained below that demanded by the level of chemical drive. This may be explained by a loss of inhibitory feedback from lung or chest wall mechanoreceptors acting on brain stem and/or cortical centers.

  6. Risk-Constrained Microgrid Reconfiguration Using Group Sparsity

    OpenAIRE

    Dall'Anese, Emiliano; Giannakis, Georgios B.

    2013-01-01

    The system reconfiguration task is considered for existing power distribution systems and microgrids, in the presence of renewable-based generation and load forecasting errors. The system topology is obtained by solving a chance-constrained optimization problem, where loss-of-load (LOL) constraints and ampacity limits of the distribution lines are enforced. Similar to various distribution system reconfiguration renditions, solving the resultant problem is computationally prohibitive due to the ...

  7. Anti-B-B Mixing Constrains Topcolor-Assisted Technicolor

    Energy Technology Data Exchange (ETDEWEB)

    Burdman, Gustavo; Lane, Kenneth; Rador, Tonguc

    2000-12-06

    We argue that extended technicolor augmented with topcolor requires that all mixing between the third and the first two quark generations resides in the mixing matrix of left-handed down quarks. Then, the anti-B_d--B_d mixing that occurs in topcolor models constrains the coloron and Z' boson masses to be greater than about 5 TeV. This implies fine tuning of the topcolor couplings to better than 1 percent.

  8. Functional coupling constrains craniofacial diversification in Lake Tanganyika cichlids

    OpenAIRE

    Tsuboi, Masahito; Gonzalez-Voyer, Alejandro; Kolm, Niclas

    2015-01-01

    Functional coupling, where a single morphological trait performs multiple functions, is a universal feature of organismal design. Theory suggests that functional coupling may constrain the rate of phenotypic evolution, yet empirical tests of this hypothesis are rare. In fish, the evolutionary transition from guarding the eggs on a sandy/rocky substrate (i.e. substrate guarding) to mouthbrooding introduces a novel function to the craniofacial system and offers an ideal opportunity to test the ...

  9. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior-point SSLE algorithm is proposed for solving inequality constrained optimization problems, under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  10. New Quasidilaton theory in Partially Constrained Vielbein Formalism

    CERN Document Server

    De Felice, Antonio; Heisenberg, Lavinia; Mukohyama, Shinji; Tanahashi, Norihiro

    2016-01-01

    In this work we study the partially constrained vielbein formulation of the new quasidilaton theory of massive gravity, which couples to both the physical and fiducial metrics simultaneously via a composite effective metric. This formalism improves the new quasidilaton model, since the Boulware-Deser ghost is removed fully non-linearly at all scales. It also has crucial implications for cosmological applications. We derive the governing cosmological background evolution and study the stability of the attractor solution.

  11. An Unsplit Godunov Method for Ideal MHD via Constrained Transport

    OpenAIRE

    Gardiner, Thomas A.; Stone, James M

    2005-01-01

    We describe a single step, second-order accurate Godunov scheme for ideal MHD based on combining the piecewise parabolic method (PPM) for performing spatial reconstruction, the corner transport upwind (CTU) method of Colella for multidimensional integration, and the constrained transport (CT) algorithm for preserving the divergence-free constraint on the magnetic field. We adopt the most compact form of CT, which requires the field be represented by area-averages at cell faces. We demonstrate...
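    The defining property of constrained transport is that face-averaged fields updated from corner EMFs keep the discrete divergence at machine zero. This can be demonstrated in a few lines; the 2-D periodic-grid sketch below is illustrative only and is not the PPM/CTU scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Face-centered fields on a periodic n x n grid, initialized from a
# corner-centered vector potential Az so that div B = 0 holds exactly.
az = rng.standard_normal((n, n))
bx = np.roll(az, -1, axis=1) - az            # bx = +dAz/dy on x-faces
by = -(np.roll(az, -1, axis=0) - az)         # by = -dAz/dx on y-faces

def divB(bx, by):
    """Discrete per-cell divergence: difference of face fluxes."""
    return (np.roll(bx, -1, axis=0) - bx) + (np.roll(by, -1, axis=1) - by)

# CT update: each corner EMF value updates both faces it touches, so its
# contributions cancel in the divergence and div B stays at machine zero.
ez = rng.standard_normal((n, n))
dt = 0.1
bx_new = bx + dt * (np.roll(ez, -1, axis=1) - ez)
by_new = by - dt * (np.roll(ez, -1, axis=0) - ez)
```

The cancellation is algebraic, so the constraint is preserved to round-off regardless of how the EMFs themselves are computed.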

  12. Singular divergence instability thresholds of kinematically constrained circulatory systems

    International Nuclear Information System (INIS)

    Static instability or divergence threshold of both potential and circulatory systems with kinematic constraints depends singularly on the constraints' coefficients. Particularly, the critical buckling load of the kinematically constrained Ziegler's pendulum as a function of two coefficients of the constraint is given by the Plücker conoid of degree n=2. This simple mechanical model exhibits a structural instability similar to that responsible for the Velikhov–Chandrasekhar paradox in the theory of magnetorotational instability.

  13. Constrained Dynamic Systems Estimation Based on Adaptive Particle Filter

    OpenAIRE

    Weili Xiong; Mingchen Xue; Baoguo Xu

    2014-01-01

    For the state estimation problem, the Bayesian approach provides the most general formulation. However, most existing Bayesian estimators for dynamic systems do not take constraints into account, or rely on specific approximations. Such approximations and ignorance of constraints may reduce the accuracy of estimation. In this paper, a new methodology for the state estimation of constrained systems with nonlinear models and non-Gaussian uncertainty, which are commonly encountered in practice, is pro...

  14. Search for passing-through-walls neutrons constrains hidden braneworlds

    Science.gov (United States)

    Sarrazin, Michaël; Pignol, Guillaume; Lamblin, Jacob; Pinon, Jonhathan; Méplan, Olivier; Terwagne, Guy; Debarsy, Paul-Louis; Petit, Fabrice; Nesvizhevsky, Valery V.

    2016-07-01

    In many theoretical frameworks our visible world is a 3-brane, embedded in a multidimensional bulk, possibly coexisting with hidden braneworlds. Some works have also shown that matter swapping between braneworlds can occur. Here we report the results of an experiment - at the Institut Laue-Langevin (Grenoble, France) - designed to detect thermal neutron swapping to and from another braneworld, thus constraining the probability p² of such an event. The limit, p 87 in Planck length units.

  15. Search for passing-through-walls neutrons constrains hidden braneworlds

    CERN Document Server

    Sarrazin, Michael; Lamblin, Jacob; Pinon, Jonhathan; Meplan, Olivier; Terwagne, Guy; Debarsy, Paul-Louis; Petit, Fabrice; Nesvizhevsky, Valery V

    2016-01-01

    In many theoretical frameworks our visible world is a $3$-brane, embedded in a multidimensional bulk, possibly coexisting with hidden braneworlds. Some works have also shown that matter swapping between braneworlds can occur. Here we report the results of an experiment - at the Institut Laue-Langevin (Grenoble, France) - designed to detect thermal neutron swapping to and from another braneworld, thus constraining the probability $p^2$ of such an event. The limit, $p87$ in Planck length units.

  16. Accumulation of stress in constrained assemblies: novel Satoh test configuration

    OpenAIRE

    Shirzadi, A. A.; Bhadeshia, H. K. D. H.

    2010-01-01

    A common test used to study the response of a transforming material to external constraint is due to Satoh and involves the cooling of a rigidly constrained tensile specimen while monitoring the stress that accumulates. Such tests are currently common in the invention of welding alloys which on phase transformation lead to a reduction in residual stresses in the final assembly. The test suffers from the fact that the whole of the tensile specimen is not maintained at a uniform temperature, ma...

  17. Maximizing entropy of image models for 2-D constrained coding

    OpenAIRE

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino; Zamarin, Marco; Ukhanova, Ann

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints and revisit the hard square const...
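    The transfer-matrix idea underlying such entropy calculations is easiest to see in one dimension: the capacity of the 1-D "no two adjacent 1s" constraint (the 1-D slice of the hard-square constraint) is the base-2 logarithm of the largest eigenvalue of the adjacency matrix of allowed symbol pairs. A minimal sketch:

```python
import numpy as np

# Adjacency (transfer) matrix of the 1-D constraint "no two adjacent 1s":
# symbol 0 may be followed by 0 or 1; symbol 1 may be followed only by 0.
T = np.array([[1.0, 1.0],
              [1.0, 0.0]])
capacity = float(np.log2(np.max(np.linalg.eigvals(T).real)))
# capacity = log2((1 + sqrt(5)) / 2), about 0.694 bits per symbol
```

The 2-D case treated in the paper has no such closed form, which is why causal models like Pickard random fields are used to make the entropy computable.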

  18. Distributionally Robust Joint Chance Constrained Problem under Moment Uncertainty

    Directory of Open Access Journals (Sweden)

    Ke-wei Ding

    2014-01-01

    Full Text Available We discuss and develop the convex approximation for robust joint chance constraints under uncertainty of first- and second-order moments. Robust chance constraints are approximated by Worst-Case CVaR constraints, which can be reformulated as a semidefinite program. The chance constrained problem can then be presented as a semidefinite program. We also find that the approximation for robust joint chance constraints has an equivalent individual quadratic approximation form.

  19. Constraining the volatile fraction of planets from transit observations

    CERN Document Server

    Alibert, Yann

    2016-01-01

    The determination of the abundance of volatiles in extrasolar planets is very important, as it can provide constraints on transport in protoplanetary disks and on the formation location of planets. However, constraining the internal structure of low-mass planets from transit measurements is known to be a degenerate problem. Using planetary structure and evolution models, we show how observations of transiting planets can be used to constrain their internal composition, in particular the amount of volatiles in the planetary interior, and consequently the amount of gas (defined in this paper to be only H and He) that the planet harbors. We show that for low-mass gas-poor planets located close to their central star, under the assumption that evaporation has efficiently removed the entire gas envelope, it is possible to constrain the volatile fraction of close-in transiting planets. We illustrate this method on the example of 55 Cnc e and show that under the assumption of the absence of gas, the measured mass and radius im...

  20. Performance enhancement for GPS positioning using constrained Kalman filtering

    Science.gov (United States)

    Guo, Fei; Zhang, Xiaohong; Wang, Fuhong

    2015-08-01

    Over the past decades Kalman filtering (KF) algorithms have been extensively investigated and applied in the area of kinematic positioning. In the application of KF in kinematic precise point positioning (PPP), it is often the case where some known functional or theoretical relations exist among the unknown state parameters, which can be and should be made use of to enhance the performance of kinematic PPP, especially in an urban and forest environment. The central task of this paper is to effectively blend the commonly used GNSS data and internal/external additional constrained information to generate an optimal PPP solution. This paper first investigates the basic algorithm of constrained Kalman filtering. Then two types of PPP model with speed constraints and trajectory constraints, respectively, are proposed. Further validation tests based on a variety of situations show that the positioning performances (positioning accuracy, reliability and continuity) from the constrained Kalman filter are significantly superior to those from the conventional Kalman filter, particularly under extremely poor observation conditions.
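    One common way to blend known linear relations among state parameters into a Kalman solution is to project the unconstrained estimate onto the constraint surface. The generic sketch below (not necessarily the formulation used in the paper) enforces a linear constraint D x = d on an estimate with covariance P:

```python
import numpy as np

def project_estimate(x, P, D, d):
    """Project a KF estimate (x, P) onto the linear constraint D x = d
    using the covariance-weighted projection."""
    K = P @ D.T @ np.linalg.inv(D @ P @ D.T)
    x_c = x - K @ (D @ x - d)       # constrained state estimate
    P_c = P - K @ D @ P             # reduced covariance on the constraint
    return x_c, P_c

# Example: force the 2nd and 4th state components (e.g. two velocity
# parameters known to be equal) to coincide.
x = np.array([1.0, 2.0, 3.0, 4.0])
P = np.eye(4)
D = np.array([[0.0, 1.0, 0.0, -1.0]])
d = np.array([0.0])
x_c, P_c = project_estimate(x, P, D, d)
```

Speed or trajectory constraints of the kind proposed in the paper can be expressed in this D, d form and applied after each conventional filter update.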

  1. A Constrained CA Model for Planning Simulation Incorporating Institutional Constraints

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In recent years, it has become prevalent to simulate urban growth by means of cellular automata (CA) modeling, which is based on self-organizing theories and differs from system dynamics modeling. Since the urban system is decidedly complex, the CA models applied in urban growth simulation should take into consideration not only the neighborhood influence, but also other factors influencing urban development. We put forward the term complex constrained CA (CC-CA) model, which integrates the constrained conditions of neighborhood, macro socio-economy, space and institution. In particular, constrained construction zoning, as one institutional constraint, is considered in the CC-CA modeling. In this paper, the conceptual CC-CA model is introduced together with its transition rules. Based on the CC-CA model for Beijing, we discuss the complex constraints on the city's urban development, and we show how to set institutional constraints in a planning scenario to control the urban growth pattern of Beijing.

  2. Mature Basin Development Portfolio Management in a Resource Constrained Environment

    International Nuclear Information System (INIS)

    The Nigerian petroleum industry is constantly faced with management of resource constraints stemming from capital and operating budgets, availability of skilled manpower, capacity of existing surface facilities, size of well assets, and the amount of soft and hard information. Constrained capital forces the industry to rank subsurface resources and potential before proceeding with preparation of development scenarios. Limited skilled manpower restricts the scope of integrated reservoir studies. The level of information forces technical staff and management to find a low-risk development alternative in limited time. The volume of oil, natural gas or water, or a combination of them, may be constrained by the design limits of an existing facility or by an external OPEC quota; managing such constraints requires strong portfolio management skills. The first part of the paper statistically analyses the development portfolio of a mature basin for (a) subsurface resource volumes, (b) developed and undeveloped volumes, (c) sweating of wells, and (d) facility assets. The analysis presented conclusively demonstrates that the 80/20 principle is active in the statistical sample. The 80/20 refers to 80% of the effect coming from 20% of the cause. The second part of the paper deals with how the 80/20 could be applied to manage a portfolio for a given set of constraints. Three application examples are discussed. Feedback on their implementation, resulting in focussed resource management with handsome rewards, is documented. The statistical analysis and application examples from a mature basin form a way forward for development portfolio management in a resource-constrained environment.

  3. Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Weishang Gao

    2013-01-01

    Full Text Available Evolutionary algorithms (EAs) have been shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and improper penalties in EAs with penalty functions can lead to losing a global optimum near or on the constrained boundary. Determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiagents guiding exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and the density of agents change dynamically according to the emergence of potentially optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolution with multiagents can not only effectively avoid the problem of determining the penalty coefficient but also quickly converge to a global optimum near or on the constrained boundary. By examining the speed and accuracy of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.

  4. Performance enhancement for GPS positioning using constrained Kalman filtering

    International Nuclear Information System (INIS)

    Over the past decades Kalman filtering (KF) algorithms have been extensively investigated and applied in the area of kinematic positioning. In the application of KF in kinematic precise point positioning (PPP), it is often the case where some known functional or theoretical relations exist among the unknown state parameters, which can be and should be made use of to enhance the performance of kinematic PPP, especially in an urban and forest environment. The central task of this paper is to effectively blend the commonly used GNSS data and internal/external additional constrained information to generate an optimal PPP solution. This paper first investigates the basic algorithm of constrained Kalman filtering. Then two types of PPP model with speed constraints and trajectory constraints, respectively, are proposed. Further validation tests based on a variety of situations show that the positioning performances (positioning accuracy, reliability and continuity) from the constrained Kalman filter are significantly superior to those from the conventional Kalman filter, particularly under extremely poor observation conditions. (paper)

  5. Constraining Intracluster Gas Models with AMiBA13

    Science.gov (United States)

    Molnar, Sandor M.; Umetsu, Keiichi; Birkinshaw, Mark; Bryan, Greg; Haiman, Zoltán; Hearn, Nathan; Shang, Cien; Ho, Paul T. P.; Locutus Huang, Chih-Wei; Koch, Patrick M.; Liao, Yu-Wei Victor; Lin, Kai-Yang; Liu, Guo-Chin; Nishioka, Hiroaki; Wang, Fu-Cheng; Proty Wu, Jiun-Huei

    2010-11-01

    Clusters of galaxies have been extensively used to determine cosmological parameters. A major difficulty in making the best use of Sunyaev-Zel'dovich (SZ) and X-ray observations of clusters for cosmology is that using X-ray observations it is difficult to measure the temperature distribution and therefore determine the density distribution in individual clusters of galaxies out to the virial radius. Observations with the new generation of SZ instruments are a promising alternative approach. We use clusters of galaxies drawn from high-resolution adaptive mesh refinement cosmological simulations to study how well we should be able to constrain the large-scale distribution of the intracluster gas (ICG) in individual massive relaxed clusters using AMiBA in its configuration with 13 1.2 m diameter dishes (AMiBA13) along with X-ray observations. We show that non-isothermal β models provide a good description of the ICG in our simulated relaxed clusters. We use simulated X-ray observations to estimate the quality of constraints on the distribution of gas density, and simulated SZ visibilities (AMiBA13 observations) for constraints on the large-scale temperature distribution of the ICG. We find that AMiBA13 visibilities should constrain the scale radius of the temperature distribution to about 50% accuracy. We conclude that the upgraded AMiBA, AMiBA13, should be a powerful instrument to constrain the large-scale distribution of the ICG.
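    The β model mentioned above has a simple closed form for the gas density, n(r) = n0 (1 + (r/rc)²)^(−3β/2); the non-isothermal version adds a large-scale temperature profile on top of it. A sketch of the density part, with illustrative (not fitted) parameter values:

```python
import numpy as np

def beta_model_density(r, n0=1e-2, r_core=0.2, beta=0.7):
    """Beta-model gas density n(r) = n0 * (1 + (r/r_core)^2)^(-3*beta/2).
    Parameter values are illustrative, not fitted to any cluster."""
    return n0 * (1.0 + (np.asarray(r) / r_core) ** 2) ** (-1.5 * beta)

# Radial profile out toward the cluster outskirts (arbitrary units).
profile = beta_model_density(np.linspace(0.0, 2.0, 50))
```

In the analysis described above, X-ray data constrain n0, r_core and β, while the SZ visibilities constrain the scale radius of the temperature profile.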

  6. AN ADAPTIVE TRUST REGION METHOD FOR EQUALITY CONSTRAINED OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    ZHANG Juliang; ZHANG Xiangsun; ZHUO Xinjian

    2003-01-01

    In this paper, a trust region method for equality constrained optimization based on a nondifferentiable exact penalty function is proposed. In this algorithm, the computation of the trial step's normal component is separated from the computation of its tangential component; that is, only the tangential component of the trial step is constrained by the trust radius, while the normal component and the trial step itself are unconstrained. The other main characteristic of the algorithm is the choice of the trust region radius, which uses information from the gradient of the objective function and the reduced Hessian. However, the Maratos effect can occur when the nondifferentiable exact penalty function is used as the merit function. To obtain superlinear convergence of the algorithm, we apply a second-order correction. Because of the special structure of the adaptive trust region method, the second-order correction is used only when p = 0 (defined as in Section 2), which differs from traditional trust region methods for equality constrained optimization and reduces the computational cost of the algorithm. Moreover, we prove that the algorithm is globally and superlinearly convergent.
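For orientation, the ratio-based radius update common to trust region methods can be sketched as follows. This is textbook machinery (a Newton-or-steepest-descent step with a ρ-based radius control), not the paper's adaptive radius rule or its nondifferentiable penalty merit function; all names and the test problem are illustrative.

```python
import numpy as np

def tr_step(g, H, delta):
    """Take the Newton step when it fits inside the trust region and is a
    descent direction; otherwise fall back to scaled steepest descent."""
    try:
        p = np.linalg.solve(H, -g)
        if np.linalg.norm(p) <= delta and g @ p < 0:
            return p
    except np.linalg.LinAlgError:
        pass
    return -delta * g / np.linalg.norm(g)

def trust_region_minimize(f, grad, hess, x, delta=1.0, max_iter=100, tol=1e-10):
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p = tr_step(g, H, delta)
        pred = -(g @ p + 0.5 * p @ H @ p)      # decrease predicted by the model
        ared = f(x) - f(x + p)                 # actual decrease
        rho = ared / pred if pred > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25                      # poor model: shrink the radius
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta *= 2.0                       # good model at the boundary: expand
        if rho > 0.1:
            x = x + p                          # accept the step
    return x

# Convex quadratic test problem: the minimizer is H^{-1} b.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ H @ x - b @ x
x_min = trust_region_minimize(f, lambda x: H @ x - b, lambda x: H, np.zeros(2))
```

On the quadratic test problem the Newton step is exact, so the loop terminates after a single accepted step at the minimizer.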

  7. Lifespan theorem for simple constrained surface diffusion flows

    CERN Document Server

    Wheeler, Glen

    2012-01-01

    We consider closed immersed hypersurfaces in $\R^3$ and $\R^4$ evolving by a special class of constrained surface diffusion flows. This class of constrained flows includes the classical surface diffusion flow. In this paper we present a Lifespan Theorem for these flows, which gives a positive lower bound on the time for which a smooth solution exists, and a small upper bound on the total curvature during this time. The hypothesis of the theorem is that the surface is not already singular in terms of concentration of curvature. This turns out to be a deep property of the initial manifold, as the lower bound on maximal time obtained depends precisely upon the concentration of curvature of the initial manifold in $L^2$ for $M^2$ immersed in $\R^3$ and additionally on the concentration in $L^3$ for $M^3$ immersed in $\R^4$. This is stronger than a previous result on a different class of constrained surface diffusion flows, as here we obtain an improved lower bound on maximal time, a better estimate during this peri...

  8. 49 CFR 174.86 - Maximum allowable operating speed.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...

  9. 78 FR 13999 - Maximum Interest Rates on Guaranteed Farm Loans

    Science.gov (United States)

    2013-03-04

    ... September 30, 2008 (73 FR 56754-56756). The proposed rule included provisions tying maximum rates to widely... York Prime rate plus 4 percent. The maximums should be the same for all FOs, regardless of size... Review,'' and Executive Order 13563, ``Improving Regulation and Regulatory Review,'' direct agencies...

  10. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except...

  11. The maximum principle for the Navier-Stokes equations

    Science.gov (United States)

    Akysh, Abdigali Sh.

    2016-08-01

    New connections were established between extreme values of the velocity, the density of kinetic energy (in particular the local maximum) and the pressure in the Navier-Stokes equations. Using these connections, the validity of the maximum principle was shown for the nonlinear Navier-Stokes equations, which is fundamental from the mathematical point of view.

  12. 33 CFR 155.775 - Maximum cargo level of oil.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Maximum cargo level of oil. 155.775 Section 155.775 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY... Personnel, Procedures, Equipment, and Records § 155.775 Maximum cargo level of oil. (a) For the purposes...

  13. Location of the Termination Shock at Solar Maximum

    OpenAIRE

    Stone, E. C.; Cummings, A. C.

    2002-01-01

    During the recent solar maximum, Voyager 1 was beyond 80 AU. Extrapolation of the small gradients of anomalous cosmic rays at solar minimum and the larger gradients at solar maximum indicate that the solar wind termination shock is at ≲ 92 AU at the beginning of 2002.

  14. A Family of Maximum SNR Filters for Noise Reduction

    DEFF Research Database (Denmark)

    Huang, Gongping; Benesty, Jacob; Long, Tao;

    2014-01-01

    This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction both in the time and short-time Fourier transform (STFT) domains with one single microphone and multiple microphones. In the time domain, we show that the maximum SNR filters can...

  15. 30 CFR 57.5039 - Maximum permissible concentration.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings....

  16. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...

  17. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations to other clustering algorithms are discussed.
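A minimal sketch of the idea: maximum-entropy clustering replaces hard C-means assignments with Gibbs-form soft memberships, which recover the hard assignment as the temperature T tends to zero. The function name, data, and temperature below are illustrative, not from the article.

```python
import numpy as np

def maxent_cluster(X, k, T=1.0, iters=50):
    """Soft clustering via the maximum-entropy principle: memberships take
    the Gibbs form p_ij ∝ exp(-||x_i - c_j||^2 / T); T -> 0 recovers the
    hard C-means assignment."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()  # deterministic init
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        w = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)  # stabilized Gibbs weights
        w /= w.sum(1, keepdims=True)
        centers = (w.T @ X) / w.sum(0)[:, None]            # entropy-weighted means
    return centers, w

# Two well-separated blobs; the soft memberships become effectively hard.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
centers, w = maxent_cluster(X, k=2, T=0.5)
```

With well-separated data the recovered centers sit at the blob means, matching what hard C-means would find.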

  18. 5 CFR 550.105 - Biweekly maximum earnings limitation.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...

  19. 5 CFR 550.106 - Annual maximum earnings limitation.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...

  20. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u′_x (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
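The lattice setting is easy to experiment with numerically. The sketch below integrates the semidiscrete Nagumo equation with forward Euler and illustrates the invariance of [0, 1] asserted by the maximum principle; the parameter values are illustrative, not taken from the article.

```python
import numpy as np

# Forward-Euler integration of the semidiscrete Nagumo lattice equation
#   u'_x = k (u_{x-1} - 2 u_x + u_{x+1}) + f(u_x),  f(u) = u (1 - u) (u - a),
# with Dirichlet boundary values 0 and 1.  By the maximum principle, data
# starting in [0, 1] should remain in [0, 1] (dt is small enough that the
# explicit scheme is monotone).
k, a = 1.0, 0.3
n, dt, steps = 50, 0.01, 2000

u = np.linspace(0.0, 1.0, n)              # initial data inside [0, 1]
f = lambda v: v * (1 - v) * (v - a)       # bistable nonlinearity
for _ in range(steps):
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]   # discrete Laplacian
    u = u + dt * (k * lap + f(u))
    u[0], u[-1] = 0.0, 1.0                # boundary values stay in [0, 1]
```

The solution relaxes toward a front connecting the two stable states 0 and 1 while never leaving [0, 1].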

  1. Experimental study on prediction model for maximum rebound ratio

    Institute of Scientific and Technical Information of China (English)

    LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong

    2007-01-01

    The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to, and slightly above, the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or borehole.

  2. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. 
Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation

  3. Does aspartic acid racemization constrain the depth limit of the subsurface biosphere?

    Energy Technology Data Exchange (ETDEWEB)

    Onstott, T. C. [Princeton University; Aubrey, A.D. [Jet Propulsion Laboratory, Pasadena, CA; Kieft, T L [New Mexico Institute of Mining and Technology; Silver, B J [Jet Propulsion Laboratory, Pasadena, CA; Phelps, Tommy Joe [ORNL; Van Heerden, E. [University of the Free State; Opperman, D. J. [University of the Free State; Bada, J L. [Geosciences Research Division, Scripps Institution of Oceanography, University of California San Diego

    2014-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of ~89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.
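The age relation behind such estimates is the standard first-order reversible racemization law (with an equilibrium D/L of 1). A minimal sketch, with an invented rate constant purely for illustration (the article's in vivo rates are temperature dependent):

```python
import numpy as np

def racemization_age(dl, dl0, k):
    """Standard first-order reversible racemization kinetics (K_eq = 1):
        ln[(1 + D/L)/(1 - D/L)] - ln[(1 + (D/L)_0)/(1 - (D/L)_0)] = 2 k t.
    Given the rate constant k and the measured and initial D/L ratios,
    this returns the elapsed time t since the protein pool was synthesized."""
    return (np.log((1 + dl) / (1 - dl)) - np.log((1 + dl0) / (1 - dl0))) / (2 * k)

# Invented illustrative values: k = 1e-3 / yr, initial D/L = 0.05.
age = racemization_age(dl=0.14, dl0=0.05, k=1e-3)
```

The forward solution of the same law is D/L(t) = tanh(k t + artanh((D/L)_0)), which makes the formula easy to check by a round trip.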

  4. Does aspartic acid racemization constrain the depth limit of the subsurface biosphere?

    Science.gov (United States)

    Onstott, T C; Magnabosco, C; Aubrey, A D; Burton, A S; Dworkin, J P; Elsila, J E; Grunsfeld, S; Cao, B H; Hein, J E; Glavin, D P; Kieft, T L; Silver, B J; Phelps, T J; van Heerden, E; Opperman, D J; Bada, J L

    2014-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of ~89 years for 1 km depth and 27 °C and 1-2 years for 3 km depth and 54 °C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 °C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples. PMID:24289240

  6. Does Aspartic Acid Racemization Constrain the Depth Limit of the Subsurface Biosphere?

    Science.gov (United States)

    Onstott, T C.; Magnabosco, C.; Aubrey, A. D.; Burton, A. S.; Dworkin, J. P.; Elsila, J. E.; Grunsfeld, S.; Cao, B. H.; Hein, J. E.; Glavin, D. P.; Kieft, T. L.; Silver, B. J.; Phelps, T. J.; Heerden, E. Van; Opperman, D. J.; Bada, J. L.

    2013-01-01

    Previous studies of the subsurface biosphere have deduced average cellular doubling times of hundreds to thousands of years based upon geochemical models. We have directly constrained the in situ average cellular protein turnover or doubling times for metabolically active micro-organisms based on cellular amino acid abundances, D/L values of cellular aspartic acid, and the in vivo aspartic acid racemization rate. Application of this method to planktonic microbial communities collected from deep fractures in South Africa yielded maximum cellular amino acid turnover times of approximately 89 years for 1 km depth and 27 C and 1-2 years for 3 km depth and 54 C. The latter turnover times are much shorter than previously estimated cellular turnover times based upon geochemical arguments. The aspartic acid racemization rate at higher temperatures yields cellular protein doubling times that are consistent with the survival times of hyperthermophilic strains and predicts that at temperatures of 85 C, cells must replace proteins every couple of days to maintain enzymatic activity. Such a high maintenance requirement may be the principal limit on the abundance of living micro-organisms in the deep, hot subsurface biosphere, as well as a potential limit on their activity. The measurement of the D/L of aspartic acid in biological samples is a potentially powerful tool for deep, fractured continental and oceanic crustal settings where geochemical models of carbon turnover times are poorly constrained. Experimental observations on the racemization rates of aspartic acid in living thermophiles and hyperthermophiles could test this hypothesis. The development of corrections for cell wall peptides and spores will be required, however, to improve the accuracy of these estimates for environmental samples.

  7. A constrained-gradient method to control divergence errors in numerical MHD

    Science.gov (United States)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, `divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike `locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or `8-wave' cleaning can produce order-of-magnitude errors.

  8. Constrained customization of non-coplanar beam orientations in radiotherapy of brain tumours

    Science.gov (United States)

    Rowbottom, Carl Graham; Oldham, Mark; Webb, Steve

    1999-02-01

    A methodology for the constrained customization of non-coplanar beam orientations in radiotherapy treatment planning has been developed and tested on a cohort of five patients with tumours of the brain. The methodology employed a combination of single- and multibeam cost functions to produce customized beam orientations. The single-beam cost function was used to reduce the search space for the multibeam cost function, which was minimized using a fast simulated annealing algorithm. The scheme aims to produce well-spaced, customized beam orientations for each patient that deliver a low dose to organs at risk (OARs). The customized plans were compared with standard plans containing the number and orientation of beams chosen by a human planner. The beam orientation constraint-customized plans employed the same number of treatment beams as the standard plan but with beam orientations chosen by the constrained-customization scheme. Improvements from beam orientation constraint-customization were studied in isolation by customizing the beam weights of both plans using a dose-based downhill simplex algorithm. The results show that beam orientation constraint-customization reduced the maximum dose to the orbits by an average of 18.8% and to the optic nerves by 11.4%, with no degradation of the planning target volume (PTV) dose distribution. The mean doses, averaged over the patient cohort, were reduced by 4.2% and 12.4% for the orbits and optic nerves respectively. In conclusion, beam orientation constraint-customization can reduce the dose to OARs, for few-beam treatment plans, when compared with standard treatment plans developed by a human planner.

  9. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power is gained, so a smaller sample size is inherently needed. This article discusses this reduction in sample size as an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain the sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, with fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in higher power than assigning a positive or negative sign to the parameters (e.g., beta1 > 0).
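The mechanism behind the sample-size gain can be seen in a minimal two-group Monte Carlo. The effect size, group size, and known-variance z-test below are invented for illustration; the article's simulations cover ANOVA and regression with several ordered parameters.

```python
import numpy as np

# Two groups with true means 0.5 and 0.0 (sd = 1, known), n = 20 per group.
# We compare a two-sided z-test of mu1 = mu2 against the one-sided
# ("order-constrained") alternative mu1 > mu2 at alpha = 0.05.
rng = np.random.default_rng(0)
n, reps, delta = 20, 20_000, 0.5
diff = rng.normal(delta, 1, (reps, n)).mean(1) - rng.normal(0, 1, (reps, n)).mean(1)
z = diff / np.sqrt(2 / n)                       # known-variance z statistic
power_two_sided = np.mean(np.abs(z) > 1.96)     # unconstrained (two-sided) test
power_one_sided = np.mean(z > 1.645)            # constrained (one-sided) test
```

Because the one-sided critical value is smaller (1.645 vs 1.96), the constrained test detects the same true ordering more often at the same sample size, which is exactly the power gain the sample-size tables quantify.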

  10. Application of the Maximum Entropy Principle in the Analysis of a Non-Equilibrium Chemically Reacting Mixture

    Directory of Open Access Journals (Sweden)

    Hameed Metghalchi

    2005-03-01

    The Maximum Entropy Principle has been used to model complex chemical reaction processes. It is employed by the Rate-Controlled Constrained-Equilibrium (RCCE) method to determine the concentrations of different species during a non-equilibrium combustion process. In this model, the system is assumed to evolve through constrained-equilibrium states in which the entropy of the mixture is maximized subject to constraints. The mixture composition is determined by integrating a set of differential equations for the constraints rather than integrating differential equations for the species, as is done with detailed kinetics techniques. Since the number of constraints is much smaller than the number of species present, the number of rate equations required to describe the time evolution of the system is considerably reduced. This method has been used to model the stoichiometric formaldehyde-oxygen combustion process. In this study 29 species and 139 reactions were used, while keeping the energy and volume of the system constant. Calculations were done at pressures ranging from 1 atm to 100 atm and temperatures from 900 K to 1500 K. Three fixed elemental constraints (conservation of elemental carbon, oxygen, and hydrogen) and from one to six variable constraints were used. The four to nine rate equations for the constraint potentials (Lagrange multipliers conjugate to the constraints) were integrated and, as expected, RCCE calculations gave correct equilibrium values in all cases. Only 8 constraints were required to give very good agreement with detailed calculations. Ignition delay times and major species concentrations were within 0.5% to 5% of the values predicted by detailed chemistry calculations. Adding more constraints improved the accuracy of the mole fractions of minor species at early times, but had only a little effect on the ignition delay times. Rate
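The constrained entropy maximization at the heart of RCCE can be illustrated on a toy system with a single "energy" constraint. This sketch (numpy, with invented energy levels and target value) finds the Lagrange multiplier, the analogue of RCCE's constraint potentials, by bisection:

```python
import numpy as np

def maxent_distribution(E, E_target, lo=-20.0, hi=50.0, iters=200):
    """Maximize S = -sum_i p_i ln p_i subject to sum_i p_i = 1 and
    sum_i p_i E_i = E_target.  The Lagrange-multiplier solution has the
    Gibbs form p_i ∝ exp(-beta E_i); beta, the multiplier conjugate to
    the constraint, is found by bisection (mean energy decreases in beta)."""
    def mean_E(beta):
        w = np.exp(-beta * (E - E.min()))   # shifted for numerical stability
        p = w / w.sum()
        return p @ E
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_E(mid) > E_target:
            lo = mid                        # mean too high: increase beta
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta

# Four-level toy system with energies 0..3 and a target mean energy of 1.0.
E = np.array([0.0, 1.0, 2.0, 3.0])
p, beta = maxent_distribution(E, E_target=1.0)
```

In RCCE the same structure appears with several constraints at once: the composition is the entropy-maximizing distribution for the current constraint potentials, and only the potentials are evolved in time.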

  11. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.
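The underlying single-issue rules are easy to state computationally: constrained equal awards (CEA) gives each claimant min(c_i, λ) and constrained equal losses (CEL) gives max(0, c_i − λ), with λ fixed so the estate is exactly exhausted. A sketch of both (the paper's two-stage, multi-issue versions build on these; the claims and estate are invented):

```python
def cea(claims, estate):
    """Constrained equal awards: each claimant receives min(c_i, lam),
    with lam chosen by bisection so the awards exhaust the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        total = sum(min(c, lam) for c in claims)     # increasing in lam
        lo, hi = (lam, hi) if total < estate else (lo, lam)
    return [min(c, lam) for c in claims]

def cel(claims, estate):
    """Constrained equal losses: each claimant receives max(0, c_i - lam),
    with lam chosen by bisection so the awards exhaust the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        total = sum(max(0.0, c - lam) for c in claims)   # decreasing in lam
        lo, hi = (lo, lam) if total < estate else (lam, hi)
    return [max(0.0, c - lam) for c in claims]

awards = cea([100, 200, 300], 300)   # ≈ [100, 100, 100]
losses = cel([100, 200, 300], 300)   # ≈ [0, 100, 200]
```

The two rules are dual: CEA protects small claimants (everyone gets the same award up to their claim), while CEL protects large ones (everyone suffers the same loss down to zero).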

  12. New constraining datasets for Eurasian ice sheet modelling: chronology, fjords and bedrock

    Science.gov (United States)

    Gyllencreutz, R.; Tarasov, L.; Mangerud, J.; Svendsen, J. I.; Lohne, Ø. S.

    2009-04-01

    The increasing resolution of ice sheet models demands more detailed data for constraining the models and for comparison of results. Important data for this include ice sheet chronology, bed conditions and topography. We address this by compiling published data into three new constraining data sets. The Eurasian ice sheet chronology is reconstructed in our database-GIS solution (called DATED; Gyllencreutz et al., 2007). In DATED, we are building a database of all available dates, and a GIS of all geomorphologic features, relevant to the ice configuration through the Last Glacial Maximum and the following deglaciation, based on results from the literature. Reconstructions of the ice sheet configuration are presented as thousand-year time slices of the advance and decay of the Eurasian ice sheet between 25 and 10 thousand calendar years ago, based on chronologic, geomorphologic and stratigraphic data. To facilitate handling of error estimates in ice sheet modeling using our reconstructions, we made three reconstructions for every time slice: a maximum, a minimum and a "probable" ice sheet configuration, based on the limitations of the data at hand. The estimated uncertainty for the reconstructions was calculated in the GIS and amounts to about 1 million km2 (about 1/5 of the maximum area) for most of the record before the Younger Dryas, indicating significant gaps in the knowledge of the Eurasian ice sheet configuration. In order to facilitate modeling of fast ice flow and ice streams, we compiled information about exposed bedrock from digital Quaternary maps at a scale of 1:1 million from the geological surveys of Norway, Sweden, Finland, the UK and Ireland, together with published drift thickness estimates. The bed conditions data set was generalized to a grid resolution of 0.25 x 0.25 degrees. The Norwegian fjords are important for topographic steering, especially for fast glacier flow and draw-down from more central parts of the ice sheet. However

  13. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...

  14. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    Science.gov (United States)

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and the annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentrations in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize

  15. Distributing an Exact Algorithm for Maximum Clique: maximising the costup

    OpenAIRE

    McCreesh, Ciaran; Prosser, Patrick

    2012-01-01

    We take an existing implementation of an algorithm for the maximum clique problem and modify it so that we can distribute it over an ad-hoc cluster of machines. Our goal was to achieve a significant speedup in performance with minimal development effort, i.e. a maximum costup. We present a simple modification to a state-of-the-art exact algorithm for maximum clique that allows us to distribute it across many machines. An empirical study over large hard benchmarks shows that speedups of an ord...
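The distribution idea the abstract describes — carving the top level of the branch-and-bound search tree into independent subproblems that can be shipped to separate machines — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names are hypothetical, and the sequential loop over subproblems stands in for a pool of worker machines:

```python
def clique_subproblem(adj, v, cand, lower_bound=0):
    """Exact branch-and-bound max-clique size, restricted to cliques that
    contain vertex v and otherwise use only vertices from cand."""
    best = lower_bound

    def expand(clique, cand):
        nonlocal best
        for i, u in enumerate(cand):
            if len(clique) + len(cand) - i <= best:
                return  # bound: remaining candidates cannot beat incumbent
            expand(clique + [u], [w for w in cand[i + 1:] if w in adj[u]])
        best = max(best, len(clique))  # clique itself is a valid clique

    expand([v], [u for u in cand if u in adj[v]])
    return best


def distributed_max_clique(adj):
    vertices = list(adj)
    # One independent subproblem per vertex: the largest clique whose
    # lowest-indexed vertex is v. Their union covers every clique, so each
    # subproblem could run on a different machine with no coordination
    # beyond (optionally) sharing the incumbent as a lower bound.
    best = 0
    for i, v in enumerate(vertices):       # stand-in for distributed workers
        best = max(best, clique_subproblem(adj, v, vertices[i + 1:], best))
    return best


# Small example: triangle {0,1,2} plus a pendant path 2-3-4.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(distributed_max_clique(adj))  # maximum clique has 3 vertices
```

Sharing the incumbent between subproblems only tightens the bound; omitting it leaves each subproblem fully independent, which is what makes the ad-hoc-cluster distribution cheap.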

  16. Parameters determining maximum wind velocity in a tropical cyclone

    International Nuclear Information System (INIS)

    The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)
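The kind of relation the abstract refers to can be sketched as follows — a hedged reconstruction from the quantities named (angular momentum, radius of maximum wind, spiral angle), not the paper's actual derived formula, which the abstract does not reproduce. If $M$ is the angular momentum per unit mass at the radius of maximum wind $r_m$, the tangential component there is $v_t = M/r_m$; adding a radial inflow that crosses the circular isobars at spiral angle $\alpha$ gives a total speed of roughly

```latex
v_{\max} \;\approx\; \frac{v_t}{\cos\alpha} \;=\; \frac{M}{r_m \cos\alpha},
```

so the maximum velocity increases with angular momentum and with the spiral angle, and decreases with the radius of maximum wind, consistent with the parametrization described.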

  17. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

By taking a subsequence of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimate of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least squares method. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares method.
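For orientation, the baseline the paper improves on can be sketched generically. Under Gaussian white noise, the maximum likelihood estimate of a scalar gain coincides with least squares and admits a simple recursive update; this is a generic illustration only, not the paper's CML corrector or its asymptotic analysis:

```python
import random

# System: y_k = a * x_k + e_k, with e_k Gaussian white noise.
# Recursive ML (= recursive least squares for Gaussian noise).
random.seed(0)
a_true = 1.5
P, a_hat = 1e6, 0.0               # large P acts as an uninformative prior
for _ in range(2000):
    x = random.uniform(-1.0, 1.0)
    y = a_true * x + random.gauss(0.0, 0.1)
    K = P * x / (1.0 + P * x * x)  # gain
    a_hat += K * (y - a_hat * x)   # innovation update
    P *= (1.0 - K * x)             # shrink the estimate's uncertainty
print(a_hat)  # close to the true value 1.5
```

The paper's point is that on subsampled (hence independent) observations this baseline still carries asymptotic error that the CML corrector reduces.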

  18. 21 CFR 888.3790 - Wrist joint metal constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint metal constrained cemented prosthesis... constrained cemented prosthesis. (a) Identification. A wrist joint metal constrained cemented prosthesis is a... as cobalt-chromium-molybdenum, and is limited to those prostheses intended for use with bone...

  19. 21 CFR 888.3520 - Knee joint femorotibial metal/polymer non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ...-constrained cemented prosthesis. 888.3520 Section 888.3520 Food and Drugs FOOD AND DRUG ADMINISTRATION... § 888.3520 Knee joint femorotibial metal/polymer non-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/polymer non-constrained cemented prosthesis is a device intended...

  20. 21 CFR 888.3530 - Knee joint femorotibial metal/polymer semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ...-constrained cemented prosthesis. 888.3530 Section 888.3530 Food and Drugs FOOD AND DRUG ADMINISTRATION... § 888.3530 Knee joint femorotibial metal/polymer semi-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/polymer semi-constrained cemented prosthesis is a device...

  1. 21 CFR 888.3500 - Knee joint femorotibial metal/composite semi-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ...-constrained cemented prosthesis. 888.3500 Section 888.3500 Food and Drugs FOOD AND DRUG ADMINISTRATION... § 888.3500 Knee joint femorotibial metal/composite semi-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/composite semi-constrained cemented prosthesis is a...

  2. 21 CFR 888.3490 - Knee joint femorotibial metal/composite non-constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ...-constrained cemented prosthesis. 888.3490 Section 888.3490 Food and Drugs FOOD AND DRUG ADMINISTRATION... § 888.3490 Knee joint femorotibial metal/composite non-constrained cemented prosthesis. (a) Identification. A knee joint femorotibial metal/composite non-constrained cemented prosthesis is a...

  3. 21 CFR 888.3550 - Knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis.

    Science.gov (United States)

    2010-04-01

    ... constrained cemented prosthesis. 888.3550 Section 888.3550 Food and Drugs FOOD AND DRUG ADMINISTRATION... § 888.3550 Knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis. (a) Identification. A knee joint patellofemorotibial polymer/metal/metal constrained cemented prosthesis is a...

  4. Constrained dynamics approach for motion synchronization and consensus

    Science.gov (United States)

    Bhatia, Divya

In this research we propose to develop constrained dynamical systems based stable attitude synchronization, consensus and tracking (SCT) control laws for the formation of rigid bodies. The generalized constrained dynamics Equations of Motion (EOM) are developed utilizing constraint potential energy functions that enforce communication constraints. Euler-Lagrange equations are employed to develop the non-linear constrained dynamics of multiple vehicle systems. The constraint potential energy is synthesized based on a graph theoretic formulation of the vehicle-vehicle communication. Constraint stabilization is achieved via Baumgarte's method. The performance of these constrained dynamics based formations is evaluated for bounded control authority. The above method has been applied to various cases and the results have been obtained using MATLAB simulations showing stability, synchronization, consensus and tracking of formations. The first case corresponds to an N-pendulum formation without external disturbances, in which the springs and the dampers connected between the pendulums act as the communication constraints. The damper helps in stabilizing the system by damping the motion whereas the spring acts as a communication link relaying relative position information between two connected pendulums. Lyapunov stabilization (energy based stabilization) technique is employed to depict the attitude stabilization and boundedness. Various scenarios involving different values of springs and dampers are simulated and studied. Motivated by the first case study, we study the formation of N 2-link robotic manipulators. The governing EOM for this system is derived using Euler-Lagrange equations. A generalized set of communication constraints are developed for this system using graph theory. The constraints are stabilized using Baumgarte's techniques. The attitude SCT is established for this system and the results are shown for the special case of three 2-link robotic manipulators
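The Baumgarte stabilization used above can be sketched for a single holonomic constraint — a minimal one-body illustration with assumed gains, not the authors' multi-vehicle formulation. The constraint C(q) = 0 is enforced at the acceleration level by requiring C̈ + 2αĊ + βC = 0, which turns any constraint violation into a damped error dynamic:

```python
# Pendulum modeled as a constrained point mass under gravity:
# constraint C = (x^2 + y^2 - L^2)/2 = 0 keeps the mass on a circle.
# Baumgarte gains alpha, beta (assumed values) damp constraint drift.
m, g, L = 1.0, 9.81, 1.0
alpha, beta = 5.0, 25.0
dt, steps = 1e-3, 5000

x, y = L + 0.05, 0.0          # start slightly OFF the constraint
vx, vy = 0.0, 0.0
C0 = 0.5 * (x * x + y * y - L * L)   # initial violation

for _ in range(steps):
    C = 0.5 * (x * x + y * y - L * L)
    Cdot = x * vx + y * vy
    Fx, Fy = 0.0, -m * g
    # Multiplier lam from Cddot + 2*alpha*Cdot + beta*C = 0, using
    # Cddot = vx^2 + vy^2 + (x*Fx + y*Fy + lam*(x^2 + y^2)) / m
    lam = (m * (-2 * alpha * Cdot - beta * C - vx * vx - vy * vy)
           - (x * Fx + y * Fy)) / (x * x + y * y)
    ax = (Fx + lam * x) / m
    ay = (Fy + lam * y) / m
    vx += ax * dt; vy += ay * dt   # semi-implicit Euler step
    x += vx * dt;  y += vy * dt

C_final = 0.5 * (x * x + y * y - L * L)
print(C0, C_final)  # violation decays by orders of magnitude
```

The same device scales to the multi-vehicle case: each communication constraint contributes its own multiplier, and the Baumgarte terms keep numerical drift from accumulating during long simulations.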

  5. Pattern recognition constrains mantle properties, past and present

    Science.gov (United States)

    Atkins, S.; Rozel, A. B.; Valentine, A. P.; Tackley, P.; Trampert, J.

    2015-12-01

Understanding and modelling mantle convection requires knowledge of many mantle properties, such as viscosity, chemical structure and thermal properties such as radiogenic heating rate. However, many of these parameters are only poorly constrained. We demonstrate a new method for inverting present day Earth observations for mantle properties. We use neural networks to represent the posterior probability density functions of many different mantle properties given the present structure of the mantle. We construct these probability density functions by sampling a wide range of possible mantle properties and running forward simulations, using the convection code StagYY. Our approach is particularly powerful because of its flexibility. Our samples are selected in the prior space, rather than being targeted towards a particular observation, as would normally be the case for probabilistic inversion. This means that the same suite of simulations can be used for inversions using a wide range of geophysical observations without the need to resample. Our method is probabilistic and non-linear and is therefore compatible with non-linear convection, avoiding some of the limitations associated with other methods for inverting mantle flow. This allows us to consider the entire history of the mantle. We also need relatively few samples for our inversion, making our approach computationally tractable when considering long periods of mantle history. Using the present thermal and density structure of the mantle, we can constrain rheological and compositional parameters such as viscosity and yield stress. We can also use the present day mantle structure to make inferences about the initial conditions for convection 4.5 Gyr ago. We can constrain initial mantle conditions including the initial concentration of heat producing elements in the mantle and the initial thickness of primordial material at the CMB. Currently we use density and temperature structure for our inversions, but we can

  6. Vibration Suppression Analysis for Supporter with Constrained Layer Damping

    Institute of Scientific and Technical Information of China (English)

    杜华军; 邹振祝; 黄文虎

    2004-01-01

By analyzing the correlation between modal calculations and modal experiments of a typical supporter, an effective finite element analysis (FEA) model of the actual aerospace supporter is created. According to the analysis of constrained viscoelastic damping, the strategies of PVC have been worked out, the correlation between modal calculations and modal experiments of the supporter has been computed, and an experiment has been designed based on the calculation results. The results of the experiments verify that the PVC strategy can effectively suppress vibration.

  7. The unreasonable effectiveness of experiments in constraining nova nucleosynthesis

    Directory of Open Access Journals (Sweden)

    Parikh Anuj

    2014-01-01

Classical nova explosions arise from thermonuclear ignition in the envelopes of accreting white dwarfs in close binary star systems. Detailed observations of novae have stimulated numerous studies in theoretical astrophysics and experimental nuclear physics. These phenomena are unusual in nuclear astrophysics because most of the thermonuclear reaction rates thought to be involved are constrained by experimental measurements. This situation allows for rather precise statements to be made about which measurements are still necessary to improve the nuclear physics input to astrophysical models. We briefly discuss desired measurements in these environments with an emphasis on recent experimental progress made to better determine key rates.

  8. Balance of payments constrained growth models: history and overview

    Directory of Open Access Journals (Sweden)

    Anthony P. Thirlwall

    2011-12-01

Thirlwall’s 1979 balance of payments constrained growth model predicts that a country’s long run growth of GDP can be approximated by the ratio of the growth of real exports to the income elasticity of demand for imports assuming negligible effects from real exchange rate movements. The paper surveys developments of the model since then, allowing for capital flows, interest payments on debt, terms of trade movements, and disaggregation of the model by commodities and trading partners. Various tests of the model are discussed, and an extensive list of papers that have examined the model is presented.
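Thirlwall's 1979 result can be stated compactly: with export volume growth $x$ and income elasticity of import demand $\pi$, and neglecting real-exchange-rate effects, the balance-of-payments-constrained growth rate is

```latex
y_B \;=\; \frac{x}{\pi}.
```

For example, exports growing at 6% per year in an economy with $\pi = 1.5$ cap long-run GDP growth near 4% per year; the extensions surveyed in the paper modify this ratio to account for capital flows, debt service and terms-of-trade movements.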

  9. Constrained variational results for the new Bethe homework problem

    International Nuclear Information System (INIS)

Bethe has proposed two model N-N interactions, one containing a central plus σ₁·σ₂ spin dependence and the other containing in addition a tensor force, to study the convergence of various many-body techniques for calculating the bulk properties of many fermion fluids. Following the success of constrained variational calculations in describing the behaviour of the original Bethe homework problem, which involved a purely central interaction, results for the new spin-dependent potentials in neutron matter and nuclear matter are presented here. (author)

  10. Penalized interior point approach for constrained nonlinear programming

    Institute of Scientific and Technical Information of China (English)

    LU Wen-ting; YAO Yi-rong; ZHANG Lian-sheng

    2009-01-01

A penalized interior point approach for constrained nonlinear programming is examined in this work. To overcome the difficulty of initialization for the interior point method, a problem equivalent to the primal problem is constructed by incorporating an auxiliary variable. A combined approach of logarithm barrier and quadratic penalty function is proposed to solve the problem. Based on Newton's method, the global convergence of the interior point and line search algorithm is proven. Only a finite number of iterations is required to reach an approximate optimal solution. Numerical tests are given to show the effectiveness of the method.
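The barrier ingredient of such a method can be illustrated on a one-dimensional problem — a generic log-barrier sketch with Newton steps, not the authors' combined barrier-penalty scheme or their auxiliary-variable construction. To minimize (x − 2)² subject to x ≤ 1, we follow the central path of φ_μ(x) = (x − 2)² − μ ln(1 − x) as μ shrinks toward zero:

```python
def barrier_solve(mu0=1.0, shrink=0.2, tol=1e-8):
    """Log-barrier method for: minimize (x-2)^2 subject to x <= 1.
    Follows the central path with damped Newton steps as mu -> 0."""
    x = 0.0                      # strictly feasible start (x < 1)
    mu = mu0
    while mu > 1e-10:
        for _ in range(50):      # Newton on phi(x) = (x-2)^2 - mu*log(1-x)
            g = 2 * (x - 2) + mu / (1 - x)       # phi'
            h = 2 + mu / (1 - x) ** 2            # phi''  (always > 0)
            step = -g / h
            t = 1.0
            while x + t * step >= 1:             # stay strictly feasible
                t *= 0.5
            x += t * step
            if abs(g) < tol:
                break
        mu *= shrink             # tighten the barrier
    return x

print(barrier_solve())  # approaches the constrained minimizer x* = 1
```

The unconstrained minimizer x = 2 is infeasible, so the solution sits on the boundary; the barrier keeps every iterate strictly inside the feasible region while the shrinking μ lets the iterates approach it, which is exactly the behaviour the penalized interior point construction preserves while also fixing the initialization problem.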

  11. Detection prospects for conformally constrained vector-portal dark matter

    CERN Document Server

    Sage, Frederick S; Dick, Rainer; Steele, T G; Mann, R B

    2016-01-01

    We work with a UV conformal U(1)' extension of the Standard Model, motivated by the hierarchy problem and recent collider anomalies. This model admits fermionic vector portal WIMP dark matter charged under the U(1)' gauge group. The asymptotically safe boundary conditions can be used to fix the coupling parameters, which allows the observed thermal relic abundance to constrain the mass of the dark matter particle. This highly restricts the parameter space, allowing strong predictions to be made. The parameter space of several UV conformal U(1)' scenarios will be explored, and both bounds and possible signals from direct and indirect detection observation methods will be discussed.

  12. Constraining interacting dark energy models with latest cosmological observations

    Science.gov (United States)

    Xia, Dong-Mei; Wang, Sai

    2016-11-01

    The local measurement of H0 is in tension with the prediction of Λ cold dark matter model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations on cosmic microwave background, the baryon acoustic oscillation, large-scale structure, supernovae, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for the interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not totally released.

  13. Constraining Axion Dark Matter with Big Bang Nucleosynthesis

    OpenAIRE

    Kfir Blum; Raffaele Tito D'Agnolo; Mariangela Lisanti; Benjamin R. Safdi

    2014-01-01

We show that Big Bang Nucleosynthesis (BBN) significantly constrains axion-like dark matter. The axion acts like an oscillating QCD θ angle that redshifts in the early universe, increasing the neutron-proton mass difference at neutron freeze-out. An axion-like particle that couples too strongly to QCD results in the underproduction of ⁴He during BBN and is thus excluded. The BBN bound overlaps with much of the parameter space that would be covered by proposed searches for time-varying ...

  14. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model denoted ShiftCP has proven useful for the modeling of latency changes in trial based neuroimaging data[17]. In order to facilitate component interpretation we presently extend the shiftCP model such that the extracted components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the shiftCP model.

  15. Constraining interacting dark energy models with latest cosmological observations

    Science.gov (United States)

    Xia, Dong-Mei; Wang, Sai

    2016-08-01

    The local measurement of H0 is in tension with the prediction of ΛCDM model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations on CMB, BAO, LSS, SNe, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for the interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not totally released.

  16. Constraining portals with displaced Higgs decay searches at the LHC

    CERN Document Server

    Clarke, Jackson D

    2015-01-01

    It is very easy to write down models in which long-lived particles decaying to standard model states are pair-produced via Higgs decays, resulting in the signature of approximately back-to-back pairs of displaced narrow hadronic jets and/or lepton jets at the LHC. The LHC collaborations have already searched for such signatures with no observed excess. This paper describes a Monte Carlo method to reinterpret the searches. The method relies on (ideally multidimensional) efficiency tables, thus we implore collaborations to include them in any future work. Exclusion regions in mixing-mass parameter space are presented which constrain portal models.

  17. Gauge Conditions for the Constrained-WZNW--Toda Reductions

    CERN Document Server

    Gervais, Jean-Loup; Razumov, A V; Saveliev, M V; Gervais, Jean-Loup; Raifeartaigh, Lochlainn O'; Razumov, Alexander V.; Saveliev, Mikhail V.

    1993-01-01

There is a constrained-WZNW--Toda theory for any simple Lie algebra equipped with an integral gradation. It is explained how the different approaches to these dynamical systems are related by gauge transformations. Combining Gauss decompositions in relevant gauges, we unify formulae already derived, and explicitly determine the holomorphic expansion of the conformally reduced WZNW solutions - whose restriction gives the solutions of the Toda equations. The same takes place also for semi-integral gradations. Most of our conclusions are also applicable to the affine Toda theories.

  18. A REVISED CONJUGATE GRADIENT PROJECTION ALGORITHM FOR INEQUALITY CONSTRAINED OPTIMIZATIONS

    Institute of Scientific and Technical Information of China (English)

    Wei Wang; Lian-sheng Zhang; Yi-fan Xu

    2005-01-01

A revised conjugate gradient projection method for nonlinear inequality constrained optimization problems is proposed in this paper, in which the search direction combines the conjugate projection gradient with the quasi-Newton direction. It has two merits. First, the amount of computation is lower because the gradient matrix needs to be computed only once at each iteration. Second, the algorithm is globally convergent and locally superlinearly convergent without the strict complementarity condition under some mild assumptions. In addition, the search direction is explicit.

  19. Constraining Light-Quark Yukawa Couplings from Higgs Distributions

    CERN Document Server

    Bishara, Fady; Monni, Pier Francesco; Re, Emanuele

    2016-01-01

    We propose a novel strategy to constrain the bottom and charm Yukawa couplings by exploiting LHC measurements of transverse momentum distributions in Higgs production. Our method does not rely on the reconstruction of exclusive final states or heavy-flavour tagging. Compared to other proposals it leads to an enhanced sensitivity to the Yukawa couplings due to distortions of the differential Higgs spectra from emissions which either probe quark loops or are associated to quark-initiated production. We derive constraints using data from LHC Run I, and we explore the prospects of our method at future LHC runs. Finally, we comment on the possibility of bounding the strange Yukawa coupling.

  20. Constraining interacting dark energy models with latest cosmological observations

    CERN Document Server

    Xia, Dong-Mei

    2016-01-01

The local measurement of H0 is in tension with the prediction of the ΛCDM model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations on CMB, BAO, LSS, SNe, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for the interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not totally released.