An articulatorily constrained, maximum entropy approach to speech recognition and speech coding
Hogden, J.
1996-12-31
Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear programming (MILP) problem is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
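The time-expanded MILP in this abstract builds on the classical maximum-flow problem. As a minimal illustration of that underlying building block only (not the storage- and delay-constrained formulation itself), here is a small Edmonds-Karp max-flow sketch; the contact graph and capacities are invented.

```python
from collections import deque

def max_flow(cap, source, sink):
    """Edmonds-Karp maximum flow (BFS augmenting paths) on a static graph
    given as cap[u][v] = capacity. Residual capacities are stored
    symmetrically and updated in place."""
    # Build the residual graph, adding reverse edges of capacity 0.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return total  # no augmenting path left: flow is maximal
        # Recover the path, push the bottleneck flow along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        total += push

# Invented toy contact graph: ground station -> two satellites -> destination.
cap = {"gs": {"sat1": 10, "sat2": 5}, "sat1": {"dst": 7}, "sat2": {"dst": 5}}
print(max_flow(cap, "gs", "dst"))  # -> 12
```

The paper's actual formulation adds time expansion of the contact graph plus storage (buffer) and delay constraints, which this static sketch omits.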
Exploring the Constrained Maximum Edge-weight Connected Graph Problem
Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen
2009-01-01
Given an edge-weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and the maximal weight sum. Here we study a special case, i.e. the Constrained Maximum Edge-Weight Connected Graph problem (CMECG), which is an MECG whose candidate subgraphs must include a given set of k edges, then also called the k-CMECG. We formulate the k-CMECG into an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose an exact algorithm and a heuristic algorithm respectively. We also propose a heuristic algorithm for the k-CMECG problem. Some simulations have been done to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem can lead to the solution of the general MECG problem.
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays
Trucco, Andrea; Traverso, Federico; Crocco, Marco
2015-01-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
Constrained maximum likelihood modal parameter identification applied to structural dynamics
El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim
2016-05-01
A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix and therefore the residue matrices are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to either be undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model that satisfies such motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.
This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel–Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given. (paper)
Modeling words with subword units in an articulatorily constrained speech recognition algorithm
Hogden, J.
1997-11-20
The goal of speech recognition is to find the most probable word given the acoustic evidence, i.e. a string of VQ codes or acoustic features. Speech recognition algorithms typically take advantage of the fact that the probability of a word, given a sequence of VQ codes, can be calculated.
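The decision rule this abstract describes, picking the word that is most probable given the VQ code sequence, can be sketched with a toy example. The vocabulary, code likelihoods, and priors below are invented for illustration; real recognizers estimate these quantities from training data.

```python
import math

# Hypothetical per-word likelihoods P(code | word) over a tiny VQ codebook,
# and word priors P(word); all numbers are invented for illustration.
LIKELIHOOD = {
    "yes": {0: 0.7, 1: 0.2, 2: 0.1},
    "no":  {0: 0.1, 1: 0.3, 2: 0.6},
}
PRIOR = {"yes": 0.5, "no": 0.5}

def recognize(vq_codes):
    """Return the word maximizing P(word) * prod_t P(code_t | word).

    Log-probabilities are summed to avoid numerical underflow on long
    code sequences; codes are treated as conditionally independent here.
    """
    def log_score(word):
        return math.log(PRIOR[word]) + sum(
            math.log(LIKELIHOOD[word][c]) for c in vq_codes)
    return max(LIKELIHOOD, key=log_score)

print(recognize([0, 0, 1]))  # codes dominated by 0 -> "yes"
print(recognize([2, 2, 1]))  # codes dominated by 2 -> "no"
```

The conditional-independence assumption is exactly what HMMs relax by adding hidden states; this sketch only shows the Bayes decision rule itself.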
Lanteri, Henri; Roche, Muriel; Cuevas, Olga; Aime, Claude
1999-12-01
We propose regularized versions of Maximum Likelihood algorithms for Poisson processes with a non-negativity constraint. For such processes, the best-known (non-regularized) algorithm is that of Richardson-Lucy, extensively used for astronomical applications. Regularization is necessary to prevent an amplification of the noise during the iterative reconstruction; this can be done either by limiting the iteration number or by introducing a penalty term. In this Communication, we focus our attention on the explicit regularization using Tikhonov (Identity and Laplacian operator) or entropy terms (Kullback-Leibler and Csiszar divergences). The algorithms are established from the Kuhn-Tucker first order optimality conditions for the minimization of the Lagrange function and from the method of successive substitutions. The algorithms may be written in a `product form'. Numerical illustrations are given for simulated images corrupted by photon noise. The effects of the regularization are shown in the Fourier plane. The tests we have made indicate that a noticeable improvement of the results may be obtained for some of these explicitly regularized algorithms. We also show that a comparison with a Wiener filter can give the optimal regularizing conditions (operator and strength).
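For reference, the non-regularized Richardson-Lucy iteration mentioned above can be sketched in a few lines for 1D data. This is only the plain multiplicative update, without the Tikhonov or entropy penalty terms the abstract studies, and the toy signal is invented.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    """Plain (non-regularized) Richardson-Lucy deconvolution for 1D data.

    Multiplicative update: x <- x * correlate(observed / convolve(x, psf), psf).
    Non-negativity is preserved automatically because every factor in the
    update is non-negative, so the constraint needs no explicit projection.
    """
    x = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]  # correlation = convolution with flipped kernel
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against 0/0
        x *= np.convolve(ratio, psf_flipped, mode="same")
    return x

# Invented 1D toy: two point sources blurred by a normalized 3-tap kernel.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(32)
truth[10], truth[20] = 4.0, 2.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
print(int(np.argmax(restored)))  # brightest recovered source near index 10
```

With noiseless data the iteration sharpens toward the true point sources; on noisy data the noise amplification the abstract describes appears, which is what motivates stopping early or adding a penalty term.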
Abramov, Rafail
2006-01-01
The maximum entropy principle is a versatile tool for evaluating smooth approximations of probability density functions with a least bias beyond given constraints. In particular, the moment-based constraints are often a common prior information about a statistical state in various areas of science, including that of a forecast ensemble or a climate in atmospheric science. With that in mind, here we present a unified computational framework for an arbitrary number of phase space dimensions and moment constraints for both Shannon and relative entropies, together with a practical, usable convex optimization algorithm based on the Newton method with additional preconditioning and a robust numerical integration routine. This optimization algorithm has already been used in three studies of predictability, and so far was found to be capable of producing reliable results in one- and two-dimensional phase spaces with moment constraints of up to order 4. The current work extensively references those earlier studies as practical examples of the applicability of the algorithm developed below.
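The core computation described, Newton iteration on the dual of a moment-constrained entropy maximization, can be sketched for a one-dimensional grid. This is a minimal illustration of the idea only, not the authors' preconditioned implementation, and the grid and constraints are chosen for the example.

```python
import numpy as np

def maxent_density(grid, moments, orders, n_newton=200):
    """Maximum-entropy density on a grid subject to moment constraints.

    Solves the dual problem by Newton ascent: p is proportional to
    exp(sum_k lam_k * x**k); the dual gradient is the moment mismatch and
    the Hessian is the covariance matrix of the monomials under current p.
    """
    phi = np.stack([grid ** k for k in orders])  # (K, N) monomial features
    lam = np.zeros(len(orders))
    target = np.asarray(moments, dtype=float)
    for _ in range(n_newton):
        logp = lam @ phi
        logp -= logp.max()            # stabilize the exponential
        p = np.exp(logp)
        p /= p.sum()
        m = phi @ p                   # moments under the current density
        grad = target - m
        if np.abs(grad).max() < 1e-12:
            break
        hess = (phi * p) @ phi.T - np.outer(m, m)  # Cov(phi) under p
        lam += np.linalg.solve(hess, grad)          # Newton step
    return p

# Constrain mean = 0 and second moment = 1 on [-5, 5]; the maximum-entropy
# solution is the (discretized, truncated) standard Gaussian.
grid = np.linspace(-5.0, 5.0, 401)
p = maxent_density(grid, moments=[0.0, 1.0], orders=[1, 2])
print(float(grid @ p), float((grid ** 2) @ p))  # both constraints reproduced
```

Higher-order moment constraints (the abstract goes up to order 4) make the dual stiffer, which is where the paper's preconditioning and robust quadrature matter.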
Caley, T.; Roche, D. M.; Waelbroeck, C.; Michel, E.
2014-01-01
We use the fully coupled atmosphere-ocean three-dimensional model of intermediate complexity iLOVECLIM to simulate the climate and oxygen stable isotopic signal during the Last Glacial Maximum (LGM, 21 000 yr). By using a model that is able to explicitly simulate the sensor (δ18O), results can be directly compared with data from climatic archives in the different realms. Our results indicate that iLOVECLIM reproduces well the main features of the LGM climate in the atmospheric and oceanic components. The annual mean δ18O in precipitation shows more depleted values in the northern and southern high latitudes during the LGM. The model reproduces very well the spatial gradient observed in ice core records over the Greenland ice sheet. We observe a general pattern toward more enriched values for continental calcite δ18O in the model at the LGM, in agreement with speleothem data. This can be explained by both a general atmospheric cooling in the tropical and subtropical regions and a reduction in precipitation, as confirmed by reconstructions derived from pollens and plant macrofossils. Data-model comparison for sea surface temperature indicates that iLOVECLIM is capable of satisfactorily simulating the change in oceanic surface conditions between the LGM and present. Our data-model comparison for calcite δ18O allows us to investigate the large discrepancies with respect to glacial temperatures recorded by different microfossil proxies in the North Atlantic region. The results argue for a strong mean annual cooling between the LGM and present (> 6°C), supporting the foraminifera transfer function reconstruction but in disagreement with alkenone and dinocyst reconstructions. The data-model comparison also reveals that the large positive calcite δ18O anomaly in the Southern Ocean may be explained by an important cooling, although the driver of this pattern is unclear. We deduce a large positive δ18Osw anomaly for the north Indian Ocean that contrasts with a large negative δ18Osw
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-01
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates. PMID:24889372
The large wormhole problem in Coleman's theory of the cosmological constant is presented in the framework of constrained wormholes. We use semi-classical methods, similar to those used to study constrained instantons in quantum field theory. A scalar field theory serves as a toy model to analyze the problems associated with large constrained instantons. In particular, these large instantons are found to suffer from large quantum fluctuations. In gravity we find the same situation: large quantum fluctuations around large wormholes. In both cases we expect that these large fluctuations are a signal that large constrained solutions are not important in the path integral. Thus, we argue that only small wormholes are important in Coleman's theory. (orig.)
Constrained noninformative priors
The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: how concordant is this distribution with the observed data? (3) Uncertainty: how concordant are other candidate distributions with the observed data? A simple unified approach for univariate data, called "maximum fidelity", is presented that addresses these traditionally distinct statistical notions. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Power-constrained supercomputing
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound
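The schedule optimization this dissertation formulates as an LP/ILP, choosing an operating point per computation phase to minimize runtime under a power bound, can be illustrated at toy scale by exhaustive search. All power and runtime numbers below are invented, and a brute-force search stands in for the LP solver.

```python
from itertools import product

# Hypothetical per-phase operating points as (power_watts, runtime_seconds).
# Each tuple is one (DVFS state, thread count) configuration; all numbers
# are invented for illustration.
PHASES = [
    [(60, 10.0), (80, 7.0), (100, 5.5)],   # phase 0
    [(60, 4.0),  (80, 3.5), (100, 3.4)],   # phase 1
]

def best_schedule(phases, power_cap):
    """Pick one operating point per phase so that peak power stays under
    `power_cap` and total runtime is minimized (a brute-force stand-in
    for the LP/ILP schedule optimization)."""
    best = None
    for choice in product(*phases):
        if max(p for p, _ in choice) > power_cap:
            continue  # this schedule violates the power bound
        runtime = sum(t for _, t in choice)
        if best is None or runtime < best[0]:
            best = (runtime, choice)
    return best

print(best_schedule(PHASES, power_cap=80))   # fastest schedule under 80 W
print(best_schedule(PHASES, power_cap=100))  # relaxed cap allows a faster one
```

The relaxed cap yields a shorter schedule, mirroring the dissertation's point that tighter power bounds trade performance and that optimal schedules give a quantitative target for runtime systems.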
Evolutionary constrained optimization
Deb, Kalyanmoy
2015-01-01
This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining lots of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...
Constraining Galileon inflation
Regan, Donough; Anderson, Gemma J.; Hull, Matthew; Seery, David, E-mail: D.Regan@sussex.ac.uk, E-mail: G.Anderson@sussex.ac.uk, E-mail: Matthew.Hull@port.ac.uk, E-mail: D.Seery@sussex.ac.uk [Astronomy Centre, University of Sussex, Falmer, Brighton BN1 9QH (United Kingdom)
2015-02-01
In this short paper, we present constraints on the Galileon inflationary model from the CMB bispectrum. We employ a principal-component analysis of the independent degrees of freedom constrained by data and apply this to the WMAP 9-year data to constrain the free parameters of the model. A simple Bayesian comparison establishes that support for the Galileon model from bispectrum data is at best weak.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.;
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an...
Constrained Jastrow calculations
An alternative to Pandharipande's lowest order constrained variational prescription for dense Fermi fluids is presented which is justified on both physical and strict variational grounds. Excellent results are obtained when applied to the 'homework problem' of Bethe, in sharp contrast to those obtained from the Pandharipande prescription. (Auth.)
Constrained Canonical Correlation.
DeSarbo, Wayne S.; And Others
1982-01-01
A variety of problems associated with the interpretation of traditional canonical correlation are discussed. A response surface approach is developed which allows for investigation of changes in the coefficients while maintaining an optimum canonical correlation value. Also, a discrete or constrained canonical correlation method is presented. (JKS)
Constrained superfields in Supergravity
Dall'Agata, Gianguido
2015-01-01
We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.
Sharp spatially constrained inversion
Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.; Kirkegaard, Casper C.; Auken, Esben
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes this: the results are compatible with the data and, at the same time, favor sharp transitions. The focusing strategy can also be used to constrain the 1D solutions laterally, guaranteeing that lateral sharp transitions are retrieved without losing resolution. By means of real and synthetic datasets, sharp...
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some
Halaby, Mohamed El; Abdalla, Areeg
2016-01-01
In this paper, we extend the Maximum Satisfiability (MaxSAT) problem to Łukasiewicz logic. The MaxSAT problem for a set of formulae Φ is the problem of finding an assignment to the variables in Φ that satisfies the maximum number of formulae. Three possible solutions (encodings) are proposed for the new problem: (1) Disjunctive Linear Relations (DLRs), (2) Mixed Integer Linear Programming (MILP) and (3) Weighted Constraint Satisfaction Problem (WCSP). Like its Boolean counterpart,...
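For contrast with the Łukasiewicz extension, the classical Boolean MaxSAT problem the abstract starts from can be stated in a few lines. This brute-force sketch is exponential in the number of variables and is for illustration only; the clauses are invented.

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Brute-force MaxSAT: return (best count, assignment) maximizing the
    number of satisfied clauses. A clause is a list of integer literals,
    positive for a variable and negative for its negation (DIMACS style)."""
    def satisfied(clause, assignment):
        # A literal holds if its sign matches the variable's truth value.
        return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

    best = (-1, None)
    for assignment in product([False, True], repeat=n_vars):
        count = sum(satisfied(c, assignment) for c in clauses)
        if count > best[0]:
            best = (count, assignment)
    return best

# (x1 or x2), (not x1), (x1 or not x2), (x1): at most 3 of 4 satisfiable.
clauses = [[1, 2], [-1], [1, -2], [1]]
count, assignment = max_sat(clauses, n_vars=2)
print(count, assignment)  # -> 3 (True, False)
```

In the Łukasiewicz setting, truth values range over [0, 1] rather than {False, True}, which is why the paper turns to DLR, MILP, and WCSP encodings instead of this kind of Boolean enumeration.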
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
\\verb+~+\\$\\backslash\\$cite{ramsay97} to functional maximum autocorrelation factors (MAF)\\verb+~+\\$\\backslash\\$cite{switzer85,larsen2001d}. We apply the method to biological shapes as well as reflectance spectra. {\\$\\backslash\\$bf Methods}. MAF seeks linear combination of the original variables that maximize autocorrelation between...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...
Maximum abundant isotopes correlation
The neutron excess of the most abundant isotopes of the element shows an overall linear dependence upon the neutron number for nuclei between neutron closed shells. This maximum abundant isotopes correlation supports the arguments for a common history of the elements during nucleosynthesis. (Auth.)
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Constraining neutrinoless double beta decay
A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Shrinkage Effect in Ancestral Maximum Likelihood
Mossel, Elchanan; Steel, Mike
2008-01-01
Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
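The Mean Energy Model mentioned above can be made concrete with a small sketch (my own minimal example, not from the paper): maximize entropy over distributions on a finite value set subject to a fixed mean. The maximizer has the Gibbs form p_i ∝ exp(-λv_i), and λ can be found by bisection because the resulting mean is monotone in λ. The function name and the loaded-die example are illustrative assumptions.

```python
import math

def maxent_mean(values, target_mean, tol=1e-12):
    """Maximum-entropy distribution on `values` with a prescribed mean.
    The maximizer is the Gibbs form p_i ∝ exp(-lam * v_i); find lam by
    bisection, using that the mean is strictly decreasing in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid  # mean too large -> need larger lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# "Loaded die" instance: entropy-maximizing distribution on {1..6} with mean 4.5.
p = maxent_mean([1, 2, 3, 4, 5, 6], 4.5)
```

Because the target mean 4.5 exceeds the uniform mean 3.5, the solution tilts probability toward the larger faces, exactly the exponential-family shape the Maximum Entropy Principle predicts for a moment constraint.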
Probable maximum flood control
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
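The robustness the abstract claims comes from the Gaussian kernel inside the correntropy objective: an outlying sample gets exponentially small weight instead of a quadratically growing penalty. A minimal sketch (my own toy, not the paper's algorithm) estimates a location parameter by the standard half-quadratic fixed-point iteration for the correntropy/Welsch objective; the data and function name are illustrative.

```python
import math

def correntropy_mean(xs, sigma=1.0, iters=100):
    """Location estimate maximizing sum_i exp(-(x_i - m)^2 / (2 sigma^2)).
    Half-quadratic fixed point: iteratively reweighted mean with Gaussian
    weights, so outliers receive exponentially small weight."""
    m = sorted(xs)[len(xs) // 2]  # robust starting point (median)
    for _ in range(iters):
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for x in xs]
        m = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return m

# Six clean samples near 1.0 plus one wildly mislabeled point.
data = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 100.0]
```

The ordinary (squared-loss) mean of `data` is pulled above 15 by the single outlier, while the correntropy estimate stays near 1.0, which is the qualitative behavior the MCC framework exploits for noisy labels.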
Bieber, J W; Engel, R; Gaisser, T K; Roesler, S; Stanev, T; Bieber, John W.; Engel, Ralph; Gaisser, Thomas K.; Roesler, Stefan; Stanev, Todor
1999-01-01
New measurements with good statistics will make it possible to observe the time variation of cosmic antiprotons at 1 AU through the approaching peak of solar activity. We report a new computation of the interstellar antiproton spectrum expected from collisions between cosmic protons and the interstellar gas. This spectrum is then used as input to a steady-state drift model of solar modulation, in order to provide predictions for the antiproton spectrum as well as the antiproton/proton ratio at 1 AU. Our model predicts a surprisingly large, rapid increase in the antiproton/proton ratio through the next solar maximum, followed by a large excursion in the ratio during the following decade.
Lightweight cryptography for constrained devices
Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco
2014-01-01
Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.
Pires, Bernardo Esteves
2010-01-01
The majority of the approaches to the automatic recovery of a panoramic image from a set of partial views are suboptimal in the sense that the input images are aligned, or registered, pair by pair, e.g., consecutive frames of a video clip. These approaches lead to propagation errors that may be very severe, particularly when dealing with videos that show the same region at disjoint time intervals. Although some authors have proposed a post-processing step to reduce the registration errors in these situations, there have not been attempts to compute the optimal solution, i.e., the registrations leading to the panorama that best matches the entire set of partial views. This is our goal. In this paper, we use a generative model for the partial views of the panorama and develop an algorithm to compute in an efficient way the Maximum Likelihood estimate of all the unknowns involved: the parameters describing the alignment of all the images and the panorama itself.
Numerical PDE-constrained optimization
De los Reyes, Juan Carlos
2015-01-01
This book introduces, in an accessible way, the basic elements of Numerical PDE-Constrained Optimization, from the derivation of optimality conditions to the design of solution algorithms. Numerical optimization methods in function-spaces and their application to PDE-constrained problems are carefully presented. The developed results are illustrated with several examples, including linear and nonlinear ones. In addition, MATLAB codes, for representative problems, are included. Furthermore, recent results in the emerging field of nonsmooth numerical PDE constrained optimization are also covered. The book provides an overview on the derivation of optimality conditions and on some solution algorithms for problems involving bound constraints, state-constraints, sparse cost functionals and variational inequality constraints.
Bagging constrained equity premium predictors
Hillebrand, Eric; Lee, Tae-Hwy; Medeiros, Marcelo
2014-01-01
regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. Monte Carlo simulations show that forecast gains can be achieved in realistic sample sizes for the stock...
The Constrained Bottleneck Transportation Problem
Peerayuth Charnsethikul; Saeree Svetasreni
2007-01-01
Two classes of the bottleneck transportation problem with an additional budget constraint are introduced. An exact approach was proposed to solve both problem classes with proofs of correctness and complexity. Moreover, the approach was extended to solve a class of multi-commodity transportation network with a special case of the multi-period constrained bottleneck assignment problem.
Constrained Clustering With Imperfect Oracles.
Zhu, Xiatian; Loy, Chen Change; Gong, Shaogang
2016-06-01
While clustering is usually an unsupervised operation, there are circumstances where we have access to prior belief that pairs of samples should (or should not) be assigned with the same cluster. Constrained clustering aims to exploit this prior belief as constraint (or weak supervision) to influence the cluster formation so as to obtain a data structure more closely resembling human perception. Two important issues remain open: 1) how to exploit sparse constraints effectively and 2) how to handle ill-conditioned/noisy constraints generated by imperfect oracles. In this paper, we present a novel pairwise similarity measure framework to address the above issues. Specifically, in contrast to existing constrained clustering approaches that blindly rely on all features for constraint propagation, our approach searches for neighborhoods driven by discriminative feature selection for more effective constraint diffusion. Crucially, we formulate a novel approach to handling the noisy constraint problem, which has been unrealistically ignored in the constrained clustering literature. Extensive comparative results show that our method is superior to the state-of-the-art constrained clustering approaches and can generally benefit existing pairwise similarity-based data clustering algorithms, such as spectral clustering and affinity propagation. PMID:25622327
Constrained Graph Optimization: Interdiction and Preservation Problems
Schild, Aaron V. [Los Alamos National Laboratory]
2012-07-30
The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices in which the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is smallest. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special cases of graphs, including planar graphs. The graphs in many of the listed applications are planar, so these algorithms have important practical implications.
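The shortest-path interdiction problem described above can be illustrated with a brute-force sketch (my own illustration, not the report's heuristics): try every set of at most `budget` edge deletions and keep the one that makes the s-t shortest path longest. This is only viable on tiny graphs, which is consistent with the hardness results the abstract mentions; the graph and names are hypothetical.

```python
import heapq
from itertools import combinations

def dijkstra(adj, s, t):
    """Shortest s-to-t distance in a directed weighted graph {u: [(v, w), ...]}."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')

def interdict(edges, s, t, budget):
    """Delete up to `budget` edges to maximize the s-t shortest path.
    Exhaustive search over deletion sets; the general problem is hard."""
    best, best_set = -1, ()
    for k in range(budget + 1):
        for removed in combinations(range(len(edges)), k):
            rem = set(removed)
            adj = {}
            for i, (u, v, w) in enumerate(edges):
                if i not in rem:
                    adj.setdefault(u, []).append((v, w))
            d = dijkstra(adj, s, t)
            if d > best:
                best, best_set = d, removed
    return best, best_set

# Toy instance: two cheap two-hop routes plus one expensive direct edge.
edges = [('s', 'a', 1), ('a', 't', 1), ('s', 'b', 2), ('b', 't', 2), ('s', 't', 10)]
```

With budget 1 the interdictor cuts one edge of the cheapest route (shortest path grows from 2 to 4); with budget 2 it cuts one edge from each cheap route, forcing the direct cost-10 edge.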
Constrained Multiobjective Biogeography Optimization Algorithm
Hongwei Mo
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA.
Trends in PDE constrained optimization
Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan
2014-01-01
Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics. The book is divided into five sections on “Constrained Optimization, Identification and Control”...
Constrained ballistics and geometrical optics
Epstein, Marcelo
2014-01-01
The problem of constant-speed ballistics is studied under the umbrella of non-linear non-holonomic constrained systems. The Newtonian approach is shown to be equivalent to the use of Chetaev's rule to incorporate the constraint within the initially unconstrained formulation. Although the resulting equations are not, in principle, obtained from a variational statement, it is shown that the trajectories coincide with those of geometrical optics in a medium with a suitably chosen refractive inde...
Bagging Constrained Equity Premium Predictors
Tae-Hwy Lee; Eric Hillebrand; Marcelo Medeiros
2013-01-01
The literature on excess return prediction has considered a wide array of estimation schemes, among them unrestricted and restricted regression coefficients. We consider bootstrap aggregation (bagging) to smooth parameter restrictions. Two types of restrictions are considered: positivity of the regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. ...
Enumeration of Maximum Acyclic Hypergraphs
Jian-fang Wang; Hai-zhu Li
2002-01-01
Acyclic hypergraphs are analogues of forests in graphs. They are very useful in the design of databases. In this article, the maximum size of an acyclic hypergraph is determined and the number of maximum r-uniform acyclic hypergraphs of order n is shown to be $\binom{n}{r-1}\left(n(r-1)-r^{2}+2r\right)^{n-r-1}$.
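As a sanity check on the counting formula as reconstructed above, note that for r = 2 it collapses to C(n,1) · n^(n-3) = n^(n-2), Cayley's count of labeled trees, which is what one expects since maximum 2-uniform acyclic hypergraphs are spanning trees. A small sketch (function name mine):

```python
from math import comb

def max_acyclic_hypergraphs(n, r):
    """Count of maximum r-uniform acyclic hypergraphs of order n,
    per the (reconstructed) formula C(n, r-1) * (n(r-1) - r^2 + 2r)^(n-r-1)."""
    return comb(n, r - 1) * (n * (r - 1) - r * r + 2 * r) ** (n - r - 1)
```

For example, `max_acyclic_hypergraphs(4, 2)` gives 16 = 4^2, the number of labeled trees on 4 vertices.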
Image compression using constrained relaxation
He, Zhihai
2007-01-01
In this work, we develop a new data representation framework, called constrained relaxation for image compression. Our basic observation is that an image is not a random 2-D array of pixels. They have to satisfy a set of imaging constraints so as to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 Intra frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.
Constraining Lorentz violation with cosmology.
Zuntz, J A; Ferreira, P G; Zlosnik, T G
2008-12-31
The Einstein-aether theory provides a simple, dynamical mechanism for breaking Lorentz invariance. It does so within a generally covariant context and may emerge from quantum effects in more fundamental theories. The theory leads to a preferred frame and can have distinct experimental signatures. In this Letter, we perform a comprehensive study of the cosmological effects of the Einstein-aether theory and use observational data to constrain it. Allied to previously determined consistency and experimental constraints, we find that an Einstein-aether universe can fit experimental data over a wide range of its parameter space, but requires a specific rescaling of the other cosmological densities. PMID:19113765
Compositions constrained by graph Laplacian minors
Braun, Benjamin; Harrison, Ashley; McKim, Jessica; Noll, Jenna; Taylor, Clifford
2012-01-01
Motivated by examples of symmetrically constrained compositions, super convex partitions, and super convex compositions, we initiate the study of partitions and compositions constrained by graph Laplacian minors. We provide a complete description of the multivariate generating functions for such compositions in the case of trees. We answer a question due to Corteel, Savage, and Wilf regarding super convex compositions, which we describe as compositions constrained by Laplacian minors for cycles; we extend this solution to the study of compositions constrained by Laplacian minors of leafed cycles. Connections are established and conjectured between compositions constrained by Laplacian minors of leafed cycles of prime length and algebraic/combinatorial properties of reflexive simplices.
Quantum Annealing for Constrained Optimization
Hen, Itay; Spedalieri, Federico M.
2016-03-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealers that promise to solve certain combinatorial optimization problems of practical relevance faster than their classical analogues. The applicability of such devices for many theoretical and real-world optimization problems, which are often constrained, is severely limited by the sparse, rigid layout of the devices' quantum bits. Traditionally, constraints are addressed by the addition of penalty terms to the Hamiltonian of the problem, which, in turn, requires prohibitively increasing physical resources while also restricting the dynamical range of the interactions. Here, we propose a method for encoding constrained optimization problems on quantum annealers that eliminates the need for penalty terms and thereby reduces the number of required couplers and removes the need for minor embedding, greatly reducing the number of required physical qubits. We argue the advantages of the proposed technique and illustrate its effectiveness. We conclude by discussing the experimental feasibility of the suggested method as well as its potential to appreciably reduce the resource requirements for implementing optimization problems on quantum annealers and its significance in the field of quantum computing.
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given $L_p$ norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the $L_p$ norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the $L_p$ norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
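The straight-line relationship claimed for the unconstrained continuous case follows from two scaling identities; a short derivation (my notation, not necessarily the paper's):

```latex
% For any real random variable X and scale c > 0:
h(cX) = h(X) + \ln c, \qquad \|cX\|_p = c\,\|X\|_p ,
% so scaling the entropy maximizer at unit norm to norm s gives
h_{\max}(s) = h_{\max}(1) + \ln s ,
% i.e. maximum differential entropy is linear in \ln s with unit slope.
% Example (p = 1): the Laplace density f(x) = e^{-|x|/b}/(2b) maximizes
% entropy for fixed E|X| = b, with
h = 1 + \ln(2b) = \bigl(1 + \ln 2\bigr) + \ln\|X\|_1 .
```

The intercept, but not the slope, depends on p through the shape of the maximizing (generalized Gaussian) density.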
Bounds on the Capacity of Weakly constrained two-dimensional Codes
Forchhammer, Søren
2002-01-01
Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered.
Time efficient spacecraft maneuver using constrained torque distribution
Cao, Xibin; Yue, Chengfei; Liu, Ming; Wu, Baolin
2016-06-01
This paper investigates the time efficient maneuver of rigid satellites with inertia uncertainty and bounded external disturbance. A redundant cluster of four reaction wheels is used to control the spacecraft. To make full use of the controllability and avoid frequent momentum unloading of the reaction wheels, a torque distribution method constrained by the maximum output torque and maximum angular momentum is developed. Based on this distribution approach, the maximum allowable acceleration and velocity of the satellite are optimized during the maneuvering. A novel braking curve is designed on the basis of the optimization strategy of the control torque distribution. A quaternion-based sliding mode control law is proposed to make the state track the braking curve strictly. The designed controller provides smooth control torque, time efficiency and high control precision. Finally, practical numerical examples are illustrated to show the effectiveness of the developed torque distribution strategy and control methodology.
Constraining Cosmic Evolution of Type Ia Supernovae
Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.
2008-02-13
We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ~0.02 mag. The variance in the composite spectra give an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ~3% in the optical and growing toward the ultraviolet. The difference between the maximum-light low and high-redshift spectra constrain SN evolution between our samples to be < 10% in the rest-frame optical.
iBGP and Constrained Connectivity
Dinitz, Michael
2011-01-01
We initiate the theoretical study of the problem of minimizing the size of an iBGP overlay in an Autonomous System (AS) in the Internet subject to a natural notion of correctness derived from the standard "hot-potato" routing rules. For both natural versions of the problem (where we measure the size of an overlay by either the number of edges or the maximum degree) we prove that it is NP-hard to approximate to a factor better than $\Omega(\log n)$ and provide approximation algorithms with ratio $\tilde{O}(\sqrt{n})$. In addition, we give a slightly worse $\tilde{O}(n^{2/3})$-approximation based on primal-dual techniques that has the virtue of being both fast and good in practice, which we show via simulations on the actual topologies of five large Autonomous Systems. The main technique we use is a reduction to a new connectivity-based network design problem that we call Constrained Connectivity. In this problem we are given a graph $G=(V,E)$, and for every pair of vertices $u,v \in V$ we are given a set $S(u,...
Generalized Maximum Entropy Estimation of Discrete Sequential Move Games of Perfect Information
Wang, Yafeng; Graham, Brett
2013-01-01
We propose a data-constrained generalized maximum entropy (GME) estimator for discrete sequential move games of perfect information which can be easily implemented on optimization software with high-level interfaces such as GAMS. Unlike most other work on the estimation of complete information games, the method we propose is data-constrained and does not require simulation or normally distributed random preference shocks. We formulate the GME estimation as a (convex) mixed-integer nonline...
Maximum magnitude in the Lower Rhine Graben
Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry
2014-05-01
Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predict how strain will be released. As a result, quite different estimates can be made based on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants that should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories, and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date to occur. In a next step, we will compute hazard maps for different return periods based on the
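The simulation idea described above can be sketched in a few lines (my own minimal version, with illustrative parameter values, not the authors' code): draw Gutenberg-Richter magnitudes, which are exponentially distributed above a completeness threshold, over catalogs of a given length, and record each catalog's maximum. Short catalogs systematically understate the maxima seen in long ones.

```python
import math
import random

def synthetic_max_magnitudes(b, rate, years, n_catalogs, m_min=4.0, seed=1):
    """Monte-Carlo sketch of catalog maxima.
    Gutenberg-Richter: magnitudes above m_min are exponential with rate
    b*ln(10); event times are Poisson with `rate` events/year above m_min.
    Returns one maximum magnitude per synthetic catalog (None if empty)."""
    rng = random.Random(seed)
    beta = b * math.log(10)
    maxima = []
    for _ in range(n_catalogs):
        # Poisson count via exponential inter-event times
        n, t = 0, rng.expovariate(rate)
        while t < years:
            n += 1
            t += rng.expovariate(rate)
        mags = [m_min + rng.expovariate(beta) for _ in range(n)]
        maxima.append(max(mags) if mags else None)
    return maxima
```

Comparing, say, 25-year and 250-year synthetic catalogs with the same b-value and rate shows the effect the abstract describes: the most frequent maximum tracks the earthquake whose mean recurrence time matches the catalog length.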
Constrained correlation dynamics of QCD
The complete version of constrained correlation dynamics of SU(N) gauge theories in temporal gauge and canonical form has been formulated in three steps. (1) With the aid of generating-functional technique and in the framework of correlation dynamics, a closed set of equations of motion for correlation Green's functions have been established. (2) Gauge constraint conditions are analysed by means of Dirac theory. The algebraic representations of Gauss law and Ward identities are given. In accordance with the truncation approximations of correlation dynamics, the conserved Gauss law and Ward identities due to residual gauge invariance are shifted to initial value problems. (3) The equations of motion for multi-time correlation Green's functions have been transformed into those for equal-time correlation Green's functions. In two-body truncation approximation, a tractable set of equations of motion, Gauss law, and Ward identities are given explicitly
Constrained Allocation Flux Balance Analysis
Mori, Matteo; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo
2016-01-01
New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated to growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws.
Formal language constrained path problems
Barrett, C.; Jacob, R.; Marathe, M.
1997-07-08
In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context-free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language, they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
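The polynomial-time result for regular-language constraints rests on a product construction: run a shortest-path search over pairs (graph vertex, automaton state). A minimal sketch of that idea, with a hypothetical intermodal network and a hand-built DFA (not the authors' implementation):

```python
import heapq

def constrained_shortest_path(edges, dfa, start, goal, q0, accepting):
    """Dijkstra on the product of a labeled graph and a DFA: the cheapest
    path whose edge-label word is accepted by the automaton.
    edges: {u: [(v, label, weight), ...]}, dfa: {(state, label): state}."""
    dist = {(start, q0): 0.0}
    pq = [(0.0, start, q0)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if u == goal and q in accepting:
            return d
        if d > dist.get((u, q), float("inf")):
            continue
        for v, label, w in edges.get(u, []):
            q2 = dfa.get((q, label))
            if q2 is None:                      # label not allowed here
                continue
            nd = d + w
            if nd < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = nd
                heapq.heappush(pq, (nd, v, q2))
    return None

# Toy mode-choice language "walk* bus+ walk*" (hypothetical labels/weights).
edges = {"A": [("B", "walk", 1), ("B", "bus", 4)],
         "B": [("C", "bus", 2)],
         "C": [("D", "walk", 1)]}
dfa = {(0, "walk"): 0, (0, "bus"): 1, (1, "bus"): 1, (1, "walk"): 2, (2, "walk"): 2}
print(constrained_shortest_path(edges, dfa, "A", "D", 0, {1, 2}))  # → 4.0
```

The product graph has |V| x |Q| states, so the search stays polynomial for a fixed regular expression; the context-free case replaces the DFA with a CYK-style dynamic program.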
Constraining Modified Gravity Theories With Cosmology
Martinelli, Matteo
2012-01-01
We study and constrain the Hu and Sawicki f(R) model using CMB and weak lensing forecasted data. We also use the same data to constrain extended theories of gravity and the subclass of f(R) theories using a general parameterization describing departures from General Relativity. Moreover, we study and constrain a Dark Coupling model in which Dark Energy and Dark Matter are coupled together.
Space-Constrained Interval Selection
Emek, Yuval; Halldorsson, Magnus M.; Rosen, Adi
2012-01-01
We study streaming algorithms for the interval selection problem: finding a maximum cardinality subset of disjoint intervals on the line. A deterministic 2-approximation streaming algorithm for this problem is developed, together with an algorithm for the special case of proper intervals, achieving an improved approximation ratio of 3/2. We complement these upper bounds by proving that they are essentially best possible in the streaming setting: it is shown that an approximation ratio of 2 - ε ...
Maximum entropy beam diagnostic tomography
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore
A portable storage maximum thermometer
A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system.
Decomposition using Maximum Autocorrelation Factors
Larsen, Rasmus
2002-01-01
... normally we have an ordering of landmarks (variables) along the contour of the objects. For the case with observation ordering, the maximum autocorrelation factor (MAF) transform was proposed for multivariate imagery in [switzer85]. This corresponds to an R-mode analysis of the data...
Maximizing entropy of image models for 2-D constrained coding
Forchhammer, Søren; Danieli, Matteo; Burini, Nino;
2010-01-01
This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite ... of the Markov random field defined by the 2-D constraint is estimated to be (upper bounded by) 0.8570 bits/symbol using the iterative technique of Belief Propagation on 2 × 2 finite lattices. Based on combinatorial bounding techniques the maximum entropy for the constraint was determined to be 0.848...
Positive Scattering Cross Sections using Constrained Least Squares
Dahl, J.A.; Ganapol, B.D.; Morel, J.E.
1999-09-27
A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme is presented.
Constrained Allocation Flux Balance Analysis
Mori, Matteo; Hwa, Terence; Martin, Olivier C.
2016-01-01
New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated to growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325
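The structure of CAFBA, an ordinary flux-balance linear program plus one extra genome-wide linear constraint, can be illustrated on a toy network. The stoichiometry, the proteome costs w, and the budget below are invented for illustration; real models have thousands of reactions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 imports metabolite A, R2 converts A into biomass.
S = np.array([[1.0, -1.0]])        # rows: metabolites, cols: reactions
bounds = [(0, 10), (0, None)]      # flux bounds (uptake capped at 10)
c = np.array([0.0, -1.0])          # linprog minimizes, so maximize v2

# Plain FBA: steady-state mass balance S v = 0 only.
fba = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)

# CAFBA-style: one extra genome-wide allocation constraint, here with
# hypothetical proteome costs w = (0.5, 0.5) and budget 8.
w = np.array([[0.5, 0.5]])
cafba = linprog(c, A_ub=w, b_ub=[8.0], A_eq=S, b_eq=[0.0], bounds=bounds)

print(round(-fba.fun, 3), round(-cafba.fun, 3))  # 10.0 8.0
```

The single allocation row caps the cost-weighted sum of fluxes, so growth saturates below the purely stoichiometric optimum, which is the mechanism behind the respiration-to-fermentation crossover described above.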
Gyrification from constrained cortical expansion
Tallinen, Tuomas; Biggins, John S; Mahadevan, L
2015-01-01
The exterior of the mammalian brain - the cerebral cortex - has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highly convoluted.
Maximum Power Point Regulator System
Simola, J.; Savela, K.; Stenberg, J.; Tonicello, F.
2011-10-01
The target of the study, done under ESA contract No. 17830/04/NL/EC (GSTP4) for the Maximum Power Point Regulator System (MPPRS), was to investigate, design and test a modular power system (a core PCU) fulfilling the requirement for maximum power transfer even after a single failure in the power system, by utilising a power concept without any potential and credible single-point failure. The studied MPPRS concept is of modular construction, able to track the MPP individually on each SA section, maintaining its functionality and full power capability after the loss of a complete MPPR module (by utilising an N+1 module). Various add-on DC/DC converter topology candidates were investigated, and redundancy, failure mechanisms and protection aspects were studied.
Maximum matching on random graphs
Zhou, Haijun; Ou-Yang, Zhong-Can
2003-01-01
The maximum matching problem on random graphs is studied analytically by the cavity method of statistical physics. When the average vertex degree c is larger than 2.7183, groups of max-matching patterns which differ greatly from each other gradually emerge. An analytical expression for the max-matching size is also obtained, which agrees well with computer simulations. Discussion is made on this continuous glassy phase transition and the absence of such a glassy phase ...
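The computer simulations mentioned above can be reproduced in miniature with an off-the-shelf matching routine on Erdős–Rényi graphs. The values printed are single random draws at one size, not the analytical cavity-method predictions.

```python
import networkx as nx

def matching_fraction(n, c, seed=0):
    """Fraction of vertices covered by a maximum matching of G(n, c/n)."""
    g = nx.gnp_random_graph(n, c / n, seed=seed)
    # Blossom-based maximum cardinality matching (set of matched edges).
    m = nx.max_weight_matching(g, maxcardinality=True)
    return 2 * len(m) / n

# Coverage grows with the mean degree c; 2.7183 (~e) is the threshold
# discussed in the abstract.
for c in (1.0, 2.7183, 8.0):
    print(c, round(matching_fraction(1000, c), 3))
```

Averaging over many seeds and extrapolating in n would be needed before comparing against the analytical max-matching size formula.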
Maximum-likelihood absorption tomography
Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Homogeneous determination of maximum magnitude
Meletti, C.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; D'Amico, V.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia; Martinelli, F.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Milano-Pavia, Milano, Italia
2010-01-01
This deliverable presents the results of the activities performed by a working group at INGV. The main objective of Task 3.5 is defined in the Description of Work: the task will produce a homogeneous assessment (possibly multiple models) of the distribution of the expected maximum magnitude for earthquakes in the various tectonic provinces of Europe, to serve as input for the computation and validation of seismic hazard. This goal will be achieved by combining input from earthqu...
Indistinguishability, symmetrisation and maximum entropy
It is demonstrated that the distributions over single-particle states for Boltzmann, Bose-Einstein and Fermi-Dirac statistics describing N non-interacting identical particles follow directly from the principle of maximum entropy. It is seen that the notions of indistinguishability and coarse graining are secondary, if not irrelevant. A detailed examination of the structure of the Boltzmann limit is provided. (author)
Solar maximum: solar array degradation
The 5-year in-orbit power degradation of the silicon solar array aboard the Solar Maximum Satellite was evaluated. This was the first spacecraft to use Teflon (R) FEP as a coverglass adhesive, thus avoiding the necessity of an ultraviolet filter. The peak power tracking mode of the power regulator unit was employed to ensure consistent maximum power comparisons. Telemetry was normalized to account for the effects of illumination intensity, charged particle irradiation dosage, and solar array temperature. Reference conditions of 1.0 solar constant at air mass zero and 301 K (28 C) were used as a basis for normalization. Beginning-of-life array power was 2230 watts. Currently, the array output is 1830 watts. This corresponds to a 16 percent loss in array performance over 5 years. Comparison of Solar Maximum telemetry and predicted power levels indicates that array output is 2 percent less than predictions based on an annual 1.0 MeV equivalent electron fluence of 2.34 × 10^13 per square centimeter in the space environment.
Groundwater availability as constrained by hydrogeology and environmental flows
Watson, Katelyn A.; Mayer, Alex S.; Reeves, Howard W.
2014-01-01
Groundwater pumping from aquifers in hydraulic connection with nearby streams has the potential to cause adverse impacts by decreasing flows to levels below those necessary to maintain aquatic ecosystems. The recent passage of the Great Lakes-St. Lawrence River Basin Water Resources Compact has brought attention to this issue in the Great Lakes region. In particular, the legislation requires the Great Lakes states to enact measures for limiting water withdrawals that can cause adverse ecosystem impacts. This study explores how both hydrogeologic and environmental flow limitations may constrain groundwater availability in the Great Lakes Basin. A methodology for calculating maximum allowable pumping rates is presented. Groundwater availability across the basin may be constrained by a combination of hydrogeologic yield and environmental flow limitations varying over both local and regional scales. The results are sensitive to factors such as pumping time, regional and local hydrogeology, streambed conductance, and streamflow depletion limits. Understanding how these restrictions constrain groundwater usage and which hydrogeologic characteristics and spatial variables have the most influence on potential streamflow depletions has important water resources policy and management implications.
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based on ...
Constrained Deformable-Layer Tomography
Zhou, H.
2006-12-01
The improvement on traveltime tomography depends on improving data coverage and tomographic methodology. The data coverage depends on the spatial distribution of sources and stations, as well as the extent of lateral velocity variation that may alter the raypaths locally. A reliable tomographic image requires large enough ray hit count and wide enough angular range between traversing rays over the targeted anomalies. Recent years have witnessed the advancement of traveltime tomography in two aspects. One is the use of finite frequency kernels, and the other is the improvement on model parameterization, particularly that allows the use of a priori constraints. A new way of model parameterization is the deformable-layer tomography (DLT), which directly inverts for the geometry of velocity interfaces by varying the depths of grid points to achieve a best traveltime fit. In contrast, conventional grid or cell tomography seeks to determine velocity values of a mesh of fixed-in-space grids or cells. In this study, the DLT is used to map crustal P-wave velocities with first arrival data from local earthquakes and two LARSE active surveys in southern California. The DLT solutions along three profiles are constrained using known depth ranges of the Moho discontinuity at 21 sites from a previous receiver function study. The DLT solutions are generally well resolved according to restoration resolution tests. The patterns of 2D DLT models of different profiles match well at their intersection locations. In comparison with existing 3D cell tomography models in southern California, the new DLT models significantly improve the data fitness. In comparison with the multi-scale cell tomography conducted for the same data, while the data fitting levels of the DLT and the multi-scale cell tomography models are compatible, the DLT provides much higher vertical resolution and more realistic description of the undulation of velocity discontinuities. The constraints on the Moho depth
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Biswas, Md. Haider Ali; de Pinho, Maria do Rosario
2013-01-01
Here we derive a nonsmooth maximum principle for optimal control problems with both state and mixed constraints. Crucial to our development is a convexity assumption on the "velocity set". The approach consists of applying known penalization techniques for state constraints together with recent results for mixed constrained problems.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
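The quantity being maximized as a regularizer, the mutual information between classification responses and true labels, can be estimated with a simple entropy-based plug-in from joint counts. The toy labels below are invented for illustration; this is not the authors' optimization code.

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """I(response; label) in bits, estimated from empirical joint counts
    (the plug-in estimate: sum over p(r,y) * log2(p(r,y) / (p(r) p(y)))."""
    n = len(labels)
    joint = Counter(zip(responses, labels))
    pr = Counter(responses)
    pl = Counter(labels)
    mi = 0.0
    for (r, y), count in joint.items():
        p_ry = count / n
        mi += p_ry * math.log2(p_ry * n * n / (pr[r] * pl[y]))
    return mi

# A classifier whose responses track the labels carries high MI;
# a constant classifier carries none.
y     = [0, 0, 0, 0, 1, 1, 1, 1]
good  = [0, 0, 0, 1, 1, 1, 1, 1]
const = [0] * 8
print(round(mutual_information(good, y), 3), mutual_information(const, y))  # 0.549 0.0
```

In the paper's setting this term is added, with a weight, to the classification loss and complexity penalty and driven up by gradient descent on the classifier parameters.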
Scintillation counter, maximum gamma aspect
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield can be disassembled into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Asymptotic Likelihood Distribution for Correlated & Constrained Systems
Agarwal, Ujjwal
2016-01-01
This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and two of them are constrained and correlated.
Constrained Bimatrix Games in Wireless Communications
Firouzbakht, Koorosh; Noubir, Guevara; Salehi, Masoud
2015-01-01
We develop a constrained bimatrix game framework that can be used to model many practical problems in many disciplines, including jamming in packetized wireless networks. In contrast to the widely used zero-sum framework, in bimatrix games it is no longer required that the sum of the players' utilities be zero or constant; thus, they can be used to model a much larger class of jamming problems. Additionally, in contrast to standard bimatrix games, in constrained bimatrix games the players' ...
Constrained school choice : an experimental study
Calsamiglia, Caterina; Haeringer, Guillaume; Klijn, Flip
2008-01-01
The literature on school choice assumes that families can submit a preference list over all the schools they want to be assigned to. However, in many real-life instances families are only allowed to submit a list containing a limited number of schools. Subjects' incentives are drastically affected, as more individuals manipulate their preferences. Including a safety school in the constrained list explains most manipulations. Competitiveness across schools plays an important role. Constraining...
Constraining pion interactions at very high energies by cosmic ray data
Ostapchenko, Sergey
2016-01-01
We demonstrate that a substantial part of the present uncertainties in model predictions for the average maximum depth of cosmic ray-induced extensive air showers is related to very high energy pion-air collisions. Our analysis shows that the position of the maximum of the muon production profile in air showers is strongly sensitive to the properties of such interactions. Therefore, the measurements of the maximal muon production depth by cosmic ray experiments provide a unique opportunity to constrain the treatment of pion-air interactions at very high energies and to reduce thereby model-related uncertainties for the shower maximum depth.
Constraining pion interactions at very high energies by cosmic ray data
Ostapchenko, Sergey; Bleicher, Marcus
2016-03-01
We demonstrate that a substantial part of the present uncertainties in model predictions for the average maximum depth of cosmic ray-induced extensive air showers is related to very high energy pion-air collisions. Our analysis shows that the position of the maximum of the muon production profile in air showers is strongly sensitive to the properties of such interactions. Therefore, the measurements of the maximal muon production depth by cosmic ray experiments provide a unique opportunity to constrain the treatment of pion-air interactions at very high energies and to reduce thereby model-related uncertainties for the shower maximum depth.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
The maximum drag reduction asymptote
Choueiri, George H.; Hof, Bjorn
2015-11-01
Addition of long chain polymers is one of the most efficient ways to reduce the drag of turbulent flows. Already a very low concentration of polymers can lead to a substantial drag reduction, and upon further increase of the concentration the drag reduces until it reaches an empirically found limit, the so-called maximum drag reduction (MDR) asymptote, which is independent of the type of polymer used. We here carry out a detailed experimental study of the approach to this asymptote for pipe flow. Particular attention is paid to the recently observed state of elasto-inertial turbulence (EIT), which has been reported to occur in polymer solutions at sufficiently high shear. Our results show that upon the approach to MDR, Newtonian turbulence becomes marginalized (hibernation) and eventually completely disappears and is replaced by EIT. In particular, spectra of high Reynolds number MDR flows are compared to flows at high shear rates in small diameter tubes where EIT is found at Re ... Funding: Marie Curie Actions of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n° [291734].
Maximum entropy principle for transportation
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
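The standard maximum-entropy trip-distribution formulation that the abstract takes as its starting point (the paper's dependence-coefficient model replaces the explicit constraints) reduces to a doubly constrained gravity model, which can be balanced by iterative proportional fitting. The two-zone data and deterrence parameter below are invented for illustration.

```python
import numpy as np

def trip_distribution(origins, dests, deterrence, iters=100):
    """Maximum-entropy (doubly constrained gravity) trip table: balance
    T_ij = a_i * b_j * f_ij so rows sum to origins and columns to dests."""
    t = np.asarray(deterrence, dtype=float).copy()
    for _ in range(iters):
        t *= (np.asarray(origins) / t.sum(axis=1))[:, None]  # fit row totals
        t *= (np.asarray(dests) / t.sum(axis=0))[None, :]    # fit column totals
    return t

# Hypothetical 2-zone example with f_ij = exp(-beta * travel_time).
times = np.array([[5.0, 20.0], [20.0, 5.0]])
f = np.exp(-0.1 * times)
t = trip_distribution([100, 200], [150, 150], f)
print(t.round(1))
print(t.sum(axis=1).round(1), t.sum(axis=0).round(1))
```

The balancing factors play the role of the Lagrange multipliers of the entropy maximization; policy scenarios enter through changes in the deterrence matrix f.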
Forecasting Maximum Demand And Loadshedding
Dhabai Poonam. B
2014-05-01
The intention of this paper is to estimate the maximum demand (MD) in advance during the running slots. Forecasting the MD helps avoid extra billed charges. The MD is calculated by two basic methods: graphically and mathematically. This helps control the total demand and reduce the effective cost. With the help of MD forecasting, we can even perform load shedding if the MD is about to exceed the contract demand (CD). Load shedding is performed as per the load requirement. After load shedding, the MD can be brought under control, and hence we can avoid the extra charges that must be paid when the MD exceeds the CD. This scheme is being implemented in various industries. For forecasting the MD we have to consider various zones, such as load flow analysis, the relay safe operating area (SOA), and the ratings of the installed equipment. The estimation of MD and load shedding (LS) can also be done through an automated process, such as programming in PLCs. An automated system is very much required in industrial zones. This saves valuable time as well as the labor work required. PLC and SCADA software help a lot in the automation technique. To calculate the MD, the rating of every piece of equipment installed in the premises is considered. The MD estimation and LS program will save industries from paying huge penalties to the electricity companies. This leads to a bright future scope for this concept in the rapidly growing industrial and energy sectors.
A constrained two-layer compression technique for ECG waves.
Byun, Kyungguen; Song, Eunwoo; Shim, Hwan; Lim, Hyungjoon; Kang, Hong-Goo
2015-08-01
This paper proposes a constrained two-layer compression technique for electrocardiogram (ECG) waves whose encoded parameters can be used directly for the diagnosis of arrhythmia. In the first layer, a single ECG beat is represented by one of the registered templates in the codebook. Since the only coding parameter required in this layer is the codebook index of the selected template, its compression ratio (CR) is very high. Note that the distribution of registered templates also reflects the characteristics of the ECG waves, so it can be used as a metric to detect various types of arrhythmia. The residual error between the input and the selected template is encoded by wavelet-based transform coding in the second layer. The number of wavelet coefficients is constrained by a pre-defined maximum allowed distortion. The MIT-BIH arrhythmia database is used to evaluate the performance of the proposed algorithm. The proposed algorithm achieves a CR of around 7.18 when the reference value of the percentage root-mean-square difference (PRD) is set to ten. PMID:26737691
Hybrid Biogeography Based Optimization for Constrained Numerical and Engineering Optimization
Zengqiang Mi
2015-01-01
Biogeography-based optimization (BBO) is a new competitive population-based algorithm inspired by biogeography: it simulates the migration of species in nature to share information. A new hybrid BBO (HBBO) is presented in this paper for constrained optimization. By reasonably combining the differential evolution (DE) mutation operator with the simulated binary crossover (SBX) of genetic algorithms (GAs), a new mutation operator is proposed to generate promising solutions in place of the random mutation in basic BBO. In addition, DE mutation is still applied to update one half of the population to further steer the evolution toward the global optimum, and a chaotic search is introduced to improve population diversity. HBBO is tested on twelve benchmark functions and four engineering optimization problems. Experimental results demonstrate that HBBO is effective and efficient for constrained optimization; compared with other state-of-the-art evolutionary algorithms (EAs), its performance is better, or at least comparable, in terms of both final solution quality and computational cost. Furthermore, the influence of the maximum mutation rate is also investigated.
Vibration control of cylindrical shells using active constrained layer damping
Ray, Manas C.; Chen, Tung-Huei; Baz, Amr M.
1997-05-01
The fundamentals of controlling the structural vibration of cylindrical shells treated with active constrained layer damping (ACLD) treatments are presented. The effectiveness of the ACLD treatments in enhancing the damping characteristics of thin cylindrical shells is demonstrated theoretically and experimentally. A finite element model (FEM) is developed to describe the dynamic interaction between the shells and the ACLD treatments. The FEM is used to predict the natural frequencies and the modal loss factors of shells which are partially treated with patches of the ACLD treatments. The predictions of the FEM are validated experimentally using stainless steel cylinders which are 20.32 cm in diameter, 30.4 cm in length and 0.05 cm in thickness. The cylinders are treated with ACLD patches of different configurations in order to target single or multiple modes of lobar vibration. The ACLD patches used are made of a DYAD 606 visco-elastic layer which is sandwiched between two layers of PVDF piezo-electric film. Vibration attenuations of 85% are obtained with a maximum control voltage of 40 volts. Such attenuations are attributed to the effectiveness of the ACLD treatment in increasing the modal damping ratios by about a factor of four over those of conventional passive constrained layer damping (PCLD) treatments. The obtained results suggest the potential of the ACLD treatments in controlling the vibration of cylindrical shells which constitute the major building block of many critical structures such as aircraft cabins, submarine hulls, and the bodies of rockets and missiles.
Constraining Ceres' interior from its Rotational Motion
Rambaux, Nicolas; Dehant, Véronique; Kuchynka, Petr
2011-01-01
Context. Ceres is the most massive body of the asteroid belt and contains about 25 wt.% (weight percent) of water. Understanding its thermal evolution and assessing its current state are major goals of the Dawn Mission. Constraints on internal structure can be inferred from various observations. Especially, detailed knowledge of the rotational motion can help constrain the mass distribution inside the body, which in turn can lead to information on its geophysical history. Aims. We investigate the signature of the interior on the rotational motion of Ceres and discuss possible future measurements performed by the spacecraft Dawn that will help to constrain Ceres' internal structure. Methods. We compute the polar motion, precession-nutation, and length-of-day variations. We estimate the amplitudes of the rigid and non-rigid response for these various motions for models of Ceres interior constrained by recent shape data and surface properties. Results. As a general result, the amplitudes of oscillations in the r...
Towards weakly constrained double field theory
Lee, Kanghoon
2016-08-01
We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.
Continuation of Sets of Constrained Orbit Segments
Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki;
Sets of constrained orbit segments of time-continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close to and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.
Geometric constrained variational calculus. III: The second variation (Part II)
Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico
2016-03-01
The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.
Constrained optimization of gradient waveforms for generalized diffusion encoding.
Sjölund, Jens; Szczepankiewicz, Filip; Nilsson, Markus; Topgaard, Daniel; Westin, Carl-Fredrik; Knutsson, Hans
2015-12-01
Diffusion MRI is a useful probe of tissue microstructure. The conventional diffusion encoding sequence, the single pulsed field gradient, has recently been challenged as more general gradient waveforms have been introduced. Out of these, we focus on q-space trajectory imaging, which generalizes the scalar b-value to a tensor valued entity. To take full advantage of its capabilities, it is imperative to respect the constraints imposed by the hardware, while at the same time maximizing the diffusion encoding strength. We provide a tool that achieves this by solving a constrained optimization problem that accommodates constraints on maximum gradient amplitude, slew rate, coil heating and positioning of radio frequency pulses. The method's efficacy and flexibility is demonstrated both experimentally and by comparison with previous work on optimization of isotropic diffusion sequences.
The Distance Field Model and Distance Constrained MAP Adaptation Algorithm
YU Peng; WANG Zuoying
2003-01-01
Spatial structure information, i.e., the relative position information of phonetic states in the feature space, has long awaited careful research. In this paper, a new model named the "Distance Field" is proposed to describe this spatial structure information. Based on this model, a modified MAP adaptation algorithm named distance-constrained maximum a posteriori (DCMAP) adaptation is introduced. The distance field model imposes a large penalty when the spatial structure is destroyed; as a result, DCMAP preserves the spatial structure information during the adaptation process. Experiments show the distance field model improves the performance of MAP adaptation. Further results show DCMAP has strong cross-state estimation ability, which is used to train a well-performing speaker-dependent model by data from only part of pho-
Constraining white dwarf structure and neutrino physics in 47 Tucanae
Goldsbury, Ryan; Richer, Harvey; Kalirai, Jason; Tremblay, Pier-Emmanuel
2016-01-01
We present a robust statistical analysis of the white dwarf cooling sequence in 47 Tucanae. We combine HST UV and optical data in the core of the cluster, Modules for Experiments in Stellar Evolution (MESA) white dwarf cooling models, white dwarf atmosphere models, artificial star tests, and a Markov Chain Monte Carlo (MCMC) sampling method to fit white dwarf cooling models to our data directly. We use a technique known as the unbinned maximum likelihood to fit these models to our data without binning. We use these data to constrain neutrino production and the thickness of the hydrogen layer in these white dwarfs. The data prefer thicker hydrogen layers $(q_\mathrm{H}=3.2\times 10^{-5})$ and we can strongly rule out thin layers $(q_\mathrm{H}=10^{-6})$. The neutrino rates currently in the models are consistent with the data. This analysis does not provide a constraint on the number of neutrino species.
Constraining White Dwarf Structure and Neutrino Physics in 47 Tucanae
Goldsbury, R.; Heyl, J.; Richer, H. B.; Kalirai, J. S.; Tremblay, P. E.
2016-04-01
We present a robust statistical analysis of the white dwarf cooling sequence in 47 Tucanae. We combine Hubble Space Telescope UV and optical data in the core of the cluster, Modules for Experiments in Stellar Evolution (MESA) white dwarf cooling models, white dwarf atmosphere models, artificial star tests, and a Markov Chain Monte Carlo sampling method to fit white dwarf cooling models to our data directly. We use a technique known as the unbinned maximum likelihood to fit these models to our data without binning. We use these data to constrain neutrino production and the thickness of the hydrogen layer in these white dwarfs. The data prefer thicker hydrogen layers $(q_\mathrm{H}=3.2\times 10^{-5})$ and we can strongly rule out thin layers $(q_\mathrm{H}=10^{-6})$. The neutrino rates currently in the models are consistent with the data. This analysis does not provide a constraint on the number of neutrino species.
Constrained instanton and black hole creation
WU Zhongchao; XU Donghui
2004-01-01
A gravitational instanton is considered as the seed for the creation of a universe. However, there exist too few instantons. To include many interesting phenomena in the framework of quantum cosmology, the concept of constrained gravitational instanton is inevitable. In this paper we show how a primordial black hole is created from a constrained instanton. The quantum creation of a generic black hole in the closed or open background is completely resolved. The relation of the creation scenario with gravitational thermodynamics and topology is discussed.
Weighted Constrained Egalitarianism in TU-Games
Koster, M.A.L.
1999-01-01
The constrained egalitarian solution of Dutta and Ray (1989) for TU-games is extended to asymmetric cases, using the notion of weight systems as in Kalai and Samet (1987, 1988). This weighted constrained egalitarian solution is based on the weighted Lorenz criterion as an inequality measure. It is shown that in general there is at most one such weighted egalitarian solution for TU-games. Existence is proved for the class of convex games. Furthermore, the core of a positive-valued convex game is...
Constraining Initial Vacuum by CMB Data
Chandra, Debabrata
2016-01-01
We demonstrate how one can possibly constrain the initial vacuum using CMB data. Using a generic vacuum without any particular choice a priori, thereby keeping both the Bogolyubov coefficients in the analysis, we compute observable parameters from two- and three-point correlation functions. We are thus left with constraining four model parameters from the two complex Bogolyubov coefficients. We also demonstrate a method of finding out the constraint relations between the Bogolyubov coefficients using the theoretical normalization condition and observational data of power spectrum and bispectrum from CMB. We also discuss the possible pros and cons of the analysis.
Murder and Self-constrained Modernity
Hansen, Kim Toft
Fracture”, 1999) deals with an unexplainable metaphysical horror. This short story lends a certain tragic sensibility to the narrative, which is no longer a stranger to crime fiction. Arne Dahl utilizes Aeschylus’ The Oresteia, which goes for two episodes of the Danish TV series Rejseholdet (Unit One, 2002) ... In this paper I will approach an explanation from the point of view of what the Danish philosopher Hans Jørgen Schanz calls the self-constrained modernity: modernity has come to realize – he explicates – that it cannot provide complete explanations of reality and, thus, it becomes self-constrained. This ...
Ensemble and constrained clustering with applications
Abdala, D.D. (Daniel)
2011-01-01
This thesis presents new developments in ensemble and constrained clustering and makes the following main contributions: 1) a unification of constrained and ensemble clustering in a single framework; 2) a new method for measuring and visualizing the variability of ensembles; 3) a new random-walker-based procedure for ensemble clustering; 4) an application of ensemble clustering to image segmentation; 5) a new consensus function for ensemble cluste...
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality.
Applications of the maximum entropy principle in nuclear physics
Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.)
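The entropy-maximization mechanics the abstract relies on (maximize entropy subject to a constraint, enforced through a Lagrange multiplier) can be illustrated on a toy discrete spectrum; the energy grid and target mean below are made up, and this sketch stands in for neither the fission-spectrum derivation nor the GOE construction.

```python
import math

# Elementary maximum-entropy sketch: among discrete distributions on
# energies E_k with a fixed mean energy, the entropy maximizer has the
# Boltzmann form p_k ∝ exp(-beta * E_k). We find the multiplier beta by
# bisection so that the mean-energy constraint is satisfied.
E = [0.0, 1.0, 2.0, 3.0, 4.0]      # hypothetical energy levels
TARGET_MEAN = 1.2                  # hypothetical constraint <E> = 1.2

def mean_energy(beta):
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)                     # partition function
    return sum(e * wi for e, wi in zip(E, w)) / z

lo, hi = -10.0, 10.0               # mean_energy is decreasing in beta
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > TARGET_MEAN:
        lo = mid                   # mean too high -> increase beta
    else:
        hi = mid

beta = 0.5 * (lo + hi)
w = [math.exp(-beta * e) for e in E]
p = [wi / sum(w) for wi in w]
entropy = -sum(pi * math.log(pi) for pi in p)
print("beta = %.3f, mean = %.3f, entropy = %.3f"
      % (beta, mean_energy(beta), entropy))
```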
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs
Nix, D.A.; Hogden, J.E.
1998-12-01
The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete ''hidden'' space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.
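The smoothness requirement at the heart of MALCOM can be illustrated with a toy quadratic version: fit a hidden path to noisy observations while penalizing frame-to-frame movement. This is only a sketch of the constraint, with a made-up 1-D signal; the actual model is probabilistic and learns its continuity map from acoustics alone.

```python
import numpy as np

# Toy illustration of the smoothness constraint: fit a 1-D hidden path x_t
# to noisy observations y_t by minimizing
#     sum_t (y_t - x_t)^2 + lam * sum_t (x_t - x_{t-1})^2,
# i.e., data fit plus a penalty on frame-to-frame movement. Setting the
# gradient to zero gives the linear system (I + lam*L) x = y, where L is
# the 1-D graph Laplacian.
def smooth_path(y, lam=5.0):
    n = len(y)
    A = np.eye(n)
    for t in range(n - 1):                 # add lam * L edge by edge
        A[t, t] += lam;     A[t + 1, t + 1] += lam
        A[t, t + 1] -= lam; A[t + 1, t] -= lam
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * t)              # slow, articulator-like trajectory
y = truth + 0.3 * rng.standard_normal(200)
x = smooth_path(y)
print("raw error: %.3f  smoothed error: %.3f"
      % (np.mean((y - truth) ** 2), np.mean((x - truth) ** 2)))
```

The smoothed path tracks the slow underlying trajectory far better than the raw observations do, which is exactly the property articulator motion is assumed to have.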
General Relativity as a constrained Gauge Theory
Cianci, R.; Vignolo, S.; Bruno, D.
2006-01-01
The formulation of General Relativity presented in math-ph/0506077 and the Hamiltonian formulation of Gauge theories described in math-ph/0507001 are made to interact. The resulting scheme allows one to view General Relativity as a constrained Gauge theory.
Integrating job scheduling and constrained network routing
Gamst, Mette
2010-01-01
This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number of...
INSTRUMENT CHOICE AND BUDGET-CONSTRAINED TARGETING
Horan, Richard D.; Claassen, Roger; Agapoff, Jean; Zhang, Wei
2004-01-01
We analyze how choosing to use a particular type of instrument for agri-environmental payments, when these payments are constrained by the regulatory authority's budget, implies an underlying targeting criterion with respect to costs, benefits, participation, and income, and the tradeoffs among these targeting criteria. The results provide insight into current policy debates.
Neutron Powder Diffraction and Constrained Refinement
Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.
1977-01-01
The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement...
Nonlinear wave equations and constrained harmonic motion
Deift, Percy; Lund, Fernando; Trubowitz, Eugene
1980-01-01
The study of the Korteweg-deVries, nonlinear Schrödinger, Sine-Gordon, and Toda lattice equations is simply the study of constrained oscillators. This is likely to be true for any nonlinear wave equation associated with a second-order linear problem.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, each quantity (the voltage of maximum power, the current of maximum power, and the maximum power itself) is plotted as a function of the time of day.
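The procedure described (differentiate the power equation and find where dP/dV = 0) can be sketched with a single-diode panel model; the model form and all parameter values below are illustrative assumptions, not the project's data.

```python
import math

# Hypothetical single-diode panel model: I(V) = I_sc - I_0*(exp(V/V_t) - 1).
# All parameter values are illustrative, not from the project.
I_SC, I_0, V_T = 5.0, 5e-9, 1.2   # short-circuit current (A), saturation current (A), thermal-voltage scale (V)

def current(v):
    return I_SC - I_0 * (math.exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

def d_power(v, h=1e-6):
    # numerical derivative dP/dV; the maximum power point is its zero
    return (power(v + h) - power(v - h)) / (2.0 * h)

# P(V) is concave here, so bisection on the sign of dP/dV finds the peak.
lo, hi = 0.0, 30.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if d_power(mid) > 0.0:
        lo = mid
    else:
        hi = mid

v_mp = 0.5 * (lo + hi)             # voltage of maximum power
i_mp = current(v_mp)               # current of maximum power
p_max = power(v_mp)                # maximum power
print(f"V_mp = {v_mp:.2f} V, I_mp = {i_mp:.2f} A, P_max = {p_max:.2f} W")
```

Repeating this calculation with irradiance-dependent parameters at each time of day would reproduce the curves the project plots.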
Cui, Liang; Li, Yongping; Huang, Guohe
2016-06-01
A double-sided fuzzy chance-constrained fractional programming (DFCFP) method is developed for planning water resources management under uncertainty. In DFCFP the system marginal benefit per unit of input under uncertainty can also be balanced. The DFCFP is applied to a real case of water resources management in the Zhangweinan River Basin, China. The results show that the amounts of water allocated to the two cities (Anyang and Handan) would differ under the minimum and maximum reliability degrees. It was found that the marginal benefit of the system solved by DFCFP is larger than the system benefit under the minimum and maximum reliability degrees, which not only improves overall economic efficiency but also remedies water deficiency. Compared with the traditional double-sided fuzzy chance-constrained programming (DFCP) method, the solutions obtained from DFCFP are significantly higher, and DFCFP has advantages in water conservation.
Christlieb, Andrew J.; Liu, Yuan; Tang, Qi; Xu, Zhengfu
2014-01-01
In this paper, we utilize the maximum-principle-preserving flux limiting technique, originally designed for high order weighted essentially non-oscillatory (WENO) methods for scalar hyperbolic conservation laws, to develop a class of high order positivity-preserving finite difference WENO methods for the ideal magnetohydrodynamic (MHD) equations. Our schemes, under the constrained transport (CT) framework, can achieve high order accuracy, a discrete divergence-free condition and positivity of...
In some energy harvesting systems, the maximum displacement of the seismic mass is limited due to the physical constraints of the device. This is especially the case where energy is harvested from a vibration source with large oscillation amplitude (e.g., marine environment). For the design of inertial systems, the maximum permissible displacement of the mass is a limiting condition. In this paper the maximum output power and the corresponding efficiency of linear and rotational electromagnetic energy harvesting systems with a constrained range of motion are investigated. A unified form of output power and efficiency is presented to compare the performance of constrained linear and rotational systems. It is found that rotational energy harvesting systems have a greater capability in transferring energy to the load resistance than linear directly coupled systems, due to the presence of an extra design variable, namely the ball screw lead. Also, in this paper it is shown that for a defined environmental condition and a given proof mass with constrained throw, the amount of power delivered to the electrical load by a rotational system can be higher than the amount delivered by a linear system. The criterion that guarantees this favourable design has been obtained.
Li, Yuan; Xiong, Bo; Beghin, John C.
2013-01-01
Food safety standards have proliferated as multilateral and bilateral trade agreements constrain traditional barriers to agricultural trade. Stringent food standards can be driven by rising consumer and public concern about food safety and other social objectives, or by the lobbying efforts from domestic industries in agriculture. We investigate the economic and political determinants of the maximum residue limits (MRLs) on pesticides and veterinary drugs. Using a political economy framework ...
Estimation of Maximum Wind Speeds in Tornadoes
Dergarabedian, Paul; Fendell, Francis
2011-01-01
A method is proposed for rapidly estimating the maximum value of the azimuthal velocity component (maximum swirling speed) in tornadoes and waterspouts. The method requires knowledge of the cloud-deck height and a photograph of the funnel cloud—data usually available. Calculations based on this data confirm that the lower maximum wind speeds suggested by recent workers (roughly one-quarter of the sonic speed for sea-level air) are more plausible for tornadoes than the sonic speed sometimes ci...
Solving maximum cut problems by simulated annealing
Myklebust, Tor G. J.
2015-01-01
This paper gives a straightforward implementation of simulated annealing for solving maximum cut problems and compares its performance to that of some existing heuristic solvers. The formulation used is classical, dating to a 1989 paper of Johnson, Aragon, McGeoch, and Schevon. This implementation uses no structure peculiar to the maximum cut problem, but its low per-iteration cost allows it to find better solutions than were previously known for 40 of the 89 standard maximum cut instances te...
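A bare-bones version of such a solver, in the classical single-flip formulation the paper builds on, fits in a few lines; the geometric cooling schedule and the toy 5-cycle instance below are illustrative choices, not the paper's implementation.

```python
import math, random

# Minimal simulated-annealing sketch for MAX-CUT: flip one vertex at a
# time, always accept improvements, and accept worsening moves with
# probability exp(delta/T) under a geometric cooling schedule.
def max_cut_anneal(n, edges, steps=20000, t0=2.0, t1=0.01, seed=1):
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(n)]        # random initial partition
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w)); adj[v].append((u, w))

    def cut_value(assign):
        return sum(w for u, v, w in edges if assign[u] != assign[v])

    value = cut_value(side)
    best, best_value = side[:], value
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)              # geometric cooling
        v = rng.randrange(n)                           # propose: flip one vertex
        # change in cut value if vertex v switches sides
        delta = sum(w if side[v] == side[u] else -w for u, w in adj[v])
        if delta >= 0 or rng.random() < math.exp(delta / t):
            side[v] ^= 1
            value += delta
            if value > best_value:
                best, best_value = side[:], value
    return best, best_value

# Toy instance: a 5-cycle. Any bipartition of an odd cycle misses at least
# one edge, so the optimal cut value is 4.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 0, 1)]
part, val = max_cut_anneal(5, edges)
print("best cut value:", val)
```

The per-move cost is O(deg(v)) thanks to the incremental delta computation, which is the property that makes the per-iteration cost low enough for large instances.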
Cosmogenic photons strongly constrain UHECR source models
van Vliet, Arjen
2016-01-01
With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.
Constraining dark energy interacting models with WMAP
Olivares, G; Pavón, D; Olivares, German; Atrio-Barandela, Fernando; Pavon, Diego
2006-01-01
We determine the range of parameter space of an interacting quintessence (IQ) model that best fits the luminosity distance of type Ia supernovae data and the recent WMAP measurements of Cosmic Microwave Background temperature anisotropies. Models in which quintessence decays into dark matter provide a clean explanation for the coincidence problem. We focus on cosmological models of zero spatial curvature. We show that if the dark energy (DE) decays into cold dark matter (CDM) at a rate that keeps the ratio of matter to dark energy constant at late times, the supernovae data are not sufficient to constrain the interaction parameter. On the contrary, WMAP data constrain it to be smaller than $c^2 < 10^{-2}$ at the $3\sigma$ level. Accurate measurements of the Hubble constant and the dark energy density, independent of the CMB data, would support/disprove this set of models.
Hyperbolicity and Constrained Evolution in Linearized Gravity
Matzner, R A
2005-01-01
Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them ({\\it unconstrained} evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly "more correct" to introduce a scheme which actively maintains the constraints by solution ({\\it constrained} evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with th...
Constraining the braneworld with gravitational wave observations.
McWilliams, Sean T
2010-04-01
Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the approximately 1 μm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 μm.
Doubly Constrained Robust Blind Beamforming Algorithm
Xin Song
2013-01-01
We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, which is based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields better signal capture performance, achieves higher array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.
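The constant-modulus criterion underlying the LSCMA can be sketched in a few lines. The toy below is a plain stochastic-gradient CMA, not the paper's doubly constrained robust LSCMA (no Bayesian steering-vector model, no Lagrange multipliers, no worst-case optimization); it only illustrates the blind idea of driving the array output toward unit modulus. The array geometry, source amplitude and phases are invented for illustration.

```python
import cmath

def cma_beamform(snapshots, mu=0.005, iters=200):
    """Plain stochastic-gradient constant modulus algorithm: adapt the
    weights w so the array output y = w^H x approaches unit modulus,
    using only the received snapshots (blind adaptation)."""
    n = len(snapshots[0])
    w = [1.0 + 0j] + [0j] * (n - 1)        # start from a single-element beam
    for _ in range(iters):
        for x in snapshots:
            y = sum(wi.conjugate() * xi for wi, xi in zip(w, x))
            e = y * (abs(y) ** 2 - 1.0)    # gradient term of the CM cost
            w = [wi - mu * e.conjugate() * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical scenario: a 2-element array receiving one unit-modulus
# source of amplitude 2 with an inter-element phase shift of 0.8 rad.
steer = [1.0 + 0j, cmath.exp(0.8j)]
snaps = [[2.0 * cmath.exp(0.37j * k) * a for a in steer] for k in range(20)]
w = cma_beamform(snaps)
out = sum(wi.conjugate() * xi for wi, xi in zip(w, snaps[0]))
# after adaptation, |out| is close to 1 even though the source amplitude is 2
```

Because the received signal power here is 4, the adapted weights effectively scale the beamformer gain down to 1/2, which is exactly the constant-modulus fixed point for this rank-one scenario.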
Efficient caching for constrained skyline queries
Mortensen, Michael Lind; Chester, Sean; Assent, Ira; Magnani, Matteo
Constrained skyline queries retrieve all points that optimize some user’s preferences subject to orthogonal range constraints, but at significant computational cost. This paper is the first to propose caching to improve constrained skyline query response time. Because arbitrary range constraints...... are unlikely to match a cached query exactly, our proposed method identifies and exploits similar cached queries to reduce the computational overhead of subsequent ones. We consider interactive users posing a string of similar queries and show how these can be classified into four cases based on how...... they overlap cached queries. For each we present a specialized solution. For the general case of independent users, we introduce the Missing Points Region (MPR), that minimizes disk reads, and an approximation of the MPR. An extensive experimental evaluation reveals that the querying for an...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Maximum mass, moment of inertia and compactness of relativistic stars
Breu, Cosima; Rezzolla, Luciano
2016-06-01
A number of recent works have highlighted that it is possible to express the properties of general-relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities. These functions are normally referred to as `universal relations' and have been found to apply, within limits, both to static or stationary isolated stars, as well as to fully dynamical and merging binary systems. Further extending the idea that universal relations can be valid also away from stability, we show that a universal relation is exhibited also by equilibrium solutions that are not stable. In particular, the mass of rotating configurations on the turning-point line shows a universal behaviour when expressed in terms of the normalized Keplerian angular momentum. In turn, this allows us to compute the maximum mass allowed by uniform rotation, M_max, simply in terms of the maximum mass of the non-rotating configuration, M_TOV, finding that M_max ≃ (1.203 ± 0.022) M_TOV for all the equations of state we have considered. We further introduce an improvement to previously published universal relations by Lattimer & Schutz between the dimensionless moment of inertia and the stellar compactness, which could provide an accurate tool to constrain the equation of state of nuclear matter when measurements of the moment of inertia become available.
The consequence of maximum thermodynamic efficiency in Daisyworld.
Pujol, Toni
2002-07-01
The imaginary planet of Daisyworld is the simplest model used to illustrate the implications of the Gaia hypothesis. The dynamics of daisies and their radiative interaction with the environment are described by fundamental equations of population ecology theory and physics. The parameterization of the turbulent energy flux between areas of different biological cover is similar to the diffusive-type approximation used in simple climate models. Here I show that the small variation of the planetary diffusivity adopted in the classical version of Daisyworld limits the range of values for the solar insolation for which biota may grow in the planet. Recent studies suggest that heat transport in a turbulent medium is constrained to maximize its efficiency. This condition is almost equivalent to maximizing the rate of entropy production due to non-radiative sources. Here, I apply the maximum entropy principle (MEP) to Daisyworld. I conclude that the MEP sets the maximum range of values for the solar insolation with a non-zero amount of daisies. Outside this range, daisies cannot grow in the planet for any physically realistic climate distribution. Inside this range, I assume a distribution of daisies in agreement with the MEP. The results substantially enlarge the range of climate stability, due to the biota, in comparison to the classical version of Daisyworld. A very stable temperature is found when two different species grow in the planet.
Capacity constrained assignment in spatial databases
U, Leong Hou; Yiu, Man Lung; Mouratidis, Kyriakos;
2008-01-01
Given a point set P of customers (e.g., WiFi receivers) and a point set Q of service providers (e.g., wireless access points), where each q ∈ Q has a capacity q.k, the capacity constrained assignment (CCA) is a matching M ⊆ Q × P such that (i) each point q ∈ Q (p ∈ P) appears at most k times (at most...
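The problem statement can be made concrete with a toy sketch. The version below assigns each customer greedily to the nearest provider that still has spare capacity; this is only an illustration of the capacity constraint, not the (unspecified here) matching algorithm the paper proposes, and the coordinates, capacities and distance metric are all invented.

```python
def cca_greedy(customers, providers, capacities):
    """Greedy capacity constrained assignment sketch.
    customers, providers: lists of (x, y) points; capacities: one int per
    provider. Returns assignment[i] = index of the provider serving
    customer i, never exceeding any provider's capacity."""
    remaining = list(capacities)
    assignment = []
    for (px, py) in customers:
        # rank providers by squared Euclidean distance to this customer
        order = sorted(range(len(providers)),
                       key=lambda j: (providers[j][0] - px) ** 2 +
                                     (providers[j][1] - py) ** 2)
        chosen = next(j for j in order if remaining[j] > 0)
        remaining[chosen] -= 1
        assignment.append(chosen)
    return assignment

customers = [(0, 0), (0, 1), (5, 5)]
providers = [(0, 0), (5, 5)]
capacities = [1, 2]
assignment = cca_greedy(customers, providers, capacities)
print(assignment)  # → [0, 1, 1]: provider 0 is full after one customer
```

Note the second customer is pushed to the far provider because the near one has already reached its capacity of 1; a globally optimal matching would need something like a min-cost flow rather than this greedy pass.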
Resource allocation for delay constrained wireless communications
Chen, J.
2010-01-01
The ultimate goal of future generation wireless communications is to provide ubiquitous seamless connections between mobile terminals such as mobile phones and computers so that users can enjoy high-quality services at anytime anywhere without wires. The feature to provide a wide range of delay constrained applications with diverse quality of service (QoS) requirements, such as delay and data rate requirements, will require QoS-driven wireless resource allocation mechanisms to efficiently ...
Constrained optimization in expensive simulation: novel approach.
Jack P. C. Kleijnen; van Beers, Wim; VAN NIEUWENHUYSE, Inneke
2010-01-01
This article presents a novel heuristic for constrained optimization of computationally expensive random simulation models. One output is selected as objective to be minimized, while other outputs must satisfy given threshold values. Moreover, the simulation inputs must be integer and satisfy linear or nonlinear constraints. The heuristic combines (i) sequentialized experimental designs to specify the simulation input combinations, (ii) Kriging (or Gaussian process or spatial correlation model...
Constrained optimization in simulation: a novel approach.
Jack P. C. Kleijnen; van Beers, W.C.M.; van Nieuwenhuyse, I.
2008-01-01
This paper presents a novel heuristic for constrained optimization of random computer simulation models, in which one of the simulation outputs is selected as the objective to be minimized while the other outputs need to satisfy prespecified target values. Besides the simulation outputs, the simulation inputs must meet prespecified constraints including the constraint that the inputs be integer. The proposed heuristic combines (i) experimental design to specify the simulation input combinations...
Performance Characteristics of Active Constrained Layer Damping
A. Baz; J. Ro
1995-01-01
Theoretical and experimental performance characteristics of the new class of actively controlled constrained layer damping (ACLD) are presented. The ACLD consists of a viscoelastic damping layer sandwiched between two layers of piezoelectric sensor and actuator. The composite ACLD when bonded to a vibrating structure acts as a “smart” treatment whose shear deformation can be controlled and tuned to the structural response in order to enhance the energy dissipation mechanism and improve the vi...
NEW SIMULATED ANNEALING ALGORITHMS FOR CONSTRAINED OPTIMIZATION
LINET ÖZDAMAR; CHANDRA SEKHAR PEDAMALLU
2010-01-01
We propose a Population based dual-sequence Non-Penalty Annealing algorithm (PNPA) for solving the general nonlinear constrained optimization problem. The PNPA maintains a population of solutions that are intermixed by crossover to supply a new starting solution for simulated annealing throughout the search. Every time the search gets stuck at a local optimum, this crossover procedure is triggered and simulated annealing search re-starts from a new subspace. In both the crossover and simulate...
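The mechanism described above can be sketched in miniature: simulated annealing that, whenever the search stalls at a local optimum, restarts from a crossover of two population members. Feasibility is handled without a penalty term by always preferring feasible points and, among infeasible ones, smaller constraint violation. This is one plausible reading of the "non-penalty dual-sequence" idea; the published PNPA's details and parameters are not reproduced here, and the test problem is invented.

```python
import random

def better(a, b, f, viol):
    """Penalty-free comparison: feasible beats infeasible; ties broken
    by objective value (feasible) or violation size (infeasible)."""
    va, vb = viol(a), viol(b)
    if va == 0 and vb == 0:
        return f(a) < f(b)
    if va == 0 or vb == 0:
        return va == 0
    return va < vb

def pnpa(f, viol, dim, iters=2000, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(6)]
    pop.sort(key=lambda s: (viol(s), f(s)))
    x, temp, stall = pop[0][:], 1.0, 0
    for _ in range(iters):
        cand = [xi + rng.gauss(0, temp) for xi in x]
        if better(cand, x, f, viol) or rng.random() < 0.05 * temp:
            x, stall = cand, 0
        else:
            stall += 1
        if stall > 50:          # stuck: restart from crossover of two members
            a, b = rng.sample(pop, 2)
            x = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            stall = 0
        if better(x, pop[-1], f, viol):
            pop[-1] = x[:]      # current point displaces the worst member
            pop.sort(key=lambda s: (viol(s), f(s)))
        temp *= 0.999           # geometric cooling
    return pop[0]

# toy problem: minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2
f = lambda s: (s[0] - 1) ** 2 + (s[1] - 2) ** 2
viol = lambda s: max(0.0, s[0] + s[1] - 2)
best = pnpa(f, viol, dim=2)
```

The unconstrained minimum (1, 2) is infeasible here, so a correct run should settle near the boundary point (0.5, 1.5).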
NTRU software implementation for constrained devices
Monteverde Giacomino, Mariano
2008-01-01
The NTRUEncrypt is a public-key cryptosystem based on the shortest vector problem. Its main characteristics are the low memory and computational requirements while providing a high security level. This document presents an implementation and optimization of the NTRU public-key cryptosystem for constrained devices. Specifically the NTRU cryptosystem has been implemented on the ATMega128 and the ATMega163 microcontrollers. This has resulted in a major effort in order to reduce t...
Modelling time-constrained software development
Powell, A.
2004-01-01
Commercial pressures on time-to-market often require the development of software in situations where deadlines are very tight and non-negotiable. This type of development can be termed ‘time-constrained software development.’ The need to compress development timescales influences both the software process and the way it is managed. Conventional approaches to modelling tend to treat the development process as being linear, sequential and static. Whereas, the processes used to achieve timescale...
Cosmicflows Constrained Local UniversE Simulations
Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo
2016-01-01
This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
Constrained simulation of the Bullet Cluster
In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full two-dimensional (2D) observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with Navarro-Frenk-White dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocities. The study gives insight into the astrophysical processes at play during a galaxy cluster merger, and constrains the strength and coherence length of the magnetic fields. The techniques developed here to create realistic, stable, triaxial clusters, and to utilize the totality of the 2D image data, will be applicable to future simulation studies of other merging clusters. This approach of constrained simulation, when applied to well-measured systems, should be a powerful complement to present tools for understanding X-ray clusters and their magnetic fields, and the processes governing their formation.
Hybrid evolutionary programming for heavily constrained problems.
Myung, H; Kim, J H
1996-01-01
A hybrid of evolutionary programming (EP) and a deterministic optimization procedure is applied to a series of non-linear and quadratic optimization problems. The hybrid scheme is compared with other existing schemes such as EP alone, two-phase (TP) optimization, and EP with a non-stationary penalty function (NS-EP). The results indicate that the hybrid method can outperform the other methods when addressing heavily constrained optimization problems in terms of computational efficiency and solution accuracy.
Optimal auctions with financially constrained bidders
Pai, Mallesh; Rakesh V. Vohra
2008-01-01
We consider an environment where potential buyers of an indivisible good have liquidity constraints, in that they cannot pay more than their `budget' regardless of their valuation. A buyer's valuation for the good as well as her budget are her private information. We derive constrained-efficient and revenue maximizing auctions for this setting. In general, the optimal auction requires `pooling' both at the top and in the middle despite the maintained assumption of a monotone hazard rate. ...
Constrained efficient locations under delivered pricing
Pires, Cesaltina
2005-01-01
In this article, we extend previous results on competitive delivered pricing by considering the second-best problem in which the social planner can regulate firm’s locations but not their pricing. Assuming constant marginal costs, we show that the constrained socially optimal locations are an equilibrium of the location-price game when: (i) demand is perfectly inelastic and (ii) demand is price sensitive but firms practice first-degree price discrimination. However, with elastic demand ...
Pricing behaviour at capacity constrained facilities
Huric Larsen, Jesper Fredborg
2012-01-01
Entry of new firms can be difficult or even impossible at capacity constrained facilities, even though the actual cost of entering is low. Using a game-theoretic model of incumbent firms' pricing behaviour under these conditions, it is found that under the assumption of Bertrand competition and firms having different costs, the optimal pricing behaviour implies price stickiness and upward pricing. The findings further suggest that incumbents behave competitively, disposing of weaker opponents only ...
Classical Dynamics as Constrained Quantum Dynamics
Bartlett, Stephen D.; Rowe, David J.
2002-01-01
We show that the classical mechanics of an algebraic model are implied by its quantizations. An algebraic model is defined, and the corresponding classical and quantum realizations are given in terms of a spectrum generating algebra. Classical equations of motion are then obtained by constraining the quantal dynamics of an algebraic model to an appropriate coherent state manifold. For the cases where the coherent state manifold is not symplectic, it is shown that there exist natural projectio...
Capturing Hotspots For Constrained Indoor Movement
Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua
2013-01-01
Finding the hotspots in large indoor spaces is important for identifying overloaded locations and for security, crowd management, indoor navigation and guidance. The tracking data coming from indoor tracking systems are huge in volume and not readily usable for finding hotspots. This paper presents a graph-based model for constrained indoor movement that can map the tracking records into mapping records which represent the entry and exit times of an object in a particular location. Then it discusses the...
Constraining RRc candidates using SDSS colours
Bányai, E; Molnár, L; Dobos, L; Szabó, R
2016-01-01
The light variations of first-overtone RR Lyrae stars and contact eclipsing binaries can be difficult to distinguish. The Catalina Periodic Variable Star catalog contains several misclassified objects, despite the classification efforts by Drake et al. (2014). They used metallicity and surface gravity derived from spectroscopic data (from the SDSS database) to rule out binaries. Our aim is to further constrain the catalog using SDSS colours to estimate physical parameters for stars that did not have spectroscopic data.
Cosmicflows Constrained Local UniversE Simulations
Sorce, Jenny G; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M; Steinmetz, Matthias; Tully, R Brent; Pomarede, Daniel; Carlesi, Edoardo
2015-01-01
This paper combines observational datasets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. These latter are excellent laboratories for studies of the non-linear process of structure formation in our neighborhood. With measurements of radial peculiar velocities in the Local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor 2 to 3 on a 5 Mpc/h scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observatio...
13 CFR 130.440 - Maximum grant.
2010-01-01
Business Credit and Assistance, SMALL BUSINESS ADMINISTRATION, SMALL BUSINESS DEVELOPMENT CENTERS, § 130.440 Maximum grant. No recipient shall receive an SBDC grant exceeding the greater of the minimum statutory amount, or its pro rata share of...
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...... predictions. The results imply that physical processes control maximum particle concentrations in planktonic systems....
An axiomatic characterization of the strong constrained egalitarian solution
Llerena, Francesc; Vilella, Cori
2012-09-01
In this paper we axiomatize the strong constrained egalitarian solution (Dutta and Ray, 1991) over the class of weak superadditive games using constrained egalitarianism, order-consistency, and converse order-consistency.
Deriving N-soliton solutions via constrained flows
Zeng, Yunbo
2000-01-01
The soliton equations can be factorized by two commuting x- and t-constrained flows. We propose a method to derive N-soliton solutions of soliton equations directly from the x- and t-constrained flows.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. (paper)
Cascading Constrained 2-D Arrays using Periodic Merging Arrays
Forchhammer, Søren; Laursen, Torben Vaarby
2003-01-01
We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes....... Numerical results for the capacities are presented....
An axiomatic characterization of the strong constrained egalitarian solution
Llerena Garrés, Francesc; Vilella Bach, Misericòrdia
2012-01-01
In this paper we axiomatize the strong constrained egalitarian solution (Dutta and Ray, 1991) over the class of weak superadditive games using constrained egalitarianism, order-consistency, and converse order-consistency. JEL classification: C71, C78. Keywords: Cooperative TU-game, strong constrained egalitarian solution, axiomatization.
21 CFR 888.3230 - Finger joint polymer constrained prosthesis.
2010-04-01
Food and Drugs, § 888... constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device intended... generic type of device includes prostheses that consist of a single flexible across-the-joint...
21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.
2010-04-01
Food and Drugs, § 888.3780, FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...
The Maximum Likelihood Threshold of a Graph
Gross, Elizabeth; Sullivant, Seth
2014-01-01
The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $(n-1)$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...
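The role of sample size is easiest to see in the saturated case (the complete graph), where the maximum likelihood estimate exists if and only if the sample covariance matrix is nonsingular, which almost surely requires at least p + 1 data points in dimension p. The pure-Python sketch below checks this numerically with made-up Gaussian data; the paper's combinatorial-rigidity machinery for general graphs is not touched.

```python
import random

def sample_cov(data):
    """Sample covariance matrix (dividing by n) of the rows of data."""
    n, p = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(p)]
    return [[sum((row[j] - mean[j]) * (row[k] - mean[k]) for row in data) / n
             for k in range(p)] for j in range(p)]

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[piv][i]) < 1e-12:
            return 0.0                      # numerically singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            fac = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= fac * m[i][c]
    return d

rng = random.Random(0)
draw = lambda n, p: [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
few = det(sample_cov(draw(2, 3)))    # 2 points in 3 dims: singular, no MLE
many = det(sample_cov(draw(10, 3)))  # 10 points in 3 dims: nonsingular a.s.
```

With only 2 centered data points the covariance has rank at most 1, so its determinant vanishes; with 10 points it is almost surely positive definite and the MLE exists.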
Quantization of soluble classical constrained systems
Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)
2014-12-15
The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way.
Incomplete Dirac reduction of constrained Hamiltonian systems
Chandre, C., E-mail: chandre@cpt.univ-mrs.fr
2015-10-15
First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.
Estimation in chance-constrained problem
Houda, Michal
Hradec Králové : Gaudeamus, 2005 - (Skalská, H.), s. 134-139 ISBN 978-80-7041-535-1. [Mathematical Methods in Economics 2005 /23./. Hradec Králové (CZ), 14.09.2005-16.09.2005] R&D Projects: GA ČR GD402/03/H057; GA ČR GA402/04/1294; GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance-constrained problem * estimation * economic applications Subject RIV: BB - Applied Statistics, Operational Research
Utility Constrained Energy Minimization In Aloha Networks
Khodaian, Amir Mahdi; Talebi, Mohammad S
2010-01-01
In this paper we consider the issue of energy efficiency in random access networks and show that optimizing transmission probabilities of nodes can enhance network performance in terms of energy consumption and fairness. First, we propose a heuristic power control method that improves throughput, and then we model the Utility Constrained Energy Minimization (UCEM) problem in which the utility constraint takes into account single and multi node performance. UCEM is modeled as a convex optimization problem and Sequential Quadratic Programming (SQP) is used to find optimal transmission probabilities. Numerical results show that our method can achieve fairness, reduce energy consumption and enhance lifetime of such networks.
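A toy, hypothetical instance of the UCEM idea can be worked out for a symmetric slotted Aloha network: with n nodes each transmitting with probability p, throughput is S(p) = n·p·(1−p)^(n−1) while energy use grows with p, so minimizing energy subject to S(p) ≥ S_req amounts to finding the smallest feasible p. The paper solves the general (asymmetric, SQP-based) problem; this bisection sketch covers only the symmetric special case, and the numbers are invented.

```python
def min_energy_prob(n, s_req, tol=1e-9):
    """Smallest p in (0, 1/n] with n*p*(1-p)**(n-1) >= s_req, or None
    if the throughput requirement is infeasible for n nodes."""
    S = lambda p: n * p * (1 - p) ** (n - 1)
    lo, hi = 0.0, 1.0 / n          # S is increasing on [0, 1/n]
    if S(hi) < s_req:
        return None                 # even the throughput-maximizing p fails
    while hi - lo > tol:            # bisect for the feasibility boundary
        mid = (lo + hi) / 2
        if S(mid) >= s_req:
            hi = mid
        else:
            lo = mid
    return hi

p = min_energy_prob(n=10, s_req=0.2)
# a 10-node network can meet S_req = 0.2 with p well below the
# throughput-optimal value 1/n = 0.1, saving transmission energy
```

The infeasible case is also informative: `min_energy_prob(2, 0.6)` returns `None`, since two-node slotted Aloha cannot exceed a throughput of 0.5.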
How peer-review constrains cognition
Cowley, Stephen
2015-01-01
‘cognition’ describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as ‘symbolizations’, replicable patterns that use technologically enabled activity. On this bio-cognitive...... came to be re-aggregated: agonistic review drove reformatting of argument structure, changes in rhetorical ploys and careful choice of wordings. For this reason, the paper’s knowledge-claims can be traced to human activity that occurs in distributed cognitive systems. Peer-review is on the frontline in...
Constraining Milky Way mass with Hypervelocity Stars
Fragione, Giacomo
2016-01-01
We show that hypervelocity stars (HVSs) ejected from the center of the Milky Way galaxy can be used to constrain the mass of its halo. The asymmetry in the radial velocity distribution of halo stars due to escaping HVSs depends on the halo potential (escape speed) as long as the round trip orbital time is shorter than the stellar lifetime. Adopting a characteristic HVS travel time of $300$ Myr, which corresponds to the average mass of main sequence HVSs ($3.2$ M$_\odot$), we find that current data favors a mass for the Milky Way in the range $(1.2$-$1.7)\times 10^{12} \mathrm{M}_\odot$.
On Types of Observables in Constrained Theories
Anderson, Edward
2016-01-01
The Kuchar observables notion is shown to apply only to a limited range of theories. Relational mechanics, slightly inhomogeneous cosmology and supergravity are used as examples that require further notions of observables. A suitably general notion of A-observables is then given to cover all of these cases. `A' here stands for `algebraic substructure'; A-observables can be defined by association with each closed algebraic substructure of a theory's constraints. Both constrained algebraic structures and associated notions of A-observables form bounded lattices.
Constrained control problems of discrete processes
Phat, Vu Ngoc
1996-01-01
The book gives a novel treatment of recent advances on constrained control problems with emphasis on the controllability, reachability of dynamical discrete-time systems. The new proposed approach provides the right setting for the study of qualitative properties of general types of dynamical systems in both discrete-time and continuous-time systems with possible applications to some control engineering models. Most of the material appears for the first time in a book form. The book is addressed to advanced students, postgraduate students and researchers interested in control system theory and
ADAPTIVE SUBOPTIMAL CONTROL OF INPUT CONSTRAINED PLANTS
Valerii Azarskov
2011-03-01
This paper deals with adaptive regulation of a discrete-time linear time-invariant plant with arbitrary bounded disturbances whose control input is constrained to lie within certain limits. The adaptive control algorithm exploits the one-step-ahead control strategy and a gradient projection type estimation procedure using a modified dead zone. The convergence property of the estimation algorithm is shown to be ensured. Sufficient conditions guaranteeing the global asymptotic stability and simultaneously the suboptimality of the closed-loop systems are derived. Numerical examples and simulations are presented to support the theoretical results.
Incomplete Dirac reduction of constrained Hamiltonian systems
First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified
Can Neutron stars constrain Dark Matter?
Kouvaris, Christoforos; Tinyakov, Peter
2010-01-01
We argue that observations of old neutron stars can impose constraints on dark matter candidates even with very small elastic or inelastic cross section, and self-annihilation cross section. We find that old neutron stars close to the galactic center or in globular clusters can maintain a surface … temperature that could in principle be detected. Due to their compactness, neutron stars can accrete WIMPs efficiently even if the WIMP-to-nucleon cross section obeys the current limits from direct dark matter searches, and therefore they could constrain a wide range of dark matter candidates.
Nielsen, O. F.; Ploug, C.; Mendoza, J. A.; Martínez, K.
2009-05-01
The need for increased accuracy and reduced ambiguities in inversion results has put focus on the development of more advanced methods for inverting geophysical data. Full 3D inversion is time consuming and therefore often not the best solution from a cost-efficiency perspective. This has motivated the development of 3D-constrained inversions, where 1D models are constrained in 3D, also known as Spatially Constrained Inversion (SCI). Moreover, joint inversion of several different data types in one inversion has been developed, known as Mutually Constrained Inversion (MCI). In this paper a presentation of a Spatially Mutually Constrained Inversion method (SMCI) is given. This method allows 1D inversion to be applied to different geophysical datasets and geological information constrained in 3D. Applying two or more types of geophysical methods in the inversion has proved to reduce the equivalence problem and to increase the resolution of the inversion results. Geological information from borehole data or digital geological models can be integrated in the inversion. In the SMCI, a 1D inversion code is used to model soundings that are constrained in three dimensions according to their relative position in space. This solution enhances the accuracy of the inversion and produces distinct layer thicknesses and resistivities. It is very efficient in the mapping of a layered geology, but still capable of mapping layer discontinuities that are, in many cases, related to fracturing and faulting or to valley fills. Geological information may be included in the inversion directly or used only to form a starting model for the individual soundings in the inversion. In order to show the effectiveness of the method, examples are presented from both synthetic data and real data. The examples include DC-soundings as well as land-based and airborne TEM
Lepton Flavour Violation in the Constrained MSSM with Constrained Sequential Dominance
Antusch, Stefan
2008-01-01
We consider charged Lepton Flavour Violation (LFV) in the Constrained Minimal Supersymmetric Standard Model, extended to include the see-saw mechanism with Constrained Sequential Dominance (CSD), where CSD provides a natural see-saw explanation of tri-bimaximal neutrino mixing. When charged lepton corrections to tri-bimaximal neutrino mixing are included, we discover characteristic correlations among the LFV branching ratios, depending on the mass ordering of the right-handed neutrinos, with a pronounced dependence on the leptonic mixing angle $\theta_{13}$ (and in some cases also on the Dirac CP phase $\delta$).
The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms
HE Zhong-qiu; LI Dao-ben
2003-01-01
This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1～3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are deduced subject to a new constraint condition. Both are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms, and simulations comparing their performance are presented.
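As a generic illustration of the second unconstrained algorithm named above (a textbook conjugate gradient, not the paper's transform-domain equalizer): for an n x n symmetric positive definite matrix A, CG minimizes the quadratic cost f(x) = 0.5*x'Ax - b'x, i.e. solves Ax = b, exactly in at most n steps, whereas steepest descent only converges asymptotically.

```python
# Textbook conjugate gradient on a quadratic cost (illustrative only; the
# paper's constrained, transform-domain variants are not reproduced here).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def axpy(a, x, y):  # elementwise a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def conjugate_gradient(A, b, iters):
    x = [0.0] * len(b)
    r = b[:]          # residual b - Ax, with x = 0 initially
    p = r[:]          # first search direction
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = dot(r, r) / dot(p, Ap)      # exact line search step
        x = axpy(alpha, p, x)
        r_new = axpy(-alpha, Ap, r)
        beta = dot(r_new, r_new) / dot(r, r)
        p = axpy(beta, p, r_new)            # A-conjugate direction update
        r = r_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b, 2)  # exact after n = 2 steps
print([round(v, 6) for v in x])  # solution of Ax = b, i.e. [1/11, 7/11]
```

The finite-termination property is what makes CG attractive over SD for equalizer-style quadratic costs.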
Remarks on the maximum correlation coefficient
Dembo, Amir; Kagan, Abram; Shepp, Lawrence A.
2001-01-01
The maximum correlation coefficient between partial sums of independent and identically distributed random variables with finite second moment equals the classical (Pearson) correlation coefficient between the sums, and thus does not depend on the distribution of the random variables. This result is proved, and relations between the linearity of regression of each of two random variables on the other and the maximum correlation coefficient are discussed.
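The distribution-free claim above is easy to check numerically: for iid variables with finite variance, Corr(S_m, S_n) = sqrt(m/n) for m <= n, whatever the underlying distribution. A Monte Carlo sketch (the uniform draws are an arbitrary choice for illustration):

```python
# Monte Carlo check that Corr(S_m, S_n) = sqrt(m/n) for partial sums of iid
# variables, independent of the distribution (uniform chosen arbitrarily).
import math
import random

def sample_corr(m, n, trials, rng):
    xs, ys = [], []
    for _ in range(trials):
        draws = [rng.uniform(-1, 1) for _ in range(n)]
        xs.append(sum(draws[:m]))   # S_m
        ys.append(sum(draws))       # S_n
    mx, my = sum(xs) / trials, sum(ys) / trials
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / trials
    vx = sum((a - mx) ** 2 for a in xs) / trials
    vy = sum((b - my) ** 2 for b in ys) / trials
    return cov / math.sqrt(vx * vy)

rng = random.Random(0)
r = sample_corr(3, 5, 50000, rng)
print(round(r, 2), round(math.sqrt(3 / 5), 2))  # the two should agree
```

Replacing `rng.uniform` with any other finite-variance generator leaves the result unchanged, which is the point of the theorem.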
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the connection conditions. This makes it possible to apply MENT to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamical equilibrium.
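A minimal sketch of the principle behind such techniques (a generic maximum-entropy fit, not Belashev's MENT implementation): among all distributions on a finite support with a prescribed mean, the entropy maximizer has the Gibbs form p_i ∝ exp(lam * v_i), and the multiplier lam can be found by bisection on the mean constraint.

```python
# Generic maximum-entropy fit under a single moment constraint (illustrative
# sketch, not the MENT code from the abstract above). The entropy maximizer
# subject to a fixed mean is p_i = exp(lam*v_i)/Z; bisection finds lam.
import math

def maxent_mean(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    for _ in range(iters):          # mean_for is increasing in lam
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_mean(list(range(6)), 2.0)  # support {0,...,5}, mean fixed at 2
print(round(sum(v * pi for v, pi in enumerate(p)), 6))
```

Additional constraints (higher moments, connection conditions) add one multiplier each and turn the bisection into a multidimensional root-find, but the exponential-family form of the solution is unchanged.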
Probabilistic logic programming under maximum entropy
Lukasiewicz, Thomas; Kern-Isberner, Gabriele
1999-01-01
In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming characterization for the problem of deciding whether a probabilistic logic program is satisfiable. Finally, and as a main result of this paper, we introduce an efficient technique for approximative p...
Maximum confidence measurements via probabilistic quantum cloning
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented
Linear inverse problems the maximum entropy connection
Gzyl, Henryk
2011-01-01
This book describes a useful tool for solving linear inverse problems subject to convex constraints. The method of maximum entropy in the mean automatically takes care of the constraints. It consists of a technique for transforming a large dimensional inverse problem into a small dimensional non-linear variational problem. A variety of mathematical aspects of the maximum entropy method are explored as well. Supplementary materials are not included with eBook edition (CD-ROM)
Simulated Maximum Likelihood using Tilted Importance Sampling
Christian N. Brinch
2008-01-01
Abstract: This paper develops the important distinction between tilted and simple importance sampling as methods for simulating likelihood functions for use in simulated maximum likelihood. It is shown that tilted importance sampling removes a lower bound to simulation error for given importance sample size that is inherent in simulated maximum likelihood using simple importance sampling, the main method for simulating likelihood functions in the statistics literature. In addit...
Which quantile is the most informative? Maximum likelihood, maximum entropy and quantile regression
Bera, A. K.; Galvao Jr, A. F.; Montes-Rojas, G.; Park, S. Y.
2010-01-01
This paper studies the connections among quantile regression, the asymmetric Laplace distribution, maximum likelihood and maximum entropy. We show that the maximum likelihood problem is equivalent to the solution of a maximum entropy problem where we impose moment constraints given by the joint consideration of the mean and median. Using the resulting score functions we propose an estimator based on the joint estimating equations. This approach delivers estimates for the slope parameters toge...
Constraining the braking indices of magnetars
Gao, Z. F.; Li, X.-D.; Wang, N.; Yuan, J. P.; Wang, P.; Peng, Q. H.; Du, Y. J.
2016-02-01
Because of the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index n of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of the mean braking indices of eight magnetars with SNRs, and find that they cluster in the range of 1-42. Five magnetars have smaller mean braking indices of 1 < n < 3, which is attributed to wind-aided braking. The larger mean braking indices of n > 3 for the other three magnetars are attributed to the decay of the external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with 1 < n < 3, and the magnetic field decay rates for the magnetars with n > 3, within the updated magneto-thermal evolution models. Although the constrained range of the magnetars' braking indices is tentative, as a result of the uncertainties in the SNR ages due to distance uncertainties and the unknown conditions of the expanding shells, our method provides an effective way to constrain the magnetars' braking indices if the measurements of the SNR ages are reliable, which can be improved by future observations.
Constraining the Braking Indices of Magnetars
Gao, Z F; Wang, N; Yuan, J P; Peng, Q H; Du, Y J
2015-01-01
Due to the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index $n$ of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of $n$ of nine magnetars with SNRs, and find that they cluster in a range of $1\sim 41$. Six magnetars have smaller braking indices of $1 < n < 3$, which is attributed to wind-aided braking. The larger braking indices of $n > 3$ for the other three magnetars are attributed to the decay of the external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with $1 < n < 3$, and the magnetic field decay rates for the magnetars with $n > 3$, within the updated magneto-thermal evolution models. We point out that there could be some connections between a magnetar's anti-glitch event and its braking index, and the magnitude of $n$ should be taken into account when explaining the event. Although the constrained range of the magnetars' braking indices is tentative, our method provides an effective way to constrain the magnetars' braking indices if th...
Pole shifting with constrained output feedback
The concept of pole placement plays an important role in linear, multi-variable control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical engineering constraints such as gain limitation and controller structure to be introduced right into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function which must be minimized in closed loop. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated by designing controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure for the overall performance of a control system
Constraining dark matter through 21-cm observations
Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.
2007-05-01
Beyond reionization epoch cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constrains at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and future facilities as Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.
Constraining the braneworld with gravitational wave observations
McWilliams, Sean T
2009-01-01
Braneworld models containing large extra dimensions may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model, the asymptotic AdS radius of curvature of the extra dimension supports a single bound state of the massless graviton on the brane, thereby avoiding gross violations of Newton's law. However, one possible consequence of this model is an enormous increase in the amount of Hawking radiation emitted by black holes. This consequence has been employed by other authors to attempt to constrain the AdS radius of curvature through the observation of black holes. I present two novel methods for constraining the AdS curvature. The first method results from the effect of this enhanced mass loss on the event rate for extreme mass ratio inspirals (EMRIs) detected by the space-based LISA interferometer. The second method results from the observation of an individually resolvable galactic black hole binary with LISA. I show that the ...
Changes in epistemic frameworks: Random or constrained?
Ananka Loubser
2012-11-01
Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science, three main approaches to framework change can be detected in the humanist tradition: (1) in both the pre-theoretical and theoretical domains, changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton); (2) changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard); (3) between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article tried to achieve more clarity on how changes in pre-scientific frameworks occur, and provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern that is neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.
Constraining the halo mass function with observations
Castro, Tiago; Marra, Valerio; Quartin, Miguel
2016-08-01
The abundances of dark matter halos in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behavior through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can constrain directly the HMF. The observables we consider are galaxy cluster number counts, galaxy cluster power spectrum and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the halo mass function. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
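The demarking-point rule reported above is simple enough to state as code. A sketch for the right femur, assuming the study's lengths are in millimetres (units are not stated in the abstract; femoral lengths around 450 make mm the natural reading):

```python
# Demarking-point classification for the right femur, using the thresholds
# reported in the abstract above. Units assumed to be millimetres.

MALE_DP_RIGHT = 476.70    # above this: definitely male
FEMALE_DP_RIGHT = 379.99  # below this: definitely female

def classify_right_femur(max_length_mm):
    if max_length_mm > MALE_DP_RIGHT:
        return "male"
    if max_length_mm < FEMALE_DP_RIGHT:
        return "female"
    return "indeterminate"  # overlap zone: most specimens land here

print(classify_right_femur(480.0))  # male
print(classify_right_femur(370.0))  # female
print(classify_right_femur(450.0))  # indeterminate
```

The large indeterminate zone is why the study reports that only a small percentage of femora are sexed by maximum length alone.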
Maximum Entropy Approaches to Living Neural Networks
John M. Beggs
2010-01-01
Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
A multi-level solver for Gaussian constrained CMB realizations
Seljebotn, D S; Jewell, J B; Eriksen, H K; Bull, P
2013-01-01
We present a multi-level solver for drawing constrained Gaussian realizations or finding the maximum likelihood estimate of the CMB sky, given noisy sky maps with partial sky coverage. The method converges substantially faster than existing Conjugate Gradient (CG) methods for the same problem. For instance, for the 143 GHz Planck frequency channel, only 3 multi-level W-cycles result in an absolute error smaller than 1 microkelvin in any pixel. Using 16 CPU cores, this translates to a computational expense of 6 minutes wall time per realization, plus 8 minutes wall time for a power spectrum-dependent precomputation. Each additional W-cycle reduces the error by more than an order of magnitude, at an additional computational cost of 2 minutes. For comparison, we have never been able to achieve similar absolute convergence with conventional CG methods for this high signal-to-noise data set, even after thousands of CG iterations and employing expensive preconditioners. The solver is part of the Commander 2 code, w...
Maximum magnitude earthquakes induced by fluid injection
McGarr, A.
2014-02-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
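The moment bound stated above turns into a one-line magnitude estimate. A hedged sketch: the bound itself (maximum seismic moment = modulus of rigidity times injected volume) is from the abstract, but the rigidity value 3e10 Pa is a typical crustal assumption, and the conversion to moment magnitude uses the standard Mw = (2/3)(log10 M0 - 9.1) relation in SI units, neither of which is specified here.

```python
# Upper bound on induced-earthquake magnitude from injected fluid volume,
# per the bound above: M0_max = G * dV (N*m). The rigidity G = 3e10 Pa is
# an assumed typical value; Mw uses the standard moment-magnitude relation.
import math

def max_induced_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    m0_max = rigidity_pa * injected_volume_m3      # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# e.g. 1e5 m^3 of injected wastewater:
print(round(max_induced_magnitude(1.0e5), 2))  # ~4.25
```

Because the bound is linear in volume while magnitude is logarithmic, each tenfold increase in injected volume raises the maximum magnitude by only 2/3 of a unit.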
A constrained-transport magnetohydrodynamics algorithm with near-spectral resolution
Maron, Jason; Oishi, Jeffrey
2007-01-01
Numerical simulations including magnetic fields have become important in many fields of astrophysics. Evolution of magnetic fields by the constrained transport algorithm preserves magnetic divergence to machine precision, and thus represents one preferred method for the inclusion of magnetic fields in simulations. We show that constrained transport can be implemented with volume-centered fields and hyperresistivity on a high-order finite difference stencil. Additionally, the finite-difference coefficients can be tuned to enhance high-wavenumber resolution. Similar techniques can be used for the interpolations required for dealiasing corrections at high wavenumber. Together, these measures yield an algorithm with a wavenumber resolution that approaches the theoretical maximum achieved by spectral algorithms. Because this algorithm uses finite differences instead of fast Fourier transforms, it runs faster and isn't restricted to periodic boundary conditions. Also, since the finite differences are spatially loca...
Perceived visual speed constrained by image segmentation
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Sampling Motif-Constrained Ensembles of Networks
Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.
2015-10-01
The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but which often fail in practice, either due to model inconsistency or due to the impossibility to sample networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motifs counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.
Sampling motif-constrained ensembles of networks
Fischer, Rico; Peixoto, Tiago P; Altmann, Eduardo G
2015-01-01
The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but which often fail in practice, either due to model inconsistency, or due to the impossibility to sample networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this paper we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motifs counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.
Constraining dark sectors with monojets and dijets
We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.
Shape space exploration of constrained meshes
Yang, Yongliang
2011-01-01
We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc.
How alive is constrained SUSY really?
Bechtle, Philip; Dreiner, Herbert K; Hamer, Matthias; Krämer, Michael; O'Leary, Ben; Porod, Werner; Sarrazin, Björn; Stefaniak, Tim; Uhlenbrock, Mathias; Wienemann, Peter
2014-01-01
Constrained supersymmetric models like the CMSSM might look less attractive nowadays because of fine tuning arguments. They also might look less probable in terms of Bayesian statistics. The question how well the model under study describes the data, however, is answered by frequentist p-values. Thus, for the first time, we calculate a p-value for a supersymmetric model by performing dedicated global toy fits. We combine constraints from low-energy and astrophysical observables, Higgs boson mass and rate measurements as well as the non-observation of new physics in searches for supersymmetry at the LHC. Using the framework Fittino, we perform global fits of the CMSSM to the toy data and find that this model is excluded at more than 95% confidence level.
A Constrained Tectonics Model for Coronal Heating
Ng, C S; 10.1086/525518
2011-01-01
An analytical and numerical treatment is given of a constrained version of the tectonics model developed by Priest, Heyvaerts, & Title [2002]. We begin with an initial uniform magnetic field ${\bf B} = B_0 \hat{\bf z}$ that is line-tied at the surfaces $z = 0$ and $z = L$. This initial configuration is twisted by photospheric footpoint motion that is assumed to depend on only one coordinate ($x$) transverse to the initial magnetic field. The geometric constraints imposed by our assumption preclude the occurrence of reconnection and secondary instabilities, but enable us to follow for long times the dissipation of energy due to the effects of resistivity and viscosity. In this limit, we demonstrate that when the coherence time of random photospheric footpoint motion is smaller by several orders of magnitude than the resistive diffusion time, the heating due to Ohmic and viscous dissipation becomes independent of the resistivity of the plasma. Furthermore, we obtain scaling relations that su...
Constraining decaying dark matter with neutron stars
Perez-Garcia, M Angeles
2015-01-01
We propose that the existing population of neutron stars in the galaxy can help constrain the nature of decaying dark matter. The amount of decaying dark matter accumulated in the central regions of neutron stars and the energy deposition rate from decays may set a limit on the neutron star survival rate against transitions to more compact stars and, correspondingly, on the dark matter particle decay time, $\tau_{\chi}$. We find that for lifetimes $\tau_{\chi} \lesssim 6.3\times 10^{15}$ s, we can exclude particle masses $(m_{\chi}/\rm TeV) \gtrsim 50$ or $(m_{\chi}/\rm TeV) \gtrsim 8 \times 10^2$ in the bosonic and fermionic cases, respectively. In addition, we also compare our findings with the present status of allowed phase space regions using kinematical variables for decaying dark matter, obtaining complementary results.
On Quantum Communication Channels with Constrained Inputs
Holevo, A S
1997-01-01
The purpose of this work is to extend the result of previous papers quant-ph/9611023, quant-ph/9703013 to quantum channels with additive constraints onto the input signal, by showing that the capacity of such channel is equal to the supremum of the entropy bound with respect to all apriori distributions satisfying the constraint. We also make an extension to channels with continuous alphabet. As an application we prove the formula for the capacity of the quantum Gaussian channel with constrained energy of the signal, establishing the asymptotic equivalence of this channel to the semiclassical photon channel. We also study the lower bounds for the reliability function of the pure-state Gaussian channel.
Disappearance and Creation of Constrained Amorphous Phase
Cebe, Peggy; Lu, Sharon X.
1997-03-01
We report observation of the disappearance and recreation of rigid, or constrained, amorphous phase by sequential thermal annealing. Temperature-modulated differential scanning calorimetry (MDSC) is used to study the glass transition and lower melting endotherm after annealing. Cold crystallization of poly(phenylene sulfide), PPS, at a temperature just above Tg creates an initial large fraction of rigid amorphous phase (RAP). Brief, rapid annealing to a higher temperature causes RAP to almost completely disappear. Subsequent reannealing at the original lower temperature restores RAP to its original value. At the same time that RAP is being removed, Tg decreases; when RAP is restored, Tg also returns to its initial value. The crystal fraction remains unaffected by the annealing sequence.
Multiple Clustering Views via Constrained Projections
Dang, Xuan-Hong; Assent, Ira; Bailey, James
2012-01-01
Clustering, the grouping of data based on mutual similarity, is often used as one of the principal tools to analyze and understand data. Unfortunately, most conventional techniques aim at finding only a single clustering over the data. For many practical applications, especially those described in high-dimensional data, it is common to see that the data can be grouped in different yet meaningful ways. This gives rise to the recently emerging research area of discovering alternative clusterings. In this preliminary work, we propose a novel framework to generate multiple clustering views. The framework relies on a constrained data projection approach by which we ensure that a novel alternative clustering being found is not only qualitatively strong but also distinctively different from a reference clustering solution. We demonstrate the potential of the proposed framework using both...
Statistical mechanics of budget-constrained auctions
Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.
2009-07-01
Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.
Scheduling of resource-constrained projects
Klein, Robert
2000-01-01
Project management has become a widespread instrument enabling organizations to efficiently master the challenges of steadily shortening product life cycles, global markets and decreasing profit margins. With projects increasing in size and complexity, their planning and control represents one of the most crucial management tasks. This is especially true for scheduling, which is concerned with establishing execution dates for the sub-activities to be performed in order to complete the project. The ability to manage projects where resources must be allocated between concurrent projects or even sub-activities of a single project requires the use of commercial project management software packages. However, the results yielded by the solution procedures included are often rather unsatisfactory. Scheduling of Resource-Constrained Projects develops more efficient procedures, which can easily be integrated into software packages by incorporated programming languages, and thus should be of great interest for practiti...
Constraining dark sectors with monojets and dijets
Chala, Mikael; Kahlhoefer, Felix; Nardini, Germano; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); McCullough, Matthew [European Organization for Nuclear Research (CERN), Geneva (Switzerland). Theory Div.
2015-03-15
We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever - precisely due to its sizeable interactions with the visible sector - the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.
Maximum-Bandwidth Node-Disjoint Paths
Mostafa H. Dahshan
2012-03-01
This paper presents a new method for finding the node-disjoint paths with maximum combined bandwidth in communication networks. This problem is NP-complete and can be optimally solved in exponential time using integer linear programming (ILP). The presented method uses a maximum-cost variant of the Dijkstra algorithm and a virtual-node representation to obtain the maximum-bandwidth node-disjoint path. Through several simulations, we compare the performance of our method to a modern heuristic technique and to the ILP solution. We show that our proposed method produces results that are almost identical to those of ILP in a significantly lower, polynomial execution time.
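The maximum-bandwidth (widest-path) Dijkstra variant mentioned in the abstract can be sketched as follows. This is an illustrative single-path bottleneck search only; the paper's virtual-node construction for node-disjoint path pairs is not reproduced, and all names are chosen here for illustration:

```python
import heapq

def widest_path(graph, src, dst):
    """Dijkstra variant maximizing the bottleneck bandwidth of a path.

    graph: dict mapping node -> list of (neighbor, bandwidth) pairs.
    Returns the maximum achievable bottleneck bandwidth from src to dst,
    or 0 if dst is unreachable."""
    best = {src: float("inf")}       # best known bottleneck to each node
    heap = [(-float("inf"), src)]    # max-heap via negated widths
    while heap:
        neg_width, node = heapq.heappop(heap)
        width = -neg_width
        if node == dst:
            return width
        if width < best.get(node, 0):
            continue                 # stale heap entry, skip it
        for neighbor, bandwidth in graph.get(node, []):
            cand = min(width, bandwidth)  # bottleneck of the extended path
            if cand > best.get(neighbor, 0):
                best[neighbor] = cand
                heapq.heappush(heap, (-cand, neighbor))
    return 0
```

The only change from shortest-path Dijkstra is the relaxation rule: path cost is the minimum edge bandwidth rather than a sum, and the heap pops the widest candidate first.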
On the Maximum Enstrophy Growth in Burgers Equation
The regularity of solutions of the three-dimensional Navier-Stokes equation is controlled by the boundedness of the enstrophy ε. The best estimate available to date for its rate of growth is dε/dt ≤ Cε³, where C > 0, which was recently found to be sharp by Lu and Doering (2008). Applying straightforward time-integration to this instantaneous estimate leads to the possibility of loss of regularity in finite time, the so-called blow-up, and therefore the central question is to establish sharpness of such finite-time bounds. We consider an analogous problem for the Burgers equation, which is used as a toy model. The problem of saturation of finite-time estimates for the enstrophy growth is stated as a PDE-constrained optimization problem, where the control variable φ represents the initial condition, which is solved numerically for a wide range of time windows T > 0 and initial enstrophies ε₀. We find that the maximum enstrophy growth in finite time scales as ε₀^α with α ≈ 3/2. The exponent is smaller than the α = 3 predicted by analytic means, therefore suggesting a lack of sharpness of the analytical estimates.
Reconstructing the history of dark energy using maximum entropy
Zunckel, C
2007-01-01
We present a Bayesian technique based on a maximum entropy method to reconstruct the dark energy equation of state $w(z)$ in a non-parametric way. This MaxEnt technique allows us to incorporate relevant prior information while adjusting the degree of smoothing of the reconstruction in response to the structure present in the data. After demonstrating the method on synthetic data, we apply it to current cosmological data, separately analysing type Ia supernovae measurements from the HST/GOODS program and the first year Supernovae Legacy Survey (SNLS), complemented by cosmic microwave background and baryonic acoustic oscillation data. We find that the SNLS data are compatible with $w(z) = -1$ at all redshifts $0 \leq z \lesssim 1100$, with error bars of order 20% for the most constraining choice of priors and model. The HST/GOODS data exhibit a slight (about $1\sigma$ significance) preference for $w>-1$ at $z\sim 0.5$ and a drift towards $w>-1$ at larger redshifts, which however is not robust with respect to changes ...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.;
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used or ... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...
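The First-Fit-Increasing heuristic named in the abstract can be sketched in a few lines. This shows only the placement rule itself, not the paper's competitive analysis for the maximum-resource objective; the interface is illustrative:

```python
def first_fit_increasing(items, capacity):
    """First-Fit-Increasing bin packing: sort items by size ascending,
    then place each item into the first bin where it fits, opening a
    new bin when none fits. Returns the list of bins (lists of sizes)."""
    bins = []
    for item in sorted(items):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            # no existing bin can hold this item: open a new one
            bins.append([item])
    return bins
```

First-Fit-Decreasing is the same loop with `sorted(items, reverse=True)`; the two orderings behave very differently under the maximum-resource objective studied in the paper.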
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum specific runoff; 1 : 2 000 000
On this map the maximum specific runoff (map scale 1 : 2 000 000) on the territory of the Slovak Republic is shown. Isolines express the maximum specific runoff (m³ s⁻¹ km⁻²) with an occurrence probability of once in 100 years. These specific runoffs derive from hydrological orders for the referential period of 1931 - 1980. Processing was based on 140 hydrological orders of catchments with an area under 100 km² and approximately 40 catchments with an area below 250 km². (authors)
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu;
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced ...
Topics in Bayesian statistics and maximum entropy
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Maximum earthquake magnitudes along different sections of the North Anatolian fault zone
Bohnhoff, Marco; Martínez-Garzón, Patricia; Bulut, Fatih; Stierle, Eva; Ben-Zion, Yehuda
2016-04-01
Constraining the maximum likely magnitude of future earthquakes on continental transform faults has fundamental consequences for the expected seismic hazard. Since the recurrence time for those earthquakes is typically longer than a century, such estimates rely primarily on well-documented historical earthquake catalogs, when available. Here we discuss the maximum observed earthquake magnitudes along different sections of the North Anatolian Fault Zone (NAFZ) in relation to the age of the fault activity, cumulative offset, slip rate and maximum length of coherent fault segments. The findings are based on a newly compiled catalog of historical earthquakes in the region, using the extensive literary sources that exist owing to the long civilization record. We find that the largest M7.8-8.0 earthquakes are exclusively observed along the older eastern part of the NAFZ that also has longer coherent fault segments. In contrast, the maximum observed events on the younger western part where the fault branches into two or more strands are smaller. No first-order relations between maximum magnitudes and fault offset or slip rates are found. The results suggest that the maximum expected earthquake magnitude in the densely populated Marmara-Istanbul region would probably not exceed M7.5. The findings are consistent with available knowledge for the San Andreas Fault and Dead Sea Transform, and can help in estimating hazard potential associated with different sections of large transform faults.
Probable maximum floods: Making a collective judgment
A critical review is presented of current procedures for estimation of the probable maximum flood (PMF). The historical development of the concept and the flaws in current PMF methodology are discussed. The probable maximum flood concept has been criticized by eminent hydrologists on the basis that it violates scientific principles, and has been questioned from a philosophical viewpoint particularly with regard to the implications of a no-risk criterion. The PMF is not a probable maximum flood, and is less by an arbitrary amount. A more appropriate term would be 'conceivable catastrophic flood'. The methodology for estimating probable maximum precipitation is reasonably well defined and has to a certain extent been verified. The methodology for estimating PMF is not well defined and has not been verified. The use of the PMF concept primarily reflects a need for engineering expediency and does not meet the standards for scientific truth. As the PMF is an arbitrary concept, collective judgement is an important component of making PMF estimates. The Canadian Dam Safety Association should play a leading role in developing guidelines and standards. 18 refs
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
5 CFR 1600.22 - Maximum contributions.
2010-01-01
... election. (3) A participant who has both a civilian and a uniformed services account can make catch-up... contribution will be limited only by the provisions of the Internal Revenue Code (26 U.S.C.). (2) CSRS and uniformed services percentage limit. The maximum employee contribution from basic pay for a CSRS...
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Connectome graphs and maximum flow problems
Daugulis, Peteris
2014-01-01
We propose to study maximum flow problems for connectome graphs. We suggest a few computational problems: finding vertex pairs with maximal flow, finding new edges which would increase the maximal flow. Initial computation results for some publicly available connectome graphs are described.
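The maximum flow problems proposed above can be computed with any standard algorithm; a minimal Edmonds-Karp sketch is given below. This is illustrative only (a real connectome graph would be loaded from published data, and the edge list here is made up):

```python
from collections import defaultdict, deque

def max_flow(edges, source, sink):
    """Edmonds-Karp maximum flow on a directed graph.

    edges: iterable of (u, v, capacity) triples.
    Returns the value of a maximum source-sink flow."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow          # no augmenting path left: flow is maximal
        # recover the path, find its bottleneck, and augment along it
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck   # residual (reverse) capacity
        flow += bottleneck
```

Finding vertex pairs with maximal flow, as suggested in the abstract, would then amount to calling this routine over candidate source-sink pairs.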
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.; Bach Andersen, J.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum...
Maximum Possible Transverse Velocity in Special Relativity.
Medhekar, Sarang
1991-01-01
Using a physical picture, an expression for the maximum possible transverse velocity and orientation required for that by a linear emitter in special theory of relativity has been derived. A differential calculus method is also used to derive the expression. (Author/KR)
Comparing maximum pressures in internal combustion engines
Sparrow, Stanwood W; Lee, Stephen M
1922-01-01
Thin metal diaphragms form a satisfactory means for comparing maximum pressures in internal combustion engines. The diaphragm is clamped between two metal washers in a spark plug shell and its thickness is chosen such that, when subjected to explosion pressure, the exposed portion will be sheared from the rim in a short time.
On maximum cycle packings in polyhedral graphs
Peter Recht; Stefan Stehling
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Instance optimality of the adaptive maximum strategy
Diening, Lars; Kreuzer, Christian; Stevenson, Rob
2013-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) `maximum marking strategy' is `instance optimal' for the `total error', being the sum of the energy error and the oscillation. This result will be derived in the model setting of Poisson's equation on a polygon, linear finite elements, and conforming triangulations created by newest vertex bisection.
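The classical maximum marking strategy that the paper modifies can be stated in one line: mark every element whose error indicator is within a factor θ of the largest indicator. A minimal sketch (the paper's modification and the surrounding adaptive loop are not reproduced; names are illustrative):

```python
def maximum_marking(indicators, theta=0.5):
    """Maximum marking strategy from adaptive finite element methods:
    mark every element whose a posteriori error indicator is at least
    theta times the largest indicator.

    indicators: dict mapping element id -> nonnegative error indicator.
    Returns the set of marked element ids."""
    eta_max = max(indicators.values())
    return {elem for elem, eta in indicators.items() if eta >= theta * eta_max}
```

In an adaptive loop this selection step sits between error estimation and refinement (newest vertex bisection in the paper's setting).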
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
... in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the...
The 2011 Northern Hemisphere Solar Maximum
Altrock, Richard C.
2013-01-01
Altrock (1997, Solar Phys. 170, 411) discusses a process in which Fe XIV 530.3 nm emission features appear at high latitudes and gradually migrate towards the equator, merging with the sunspot "butterfly diagram". In cycles 21 - 23 solar maximum occurred when the number of Fe XIV emission regions per day > 0.19 (averaged over 365 days and both hemispheres) first reached latitudes 18°, 21° and 21°, for an average of 20° ± 1.7°. Another high-latitude process is the "Rush to the Poles" of polar crown prominences and their associated coronal emission, including Fe XIV. The Rush is a harbinger of solar maximum (cf. Altrock, 2003, Solar Phys. 216, 343). Solar maximum in cycles 21 - 23 occurred when the center line of the Rush reached a critical latitude. These latitudes were 76°, 74° and 78°, respectively, for an average of 76° ± 2°. Cycle 24 displays an intermittent Rush that is only well-defined in the northern hemisphere. In 2009 an initial slope of 4.6°/yr was found in the north, compared to an average of 9.4 ± 1.7 °/yr in the previous three cycles. However, in 2010 the slope increased to 7.5°/yr. Extending that rate to 76° ± 2° indicates that the solar maximum smoothed sunspot number in the northern hemisphere already occurred at 2011.6 ± 0.3. In the southern hemisphere the Rush is very poorly defined. A linear fit to several maxima would reach 76° in the south at 2014.2. In 1999, persistent Fe XIV coronal emission connected with the ESC appeared near 70° in the north and began migrating towards the equator at a rate 40% slower than the previous two solar cycles. A fit to the early ESC would not reach 20° until 2019.8. However, in 2009 and 2010 an acceleration occurred. Currently the greatest number of emission regions is at 21° in the north and 24° in the south. This indicates that solar maximum is occurring now in the north but not yet in the south. The latest global smoothed sunspot numbers show an inflection point in late 2011, which...
Accelerated gradient methods for constrained image deblurring
In this paper we propose a special gradient projection method for the image deblurring problem, in the framework of the maximum likelihood approach. We present the method in a very general form and we give convergence results under standard assumptions. Then we consider the deblurring problem, and the generality of the proposed algorithm allows us to add an energy conservation constraint to the maximum likelihood problem. In order to improve the convergence rate, we devise appropriate scaling strategies and steplength updating rules, especially designed for this application. The effectiveness of the method is evaluated by means of a computational study on astronomical images corrupted by Poisson noise. Comparisons with standard methods for image restoration, such as the expectation maximization algorithm, are also reported.
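The gradient projection idea can be illustrated with a deliberately simplified sketch: the paper works with a Poisson maximum-likelihood objective, scaled steps and an energy constraint, whereas this toy version minimizes a least-squares objective with only a nonnegativity constraint, so it shows the projection mechanics and nothing more:

```python
import numpy as np

def projected_gradient_deblur(A, b, steps=200, step_size=None):
    """Toy gradient projection for constrained deblurring:
    minimize ||A x - b||^2 subject to x >= 0, by gradient steps
    followed by projection onto the nonnegative orthant.

    A: blur matrix (here dense for simplicity), b: observed image."""
    if step_size is None:
        # a safe constant step: 1 / ||A||_2^2 (spectral norm squared)
        step_size = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)            # gradient of the quadratic
        x = np.maximum(x - step_size * grad, 0.0)  # project onto x >= 0
    return x
```

The scaling strategies and steplength rules studied in the paper replace the constant `step_size` above to accelerate exactly this kind of iteration.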
Maximum entropy distribution of stock price fluctuations
Bartiromo, Rosario
2013-04-01
In this paper we propose to use the principle of absence of arbitrage opportunities in its entropic interpretation to obtain the distribution of stock price fluctuations by maximizing its information entropy. We show that this approach leads to a physical description of the underlying dynamics as a random walk characterized by a stochastic diffusion coefficient and constrained to a given value of the expected volatility, in this way taking into account the information provided by the existence of an option market. The model is validated by a comprehensive comparison with observed distributions of both price return and diffusion coefficient. Expected volatility is the only parameter in the model and can be obtained by analysing option prices. We give an analytic formulation of the probability density function for price returns which can be used to extract expected volatility from stock option data.
Hard Instances of the Constrained Discrete Logarithm Problem
Mironov, Ilya; Mityagin, Anton; Nissim, Kobbi
2006-01-01
The discrete logarithm problem (DLP) generalizes to the constrained DLP, where the secret exponent $x$ belongs to a set known to the attacker. The complexity of generic algorithms for solving the constrained DLP depends on the choice of the set. Motivated by cryptographic applications, we study sets with succinct representation for which the constrained DLP is hard. We draw on earlier results due to Erdős et al. and Schnorr, and develop geometric tools such as a generalized Menelaus' theorem for ...
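The simplest constrained case, an exponent known to lie in an interval, is already solvable by the standard baby-step giant-step method in O(√bound) group operations; the sketch below is this textbook technique, not one of the paper's constructions:

```python
from math import isqrt

def bsgs(g, h, p, bound):
    """Baby-step giant-step for h = g^x (mod p), with the exponent
    constrained to 0 <= x < bound. Returns x, or None if no solution.
    Runs in O(sqrt(bound)) multiplications and memory."""
    m = isqrt(bound) + 1
    table = {}
    e = 1
    for j in range(m):              # baby steps: store g^j -> j
        table.setdefault(e, j)
        e = e * g % p
    factor = pow(g, -m, p)          # g^{-m}; needs Python >= 3.8
    gamma = h
    for i in range(m):              # giant steps: h * g^{-im}
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None
```

For structured sets with succinct representation, as studied in the paper, the question is precisely whether generic attacks can beat this square-root baseline.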
A simple procedure for computing strong constrained egalitarian allocations
Francesc Llerena; Carles Rafels; Cori Vilella
2015-01-01
This paper deals with the strong constrained egalitarian solution introduced by Dutta and Ray (1991). We show that this solution yields the weak constrained egalitarian allocations (Dutta and Ray, 1989) associated with a finite family of convex games. This relationship makes it possible to define a systematic way of computing the strong constrained egalitarian allocations for any arbitrary game, using the well-known Dutta-Ray's algorithm for convex games. We also characterize non-emptiness and ...
Transient stability-constrained optimal power flow
Bettiol, Arlan; Ruiz-Vega, Daniel; Ernst, Damien; Wehenkel, Louis; Pavella, Mania
1999-01-01
This paper proposes a new approach able to maximize the interface flow limits in power systems and to find a new operating state that is secure with respect to both dynamic (transient stability) and static security constraints. It combines the Maximum Allowable Transfer (MAT) method, recently developed for the simultaneous control of a set of contingencies, and an Optimal Power Flow (OPF) method for maximizing the interface power flow. The approach and its performance are illustrated by ...
Constrained Subjective Assessment of Student Learning
Saliu, Sokol
2005-09-01
Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a past course in microcomputer system design (MSD). CQA criteria were articulated in fuzzy terms and sets, and the assessment procedure was cast as a fuzzy inference rule base. An interactive graphic interface provided for transparent assessment, student "backwash," and support to the teacher when compiling the tests. Grade intervals, obtained from a departmental poll, were used to compile a fuzzy "grade" set. Assessment results were compared to those of a former standard method and to those of a modified version of it (but with fewer criteria). The three methods yielded similar results, supporting the application of CQA. The method improved assessment reliability by means of the consensus embedded in the fuzzy grade set, and improved assessment validity by integrating fuzzy criteria into the assessment procedure.
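A small fragment of the fuzzification step can be sketched as follows. The membership functions, grade names, and breakpoints below are invented for illustration; they are not the CQA criteria or the grade intervals obtained from the departmental poll described in the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy grade sets over a 0-100 criterion score.
grade_sets = {
    "fail":      lambda s: tri(s, -1, 0, 50),
    "pass":      lambda s: tri(s, 40, 60, 80),
    "excellent": lambda s: tri(s, 70, 100, 101),
}

def fuzzify(score):
    """Degree to which a criterion score belongs to each fuzzy grade."""
    return {g: f(score) for g, f in grade_sets.items()}

degrees = fuzzify(75)
```

A score of 75 belongs partly to "pass" and partly to "excellent": this graded membership, rather than a hard cutoff, is what lets a fuzzy rule base combine several criteria into a final assessment.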
Constraining the roughness degree of slip heterogeneity
Causse, Mathieu
2010-05-07
This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip spectrum corner wave numbers (kc) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of kc by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency domain inversion method. These tests reveal that due to smoothing constraints used to stabilize the inversion process, kc tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between kc and the peak ground acceleration (PGA), using a k⁻² kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that kc follows a lognormal distribution, with similar standard deviations for both methods.
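The lognormal claim is mechanically simple to work with: the maximum-likelihood fit is just the mean and standard deviation of log kc. The sketch below demonstrates that on synthetic data; the mu/sigma values are arbitrary assumptions, not the paper's estimates from its 30-model subset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic corner-wavenumber sample, lognormal by construction
# (parameter values are illustrative only).
mu_true, sigma_true = -1.0, 0.5
kc = rng.lognormal(mu_true, sigma_true, 5000)

# Maximum-likelihood lognormal fit = moments of log(kc).
log_kc = np.log(kc)
mu_hat, sigma_hat = log_kc.mean(), log_kc.std()
```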