KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method
International Nuclear Information System (INIS)
Westley, G.W.
1975-01-01
1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user.
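The variable metric idea described in this record can be sketched as a quasi-Newton iteration. This is a minimal, unconstrained illustration using the standard BFGS inverse-Hessian update; KEELE's handling of the linear constraints is not reproduced, and all function and variable names below are ours:

```python
import numpy as np

def variable_metric_minimize(f, grad, x0, iters=100):
    """Minimize a smooth function via a variable metric (BFGS-style) method.

    The search direction is d = -H @ g, where H approximates the inverse
    Hessian and is updated from step/gradient differences at each iteration.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                       # initial metric: identity
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        d = -H @ g                      # variable metric search direction
        # Armijo backtracking line search
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                  # BFGS update keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

On a convex quadratic f(x) = 0.5 x^T A x - b^T x the update drives H toward the true inverse Hessian, which is exactly the approximation property the abstract describes.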
Directory of Open Access Journals (Sweden)
Nelson Maculan
2003-01-01
Full Text Available We present integer linear models with a polynomial number of variables and constraints for combinatorial optimization problems in graphs: optimum elementary cycles, optimum elementary paths and optimum tree problems.
State control of discrete-time linear systems to be bound in state variables by equality constraints
International Nuclear Information System (INIS)
Filasová, Anna; Krokavec, Dušan; Serbák, Vladimír
2014-01-01
The paper is concerned with the problem of designing the discrete-time equivalent PI controller to control the discrete-time linear systems in such a way that the closed-loop state variables satisfy the prescribed equality constraints. Since the problem is generally singular, using the standard form of the Lyapunov function and a symmetric positive definite slack matrix, the design conditions are proposed in the form of the enhanced Lyapunov inequality. The results, offering the conditions of the control existence and the optimal performance with respect to the prescribed equality constraints for square discrete-time linear systems, are illustrated with a numerical example to demonstrate the effectiveness and applicability of the considered approach.
Efficient Searching with Linear Constraints
DEFF Research Database (Denmark)
Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff
2000-01-01
We show how to preprocess a set S of points in R^d into an external memory data structure that efficiently supports linear-constraint queries. Each query is in the form of a linear constraint x_d ≤ a_0 + ∑_{i=1}^{d−1} a_i x_i; the data structure must report all the points of S that satisfy the constraint. This pr...
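For intuition, the query in this record can be answered by a brute-force scan. This is a naive O(n) sketch for illustration only; the paper's contribution is answering the same query I/O-efficiently, which is not reproduced here:

```python
import numpy as np

def linear_constraint_query(S, a):
    """Report all points p in S (shape (n, d)) satisfying the halfspace
    constraint  p[d-1] <= a[0] + sum_{i=1}^{d-1} a[i] * p[i-1].

    Brute-force linear scan over the point set.
    """
    S = np.asarray(S, dtype=float)
    a = np.asarray(a, dtype=float)
    rhs = a[0] + S[:, :-1] @ a[1:]      # right-hand side per point
    return S[S[:, -1] <= rhs]           # keep points below the hyperplane
```

In two dimensions the query is simply "report all points on or below the line y = a_0 + a_1 x".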
Linear determining equations for differential constraints
International Nuclear Information System (INIS)
Kaptsov, O V
1998-01-01
A construction of differential constraints compatible with partial differential equations is considered. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the classical determining equations used in the search for admissible Lie operators. As applications of this approach, equations of an ideal incompressible fluid and non-linear heat equations are discussed.
Compact Spectrometers Based on Linear Variable Filters
National Aeronautics and Space Administration — Demonstrate a linear-variable spectrometer with an H2RG array. Linear Variable Filter (LVF) spectrometers provide attractive resource benefits – high optical...
Linear latent variable models: the lava-package
DEFF Research Database (Denmark)
Holst, Klaus Kähler; Budtz-Jørgensen, Esben
2013-01-01
An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available We consider the problem of minimizing a convex separable logarithmic function over a region defined by a convex inequality constraint or linear equality constraint, and two-sided bounds on the variables (box constraints. Such problems are interesting from both a theoretical and a practical point of view because they arise in some mathematical programming problems as well as in various practical problems such as problems of production planning and scheduling, allocation of resources, decision making, facility location problems, and so forth. Polynomial algorithms are proposed for solving problems of this form and their convergence is proved. Some examples and results of numerical experiments are also presented.
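The structure of such problems can be illustrated with a multiplier-bisection sketch on one instance of this class: an entropy-like separable objective with a single linear equality constraint and box constraints. This is a generic textbook scheme under our own choice of objective, not the specific polynomial algorithms proposed in the record:

```python
import numpy as np

def separable_entropy_min(c, b, lo, hi, tol=1e-10):
    """Minimize  sum_i (x_i*ln(x_i) - c_i*x_i)   s.t.  sum_i x_i = b,
    lo_i <= x_i <= hi_i  (requires sum(lo) <= b <= sum(hi)).

    Separability means that for a fixed multiplier lam on the equality
    constraint the coordinatewise minimizer is x_i = exp(c_i - 1 - lam),
    clipped to its box.  Bisection on lam restores sum(x) = b.
    """
    c, lo, hi = map(np.asarray, (c, lo, hi))

    def x_of(lam):
        return np.clip(np.exp(c - 1.0 - lam), lo, hi)

    lam_lo, lam_hi = -50.0, 50.0        # assumed bracket for the multiplier
    for _ in range(200):
        lam = 0.5 * (lam_lo + lam_hi)
        if x_of(lam).sum() > b:         # sum(x) decreases as lam grows
            lam_lo = lam
        else:
            lam_hi = lam
        if lam_hi - lam_lo < tol:
            break
    return x_of(0.5 * (lam_lo + lam_hi))
```

Each bisection step costs O(n), so the overall scheme is polynomial, which is the general flavor of the algorithms the abstract refers to.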
Linear-constraint wavefront control for exoplanet coronagraphic imaging systems
Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean
2017-01-01
A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets in a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently in the High Contrast Imaging Lab at Princeton University (HCIL), a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated using numerical simulation. Instead of only constraining the average contrast over the entire search area, the new controller constrains the electric field of each single pixel using linear programming, which could lead to significant increases in the speed of the wavefront correction and also create more uniform dark holes. As a follow-up of this work, another linear-constraint controller modified from EFC is demonstrated theoretically and numerically and the lab verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with EFC and stroke minimization.
Distance and slope constraints: adaptation and variability in golf putting.
Dias, Gonçalo; Couceiro, Micael S; Barreiros, João; Clemente, Filipe M; Mendes, Rui; Martins, Fernando M
2014-07-01
The main objective of this study is to understand the adaptation to external constraints and the effects of variability in a golf putting task. We describe the adaptation of relevant variables of golf putting to the distance to the hole and to the addition of a slope. The sample consisted of 10 adult male volunteers (33.80 ± 11.89 years), right-handed and highly skilled golfers with an average handicap of 10.82. Each player performed 30 putts at distances of 2, 3 and 4 meters (90 trials in Condition 1). The participants also performed 90 trials, at the same distances, with a constraint imposed by a slope (Condition 2). The results indicate that the players change some parameters to adjust to the task constraints, namely the duration of the backswing phase, the speed of the club head and the acceleration at the moment of impact with the ball. The effects of different golf putting distances in the no-slope condition on different kinematic variables suggest a linear adjustment to distance variation that was not observed in the slope condition.
About the role of constraints in the linear relaxational behaviour of thermodynamic systems
Jongschaap, R.J.J.
1978-01-01
A formalism is presented by which the linear relaxational behaviour of thermodynamic systems can be described. Instead of using the concept of internal variables of state a set of so-called constraint equations is introduced. These equations represent structural properties of the system and turn out
Directory of Open Access Journals (Sweden)
Mariana Santos Matos Cavalca
2012-01-01
Full Text Available One of the main advantages of predictive control approaches is the capability of dealing explicitly with constraints on the manipulated and output variables. However, if the predictive control formulation does not consider model uncertainties, then the constraint satisfaction may be compromised. A solution for this inconvenience is to use robust model predictive control (RMPC strategies based on linear matrix inequalities (LMIs. However, LMI-based RMPC formulations typically consider only symmetric constraints. This paper proposes a method based on pseudoreferences to treat asymmetric output constraints in integrating SISO systems. Such technique guarantees robust constraint satisfaction and convergence of the state to the desired equilibrium point. A case study using numerical simulation indicates that satisfactory results can be achieved.
Design constraints for electron-positron linear colliders
International Nuclear Information System (INIS)
Mondelli, A.; Chernin, D.
1991-01-01
A prescription for examining the design constraints in the e+e- linear collider is presented. By specifying limits on certain key quantities, an allowed region of parameter space can be presented, hopefully clarifying some of the design options. The model starts with the parameters at the interaction point (IP), where the expressions for the luminosity, the disruption parameter, beamstrahlung, and average beam power constitute four relations among eleven IP parameters. By specifying the values of five of these quantities, and using these relationships, the unknown parameter space can be reduced to a two-dimensional space. Curves of constraint can be plotted in this space to define an allowed operating region. An accelerator model, based on a modified, scaled SLAC structure, can then be used to derive the corresponding parameter space including the constraints derived from power consumption and wake field effects. The results show that longer, lower gradient accelerators are advantageous.
Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.
Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S
2018-02-05
To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
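The parameter-reduction transformation described in this record can be sketched with standard linear algebra. This is a generic null-space reparameterization assuming only that the prior knowledge is expressed as linear equalities A·theta = b; the function names are ours, not the authors':

```python
import numpy as np

def reduce_parameters(A, b):
    """Reduce linearly constrained parameters (A @ theta = b) to free ones.

    Returns (theta_p, N): theta_p is a particular solution and the columns
    of N span the null space of A, so every admissible parameter vector is
    theta = theta_p + N @ phi  for an unconstrained phi.
    """
    A = np.asarray(A, dtype=float)
    theta_p = np.linalg.lstsq(A, b, rcond=None)[0]  # particular solution
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12 * s[0]))            # numerical rank of A
    N = Vt[rank:].T                                 # orthonormal null-space basis
    return theta_p, N
```

The reduced vector phi is exactly the "independent" parameter set the abstract mentions: an optimizer can search over phi freely, and every candidate maps back to a theta that satisfies the constraints by construction.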
International Nuclear Information System (INIS)
Chen, H.-H.; Chen, C.-S.; Lee, C.-I
2009-01-01
This paper investigates the synchronization of unidirectional and bidirectional coupled unified chaotic systems. A balanced coupling coefficient control method is presented for global asymptotic synchronization using the Lyapunov stability theorem and a minimization scheme with or without constraints. By using the result of the above analysis, the balanced coupling coefficients are then designed to achieve the chaos synchronization of linearly coupled unified chaotic systems. The feasibility and effectiveness of the proposed chaos synchronization scheme are verified via numerical simulations.
Singular Linear Differential Equations in Two Variables
Braaksma, B.L.J.; Put, M. van der
2008-01-01
The formal and analytic classification of integrable singular linear differential equations has been studied among others by R. Gerard and Y. Sibuya. We provide a simple proof of their main result, namely: For certain irregular systems in two variables there is no Stokes phenomenon, i.e. there is no
International Nuclear Information System (INIS)
Liu, Xiaolan; Zhou, Mi
2016-01-01
In this paper, a one-layer recurrent network is proposed for solving a non-smooth convex optimization subject to linear inequality constraints. Compared with the existing neural networks for optimization, the proposed neural network is capable of solving more general convex optimization with linear inequality constraints. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds.
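The flavor of such a network can be conveyed by an Euler discretization of an exact-penalty subgradient flow. This is a generic sketch, not the network proposed in the record; the penalty weight sigma plays the role of the designed parameters that must exceed a problem-dependent lower bound:

```python
import numpy as np

def penalty_flow(subgrad_f, A, b, x0, sigma=5.0, dt=0.005, steps=20000):
    """Euler-discretized flow  dx/dt = -(subgrad f(x) + sigma * A^T 1[A x > b])
    for minimizing a nonsmooth convex f subject to A x <= b.

    The indicator term is a subgradient of the exact penalty
    sigma * sum_j max(0, (A x - b)_j); for sigma large enough the
    trajectory settles near a constrained minimizer.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        active = (A @ x > b).astype(float)   # currently violated constraints
        x = x - dt * (subgrad_f(x) + sigma * (A.T @ active))
    return x
```

Because of the sign chattering inherent to subgradient flows, the final state hovers in a small neighborhood of the minimizer rather than landing on it exactly.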
Understanding constraint release in star/linear polymer blends
Shivokhin, M. E.; Van Ruymbeke, Evelyne; Bailly, Christian M E; Kouloumasis, D.; Hadjichristidis, Nikolaos; Likhtman, Alexei E.
2014-04-08
In this paper, we exploit the stochastic slip-spring model to quantitatively predict the stress relaxation dynamics of star/linear blends with well-separated longest relaxation times and we analyze the results to assess the validity limits of the two main models describing the corresponding relaxation mechanisms within the framework of the tube picture (Doi's tube dilation and Viovy's constraint release by Rouse motions of the tube). Our main objective is to understand and model the stress relaxation function of the star component in the blend. To this end, we divide its relaxation function into three zones, each of them corresponding to a different dominating relaxation mechanism. After the initial fast Rouse motions, relaxation of the star is dominated at intermediate times by the "skinny" tube (made by all topological constraints) followed by exploration of the "fat" tube (made by long-lived obstacles only). At longer times, the tube dilation picture provides the right shape for the relaxation of the stars. However, the effect of short linear chains results in time-shift factors that have never been described before. On the basis of the analysis of the different friction coefficients involved in the relaxation of the star chains, we propose an equation predicting these time-shift factors. This allows us to develop an analytical equation combining all relaxation zones, which is verified by comparison with simulation results. © 2014 American Chemical Society.
Variables as Contextual Constraints in Translating Irony
Directory of Open Access Journals (Sweden)
Babîi Oana
2015-06-01
Full Text Available The translator’s role and responsibility are high in any act of interlingual communication, and even higher when irony, an indirect and deliberately elusive form of communication, is involved in the translation process. By allowing more than one possible interpretation, irony is inevitably exposed to the risk of being misunderstood. This paper attempts to capture the complexity of translating irony, making use of theoretical frameworks provided by literary studies and translation studies. It analyses if and how the types of irony, the literary genres and the cultural, normative factors, perceived as potential contextual constraints, have an impact on the translator’s choices in rendering irony in translation, taking illustrative examples from Jonathan Swift, Oscar Wilde, Aldous Huxley and David Lodge’s works.
Signal Enhancement with Variable Span Linear Filters
DEFF Research Database (Denmark)
Benesty, Jacob; Christensen, Mads Græsbøll; Jensen, Jesper Rindom
This book introduces readers to the novel concept of variable span speech enhancement filters, and demonstrates how it can be used for effective noise reduction in various ways. Further, the book provides the accompanying Matlab code, allowing readers to easily implement the main ideas discussed. Variable span filters combine the ideas of optimal linear filters with those of subspace methods, as they involve the joint diagonalization of the correlation matrices of the desired signal and the noise. The book shows how some well-known filter designs, e.g. the minimum distortion, maximum signal-to-noise ratio, Wiener, and tradeoff filters (including their new generalizations) can be obtained using the variable span filter framework. It then illustrates how the variable span filters can be applied in various contexts, namely in single-channel STFT-based enhancement, in multichannel enhancement in both...
Signal enhancement with variable span linear filters
Benesty, Jacob; Jensen, Jesper R
2016-01-01
This book introduces readers to the novel concept of variable span speech enhancement filters, and demonstrates how it can be used for effective noise reduction in various ways. Further, the book provides the accompanying Matlab code, allowing readers to easily implement the main ideas discussed. Variable span filters combine the ideas of optimal linear filters with those of subspace methods, as they involve the joint diagonalization of the correlation matrices of the desired signal and the noise. The book shows how some well-known filter designs, e.g. the minimum distortion, maximum signal-to-noise ratio, Wiener, and tradeoff filters (including their new generalizations) can be obtained using the variable span filter framework. It then illustrates how the variable span filters can be applied in various contexts, namely in single-channel STFT-based enhancement, in multichannel enhancement in both the time and STFT domains, and, lastly, in time-domain binaural enhancement. In these contexts, the properties of ...
Constraint-led changes in internal variability in running.
Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich
2012-01-01
We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36 % to 74 %) were observed during tube running, whereas running without tubes after the tube running block showed no significant differences. Results show that elastic tubes affect variability on a muscular level despite the constant environmental conditions and underline the nervous system's adaptability to cope with somehow unpredictable constraints since stride duration was unaltered.
Design variables and constraints in fashion store design processes
DEFF Research Database (Denmark)
Haug, Anders; Borch Münster, Mia
2015-01-01
Purpose: – Several frameworks of retail store environment variables exist, but as shown by this paper, they are not particularly well-suited for supporting fashion store design processes. Thus, in order to provide an improved understanding of fashion store design, the purpose of this paper is to identify the most important store design variables, organise these variables into categories, understand the design constraints between categories, and determine the most influential stakeholders. Design/methodology/approach: – Based on a discussion of existing literature, the paper defines a framework... The paper organises store design variables into categories, provides an understanding of constraints between categories of variables, and identifies the most influential stakeholders. The paper demonstrates that the fashion store design task can be understood through a system perspective, implying that the store design task becomes a matter of defining...
Linear odd Poisson bracket on Grassmann variables
International Nuclear Information System (INIS)
Soroka, V.A.
1999-01-01
A linear odd Poisson bracket (antibracket) realized solely in terms of Grassmann variables is suggested. It is revealed that the bracket, which corresponds to a semi-simple Lie group, has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, the second and the third orders with respect to Grassmann derivatives, in contrast with the canonical odd Poisson bracket having the only Grassmann-odd nilpotent differential Δ-operator of the second order. It is shown that these Δ-like operators together with a Grassmann-odd nilpotent Casimir function of this bracket form a finite-dimensional Lie superalgebra. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
DEFF Research Database (Denmark)
Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias
2010-01-01
A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
A feasible DY conjugate gradient method for linear equality constraints
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method is an extension of the Dai-Yuan conjugate gradient method to the linear equality constrained case. It can be applied to solve large linear equality constrained problems thanks to its low storage requirement. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
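For the quadratic case, the feasible-direction idea can be sketched by combining a null-space projection with the Dai-Yuan beta. This is our own minimal illustration, not the paper's implementation; forming the dense projector is for clarity only and defeats the low-storage property the abstract highlights:

```python
import numpy as np

def feasible_cg(Q, c, A, b, iters=100):
    """Minimize 0.5*x^T Q x - c^T x  subject to  A x = b  (Q positive definite).

    Gradients are projected onto the null space of A, so every search
    direction, and hence every iterate, stays feasible.  beta follows the
    Dai-Yuan formula evaluated on the projected gradients.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]                 # feasible start
    P = np.eye(len(x)) - A.T @ np.linalg.solve(A @ A.T, A)   # null-space projector
    g = P @ (Q @ x - c)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-12:
            break
        alpha = -(g @ d) / (d @ Q @ d)                # exact line search
        x = x + alpha * d
        g_new = P @ (Q @ x - c)
        beta = (g_new @ g_new) / (d @ (g_new - g))    # Dai-Yuan beta
        d = -g_new + beta * d
        g = g_new
    return x
```

Because every direction lies in the null space of A, feasibility never needs to be restored, which is the "always feasible descent direction" property claimed in the abstract.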
Linear Viscoelasticity, Reptation, Chain Stretching and Constraint Release
DEFF Research Database (Denmark)
Neergaard, Jesper; Schieber, Jay D.; Venerus, David C.
2000-01-01
A recently proposed self-consistent reptation model - already successful at describing highly nonlinear shearing flows of many types using no adjustable parameters - is used here to interpret the linear viscoelasticity of the same entangled polystyrene solution. Using standard techniques, a relaxatio...
A method for computing the stationary points of a function subject to linear equality constraints
International Nuclear Information System (INIS)
Uko, U.L.
1989-09-01
We give a new method for the numerical calculation of stationary points of a function when it is subject to equality constraints. An application to the solution of linear equations is given, together with a numerical example. (author). 5 refs
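For the quadratic special case, the stationary points this record refers to satisfy a linear KKT system, which can be solved directly. The block below is a standard reference formulation for comparison, not the iterative method of the report:

```python
import numpy as np

def stationary_point(Q, c, A, b):
    """Stationary point of f(x) = 0.5*x^T Q x - c^T x  subject to  A x = b.

    Setting the gradient of the Lagrangian to zero gives the symmetric
    KKT system  [[Q, A^T], [A, 0]] [x; lam] = [c; b].
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([c, b]))
    return sol[:n], sol[n:]            # the point and its multipliers
```

The application to linear equations mentioned in the abstract fits this mold: solving the KKT system is itself a structured linear solve.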
Emergent constraint on equilibrium climate sensitivity from global temperature variability.
Cox, Peter M; Huntingford, Chris; Williamson, Mark S
2018-01-17
Equilibrium climate sensitivity (ECS) remains one of the most important unknowns in climate change science. ECS is defined as the global mean warming that would occur if the atmospheric carbon dioxide (CO2) concentration were instantly doubled and the climate were then brought to equilibrium with that new level of CO2. Despite its rather idealized definition, ECS has continuing relevance for international climate change agreements, which are often framed in terms of stabilization of global warming relative to the pre-industrial climate. However, the 'likely' range of ECS as stated by the Intergovernmental Panel on Climate Change (IPCC) has remained at 1.5-4.5 degrees Celsius for more than 25 years. The possibility of a value of ECS towards the upper end of this range reduces the feasibility of avoiding 2 degrees Celsius of global warming, as required by the Paris Agreement. Here we present a new emergent constraint on ECS that yields a central estimate of 2.8 degrees Celsius with 66 per cent confidence limits (equivalent to the IPCC 'likely' range) of 2.2-3.4 degrees Celsius. Our approach is to focus on the variability of temperature about long-term historical warming, rather than on the warming trend itself. We use an ensemble of climate models to define an emergent relationship between ECS and a theoretically informed metric of global temperature variability. This metric of variability can also be calculated from observational records of global warming, which enables tighter constraints to be placed on ECS, reducing the probability of ECS being less than 1.5 degrees Celsius to less than 3 per cent, and the probability of ECS exceeding 4.5 degrees Celsius to less than 1 per cent.
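At its core, the emergent-constraint procedure described here is an across-ensemble regression. The sketch below uses purely synthetic numbers: the ensemble, the psi-ECS relationship, and the "observed" value are all invented for illustration and bear no relation to the climate models or observations used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "model ensemble": each model has an ECS and a variability
# metric psi that is noisily, linearly related to it (the emergent relationship).
n_models = 16
ecs_models = rng.uniform(1.5, 4.5, n_models)
psi = 0.05 + 0.04 * ecs_models + rng.normal(0.0, 0.01, n_models)

# Fit the emergent relationship ECS = a*psi + b across the ensemble,
# then read off the constrained ECS at the observed value of psi.
a, b_ = np.polyfit(psi, ecs_models, 1)
psi_obs = 0.16                      # hypothetical observational estimate
ecs_constrained = a * psi_obs + b_
```

The tightness of the resulting constraint depends on the scatter about the fitted line and on the observational uncertainty in psi, which is why the paper's central estimate carries explicit confidence limits.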
How Robust Is Linear Regression with Dummy Variables?
Blankmeyer, Eric
2006-01-01
Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…
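The model class in question can be sketched in a few lines: a generic OLS design matrix with one continuous regressor and one-hot group dummies. This is standard practice, not the article's specific analysis:

```python
import numpy as np

def dummy_regression(y, x, groups):
    """OLS with one continuous regressor and group dummies (the first group,
    in sorted order, is the baseline absorbed by the intercept).

    Returns the coefficient vector [intercept, slope, group effects...].
    """
    labels = sorted(set(groups))
    n = len(y)
    X = np.column_stack(
        [np.ones(n), np.asarray(x, dtype=float)] +
        [(np.asarray(groups) == g).astype(float) for g in labels[1:]]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

The robustness issue raised by the article arises exactly here: a dummy backed by only a few observations yields a group-effect estimate with very high variance, even though the least-squares fit itself goes through without complaint.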
International Nuclear Information System (INIS)
Winicour, Jeffrey
2017-01-01
An algebraic-hyperbolic method for solving the Hamiltonian and momentum constraints has recently been shown to be well posed for general nonlinear perturbations of the initial data for a Schwarzschild black hole. This is a new approach to solving the constraints of Einstein’s equations which does not involve elliptic equations and has potential importance for the construction of binary black hole data. In order to shed light on the underpinnings of this approach, we consider its application to obtain solutions of the constraints for linearized perturbations of Minkowski space. In that case, we find the surprising result that there are no suitable Cauchy hypersurfaces in Minkowski space for which the linearized algebraic-hyperbolic constraint problem is well posed. (note)
Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization
Directory of Open Access Journals (Sweden)
Xiaobing Kong
2013-01-01
Full Text Available Obtaining a reliable optimal solution is a key issue in nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. Under an input-output feedback linearizing controller, the original linear input constraints become nonlinear and sometimes state-dependent. This paper presents an iterative quadratic programming (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and the continuous stirred tank reactors (CSTR) demonstrate the effectiveness of the proposed method.
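The central observation of the abstract, that feedback linearization turns linear input bounds into state-dependent ones, can be shown in a few lines. The scalar system and the functions f and g below are assumed for illustration; this is not the paper's IQP algorithm.

```python
# For a scalar input-affine system xdot = f(x) + g(x)*u, the feedback
# linearizing input u = (v - f(x)) / g(x) maps the linear bound
# |u| <= u_max into state-dependent bounds on the new input v:
#   f(x) - g(x)*u_max <= v <= f(x) + g(x)*u_max     (for g(x) > 0).
def f(x):
    return -x + x**3          # assumed drift term

def g(x):
    return 1.0 + x**2         # assumed input gain, strictly positive

def v_bounds(x, u_max):
    return f(x) - g(x) * u_max, f(x) + g(x) * u_max

for x in (-1.0, 0.0, 2.0):
    lo, hi = v_bounds(x, u_max=1.0)
    print(f"x={x:+.1f}: {lo:+.2f} <= v <= {hi:+.2f}")
```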
The regular indefinite linear-quadratic problem with linear endpoint constraints
Soethoudt, J.M.; Trentelman, H.L.
1989-01-01
This paper deals with the infinite horizon linear-quadratic problem with indefinite cost. Given a linear system, a quadratic cost functional and a subspace of the state space, we consider the problem of minimizing the cost functional over all inputs for which the state trajectory converges to that
Variable-energy drift-tube linear accelerator
Swenson, Donald A.; Boyd, Jr., Thomas J.; Potter, James M.; Stovall, James E.
1984-01-01
A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved yielding a variable-energy drift-tube linear accelerator.
A Partitioning and Bounded Variable Algorithm for Linear Programming
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs
Zuidwijk, Rob
2005-01-01
Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...
Sensitivity theory for general non-linear algebraic equations with constraints
International Nuclear Information System (INIS)
Oblow, E.M.
1977-04-01
Sensitivity theory has been developed to a high state of sophistication for applications involving solutions of the linear Boltzmann equation or approximations to it. The success of this theory in the field of radiation transport has prompted study of possible extensions of the method to more general systems of non-linear equations. Initial work in the U.S. and in Europe on the reactor fuel cycle shows that the sensitivity methodology works equally well for those non-linear problems studied to date. The general non-linear theory for algebraic equations is summarized and applied to a class of problems whose solutions are characterized by constrained extrema. Such equations form the basis of much work on energy systems modelling and the econometrics of power production and distribution. It is valuable to have a sensitivity theory available for these problem areas since it is difficult to repeatedly solve complex non-linear equations to find out the effects of alternative input assumptions or the uncertainties associated with predictions of system behavior. The sensitivity theory for a linear system of algebraic equations with constraints which can be solved using linear programming techniques is discussed. The role of the constraints in simplifying the problem so that sensitivity methodology can be applied is highlighted. The general non-linear method is summarized and applied to a non-linear programming problem in particular. Conclusions are drawn about the applicability of the method to practical problems.
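The linear-programming case mentioned above has a well-known concrete form: the change of the optimal value under a small change in a right-hand side equals that constraint's shadow price. A tiny numeric check on a standard textbook LP, solved here by brute-force vertex enumeration (not the paper's method):

```python
import itertools
import numpy as np

def solve_lp_max(c, A, b):
    """Maximize c.x subject to A x <= b (2 variables) by vertex enumeration."""
    best, best_x = -np.inf, None
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                          # parallel constraints, no vertex
        x = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ x <= b + 1e-9) and c @ x > best:
            best, best_x = c @ x, x
    return best, best_x

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0], [0, 2.0], [3.0, 2.0], [-1.0, 0], [0, -1.0]])
b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])

z0, _ = solve_lp_max(c, A, b)
b2 = b.copy(); b2[2] += 1.0                   # relax the third constraint
z1, _ = solve_lp_max(c, A, b2)
print(z0, z1 - z0)                            # optimal value and its sensitivity
```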
Shape optimization of a perforated pressure vessel cover under linearized stress constraints
International Nuclear Information System (INIS)
Choi, Woo-Seok; Kim, Tae-Wan; Seo, Ki-Seog
2008-01-01
One of the general methods to evaluate a failure condition is to compare a maximum stress with an allowable stress. A failure condition for a stress is usually applied to a concerned point rather than a concerned section. In an optimization procedure, these stress conditions are applied as constraints. But the ASME code that prescribes its general rules upon the design of a NSSS (nuclear steam supply system) has quite a different view on a failure condition. According to the ASME code Sec. III, a stress linearization should be performed to evaluate a failure condition of a structure. Since few programs provide a stress-linearization procedure as a post-processing stage, an extra calculation of the linearized stresses and their derivatives is conducted to adopt the stress linearization results as constraints in an optimization procedure. In this research, an optimization technique that utilizes the results of a stress linearization as a constraint is proposed. The proposed method was applied to the shape design of a perforated pressure vessel cover.
DEFF Research Database (Denmark)
Fränzle, Martin; Herde, Christian
2003-01-01
We investigate the problem of generalizing acceleration techniques as found in recent satisfiability engines for conjunctive normal forms (CNFs) to linear constraint systems over the Booleans. The rationale behind this research is that rewriting the propositional formulae occurring in e.g. bounde...
A Non-Gaussian Spatial Generalized Linear Latent Variable Model
Irincheeva, Irina
2012-08-03
We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
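The key trick in the abstract, solving steady-state conditions for kinetic parameters so the equations become linear, can be shown on a much simpler network than the paper treats. The cycle, concentrations, and rate constant below are assumed for illustration only.

```python
import numpy as np

# For the assumed cycle A -k1-> B -k2-> C -k3-> A, steady state requires
#   k1*a = k2*b = k3*c.
# Fixing k1 and the observed steady-state concentrations (a, b, c),
# the unknowns (k2, k3) solve a *linear* system and are non-negative
# by construction.
a, b, c = 2.0, 1.0, 4.0      # assumed steady-state concentrations
k1 = 0.5

# Equations:  b*k2 = k1*a   and   -b*k2 + c*k3 = 0
M = np.array([[b, 0.0],
              [-b, c]])
rhs = np.array([k1 * a, 0.0])
k2, k3 = np.linalg.solve(M, rhs)
print(k2, k3)                # both non-negative
```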
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Noise Reduction with Optimal Variable Span Linear Filters
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Benesty, Jacob; Christensen, Mads Græsbøll
2016-01-01
In this paper, the problem of noise reduction is addressed as a linear filtering problem in a novel way by using concepts from subspace-based enhancement methods, resulting in variable span linear filters. This is done by forming the filter coefficients as linear combinations of a number...... included in forming the filter. Using these concepts, a number of different filter designs are considered, like minimum distortion, Wiener, maximum SNR, and tradeoff filters. Interestingly, all these can be expressed as special cases of variable span filters. We also derive expressions for the speech...... demonstrate the advantages and properties of the variable span filter designs, and their potential performance gain compared to widely used speech enhancement methods....
Free piston variable-stroke linear-alternator generator
Haaland, Carsten M.
1998-01-01
A free-piston variable-stroke linear-alternator AC power generator for a combustion engine. An alternator mechanism and oscillator system generate AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan
2017-01-01
Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903
[Relations between biomedical variables: mathematical analysis or linear algebra?].
Hucher, M; Berlie, J; Brunet, M
1977-01-01
After briefly recalling the structure of a model, the authors emphasize the two possible approaches to the relations linking its variables: the use of functions, which belongs to mathematical analysis, and the use of linear algebra, which benefits from the development and automation of matrix computation. They specify the respective merits of these methods, their limits, and the requirements for their use, according to the kind of variables and data and to the aim of the work: understanding phenomena or supporting decisions.
Iterated non-linear model predictive control based on tubes and contractive constraints.
Murillo, M; Sánchez, G; Giovanini, L
2016-05-01
This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time varying model is obtained and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding-tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Jian Jinbao; Li Jianling; Mo Xingde
2006-01-01
This paper discusses a kind of optimization problem with linear complementarity constraints, and presents a sequential quadratic programming (SQP) algorithm for solving a stationary point of the problem. The algorithm is a modification of the SQP algorithm proposed by Fukushima et al. [Computational Optimization and Applications, 10 (1998), 5-34], and is based on a reformulation of the complementarity condition as a system of linear equations. At each iteration, one quadratic program and one system of linear equations need to be solved, and a curve search is used to yield the step size. Under some appropriate assumptions, including the lower-level strict complementarity, but without the upper-level strict complementarity for the inequality constraints, the algorithm is proved to possess strong convergence and superlinear convergence. Some preliminary numerical results are reported.
Zeb, Salman; Yousaf, Muhammad
2017-01-01
In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
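One standard way to solve the equality-constrained least squares problem min ||Ax - b|| subject to Bx = d is the null-space method built on a QR factorization of B^T. The sketch below illustrates that classical approach, not the paper's updating algorithm; the random data are for demonstration only.

```python
import numpy as np

def lse_nullspace(A, b, B, d):
    """Solve min ||A x - b|| s.t. B x = d via the null-space method."""
    p, n = B.shape
    Q, R = np.linalg.qr(B.T, mode="complete")   # B^T = Q R, Q is n x n
    Q1, Q2 = Q[:, :p], Q[:, p:]                 # range / null space of B^T
    x0 = Q1 @ np.linalg.solve(R[:p].T, d)       # particular solution, B x0 = d
    z, *_ = np.linalg.lstsq(A @ Q2, b - A @ x0, rcond=None)
    return x0 + Q2 @ z                          # constraint preserved exactly

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 4))
b = rng.normal(size=8)
B = rng.normal(size=(2, 4))
d = rng.normal(size=2)
x = lse_nullspace(A, b, B, d)
print(np.max(np.abs(B @ x - d)))                # constraint residual, ~0
```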
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems
Ortiz-Bayliss, José Carlos; Amaya, Ivan; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo
2018-01-01
When solving constraint satisfaction problems (CSPs), it is a common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. But, this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differen...
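A small experiment in the spirit of the abstract: solve the 6-queens CSP by backtracking and count search nodes under different first-variable (row) orderings. The model and the particular orders are assumed for illustration; node counts vary with the order chosen.

```python
# Backtracking solver for n-queens, parameterized by the order in which
# the row variables are instantiated; counts every value tried.
def solve(order, n=6):
    nodes = 0
    assign = {}                      # row -> column

    def ok(r, c):
        return all(c != c2 and abs(c - c2) != abs(r - r2)
                   for r2, c2 in assign.items())

    def bt(i):
        nonlocal nodes
        if i == len(order):
            return True
        r = order[i]
        for c in range(n):
            nodes += 1
            if ok(r, c):
                assign[r] = c
                if bt(i + 1):
                    return True
                del assign[r]
        return False

    found = bt(0)
    return found, nodes, dict(assign)

natural = list(range(6))             # rows 0..5 in order
centre_first = [2, 3, 1, 4, 0, 5]    # start near the middle of the board
for order in (natural, centre_first):
    found, nodes, sol = solve(order)
    print(order, found, nodes)
```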
Linear Pursuit Differential Game under Phase Constraint on the State of Evader
Directory of Open Access Journals (Sweden)
Askar Rakhmanov
2016-01-01
Full Text Available We consider a linear pursuit differential game of one pursuer and one evader. Controls of the pursuer and evader are subjected to integral and geometric constraints, respectively. In addition, a phase constraint is imposed on the state of the evader, whereas the pursuer moves throughout the space. We say that pursuit is completed if the inclusion y(t1) - x(t1) ∈ M is satisfied at some t1 > 0, where x(t) and y(t) are the states of the pursuer and evader, respectively, and M is the terminal set. Conditions of completion of pursuit in the game from all initial points of players are obtained. The strategy of the pursuer is constructed so that the phase vector of the pursuer is first brought to a given set, and then pursuit is completed.
Exhaustive Search for Sparse Variable Selection in Linear Regression
Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato
2018-04-01
We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as density of states. With this density of states, we can compare different methods for selecting sparse variables such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
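The core of a K-sparse exhaustive search is simple to state: fit every K-subset of columns by least squares and keep the subset with the smallest residual. A minimal sketch on synthetic data (invented for illustration; the paper's density-of-states machinery is not reproduced):

```python
import itertools
import numpy as np

# Exhaustive K-sparse variable selection: try all K-subsets of columns.
rng = np.random.default_rng(3)
n, p, K = 50, 8, 2
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + rng.normal(0, 0.1, n)  # true support {1, 4}

best_rss, best_subset = np.inf, None
for subset in itertools.combinations(range(p), K):
    coef, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
    rss = np.sum((y - X[:, subset] @ coef) ** 2)
    if rss < best_rss:
        best_rss, best_subset = rss, subset

print(best_subset)   # the subset with minimal residual sum of squares
```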
LINTAB, Linear Interpolable Tables from any Continuous Variable Function
International Nuclear Information System (INIS)
1988-01-01
1 - Description of program or function: LINTAB is designed to construct linearly interpolable tables from any function. The program will start from any function of a single continuous variable... FUNKY(X). By user input the function can be defined, (1) Over 1 to 100 X ranges. (2) Within each X range the function is defined by 0 to 50 constants. (3) At boundaries between X ranges the function may be continuous or discontinuous (depending on the constants used to define the function within each X range). 2 - Method of solution: LINTAB will construct a table of X and Y values where the tabulated (X,Y) pairs will be exactly equal to the function (Y=FUNKY(X)) and linear interpolation between the tabulated pairs will be within any user specified fractional uncertainty of the function for all values of X within the requested X range
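The construction LINTAB performs can be sketched in a few lines: subdivide the X range until linear interpolation between tabulated points matches the function to within a fractional tolerance. The function, probe grid, and bisection strategy below are assumptions for illustration, not the program's actual algorithm.

```python
import math

# Build a linearly interpolable table of f on [a, b]: split any interval
# whose secant deviates from f by more than a fractional tolerance,
# checked on an interior probe grid.
def build_table(f, a, b, tol, probes=9):
    xs = [a, b]
    i = 0
    while i < len(xs) - 1:
        x0, x1 = xs[i], xs[i + 1]
        worst = 0.0
        for k in range(1, probes + 1):
            x = x0 + (x1 - x0) * k / (probes + 1)
            lin = f(x0) + (f(x1) - f(x0)) * (x - x0) / (x1 - x0)
            worst = max(worst, abs(lin - f(x)) / max(abs(f(x)), 1e-30))
        if worst > tol:
            xs.insert(i + 1, 0.5 * (x0 + x1))   # bisect and re-check
        else:
            i += 1
    return xs

table = build_table(math.exp, 0.0, 2.0, tol=1e-3)
print(len(table))   # number of tabulated points needed
```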
Local energy decay for linear wave equations with variable coefficients
Ikehata, Ryo
2005-06-01
A uniform local energy decay result is derived for the linear wave equation with spatially variable coefficients. We deal with this equation in an exterior domain with a star-shaped complement. Our advantage is that we do not assume any compactness of the support of the initial data, and the proof is quite simple. This generalizes a previous famous result due to Morawetz [The decay of solutions of the exterior initial-boundary value problem for the wave equation, Comm. Pure Appl. Math. 14 (1961) 561-568]. In order to prove local energy decay, we mainly apply two types of ideas due to Ikehata-Matsuyama [L2-behaviour of solutions to the linear heat and wave equations in exterior domains, Sci. Math. Japon. 55 (2002) 33-42] and Todorova-Yordanov [Critical exponent for a nonlinear wave equation with damping, J. Differential Equations 174 (2001) 464-489].
Memory State Feedback RMPC for Multiple Time-Delayed Uncertain Linear Systems with Input Constraints
Directory of Open Access Journals (Sweden)
Wei-Wei Qin
2014-01-01
Full Text Available This paper focuses on the problem of asymptotic stabilization for a class of discrete-time multiple time-delayed uncertain linear systems with input constraints. Then, based on the predictive control principle of receding horizon optimization, a delayed state dependent quadratic function is considered for incorporation into the MPC problem formulation. By developing a memory state feedback controller, the information of the delayed plant states can be taken into full consideration. The MPC problem is formulated to minimize the upper bound of the infinite horizon cost that satisfies the sufficient conditions. Then, based on the Lyapunov-Krasovskii function, a delay-dependent sufficient condition in terms of a linear matrix inequality (LMI) can be derived to design a robust MPC algorithm. Finally, the digital simulation results demonstrate the validity of the proposed method.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
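One of the known projection algorithms alluded to at the end of the abstract, Euclidean projection of a point onto the standard simplex {x : x >= 0, sum(x) = 1}, has a classical sort-based form, sketched here for illustration:

```python
import numpy as np

# Sort-based projection onto the standard simplex.
def project_simplex(v):
    u = np.sort(v)[::-1]                        # sort descending
    cssv = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - cssv / idx > 0)[0][-1] # last active index
    theta = cssv[rho] / (rho + 1.0)             # shift that enforces sum = 1
    return np.maximum(v - theta, 0.0)

x = project_simplex(np.array([0.5, 1.2, -0.3]))
print(x, x.sum())   # non-negative components summing to one
```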
König Ignasiak, Niklas; Habermacher, Lars; Taylor, William R; Singh, Navrag B
2017-01-01
Motor variability is an inherent feature of all human movements and reflects the quality of functional task performance. Depending on the requirements of the motor task, the human sensory-motor system is thought to be able to flexibly govern the appropriate level of variability. However, it remains unclear which neurophysiological structures are responsible for the control of motor variability. In this study, we tested the contribution of cortical cognitive resources on the control of motor variability (in this case postural sway) using a dual-task paradigm and furthermore observed potential changes in control strategy by evaluating Ia-afferent integration (H-reflex). Twenty healthy subjects were instructed to stand relaxed on a force plate with eyes open and closed, as well as while trying to minimize sway magnitude and performing a "subtracting-sevens" cognitive task. In total 25 linear and non-linear parameters were used to evaluate postural sway, which were combined using a Principal Components procedure. Neurophysiological response of Ia-afferent reflex loop was quantified using the Hoffman reflex. In order to assess the contribution of the H-reflex on the sway outcome in the different standing conditions multiple mixed-model ANCOVAs were performed. The results suggest that subjects were unable to further minimize their sway, despite actively focusing to do so. The dual-task had a destabilizing effect on PS, which could partly (by 4%) be counter-balanced by increasing reliance on Ia-afferent information. The effect of the dual-task was larger than the protective mechanism of increasing Ia-afferent information. We, therefore, conclude that cortical structures, as compared to peripheral reflex loops, play a dominant role in the control of motor variability.
Directory of Open Access Journals (Sweden)
Huapeng Yu
2015-02-01
Full Text Available The Kalman filter (KF) has always been used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint for the conventional KF used in the fine north-finding filtering phase is derived. Then, a constrained KF using the state equality constraint is proposed and studied in depth. Estimation behaviors of the concerned navigation errors when implementing the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which can provide explicit formulations of the navigation errors with influencing variables. Finally, multiple practical experimental tests at a fixed position were performed on a prototype system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study has successfully extended the utilization of the stochastic observability approach for analytic descriptions of estimation behaviors of the concerned navigation errors, and the constrained KF scheme has demonstrated its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically.
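The generic mechanism behind a state-equality-constrained KF is the covariance-weighted projection of the unconstrained estimate onto the constraint set, x_c = x - P D^T (D P D^T)^{-1} (D x - d). The numbers below are invented and this is not the paper's ARIMU model; the sketch only shows the projection step.

```python
import numpy as np

# Project a Kalman estimate x with covariance P onto {x : D x = d}.
def constrain_estimate(x, P, D, d):
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    return x - K @ (D @ x - d)

x = np.array([1.0, 2.0, 0.5])        # unconstrained estimate (assumed)
P = np.diag([0.5, 0.2, 0.1])         # its covariance (assumed)
D = np.array([[1.0, -1.0, 0.0]])     # constraint: x1 - x2 = 0
d = np.array([0.0])

xc = constrain_estimate(x, P, D, d)
print(xc)   # first two components are now equal
```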
DEFF Research Database (Denmark)
Escudero, Laureano F.; Monge, Juan Francisco; Morales, Dolores Romero
2015-01-01
In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty. We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multist...
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a range of various paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCTs are important in modeling the light field propagations and also of interest in many digital signal processing applications. In [Zhao 14] we have reported that a given 2D input image with rectangular shape/boundary, in general, results in a parallelogram output sampling grid (generally in an affine coordinates rather than in a Cartesian coordinates) thus limiting the further calculations, e.g. inverse transform. One possible solution is to use the interpolation techniques; however, it reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper, some constraints are derived under which the output samples are located in the Cartesian coordinates. Therefore, no interpolation operation is required and thus the calculation error can be significantly eliminated.
Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length
DEFF Research Database (Denmark)
Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.
2013-01-01
Convolution encoder and Viterbi decoder are the basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to have...... detailed analysis, this paper deals with the implementation of a convolution encoder and Viterbi decoder using system on programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...
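A minimal software counterpart to the blocks discussed above: a rate-1/2, constraint-length-3 convolutional encoder (generators 7 and 5 in octal) with a hard-decision Viterbi decoder. This is a small illustrative sketch; the paper's SOPC design and its longer constraint lengths are not reproduced here.

```python
# Rate-1/2, K=3 convolutional encoder (g1 = 111, g2 = 101).
def conv_encode(bits):
    state = 0                      # (b[t-1], b[t-2]) packed into 2 bits
    out = []
    for b in list(bits) + [0, 0]:  # two flush bits drive the state to 0
        out.append(b ^ (state >> 1) ^ (state & 1))  # g1 output
        out.append(b ^ (state & 1))                 # g2 output
        state = ((b << 1) | (state >> 1)) & 3
    return out

# Hard-decision Viterbi decoder over the 4-state trellis.
def viterbi_decode(rx, nbits):
    INF = float("inf")
    pm = [0.0, INF, INF, INF]      # path metrics, start in state 0
    paths = [[], [], [], []]
    for t in range(0, len(rx), 2):
        r0, r1 = rx[t], rx[t + 1]
        npm, npaths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                o0 = b ^ (s >> 1) ^ (s & 1)
                o1 = b ^ (s & 1)
                ns = ((b << 1) | (s >> 1)) & 3
                m = pm[s] + (o0 != r0) + (o1 != r1)  # Hamming branch metric
                if m < npm[ns]:
                    npm[ns], npaths[ns] = m, paths[s] + [b]
        pm, paths = npm, npaths
    return paths[0][:nbits]        # terminated trellis ends in state 0

msg = [1, 0, 1, 1, 0, 0, 1]
code = conv_encode(msg)
code[4] ^= 1                       # inject a single channel bit error
print(viterbi_decode(code, len(msg)) == msg)
```

The (7,5) code has free distance 5, so the decoder is guaranteed to correct up to two channel bit errors per codeword.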
Czech Academy of Sciences Publication Activity Database
Červinka, Michal
2010-01-01
Roč. 2010, č. 4 (2010), s. 730-753 ISSN 0023-5954 Institutional research plan: CEZ:AV0Z10750506 Keywords : equilibrium problems with complementarity constraints * homotopy * C-stationarity Subject RIV: BC - Control Systems Theory Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/cervinka-on computation of c-stationary points for equilibrium problems with linear complementarity constraints via homotopy method.pdf
Bayesian Optimization Under Mixed Constraints with A Slack-Variable Augmented Lagrangian
Energy Technology Data Exchange (ETDEWEB)
Picheny, Victor; Gramacy, Robert B.; Wild, Stefan M.; Le Digabel, Sebastien
2016-12-05
An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce an alternative slack variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof. We show our new slack “ALBO” compares favorably to the original. Its superiority over conventional alternatives is reinforced on several mixed constraint examples.
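The slack reformulation in the abstract can be sketched in a few lines: each inequality g(x) ≤ 0 becomes an equality g(x) + s = 0 with s ≥ 0, and for a fixed x the optimal slack has a closed form. The toy sketch below uses plain finite-difference gradient steps instead of the paper's Bayesian-optimization subsolver; the problem and all numbers are illustrative only:

```python
def al_slack_minimize(f, g, x0, rho=10.0, outer=30, inner=200, lr=0.01):
    """Augmented Lagrangian for min f(x) s.t. g(x) <= 0, via a slack s:
    the inequality becomes the equality g(x) + s = 0 with s >= 0."""
    x, lam, h = x0, 0.0, 1e-6
    for _ in range(outer):
        for _ in range(inner):
            # for fixed x, the slack minimizing the AL has a closed form
            s = max(0.0, -lam / rho - g(x))
            AL = lambda z: f(z) + lam * (g(z) + s) + 0.5 * rho * (g(z) + s) ** 2
            x -= lr * (AL(x + h) - AL(x - h)) / (2 * h)  # finite-diff step
        s = max(0.0, -lam / rho - g(x))
        lam += rho * (g(x) + s)  # multiplier update on the equality residual
    return x

# toy problem: min (x - 2)^2  s.t.  x <= 1; the constrained optimum is x = 1
x_star = al_slack_minimize(lambda x: (x - 2.0) ** 2, lambda x: x - 1.0, x0=0.0)
```

The closed-form slack is what removes the Monte Carlo step in the paper's expected-improvement evaluation; here it simply saves an inner minimization over s.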
Input-to-State Stabilizing MPC for Neutrally Stable Linear Systems subject to Input Constraints
Kim, Jung-Su; Yoon, Tae-Woong; Jadbabaie, Ali; Persis, Claudio De
2004-01-01
MPC (Model Predictive Control) is representative of control methods which are able to handle physical constraints. Closed-loop stability can therefore be ensured only locally in the presence of constraints of this type. However, if the system is neutrally stable, and if the constraints are imposed
Wei, Peng; Sridhar, Banavar; Chen, Neil Yi-Nan; Sun, Dengfent
2012-01-01
A class of strategies has been proposed to reduce contrail formation in United States airspace. A 3D grid based on weather data and aircraft cruising altitude levels is adjusted to avoid areas with persistent contrail potential while accounting for fuel efficiency. In this paper, the authors introduce a contrail avoidance strategy on the 3D grid that adds operationally feasible constraints from an air traffic controller's perspective. First, shifting too many aircraft to the same cruising level would make the miles-in-trail at that level smaller than the safety separation threshold, and the resulting high density of aircraft at one cruising level may exceed the controller's workload capacity; therefore, the new model restricts the total number of aircraft at each level. Second, the aircraft count variation between successive intervals cannot be too drastic, since the workload of managing climbing/descending aircraft is much larger than that of managing cruising aircraft. The contrail reduction is formulated as an integer programming problem, which is shown to have the property of total unimodularity. Solving the corresponding relaxed linear program with the simplex method therefore provides an optimal, integral solution. Simulation results are provided to illustrate the methodology.
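The total-unimodularity argument can be illustrated with a small assignment-style LP: when the constraint matrix is totally unimodular and the right-hand side is integral, the LP relaxation already returns an integral vertex, with no branching needed. The 3-aircraft, 3-level instance below is invented for illustration (it is not the paper's model), and assumes SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# cost[i][j] = hypothetical extra fuel for moving aircraft i to level j
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
n = 3
c = cost.ravel()                      # x_ij flattened row-major

# each aircraft is assigned to exactly one level
A_eq = np.zeros((n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0
b_eq = np.ones(n)

# each level holds at most one aircraft (capacity constraint)
A_ub = np.zeros((n, n * n))
for j in range(n):
    A_ub[j, j::n] = 1.0
b_ub = np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (n * n), method="highs")
x = res.x.reshape(n, n)
# the assignment constraint matrix is totally unimodular,
# so the LP vertex solution is already 0/1
```

Rounding is never needed here; the simplex/HiGHS vertex is integral by the TU property, which is exactly why the contrail problem can be solved as a plain LP.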
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
José Carlos Ortiz-Bayliss
2018-01-01
Full Text Available When solving constraint satisfaction problems (CSPs), it is common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. However, this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differences among the orderings produced by some commonly used heuristics and the way bad first decisions affect the search cost. One of the most important findings of this work confirms the paramount importance of first decisions. Another is the evidence that many existing variable ordering heuristics fail to appropriately select the first variable to instantiate. We propose a simple method to improve the early decisions of heuristics; by using it, the performance of the heuristics increases.
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems.
Ortiz-Bayliss, José Carlos; Amaya, Ivan; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo
2018-01-01
When solving constraint satisfaction problems (CSPs), it is common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. However, this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differences among the orderings produced by some commonly used heuristics and the way bad first decisions affect the search cost. One of the most important findings of this work confirms the paramount importance of first decisions. Another is the evidence that many existing variable ordering heuristics fail to appropriately select the first variable to instantiate. We propose a simple method to improve the early decisions of heuristics; by using it, the performance of the heuristics increases.
Linear variable voltage diode capacitor and adaptive matching networks
Larson, L.E.; De Vreede, L.C.N.
2006-01-01
An integrated variable voltage diode capacitor topology applied to a circuit providing a variable voltage load for controlling variable capacitance. The topology includes a first pair of anti-series varactor diodes, wherein the diode power-law exponent n for the first pair of anti-series varactor
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
Variable selection in multiple linear regression: The influence of ...
African Journals Online (AJOL)
provide an indication of whether the fit of the selected model improves or ... and calculate M(−i); quantify the influence of case i in terms of a function, f(•), of M and ... [21] Venter JH & Snyman JLJ, 1997, Linear model selection based on risk ...
Darmon, Nicole; Ferguson, Elaine L; Briend, André
2006-01-01
To predict, for French women, the impact of a cost constraint on the food choices required to provide a nutritionally adequate diet. Isocaloric daily diets fulfilling both palatability and nutritional constraints were modeled using linear programming at different cost-constraint levels. For each modeled diet, the total departure from an observed French population's average food-group pattern ("mean observed diet") was minimized. To achieve the nutritional recommendations without a cost constraint, the modeled diet provided more energy from fish, fresh fruits and green vegetables and less energy from animal fats and cheese than the "mean observed diet." Introducing and strengthening a cost constraint decreased the energy provided by meat, fresh vegetables, fresh fruits, vegetable fat, and yogurts and increased the energy from processed meat, eggs, offal, and milk. For the lowest-cost diet (i.e., 3.18 euros/d), marked changes from the "mean observed diet" were required, including a marked reduction in the energy from fresh fruits (-85%) and green vegetables (-70%), and an increase in the energy from nuts, dried fruits, roots, legumes, and fruit juices. Nutrition education for low-income French women must emphasize these affordable food choices.
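A miniature version of this kind of diet model can be written as a linear program: minimize the total departure from observed food-group shares subject to a nutrient floor and a cost ceiling, with the absolute departures linearized through auxiliary variables. All food groups and coefficients below are invented for illustration and are not taken from the study; assumes SciPy:

```python
import numpy as np
from scipy.optimize import linprog

obs = np.array([0.5, 0.3, 0.2])    # observed energy shares (illustrative)
nutr = np.array([1.0, 2.0, 5.0])   # nutrient density per unit energy share
price = np.array([1.0, 2.0, 4.0])  # cost per unit energy share
budget, nutr_req = 1.8, 2.0

# variables: x (3 shares) followed by d (3 deviation variables |x - obs|)
c = np.concatenate([np.zeros(3), np.ones(3)])   # minimize total departure
A_eq = np.array([[1, 1, 1, 0, 0, 0]], float)    # energy shares sum to 1
b_eq = np.array([1.0])
I = np.eye(3)
A_ub = np.vstack([
    np.concatenate([-nutr, np.zeros(3)])[None, :],  # nutrient adequacy
    np.concatenate([price, np.zeros(3)])[None, :],  # cost ceiling
    np.hstack([I, -I]),                             # d >= x - obs
    np.hstack([-I, -I]),                            # d >= obs - x
])
b_ub = np.concatenate([[-nutr_req, budget], obs, -obs])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3 + [(0, None)] * 3, method="highs")
x = res.x[:3]  # cheapest nutritionally adequate diet closest to observed
```

In this toy instance the observed pattern violates the cost ceiling, so the solver must trade away some of the expensive, nutrient-dense food group — the same mechanism that drives the food-choice shifts reported in the abstract.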
Interpreting Multiple Linear Regression: A Guidebook of Variable Importance
Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim
2012-01-01
Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…
Supporting Students' Understanding of Linear Equations with One Variable Using Algebra Tiles
Saraswati, Sari; Putri, Ratu Ilma Indra; Somakim
2016-01-01
This research aimed to describe how algebra tiles can support students' understanding of linear equations with one variable. This article is a part of a larger research on learning design of linear equations with one variable using algebra tiles combined with balancing method. Therefore, it will merely discuss one activity focused on how students…
Short- and long-term variations in non-linear dynamics of heart rate variability
DEFF Research Database (Denmark)
Kanters, J K; Højgaard, M V; Agner, E
1996-01-01
OBJECTIVES: The purpose of the study was to investigate the short- and long-term variations in the non-linear dynamics of heart rate variability, and to determine the relationships between conventional time and frequency domain methods and the newer non-linear methods of characterizing heart rate...... rate and describes mainly linear correlations. Non-linear predictability is correlated with heart rate variability measured as the standard deviation of the R-R intervals and the respiratory activity expressed as power of the high-frequency band. The dynamics of heart rate variability changes suddenly...
Efficient Solving of Large Non-linear Arithmetic Constraint Systems with Complex Boolean Structure
Czech Academy of Sciences Publication Activity Database
Fränzle, M.; Herde, C.; Teige, T.; Ratschan, Stefan; Schubert, T.
2007-01-01
Roč. 1, - (2007), s. 209-236 ISSN 1574-0617 Grant - others:AVACS(DE) SFB/TR 14 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval-based arithmetic constraint solving * SAT modulo theories Subject RIV: BA - General Mathematics
Non-linear variability in geophysics scaling and fractals
Lovejoy, S
1991-01-01
The consequences of broken symmetry - here, parity - are studied. In this model, turbulence is dominated by a hierarchy of helical (corkscrew) structures. The authors stress the unique features of such pseudo-scalar cascades as well as the extreme nature of the resulting (intermittent) fluctuations. Intermittent turbulent cascades were also the theme of a paper by us in which we show that universality classes exist for continuous cascades (in which an infinite number of cascade steps occur over a finite range of scales). This result is the multiplicative analogue of the familiar central limit theorem for the addition of random variables. Finally, an interesting paper by Pasmanter investigates the scaling associated with anomalous diffusion in a chaotic tidal basin model involving a small number of degrees of freedom. Although the statistical literature is replete with techniques for dealing with those random processes characterized by both exponentially decaying (non-scaling) autocorrelations and exponentially decaying...
Solving quantum optimal control problems using Clebsch variables and Lin constraints
Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.
2018-01-01
Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem is modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories, the classical theory defined by the objective functional and the quantum system, is established by using a suitable version of Lagrange's multiplier theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. This yields a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional). One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. The procedure can be presented as an algorithm applicable to a large class of systems. Finally, some simple examples illustrating the main features of the theory are discussed: spin control, a simple quantum Hamiltonian with an 'Elroy beanie' type classical model, and a controlled one-dimensional quantum harmonic oscillator.
DEFF Research Database (Denmark)
Tanev, George; Saadi, Dorthe Bodholt; Hoppe, Karsten
2014-01-01
Chronic stress detection is an important factor in predicting and reducing the risk of cardiovascular disease. This work is a pilot study with a focus on developing a method for detecting short-term psychophysiological changes through heart rate variability (HRV) features. The purpose of this pilot...... study is to establish and to gain insight on a set of features that could be used to detect psychophysiological changes that occur during chronic stress. This study elicited four different types of arousal by images, sounds, mental tasks and rest, and classified them using linear and non-linear HRV...
Shivokhin, Maksim E.
2017-05-30
We propose and verify methods based on the slip-spring (SSp) model [Macromolecules 2005, 38, 14] for predicting the effect of any monodisperse, binary, or ternary environment of topological constraints on the relaxation of the end-to-end vector of a linear probe chain. For this purpose we first validate the ability of the model to consistently predict both the viscoelastic and dielectric response of monodisperse and binary mixtures of type A polymers, based on published experimental data. We also report the synthesis of new binary and ternary polybutadiene systems, the measurement of their linear viscoelastic response, and the prediction of these data by the SSp model. We next clarify the relaxation mechanisms of probe chains in these constraint release (CR) environments by analyzing a set of "toy" SSp models with simplified constraint release rates, by examining fluctuations of the end-to-end vector. In our analysis, the longest relaxation time of the probe chain is determined by a competition between the longest relaxation times of the effective CR motions of the fat and thin tubes and the motion of the chain itself in the thin tube. This picture is tested by the analysis of four model systems designed to separate and estimate every single contribution involved in the relaxation of the probe's end-to-end vector in polydisperse systems. We follow the CR picture of Viovy et al. [Macromolecules 1991, 24, 3587] and refine the effective chain friction in the thin and fat tubes based on Read et al. [J. Rheol. 2012, 56, 823]. The derived analytical equations form a basis for generalizing the proposed methodology to polydisperse mixtures of linear and branched polymers. The consistency between the SSp model and tube model predictions is a strong indicator of the compatibility between these two distinct mesoscopic frameworks.
Directory of Open Access Journals (Sweden)
Zhifeng Dai
2014-01-01
Full Text Available Combining the Rosen gradient projection method with the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method, we propose a two-term Polak-Ribière-Polyak (PRP) conjugate gradient projection method for solving linear equality-constrained optimization problems. The proposed method possesses some attractive properties: (1) the search direction generated by the method is a feasible descent direction, so the generated iterates are feasible points; (2) the sequence of function values is decreasing. Under some mild conditions, we show that it is globally convergent with an Armijo-type line search. Preliminary numerical results show that the proposed method is promising.
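A stripped-down version of the gradient projection idea (plain steepest descent rather than the paper's two-term PRP direction) projects each gradient onto the null space of the equality constraints, so every iterate stays feasible. The quadratic objective and data below are illustrative:

```python
import numpy as np

# minimize f(x) = ||x - t||^2  subject to  A x = b, by projecting the
# gradient onto the null space of A so that every iterate stays feasible
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
t = np.array([2.0, 0.0, 0.0])

P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)  # null-space projector
x = np.array([1.0, 0.0, 0.0])                      # feasible starting point
for _ in range(500):
    x -= 0.1 * (P @ (2.0 * (x - t)))               # projected descent step

# closed-form KKT solution for comparison:
# x* = t - A^T (A A^T)^{-1} (A t - b)
x_star = t - A.T @ np.linalg.solve(A @ A.T, A @ t - b)
```

Because the step direction lies in the null space of A, the iterates satisfy Ax = b exactly at every step, which is the feasibility property the abstract highlights.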
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
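The core jet-selection LP described above, stripped of the Shuttle-specific modifications, can be posed directly: minimize fuel subject to the jet acceleration columns reproducing the rate-change request with nonnegative firing times. The jet geometry and fuel rates below are invented for illustration; assumes SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# columns: per-jet angular acceleration vectors (hypothetical 2-axis case)
J = np.array([[1.0, -1.0, 0.0,  0.0, 1.0],
              [0.0,  0.0, 1.0, -1.0, 1.0]])
fuel = np.array([1.0, 1.0, 1.0, 1.0, 1.5])  # fuel rate per second of firing
d_omega = np.array([1.0, 1.0])              # requested rate change

# firing times t >= 0 minimizing fuel while J t equals the request
res = linprog(fuel, A_eq=J, b_eq=d_omega,
              bounds=[(0, None)] * 5, method="highs")
```

Here the solver prefers the single diagonal jet (cost 1.5) over firing two axis jets (cost 2.0), the kind of trade-off the Simplex-based jet select makes among redundant control jets.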
Severe linear growth retardation in rural Zambian children: the influence of biological variables.
Hautvast, J.L.A.; Tolboom, J.J.M.; Kaftwembe, E.M.; Musonda, R.M.; Mwanakasale, V.; Staveren, W.A. van; Hof, M.A. van 't; Sauerwein, R.W.; Willems, J.L.; Monnens, L.A.H.
2000-01-01
BACKGROUND: The prevalence of stunting in preschool children in Zambia is high; stunting has detrimental effects on concurrent psychomotor development and later working capacity. OBJECTIVE: Our objective was to investigate biological variables that may contribute to linear growth retardation in
SUPPORTING STUDENTS’ UNDERSTANDING OF LINEAR EQUATIONS WITH ONE VARIABLE USING ALGEBRA TILES
Directory of Open Access Journals (Sweden)
Sari Saraswati
2016-01-01
Full Text Available This research aimed to describe how algebra tiles can support students' understanding of linear equations with one variable. This article is part of a larger research project on the learning design of linear equations with one variable using algebra tiles combined with the balancing method. Therefore, it will merely discuss one activity focused on how students use the algebra tiles to find a method to solve linear equations with one variable. Design research was used as the approach in this study. It consists of three phases, namely preliminary design, teaching experiment and retrospective analysis. Video recordings, students' written work, a pre-test, a post-test, field notes, and interviews were used to collect data. The data were analyzed by comparing the hypothetical learning trajectory (HLT) with the actual learning process. The result shows that algebra tiles can support students' understanding in finding the formal solution of linear equations with one variable.
Directory of Open Access Journals (Sweden)
Marcos Fernández-Martínez
2017-11-01
Full Text Available Mast seeding, the extremely variable and synchronized production of fruits, is a common reproductive behavior in plants. Weather is centrally involved in driving masting. Yet, it is often claimed that it cannot be the sole proximate cause of masting because weather is less variable than fruit production and because the shape of their distributions differ. We used computer simulations to demonstrate that the assumption that weather cannot be the main driver of masting was only valid for linear relationships between weather and fruit production. Non-linear relationships between interannual variability in weather and crop size, however, can account for the differences in their variability and the shape of their distributions because of Jensen's inequality. Exponential relationships with weather can increase the variability of fruit production, and sigmoidal relationships can produce bimodal distributions. These results challenge the idea that meteorological variability cannot be the main proximate driver of mast seeding, returning meteorological variability to the forefront of masting research.
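The Jensen's-inequality argument is easy to check by simulation: passing a moderately variable weather index through an exponential response inflates the coefficient of variation and skews the resulting crop distribution, while a linear response cannot. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
weather = rng.normal(1.0, 0.3, size=100_000)  # interannual weather index

def cv(a):
    """Coefficient of variation, the usual masting variability measure."""
    return a.std() / a.mean()

crop_linear = 2.0 * weather + 1.0  # linear weather -> crop response
crop_exp = np.exp(3.0 * weather)   # exponential (non-linear) response

cv_weather, cv_lin, cv_exp = cv(weather), cv(crop_linear), cv(crop_exp)
# the exponential response is far more variable than the weather driving it,
# and its distribution is right-skewed (mean well above the median)
```

With these numbers the exponential response roughly triples the CV relative to the weather series, while the linear response leaves it comparable or smaller — the paper's central point.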
The effect of workload constraints in linear programming models for production planning
Jansen, M.M.; Kok, de A.G.; Adan, I.J.B.F.
2011-01-01
Linear programming (LP) models for production planning incorporate a model of the manufacturing system that is necessarily deterministic. Although these deterministic models are the current state-of-the-art, it should be recognized that they are used in an environment that is inherently stochastic.
The number of subjects per variable required in linear regression analyses
P.C. Austin (Peter); E.W. Steyerberg (Ewout)
2015-01-01
Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression
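A minimal version of such a Monte Carlo study (with an invented data-generating model, not the authors' design) compares the accuracy of OLS coefficient estimates at low and high subjects-per-variable:

```python
import numpy as np

def coef_rmse(n_subjects, n_vars=5, n_reps=200, seed=1):
    """RMSE of OLS coefficient estimates at a given sample size."""
    rng = np.random.default_rng(seed)
    beta = np.ones(n_vars)  # true coefficients (illustrative)
    errs = []
    for _ in range(n_reps):
        X = rng.normal(size=(n_subjects, n_vars))
        y = X @ beta + rng.normal(size=n_subjects)  # unit-variance noise
        b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs.append(np.mean((b_hat - beta) ** 2))
    return np.sqrt(np.mean(errs))

rmse_spv2 = coef_rmse(n_subjects=2 * 5)    # 2 subjects per variable
rmse_spv50 = coef_rmse(n_subjects=50 * 5)  # 50 subjects per variable
```

Sweeping `n_subjects` over a range of SPV values and plotting the RMSE reproduces the qualitative pattern such simulation studies examine: estimation error shrinks rapidly as SPV grows.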
Approximation of functions in two variables by some linear positive operators
Directory of Open Access Journals (Sweden)
Mariola Skorupka
1995-12-01
Full Text Available We introduce some linear positive operators of the Szasz-Mirakjan type in weighted spaces of continuous functions of two variables. We study the degree of approximation of functions by these operators. Similar results for functions of one variable are given in [5]. Some operators of the Szasz-Mirakjan type are also examined in [3], [4].
Akbulut, Yavuz
2007-01-01
Factors predicting vocabulary learning and reading comprehension of advanced language learners of English in a linear multimedia text were investigated in the current study. Predictor variables of interest were multimedia type, reading proficiency, learning styles, topic interest and background knowledge about the topic. The outcome variables of…
Fuzzy solution of the linear programming problem with interval coefficients in the constraints
Dorota Kuchta
2005-01-01
A fuzzy concept of solving the linear programming problem with interval coefficients is proposed. For each optimism level of the decision maker (where the optimism concerns the certainty that no errors have been committed in the estimation of the interval coefficients and the belief that optimistic realisations of the interval coefficients will occur) another interval solution of the problem will be generated and the decision maker will be able to choose the final solution having a complete v...
A fresh look at linear cosmological constraints on a decaying Dark Matter component
Energy Technology Data Exchange (ETDEWEB)
Poulin, Vivian; Serpico, Pasquale D. [LAPTh, Université Savoie Mont Blanc and CNRS, BP 110, Annecy-le-Vieux Cedex, F-74941 France (France); Lesgourgues, Julien, E-mail: Vivian.Poulin@lapth.cnrs.fr, E-mail: Pasquale.Serpico@lapth.cnrs.fr, E-mail: Julien.Lesgourgues@physik.rwth-aachen.de [Institute for Theoretical Particle Physics and Cosmology (TTK), RWTH Aachen University, Sommerfeld str. 16, Aachen, D-52056 Germany (Germany)
2016-08-01
We consider a cosmological model in which a fraction f {sub dcdm} of the Dark Matter (DM) is allowed to decay in an invisible relativistic component, and compute the resulting constraints on both the decay width (or inverse lifetime) Γ{sub dcdm} and f {sub dcdm} from purely gravitational arguments. We report a full derivation of the Boltzmann hierarchy, correcting a mistake in previous literature, and compute the impact of the decay—as a function of the lifetime—on the CMB and matter power spectra. From CMB only, we obtain that no more than 3.8% of the DM could have decayed in the time between recombination and today (all bounds quoted at 95% CL). We also comment on the important application of this bound to the case where primordial black holes constitute DM, a scenario notoriously difficult to constrain. For lifetimes longer than the age of the Universe, the bounds can be cast as f {sub dcdm}Γ{sub dcdm} < 6.3×10{sup -3} Gyr{sup -1}. For the first time, we also checked that degeneracies with massive neutrinos are broken when information from the large scale structure is used. Even secondary effects like CMB lensing suffice to this purpose. Decaying DM models have been invoked to solve a possible tension between low redshift astronomical measurements of σ{sub 8} and Ω{sub m} and the ones inferred by Planck. We reassess this claim finding that with the most recent BAO, HST and σ{sub 8} data extracted from the CFHT survey, the tension is only slightly reduced despite the two additional free parameters. Nonetheless, the existing tension explains why the bound on f {sub dcdm}Γ{sub dcdm} loosens to f {sub dcdm}Γ{sub dcdm} < 15.9×10{sup -3} Gyr{sup -1} when including such additional data. The bound however improves to f {sub dcdm}Γ{sub dcdm} < 5.9 ×10{sup -3} Gyr{sup -1} if only data consistent with the CMB are included. This highlights the importance of establishing whether the tension is due to real physical effects or unaccounted systematics, for
Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi
2017-10-09
Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.
SUPPORTING STUDENTS’ UNDERSTANDING OF LINEAR EQUATIONS WITH ONE VARIABLE USING ALGEBRA TILES
Directory of Open Access Journals (Sweden)
Sari Saraswati
2016-01-01
Full Text Available This research aimed to describe how algebra tiles can support students' understanding of linear equations with one variable. This article is part of a larger research project on the learning design of linear equations with one variable using algebra tiles combined with the balancing method. Therefore, it will merely discuss one activity focused on how students use the algebra tiles to find a method to solve linear equations with one variable. Design research was used as the approach in this study. It consists of three phases, namely preliminary design, teaching experiment and retrospective analysis. Video recordings, students' written work, a pre-test, a post-test, field notes, and interviews were used to collect data. The data were analyzed by comparing the hypothetical learning trajectory (HLT) with the actual learning process. The result shows that algebra tiles can support students' understanding in finding the formal solution of linear equations with one variable.
Keywords: linear equation with one variable, algebra tiles, design research, balancing method, HLT
DOI: http://dx.doi.org/10.22342/jme.7.1.2814.19-30
Latest astronomical constraints on some non-linear parametric dark energy models
Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos
2018-04-01
We consider non-linear redshift-dependent equation-of-state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of these parametric models and fit them using a set of the latest observational data of distinct origin, which includes cosmic microwave background radiation, Type Ia Supernovae, baryon acoustic oscillations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting uses the publicly available code Cosmological Monte Carlo (COSMOMC) to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that these models can describe the late-time accelerating phase of the universe while remaining distinguishable from Λ-cosmology.
Directory of Open Access Journals (Sweden)
Prasenjit D. Wakode
2016-07-01
Full Text Available This paper presents a complete analysis of the Linear Induction Motor (LIM) under variable-voltage variable-frequency (VVVF) supply. The variation of the LIM air-gap flux under the 'blocked Linor' condition and the starting force are analyzed and presented for a VVVF supply. Analysis of these data is important for further understanding the equivalent circuit parameters of the LIM and for studying its magnetic circuit. The variation of these parameters with frequency is important for knowing the LIM response at different frequencies. The simulation and application of different control strategies, such as vector control, thus become straightforward, as does the study of the motor's response under such a control strategy.
Directory of Open Access Journals (Sweden)
Salvador Lucas
2015-12-01
Full Text Available Recent developments in termination analysis for declarative programs emphasize the use of appropriate models for the logical theory representing the program at stake as a generic approach to prove termination of declarative programs. In this setting, Order-Sorted First-Order Logic provides a powerful framework to represent declarative programs. It also provides a target logic to obtain models for other logics via transformations. We investigate the automatic generation of numerical models for order-sorted first-order logics and its use in program analysis, in particular in termination analysis of declarative programs. We use convex domains to give domains to the different sorts of an order-sorted signature; we interpret the ranked symbols of sorted signatures by means of appropriately adapted convex matrix interpretations. Such numerical interpretations permit the use of existing algorithms and tools from linear algebra and arithmetic constraint solving to synthesize the models.
Linear variable differential transformer sensor using glass-covered amorphous wires as active core
International Nuclear Information System (INIS)
Chiriac, H.; Hristoforou, E.; Neagu, Maria; Pieptanariu, M.
2000-01-01
Results concerning a linear variable differential transformer (LVDT) displacement sensor that uses glass-covered amorphous wires as its movable core are presented. The LVDT response is linear for core displacements up to about 14 mm, with an accuracy of 1 μm. An LVDT with a glass-covered amorphous wire as the active core offers high sensitivity and good mechanical and corrosion resistance.
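As a sketch of how an LVDT's linear range is exploited in practice, the standard ratiometric readout can be shown; this is the textbook scheme, not necessarily the conditioning used in this study, and the sensitivity value below is an assumption:

```python
def lvdt_displacement(v_sec1, v_sec2, sensitivity_mm_per_unit):
    """Estimate core displacement from the two secondary voltages of an LVDT.

    Uses the ratiometric reading (V1 - V2) / (V1 + V2), which is linear in
    core position over the sensor's linear range and insensitive to drift
    in the excitation amplitude.
    """
    ratio = (v_sec1 - v_sec2) / (v_sec1 + v_sec2)
    return sensitivity_mm_per_unit * ratio

# Centered core: the two secondaries are balanced, so displacement is zero.
print(lvdt_displacement(2.0, 2.0, 14.0))  # 0.0

# Off-center core: the imbalance maps linearly to displacement.
print(lvdt_displacement(3.0, 1.0, 14.0))  # 7.0
```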
Low voltage RF MEMS variable capacitor with linear C-V response
Elshurafa, Amro M.; Ho, Pak Hung; Salama, Khaled N.
2012-01-01
The measured quality factor of the device was 17 at 1 GHz, while the tuning range was 1.2:1 and was achieved at an actuation DC voltage of only 8 V. Further, the linear regression coefficient was 0.98. The variable capacitor was created such that it has both vertical and horizontal capacitances present. As the top suspended plate moves towards the bottom fixed plate, the vertical capacitance increases whereas the horizontal capacitance decreases, such that the sum of the two yields a linear capacitance-voltage relation.
Non-commutative linear algebra and plurisubharmonic functions of quaternionic variables
Alesker, Semyon
2003-01-01
We recall known and establish new properties of the Dieudonné and Moore determinants of quaternionic matrices. Using these linear-algebraic results, we develop a basic theory of plurisubharmonic functions of quaternionic variables. Then we introduce and briefly discuss quaternionic Monge-Ampère equations.
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The strength of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Brigatti, M. F.; Elmi, C.; Laurora, A.; Malferrari, D.; Medici, L.
2009-04-01
A particularly demanding issue, from both an environmental and an economic viewpoint, is the management of polluted sediments removed from drainage and irrigation canals. To retain their functionality over time, canals need their beds periodically cleaned of the sediments that accumulate there. Managing the removed sediments is costly, especially if they must be treated as hazardous waste, as required by numerous international standards, and the disposal of such a large amount of material may itself have a significant environmental impact. An appealing alternative is the recovery or reuse of these materials, for example in the brick and tile industry, after applying appropriate techniques and protocols that render them no longer a threat to human health. Assessing the actual hazard that sediments pose to human health and the ecosystem, before and after treatment, requires careful chemical and mineralogical characterization and, even if not always considered in international standards, the determination of the coordination shell of heavy metals, which can be hazardous to human health as a function of their oxidation state and coordination (e.g. Cr and Pb) or can introduce technological constraints and affect the features of the end products. Fe is a good representative of this second category, since features of the end product such as color depend strongly not only on Fe concentration but also on its oxidation state, speciation and coordination. This work first provides a mineralogical characterization of sediments from various sampling points along irrigation and drainage canals of the Po river region in north-eastern Italy. Samples were investigated with several approaches, including X-ray powder diffraction under non-ambient conditions, thermal analysis and EXAFS spectroscopy. Obtained results, and in particular
Constraints on the atmospheric circulation and variability of the eccentric hot Jupiter XO-3b
Energy Technology Data Exchange (ETDEWEB)
Wong, Ian; Knutson, Heather A. [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States); Cowan, Nicolas B. [Center for Interdisciplinary Exploration and Astrophysics (CIERA), Department of Earth and Planetary Sciences, Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States); Lewis, Nikole K. [Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Agol, Eric [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Burrows, Adam [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Deming, Drake [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Fortney, Jonathan J.; Laughlin, Gregory [Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95604 (United States); Fulton, Benjamin J. [Institute for Astronomy, University of Hawaii, Honolulu, HI 96822 (United States); Langton, Jonathan [Department of Physics, Principia College, Elsah, IL 62028 (United States); Showman, Adam P., E-mail: iwong@caltech.edu [Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ 85721 (United States)
2014-10-20
We report secondary eclipse photometry of the hot Jupiter XO-3b in the 4.5 μm band taken with the Infrared Array Camera on the Spitzer Space Telescope. We measure individual eclipse depths and center of eclipse times for a total of 12 secondary eclipses. We fit these data simultaneously with two transits observed in the same band in order to obtain a global best-fit secondary eclipse depth of 0.1580% ± 0.0036% and a center of eclipse phase of 0.67004 ± 0.00013. We assess the relative magnitude of variations in the dayside brightness of the planet by measuring the size of the residuals during ingress and egress from fitting the combined eclipse light curve with a uniform disk model and place an upper limit of 0.05%. The new secondary eclipse observations extend the total baseline from one and a half years to nearly three years, allowing us to place an upper limit on the periastron precession rate of 2.9 × 10⁻³ deg day⁻¹, the tightest constraint to date on the periastron precession rate of a hot Jupiter. We use the new transit observations to calculate improved estimates for the system properties, including an updated orbital ephemeris. We also use the large number of secondary eclipses to obtain the most stringent limits to date on the orbit-to-orbit variability of an eccentric hot Jupiter and demonstrate the consistency of multiple-epoch Spitzer observations.
Low voltage RF MEMS variable capacitor with linear C-V response
Elshurafa, Amro M.
2012-07-23
An RF MEMS variable capacitor, fabricated in the PolyMUMPS process and tuned electrostatically, possessing a linear capacitance-voltage response is reported. The measured quality factor of the device was 17 at 1 GHz, while the tuning range was 1.2:1 and was achieved at an actuation DC voltage of only 8 V. Further, the linear regression coefficient was 0.98. The variable capacitor was created such that it has both vertical and horizontal capacitances present. As the top suspended plate moves towards the bottom fixed plate, the vertical capacitance increases whereas the horizontal capacitance decreases simultaneously, such that the sum of the two capacitances yields a linear capacitance-voltage relation.
Thosar, Archana; Patra, Amit; Bhattacharyya, Souvik
2008-07-01
Design of a nonlinear control system for a Variable Air Volume Air Conditioning (VAVAC) plant through feedback linearization is presented in this article. VAVAC systems attempt to reduce building energy consumption while maintaining the primary role of air conditioning. The temperature of the space is maintained at a constant level by establishing a balance between the cooling load generated in the space and the air supply delivered to meet the load. The dynamic model of a VAVAC plant is derived and formulated as a MIMO bilinear system. Feedback linearization is applied for decoupling and linearization of the nonlinear model. Simulation results for a laboratory-scale plant are presented to demonstrate the potential of this methodology to maintain comfort while achieving energy-optimal performance. Results obtained with a conventional PI controller and a feedback linearizing controller are compared, and the superiority of the proposed approach is clearly established.
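The feedback-linearization idea invoked here can be illustrated with a minimal single-input sketch; the toy bilinear system below is an assumption for illustration, not the paper's VAVAC model:

```python
def feedback_linearizing_control(x, v, f, g):
    """For a single-input system x' = f(x) + g(x) * u, the control
    u = (v - f(x)) / g(x) cancels the nonlinearity, so the closed loop
    obeys the linear dynamics x' = v (valid wherever g(x) != 0)."""
    return (v - f(x)) / g(x)

# Toy bilinear example: x' = -x + x*u, i.e. f(x) = -x and g(x) = x.
f = lambda x: -x
g = lambda x: x

x = 2.0   # current state
v = -1.0  # desired linear closed-loop derivative x' = v
u = feedback_linearizing_control(x, v, f, g)
print(f(x) + g(x) * u)  # closed-loop derivative equals v: -1.0
```

For the MIMO bilinear VAVAC model the same cancellation is done with a decoupling matrix instead of the scalar g(x), which is what yields simultaneous decoupling and linearization.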
Linear solvation energy relationships: "rule of thumb" for estimation of variable values
Hickey, James P.; Passino-Reader, Dora R.
1991-01-01
For the linear solvation energy relationship (LSER), values are listed for each of the variables (Vi/100, π*, βm, αm) for fundamental organic structures and functional groups. We give guidelines for quickly estimating LSER variable values for a vast array of possible organic compounds such as those found in the environment. The difficulty in generating these variables has greatly discouraged the application of this quantitative structure-activity relationship (QSAR) method. This paper presents the first compilation of molecular functional group values together with a utilitarian set of LSER variable estimation rules. The availability of these variable values and rules should facilitate widespread application of LSER for hazard evaluation of environmental contaminants.
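A generic LSER estimate is a linear combination of the variables listed above. The sketch below shows the form only; the coefficients and solute values are placeholders for illustration, not values from the paper:

```python
def lser_log_property(V_100, pi_star, beta_m, alpha_m, coeffs):
    """Generic LSER form:
        log(property) = c0 + cV*(V/100) + cP*pi* + cB*beta_m + cA*alpha_m
    The coefficients depend on the property modeled (e.g. solubility or
    toxicity) and come from regression against measured data."""
    c0, cV, cP, cB, cA = coeffs
    return c0 + cV * V_100 + cP * pi_star + cB * beta_m + cA * alpha_m

# Hypothetical coefficient set and solute descriptors, illustration only.
coeffs = (0.2, 2.7, -0.5, -3.4, -0.1)
print(lser_log_property(0.5, 0.6, 0.4, 0.0, coeffs))
```

The compilation of functional-group values in the paper supplies the per-solute inputs (V/100, π*, βm, αm); the coefficients are property-specific.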
Directory of Open Access Journals (Sweden)
Mingqi Xiang
2013-04-01
Full Text Available In this article, we study a class of nonlocal quasilinear parabolic variational inequalities involving the $p(x)$-Laplacian operator and a gradient constraint on a bounded domain. Choosing a special penalty functional according to the gradient constraint, we transform the variational inequality into a parabolic equation. By means of Galerkin's approximation method, we obtain the existence of weak solutions for this equation, and then, through a priori estimates, we obtain weak solutions of the variational inequality.
Robust best linear estimation for regression analysis using surrogate and instrumental variables.
Wang, C Y
2012-04-01
We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.
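The paper's robust best linear estimator is beyond a short sketch, but the instrumental-variable idea it builds on can be illustrated with plain two-stage least squares on simulated data; this is a simplified stand-in, not the proposed estimator:

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Basic 2SLS for one error-prone exposure x with instrument z.
    Stage 1 regresses x on z; stage 2 regresses y on the fitted values,
    which removes bias from both confounding and measurement error."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: predict the exposure from the instrument.
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Stage 2: regress the outcome on the predicted exposure.
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # slope estimate for the exposure effect

rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)                  # instrument
u = rng.normal(size=n)                  # unmeasured confounder
x_true = z + u                          # true exposure, correlated with z and u
x_obs = x_true + rng.normal(size=n)     # surrogate with classical measurement error
y = 2.0 * x_true + u + rng.normal(size=n)

print(two_stage_least_squares(y, x_obs, z))  # close to the true effect 2.0
```

A naive regression of y on x_obs would be biased both by the confounder u and by the attenuation from measurement error; the instrument restores consistency.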
A linearly-acting variable-reluctance generator for thermoacoustic engines
International Nuclear Information System (INIS)
Hail, Claudio U.; Knodel, Philip C.; Lang, Jeffrey H.; Brisson, John G.
2015-01-01
Highlights: • A new design for a linear alternator for thermoacoustic power converters is presented. • A theoretical and semi-empirical model of the generator is developed and validated. • The variable-reluctance generator's performance is experimentally characterized. • Scaling to higher frequency suggests efficient operation with thermoacoustic engines. - Abstract: A crucial element in a thermoacoustic power converter for reliable small-scale power generation applications is an efficient acoustic-to-electric energy converter. In this work, an acoustic-to-electric transducer for application with a back-to-back standing wave thermoacoustic engine, based on a linearly-acting variable-reluctance generator, is proposed, built and experimentally tested. Static and dynamic experiments are performed on one side of the generator on a shaker table at 60 Hz with 5 mm peak-to-peak displacement for performance characterization. Theoretical and empirical models of the variable-reluctance generator are presented and validated with experimental data. A frequency scaling based on the empirical model indicates that a maximum power output of 84 W at 78% generator efficiency is feasible at the thermoacoustic engine's operating frequency of 250 Hz, not considering power electronic losses. This suggests that the linearly-acting variable-reluctance generator can efficiently convert high-frequency, small-amplitude acoustic oscillations to useful electricity and thus enables its integration into a thermoacoustic power converter.
Observations on the variability of linear polarization in late-type dwarf stars
Energy Technology Data Exchange (ETDEWEB)
Huovelin, J.; Linnaluoto, S.; Tuominen, I.; Virtanen, H.
1989-04-01
Broadband (UBV) linear polarimetric observations of a sample of late-type (F7-K5) dwarfs are reported. The observations include ten stars and extend over a maximum of 20 nights. Seven stars show significant temporal variability of polarization, which could be interpreted as rotational modulation due to slowly varying magnetic regions. Magnetic intensification in saturated Zeeman sensitive absorption lines is suggested as the dominant effect connecting linear polarization with magnetic activity in the most active single late-type dwarfs, while the wavelength dependence in the less active stars could also be due to a combination of Rayleigh and Thomson scattering.
Noiseless Linear Amplifiers in Entanglement-Based Continuous-Variable Quantum Key Distribution
Directory of Open Access Journals (Sweden)
Yichen Zhang
2015-06-01
Full Text Available We propose a method to improve the performance of two entanglement-based continuous-variable quantum key distribution protocols using noiseless linear amplifiers. The two entanglement-based schemes consist of an entanglement distribution protocol with an untrusted source and an entanglement swapping protocol with an untrusted relay. Simulation results show that the noiseless linear amplifiers can improve the performance of these two protocols, in terms of maximal transmission distances, when we consider small amounts of entanglement, as typical in realistic setups.
EXPLORING THE VARIABLE SKY WITH LINEAR. III. CLASSIFICATION OF PERIODIC LIGHT CURVES
Energy Technology Data Exchange (ETDEWEB)
Palaversa, Lovro; Eyer, Laurent; Rimoldini, Lorenzo [Observatoire Astronomique de l' Université de Genève, 51 chemin des Maillettes, CH-1290 Sauverny (Switzerland); Ivezić, Željko; Loebman, Sarah; Hunt-Walker, Nicholas; VanderPlas, Jacob; Westman, David; Becker, Andrew C. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Ruždjak, Domagoj; Sudar, Davor; Božić, Hrvoje [Hvar Observatory, Faculty of Geodesy, Kačićeva 26, 10000 Zagreb (Croatia); Galin, Mario [Faculty of Geodesy, Kačićeva 26, 10000 Zagreb (Croatia); Kroflin, Andrea; Mesarić, Martina; Munk, Petra; Vrbanec, Dijana [Department of Physics, Faculty of Science, University of Zagreb, Bijenička cesta 32, 10000 Zagreb (Croatia); Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, Caltech, Pasadena, CA 91125 (United States); Stuart, J. Scott [Lincoln Laboratory, Massachusetts Institute of Technology, 244 Wood Street, Lexington, MA 02420-9108 (United States); Srdoč, Gregor, E-mail: lovro.palaversa@unige.ch [Saršoni 90, 51216 Viškovo (Croatia); and others
2013-10-01
We describe the construction of a highly reliable sample of ∼7000 optically faint periodic variable stars with light curves obtained by the asteroid survey LINEAR across 10,000 deg² of the northern sky. The majority of these variables have not been cataloged yet. The sample flux limit is several magnitudes fainter than most other wide-angle surveys; the photometric errors range from ∼0.03 mag at r = 15 to ∼0.20 mag at r = 18. Light curves include on average 250 data points, collected over about a decade. Using Sloan Digital Sky Survey (SDSS) based photometric recalibration of the LINEAR data for about 25 million objects, we selected ∼200,000 most probable candidate variables with r < 17 and visually confirmed and classified ∼7000 periodic variables using phased light curves. The reliability and uniformity of visual classification across eight human classifiers was calibrated and tested using a catalog of variable stars from the SDSS Stripe 82 region and verified using an unsupervised machine learning approach. The resulting sample of periodic LINEAR variables is dominated by 3900 RR Lyrae stars and 2700 eclipsing binary stars of all subtypes and includes small fractions of relatively rare populations such as asymptotic giant branch stars and SX Phoenicis stars. We discuss the distribution of these mostly uncataloged variables in various diagrams constructed with optical-to-infrared SDSS, Two Micron All Sky Survey, and Wide-field Infrared Survey Explorer photometry, and with LINEAR light-curve features. We find that the combination of light-curve features and colors enables classification schemes much more powerful than when colors or light curves are each used separately. An interesting side result is a robust and precise quantitative description of a strong correlation between the light-curve period and color/spectral type for close and contact eclipsing binary stars (β Lyrae and W UMa): as the color-based spectral type varies from K4 to F5, the
A Non-linear "Inflation-Relative Prices Variability" Relationship: Evidence from Latin America
Mª Ángeles Caraballo Pou; Carlos Dabús; Diego Caramuta
2006-01-01
This paper presents evidence on a non-linear "inflation-relative prices variability" relationship in three Latin American countries with very high inflation experiences: Argentina, Brazil and Peru. More precisely, and in contrast to results found in previous literature for similar countries, we find a non-concave relation at higher inflation regimes, i.e. when inflation rate surpasses certain threshold. This non-concavity is mainly explained by the unexpected component of inflation, which sug...
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable work recently developed for consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
Darmon, Nicole; Ferguson, Elaine L; Briend, André
2002-12-01
Economic constraints may contribute to the unhealthy food choices observed among low socioeconomic groups in industrialized countries. The objective of the present study was to predict the food choices a rational individual would make to reduce his or her food budget, while retaining a diet as close as possible to the average population diet. Isoenergetic diets were modeled by linear programming. To ensure these diets were consistent with habitual food consumption patterns, departure from the average French diet was minimized and constraints that limited portion size and the amount of energy from food groups were introduced into the models. A cost constraint was introduced and progressively strengthened to assess the effect of cost on the selection of foods by the program. Strengthening the cost constraint reduced the proportion of energy contributed by fruits and vegetables, meat and dairy products and increased the proportion from cereals, sweets and added fats, a pattern similar to that observed among low socioeconomic groups. This decreased the nutritional quality of the modeled diets; notably, the lowest-cost linear programming diets had lower vitamin C and beta-carotene densities than the mean French adult diet. These results indicate that a cost constraint can decrease the nutrient densities of diets and influence food selection in ways that reproduce the food intake patterns observed among low socioeconomic groups. They suggest that economic measures will be needed to effectively improve the nutritional quality of diets consumed by these populations.
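A minimal cost-constrained diet model in the spirit of the study can be sketched with scipy's linear-programming solver; the three foods and all the nutrient figures below are invented for illustration, not the study's food database:

```python
from scipy.optimize import linprog

# Toy diet model: choose grams of three foods (cereal, fruit, added fat)
# to meet energy and vitamin-C floors at minimum cost.
costs = [0.002, 0.010, 0.004]   # euros per gram
energy = [3.5, 0.5, 9.0]        # kcal per gram
vit_c = [0.0, 0.5, 0.0]         # mg vitamin C per gram

res = linprog(
    c=costs,                                   # minimize daily cost
    A_ub=[[-e for e in energy],                # -energy <= -2000  (>= 2000 kcal)
          [-v for v in vit_c]],                # -vitC   <= -90    (>= 90 mg)
    b_ub=[-2000, -90],
    bounds=[(0, 600)] * 3,                     # portion-size limits
    method="highs",
)
print(res.x)    # optimal grams of each food
print(res.fun)  # minimum daily cost in euros
```

Even this toy model reproduces the paper's qualitative finding: as the cost objective dominates, the cheapest energy source (added fat here) crowds out the expensive vitamin-rich food, which enters only at the level forced by the nutrient floor.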
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
A search for time variability and its possible regularities in linear polarization of Be stars
International Nuclear Information System (INIS)
Huang, L.; Guo, Z.H.; Hsu, J.C.; Huang, L.
1989-01-01
Linear polarization measurements are presented for 14 Be stars obtained at McDonald Observatory during four observing runs from June to November of 1983. Methods of observation and data reduction are described. Seven of eight program stars that were observed on six or more nights exhibited obvious polarimetric variations on time-scales of days or months. The incidence is estimated as 50% and may be as high as 93%. No connection can be found between polarimetric variability and rapid periodic light or spectroscopic variability for our stars. Ultra-rapid variability on time-scales of minutes was searched for, with negative results. In all cases the position angles also show variations, indicating that the axis of symmetry of the circumstellar envelope changes its orientation in space. For the Be binary CX Dra, the variations in polarization seem to have a period that is just half of the orbital period.
Directory of Open Access Journals (Sweden)
Hideki Katagiri
2017-10-01
Full Text Available This paper considers linear programming problems (LPPs where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables. New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.
Rodríguez-Barranco, Miguel; Tobías, Aurelio; Redondo, Daniel; Molina-Portillo, Elena; Sánchez, María José
2017-03-17
Meta-analysis is very useful to summarize the effect of a treatment or a risk factor for a given disease. Often studies report results based on log-transformed variables in order to satisfy the principal assumptions of a linear regression model. If this is the case for some, but not all, studies, the effects need to be homogenized. We derived a set of formulae to transform absolute changes into relative ones, and vice versa, to allow including all results in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables either normally or asymmetrically distributed. In all the scenarios, and based on different change criteria, the effect size estimated by the derived set of formulae was equivalent to the real effect size. To avoid biased estimates of the effect, this procedure should be used with caution in the case of independent variables with asymmetric distributions that differ significantly from the normal distribution. We illustrate this procedure with an application to a meta-analysis on the potential effects on neurodevelopment in children exposed to arsenic and manganese. The proposed procedure has been shown to be valid and capable of expressing the effect size of a linear regression model based on different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations had been performed on the dependent and/or independent variables.
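The simplest conversion in this family, for a log-transformed dependent variable, can be sketched as follows; the paper's full formula set covers more combinations (log-transformed independent variables and both at once), which this sketch does not attempt:

```python
import math

def absolute_to_relative(beta):
    """Slope from a regression with log-transformed outcome ->
    relative change in the outcome per unit of exposure."""
    return math.exp(beta) - 1.0

def relative_to_absolute(rel_change):
    """Inverse conversion: relative change -> slope on the log scale."""
    return math.log(1.0 + rel_change)

# A coefficient of 0.05 on log(y) corresponds to roughly a 5.1% increase
# in y per unit of exposure; the exact relative change is exp(0.05) - 1.
beta = 0.05
rel = absolute_to_relative(beta)
print(100 * rel)                 # percent change per unit of exposure
print(relative_to_absolute(rel)) # round-trips back to 0.05
```

Converting every study's effect onto a common (absolute or relative) scale in this way is what allows them to be pooled in one meta-analysis.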
Nursyahidah, F.; Saputro, B. A.; Rubowo, M. R.
2018-03-01
The aim of this research is to investigate students' understanding of systems of linear equations in two variables using Ethnomathematics and to construct a learning trajectory for this topic for second-grade lower secondary school students. The research used the design-research methodology, which consists of three phases: preliminary design, teaching experiment, and retrospective analysis. The subjects of this study were 28 second-grade students of Sekolah Menengah Pertama (SMP) 37 Semarang. The results show that students' understanding of systems of linear equations in two variables can be stimulated by using Ethnomathematics, with the selling-buying tradition of the Peterongan traditional market in Central Java as a context. The strategies and models applied by the students, together with their discussions, show how students' own constructions and contributions help them understand the concept. The activities carried out by the students produce a learning trajectory toward the learning goal, and each step of the trajectory plays an important role in moving their understanding from the informal to the formal level. The resulting Ethnomathematics-based learning trajectory consists of watching a video of selling-buying activity in the Peterongan traditional market to construct a linear equation in two variables, determining the solution of a linear equation in two variables, constructing a model of a system of linear equations in two variables from a contextual problem, and solving a contextual problem related to such a system.
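A system of linear equations in two variables of the kind built from such market contexts can be solved as follows; the goods and prices are illustrative assumptions, not taken from the Peterongan lesson:

```python
# Market-style word problem: 2 kg of rice plus 1 kg of sugar cost 26
# (thousand rupiah), while 1 kg of rice plus 2 kg of sugar cost 28.
# In matrix form this is A @ [rice, sugar] = b.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([26.0, 28.0])

rice, sugar = np.linalg.solve(A, b)
print(rice, sugar)  # 8.0 10.0
```

This is the formal end of the trajectory the abstract describes; the informal steps (tables, guess-and-check on the market prices) lead students to the same elimination that `np.linalg.solve` performs.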
Linear ketenimines. Variable structures of C,C-dicyanoketenimines and C,C-bis-sulfonylketenimines.
Finnerty, Justin; Mitschke, Ullrich; Wentrup, Curt
2002-02-22
C,C-dicyanoketenimines 10a-c were generated by flash vacuum thermolysis of ketene N,S-acetals 9a-c or by thermal or photochemical decomposition of alpha-azido-beta-cyanocinnamonitrile 11. In the latter reaction, 3,3-dicyano-2-phenyl-1-azirine 12 is also formed. IR spectroscopy of the ketenimines isolated in Ar matrixes or as neat films, NMR spectroscopy of 10c, and theoretical calculations (B3LYP/6-31G) demonstrate that these ketenimines have variable geometry, being essentially linear along the CCN-R framework in polar media (neat films and solution), but in the gas phase or Ar matrix they are bent, as is usual for ketenimines. Experiments and calculations agree that a single CN substituent as in 13 is not enough to enforce linearity, and sulfonyl groups are less effective than cyano groups in causing linearity. C,C-bis(methylsulfonyl)ketenimines 4-5 and a C-cyano-C-(methylsulfonyl)ketenimine 15 are not linear. The compound p-O2NC6H4N=C=C(COOMe)2 previously reported in the literature is probably somewhat linearized along the CCNR moiety. A computational survey (B3LYP/6-31G) of the inversion barrier at nitrogen indicates that electronegative C-substituents dramatically lower the barrier; this is also true of N-acyl substituents. Increasing polarity causes lower barriers. Although N-alkylbis(methylsulfonyl)ketenimines are not calculated to be linear, the barriers are so low that crystal lattice forces can induce planarity in N-methylbis(methylsulfonyl)ketenimine 3.
Lunt, Mark
2015-07-01
In the first article in this series we explored the use of linear regression to predict an outcome variable from a number of predictive factors. That article assumed that the predictive factors were measured on an interval scale. However, this article shows how categorical variables can also be included in a linear regression model, enabling predictions to be made separately for different groups and allowing the hypothesis that the outcome differs between groups to be tested. The use of interaction terms to measure whether the effect of a particular predictor variable differs between groups is also explained. An alternative approach to testing whether the effect of a given predictor differs between groups, namely measuring the effect in each group separately and seeing whether the statistical significance differs between the groups, is shown to be misleading. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
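The abstract above describes including a categorical (group) variable and a group-by-predictor interaction in a linear regression. The article's own examples are not reproduced here; the sketch below is a minimal illustration with simulated data and dummy coding, using numpy least squares (all variable names and the true coefficients are assumptions for the demonstration).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)   # 0/1 categorical predictor, dummy coded
x = rng.normal(size=n)          # interval-scale predictor
# Assumed true model: intercept 1.0, slope 2.0, group shift 3.0, interaction 0.5
y = 1.0 + 2.0 * x + 3.0 * group + 0.5 * x * group + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, x, group dummy, and x*group interaction term.
# The interaction coefficient measures how much the slope of x differs
# between the two groups, as described in the abstract.
X = np.column_stack([np.ones(n), x, group, x * group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # estimates close to [1.0, 2.0, 3.0, 0.5]
```

A single fitted model with an interaction term gives a direct test of the slope difference, unlike the misleading per-group significance comparison the abstract warns against.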
Directory of Open Access Journals (Sweden)
Kalidas Das
2018-03-01
Full Text Available The flow, heat transfer and mass transfer characteristics of MHD forced convective flow over a linearly stretching porous sheet are examined in the present work. Relevant fluid properties, namely viscosity and thermal conductivity, are taken to be variable, depending directly on the flow temperature. After deriving the governing equations of the flow, Lie symmetry group transformations are employed to find the appropriate similarity transformations that convert the governing PDEs into a set of ODEs. The transformed system of ODEs with appropriate boundary conditions is solved numerically with the software MAPLE 17. The effects of the relevant parameters of the system are illustrated through tables and graphs. A comparative study between the present investigation and existing results shows excellent agreement. The variable viscosity parameter has a more significant effect on nanofluid velocity than on that of a regular fluid, and the temperature profile as well as the nanoparticle concentration is also influenced by variable viscosity. Keywords: Nanofluid, Stretching sheet, Variable viscosity, Variable thermal conductivity, Lie symmetry group
Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size
Hadjimichael, Yiannis
2016-09-08
Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.
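The paper's variable step-size multistep formulas are not given in the abstract, so they are not reproduced here. As a hedged illustration of the SSP property itself, the sketch below applies the well-known one-step SSPRK(2,2) scheme (a convex combination of forward-Euler steps, a related SSP method rather than the paper's multistep methods) to first-order upwind advection, where the SSP step-size restriction guarantees that total variation does not grow.

```python
import numpy as np

def upwind_rhs(u, c, dx):
    # First-order upwind discretization of u_t + c u_x = 0 (c > 0), periodic BCs
    return -c * (u - np.roll(u, 1)) / dx

def ssprk22_step(u, dt, rhs):
    # SSPRK(2,2): a convex combination of forward-Euler steps, so it inherits
    # the total-variation bound of forward Euler under the same step-size limit
    u1 = u + dt * rhs(u)
    return 0.5 * u + 0.5 * (u1 + dt * rhs(u1))

nx, c = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c                                 # CFL 0.5, within the SSP limit
x = np.arange(nx) * dx
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)    # discontinuous step profile

tv0 = np.abs(np.diff(u)).sum() + abs(u[0] - u[-1])
for _ in range(200):
    u = ssprk22_step(u, dt, lambda v: upwind_rhs(v, c, dx))
tv = np.abs(np.diff(u)).sum() + abs(u[0] - u[-1])
# Total variation is non-increasing and no new extrema appear
```

For the variable step-size multistep methods of the paper, the admissible `dt` would additionally depend on the SSP coefficient, which itself depends on the previously chosen step sizes, which is exactly the coupling the abstract highlights.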
Some Results on facets for linear inequality in 0-1 variables
Directory of Open Access Journals (Sweden)
D. Sashi Bhusan
2010-03-01
Full Text Available Facets of the knapsack polytope, i.e., of the convex hull of 0-1 points satisfying a given linear inequality, are presented in this paper. Such facets play an important role in set covering, set partitioning, matroid intersection, vertex packing, generalized assignment and other combinatorial problems. Strong covers for facets of the knapsack polytope are developed in the first part of the paper. Algorithms for generating families of valid cutting planes that satisfy inequalities in 0-1 variables are the focus of the remainder.
International Nuclear Information System (INIS)
Ricaud, J.M.; Masson, R.; Masson, R.
2009-01-01
The Laplace-Carson transform classically used for homogenization of linear viscoelastic heterogeneous media yields integral formulations of effective behaviours. These are far less convenient than internal variable formulations with respect to computational aspects, as well as to theoretical extensions to closely related problems such as ageing viscoelasticity. Noticing that the collocation method is usually adopted to invert the Laplace-Carson transforms, we first remark that this approximation is equivalent to an internal variable formulation, which is exact in some specific situations. This result is illustrated for a two-phase composite with phases obeying a compressible Maxwellian behaviour. Next, an incremental formulation allows us to extend, at each time step, the previous general framework to ageing viscoelasticity. Finally, with the help of a creep test of a porous viscoelastic matrix reinforced with elastic inclusions, it is shown that the method yields accurate predictions (compared to reference results provided by periodic cell finite element computations). (authors)
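For reference, the Laplace-Carson transform invoked in this correspondence-principle approach is conventionally defined as (standard notation, not reproduced from the paper itself):

```latex
f^{\ast}(p) \;=\; p \int_{0}^{\infty} f(t)\, e^{-pt}\, \mathrm{d}t .
```

Under this transform the linear viscoelastic constitutive law, a Stieltjes convolution in time, becomes a formally elastic relation $\sigma^{\ast}(p) = C^{\ast}(p) : \varepsilon^{\ast}(p)$, which is what allows elastic homogenization results to be reused before inverting back to the time domain.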
Linear dynamical modes as new variables for data-driven ENSO forecast
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.
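The abstract compares its linear dynamical modes against the traditional empirical orthogonal function (EOF) decomposition. The LDM construction itself is not specified in the abstract; the sketch below only illustrates the baseline EOF decomposition via SVD on a synthetic spatiotemporal field (the field, its dimensions, and the noise level are assumptions for the demonstration).

```python
import numpy as np

rng = np.random.default_rng(4)
nt, nx = 300, 40
# Synthetic spatiotemporal field: one dominant oscillatory mode plus noise,
# loosely mimicking a single mode of variability in an SST anomaly field
pattern = np.sin(np.linspace(0, np.pi, nx))           # fixed spatial pattern
amplitude = np.sin(2 * np.pi * np.arange(nt) / 50)    # oscillating amplitude
field = np.outer(amplitude, pattern) + 0.1 * rng.normal(size=(nt, nx))

# EOF decomposition via SVD of the anomaly matrix (time mean removed):
# columns of vt are the spatial EOFs, u*s the principal-component time series
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per mode
# The leading EOF captures most of the variance of this synthetic field
```

The paper's point is that LDMs, unlike these purely variance-ranked EOFs, also account for the dominant time scales of the dynamics before the modes are fed to a stochastic evolution operator.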
A linear stepping endovascular intervention robot with variable stiffness and force sensing.
He, Chengbin; Wang, Shuxin; Zuo, Siyang
2018-03-08
Robotic-assisted endovascular intervention surgery has attracted significant attention in recent years. However, few designs have focused on a variable stiffness mechanism for the catheter shaft. A flexible catheter needs to be partially switched to a rigid state that can hold its shape against external forces to achieve a stable and effective insertion procedure. Furthermore, driving the catheter in a manner similar to manual procedures has the potential to make full use of the extensive experience gained from conventional catheter navigation. Besides the driving method, force sensing is another significant factor in endovascular intervention. This paper presents a variable stiffness catheterization system that provides a stable and accurate endovascular intervention procedure, with a linear stepping mechanism whose operation mode is similar to conventional catheter navigation. A specially designed shape-memory polymer tube with a water cooling structure is used to achieve variable stiffness of the catheter. In addition, four FBG sensors are attached to the catheter tip to monitor the tip contact force with temperature compensation. Experimental results show that the actuation unit is able to deliver linear and rotational motions. We have demonstrated the feasibility of FBG force sensing to reduce the effect of temperature and detect the tip contact force. The designed catheter can change its stiffness locally, and its stiffness can be increased remarkably in the rigid state, in which the catheter can hold its shape against a [Formula: see text] load. The prototype has also been validated with a vascular phantom, demonstrating the potential clinical value of the system. The proposed system provides important insights into the design of a compact robotic-assisted catheter incorporating an effective variable stiffness mechanism and real-time force sensing for intraoperative endovascular intervention.
Degree of multicollinearity and variables involved in linear dependence in additive-dominant models
Directory of Open Access Journals (Sweden)
Juliana Petrini
2012-12-01
Full Text Available The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data of birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03 - 70.20 for RM and 1.03 - 60.70 for R. Collinearity increased with the number of variables in the model and with the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
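The variance inflation factor used for the diagnosis above has a simple definition: VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing explanatory variable j on all the others. The cattle data are of course not available here; the sketch below computes VIFs on synthetic data with one deliberately near-collinear pair (all numbers are assumptions for the demonstration).

```python
import numpy as np

def vif(X):
    # Variance inflation factor for each column of X:
    # VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j
    # on the remaining columns (intercept included)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
a = rng.normal(size=500)
b = rng.normal(size=500)
c = a + 0.05 * rng.normal(size=500)   # nearly collinear with a
X = np.column_stack([a, b, c])
vifs = vif(X)
print(vifs)  # large VIF for the collinear pair (a, c); b stays near 1
```

VIF values far above 10, like the 60-70 reported in the abstract for some covariates, signal exactly this kind of near-linear dependence among explanatory variables.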
The number of subjects per variable required in linear regression analyses.
Austin, Peter C; Steyerberg, Ewout W
2015-06-01
Our objective was to determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R² of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R², although adjusted R² estimates behaved well. The bias in estimating the model R² statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
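The simulation design of the paper is only summarized in the abstract, so the sketch below is a much simplified Monte Carlo in the same spirit: at two subjects per variable, coefficient estimates remain nearly unbiased while the in-sample R² is inflated relative to adjusted R². The dimensions, noise level, and true coefficients are all assumptions for the demonstration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
p, spv, n_sim = 10, 2, 500      # 10 predictors, 2 subjects per variable
n = p * spv                     # 20 subjects per simulated data set
beta_true = np.ones(p)          # assumed population coefficients

coef_means = np.zeros(p)
r2_mean, adj_r2_mean = 0.0, 0.0
for _ in range(n_sim):
    X = rng.normal(size=(n, p))
    y = X @ beta_true + rng.normal(scale=2.0, size=n)
    Xd = np.column_stack([np.ones(n), X])
    b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    coef_means += b[1:] / n_sim
    resid = y - Xd @ b
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    r2_mean += r2 / n_sim
    adj_r2_mean += (1 - (1 - r2) * (n - 1) / (n - p - 1)) / n_sim

# OLS coefficients are nearly unbiased even at SPV = 2,
# but the raw R-squared is inflated relative to adjusted R-squared
rel_bias = np.abs(coef_means - beta_true).mean()
```

This mirrors the abstract's two findings: coefficient bias stays under 10% at about two SPV, while unadjusted R² needs far more subjects to be estimated without bias.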
Directory of Open Access Journals (Sweden)
Oscar D. Montoya-Giraldo
2014-01-01
Full Text Available This paper presents the design and simulation of a global controller for the Reaction Wheel Pendulum system using energy regulation and extended linearization methods for the state feedback. The proposed energy regulation is based on gradually reducing the energy of the system to reach the unstable equilibrium point; the input signal for this task is obtained from Lyapunov stability theory. The extended state feedback controller design is used to obtain a smooth nonlinear function that extends the region of operation to a larger range, in contrast with the static linear state feedback obtained through approximate linearization around an operating point. The overall controller switches between the two control signals depending upon the region of operation; perturbations are applied to the control signal and the (simulated) measured variables to verify the robustness and efficiency of the controller. Finally, simulations and tests using the model of the Reaction Wheel Pendulum system allow observation of the versatility and functionality of the proposed controller in the entire operating region of the pendulum.
Directory of Open Access Journals (Sweden)
Suresh Kumar
2014-10-01
Full Text Available In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann–Robertson–Walker space–time filled with ordinary matter (baryonic), radiation, dark matter and dark energy, where the latter two components are described by Chevallier–Polarski–Linder equation of state parameters. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation of state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with the cosmological constant type dark energy at the present epoch.
International Nuclear Information System (INIS)
Kumar, Suresh; Xu, Lixin
2014-01-01
In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann–Robertson–Walker space–time filled with ordinary matter (baryonic), radiation, dark matter and dark energy, where the latter two components are described by Chevallier–Polarski–Linder equation of state parameters. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation of state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with the cosmological constant type dark energy at the present epoch
International Nuclear Information System (INIS)
Blanc, V.; Barbie, L.; Masson, R.
2011-01-01
Homogenization of linear viscoelastic heterogeneous media is here extended from two phase inclusion-matrix media to three phase inclusion-matrix media. Each phase obeying to a compressible Maxwellian behaviour, this analytic method leads to an equivalent elastic homogenization problem in the Laplace-Carson space. For some particular microstructures, such as the Hashin composite sphere assemblage, an exact solution is obtained. The inversion of the Laplace-Carson transforms of the overall stress-strain behaviour gives in such cases an internal variable formulation. As expected, the number of these internal variables and their evolution laws are modified to take into account the third phase. Moreover, evolution laws of averaged stresses and strains per phase can still be derived for three phase media. Results of this model are compared to full fields computations of representative volume elements using finite element method, for various concentrations and sizes of inclusion. Relaxation and creep test cases are performed in order to compare predictions of the effective response. The internal variable formulation is shown to yield accurate prediction in both cases. (authors)
DEFF Research Database (Denmark)
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
...are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
Energy Technology Data Exchange (ETDEWEB)
Tariq, Hareem E-mail: htariq@ligo.caltech.edu; Takamori, Akiteru; Vetrano, Flavio; Wang Chenyang; Bertolini, Alessandro; Calamai, Giovanni; DeSalvo, Riccardo; Gennai, Alberto; Holloway, Lee; Losurdo, Giovanni; Marka, Szabolcs; Mazzoni, Massimo; Paoletti, Federico; Passuello, Diego; Sannibale, Virginio; Stanga, Ruggero
2002-08-21
Low-power, ultra-high-vacuum compatible, non-contacting position sensors with nanometer resolution and centimeter dynamic range have been developed, built and tested. They have been designed at Virgo as the sensors for low-frequency modal damping of Seismic Attenuation System chains in gravitational wave interferometers and for sub-micron absolute mirror positioning. One type of these linear variable differential transformers (LVDTs) has also been designed to be insensitive to transversal displacement, thus allowing 3D movement of the sensor head while still precisely reading its position along the sensitivity axis. A second LVDT geometry has been designed to measure the displacement of the vertical seismic attenuation filters from their nominal position. Unlike commercial LVDTs, mostly based on magnetic cores, the LVDTs described here exert no force on the measured structure.
Linear variable differential transformer and its uses for in-core fuel rod behavior measurements
International Nuclear Information System (INIS)
Wolf, J.R.
1979-01-01
The linear variable differential transformer (LVDT) is an electromechanical transducer which produces an ac voltage proportional to the displacement of a movable ferromagnetic core. When the core is connected to the cladding of a nuclear fuel rod, it is capable of producing extremely accurate measurements of fuel rod elongation caused by thermal expansion. The LVDT is used in the Thermal Fuels Behavior Program at the U.S. Idaho National Engineering Laboratory (INEL) for measurements of nuclear fuel rod elongation and as an indication of critical heat flux and the occurrence of departure from nucleate boiling. These types of measurements provide important information about the behavior of nuclear fuel rods under normal and abnormal operating conditions. The objective of the paper is to provide a complete account of recent advances made in LVDT design and experimental data from in-core nuclear reactor tests which use the LVDT
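The LVDT principle described above (an AC voltage proportional to core displacement) can be sketched with an idealized two-secondary model and the common ratiometric readout. This is a textbook-style idealization, not the instrument model of either abstract: the coupling coefficient `k`, excitation amplitude, and normalized displacement range are all assumptions.

```python
import numpy as np

def lvdt_output(x, k=0.8, v_exc=5.0):
    # Idealized LVDT: the two secondary voltages vary oppositely with the
    # ferromagnetic core position x (normalized units, |x| <= 1).
    # k is an assumed coupling coefficient, v_exc the excitation amplitude.
    v1 = v_exc * (1 + k * x) / 2
    v2 = v_exc * (1 - k * x) / 2
    # Ratiometric demodulation: (v1 - v2) / (v1 + v2) cancels drift in the
    # excitation amplitude and leaves a reading proportional to displacement
    return (v1 - v2) / (v1 + v2)

positions = np.linspace(-1, 1, 21)
readings = np.array([lvdt_output(x) for x in positions])
# In this idealization the reading is exactly k * x, i.e. linear in displacement
```

In the fuel-rod application, the displacement x would be the cladding elongation driving the core, and the linear reading is what makes the elongation measurement direct.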
Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models
Directory of Open Access Journals (Sweden)
A. Cabezas
2010-08-01
Full Text Available Sediment, total organic carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) to elucidate spatial patterns. To achieve this goal, four areas with different geomorphological features located within the study reach were examined using artificial grass mats. Within each area, 1 m² study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial autocorrelation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework to deal with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.
Angulo, Raul E.; Hilbert, Stefan
2015-03-01
We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ~10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ~5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.
Are particle rest masses variable? Theory and constraints from solar system experiments
International Nuclear Information System (INIS)
Bekenstein, J.D.
1977-01-01
Particle rest mass variation in spacetime is considered. According to Dicke, if this is the case, various null experiments indicate that all masses vary in the same way. Their variation relative to the Planck-Wheeler mass defines a universal scalar rest-mass field. We construct the relativistic dynamics for this field based on very general assumptions. In addition, we assume Einstein's equations to be valid in Planck-Wheeler units. A special case of the theory coincides with Dicke's reformulation of Brans-Dicke theory as general relativity with variable rest masses. In the general case the rest-mass field is some power r of a scalar field which obeys an ordinary scalar equation with coupling to the curvature of strength q; r and q are the only parameters of the theory. Comparison with experiment is facilitated by recasting the theory into units in which rest masses are constant, the Planck-Wheeler mass varies, and the metric satisfies the equations of a small subset of the scalar-tensor theories of gravitation. The results of solar system experiments, usually used to test general relativity, are here used to delimit the acceptable values of r and q. We conclude that if cosmological considerations are not invoked, the solar system experiments do not rule out the possibility of rest-mass variability. That is, there are theories which agree with all null and solar system experiments and yet contradict the strong equivalence principle by allowing rest masses to vary relative to the Planck-Wheeler mass. We show that the field theory of the rest-mass field can be quantized and interpreted in terms of massless scalar quanta which interact very weakly with matter. This explains why they have not turned up in high-energy experiments. In future reports we shall investigate the implications of various cosmological and astrophysical data for the theory of variable rest masses. The ultimate goal is a firm decision on whether rest masses vary or not.
Shivokhin, Maksim E.; Read, Daniel J.; Kouloumasis, Dimitris; Kocen, Rok; Zhuge, Flanco; Bailly, Christian; Hadjichristidis, Nikolaos; Likhtman, Alexei E.
2017-01-01
of a linear probe chain. For this purpose we first validate the ability of the model to consistently predict both the viscoelastic and dielectric response of monodisperse and binary mixtures of type A polymers, based on published experimental data. We
Energy Technology Data Exchange (ETDEWEB)
Chaumerliac, V
1995-03-09
Spark-ignition engine control needs substantial improvement for various reasons: the process is nonlinear and multivariable, anti-pollution constraints are strict, fuel economy is necessary, and running conditions, aging, reliability and cost all vary. Improving engine efficiency must be addressed in this context, together with the pollution constraints. This work develops a system approach whose philosophy is based on a suitable description of the main dynamics. A compartmentalized model of a spark-ignition engine and of the vehicle dynamics is presented. The aim of this modeling is to achieve good representativeness in transients and to describe the behavior of the outputs useful for control. The multivariable control is split into two independent systems. The first controls the spark advance to obtain maximum torque. The second controls the throttle and the electronic fuel injection device to lower pollutant emissions. The spark advance closed-loop control uses information measured with either a cylinder pressure sensor or a torque sensor. These studies culminated in adaptive tuning on an engine test bench. A new actuator, the electronic throttle control, can provide a higher degree of precision for the fuel/air ratio regulation system, particularly during fast accelerations and decelerations. An intake manifold pressure control is developed to coordinate the air and fuel flows. A delay strategy and a simple compensation of fuel supply dynamics give good results on the engine test bench. Uncoupling the accelerator pedal from the throttle command is a promising way to improve engine efficiency and reduce exhaust emissions during transient phases. (author) 59 refs.
Beauvais, Anicet; Chardon, Dominique
2010-05-01
After the onset of Gondwana break-up in the Early Mesozoic, the emerged part of the African plate underwent long greenhouse climatic periods and epeirogeny. The last greenhouse period in the Early Cenozoic, and the alternation of wet and dry climatic periods since the Eocene, enhanced episodes of rock chemical weathering and laterite production, forming bauxites and ferricretes, interrupted by drier periods of dominantly mechanical denudation that shaped glacis [1]. In Sub-Saharan West Africa, this evolution resulted in pulsed, essentially climatically-forced denudation that has shaped a ubiquitous sequence of five stepped lateritic paleosurfaces that developed synchronously over Cenozoic times. The modes, timing and spatial variability of continental denudation of the region are investigated by combining geomorphological and geochronological data sets. The geomorphological data set comprises the altitudinal distribution of the lateritic paleosurface relicts and their differential elevation at 42 locations in Sub-Saharan West Africa where the sequence (or part of it) has been documented. The geochronological data set consists of the age ranges of each paleosurface, obtained by radiometric 39Ar-40Ar dating of the neoformed oxy-hydroxides (i.e., cryptomelane, K1-2Mn8O16, nH2O [4]) carried by their laterites at the Tambao reference site, Burkina Faso [1, 3]. Five groups of 39Ar-40Ar ages, ~59-45 Ma, ~29-24 Ma, ~18-11.5 Ma, ~7.2-5.8 Ma, and ~3.4-2.9 Ma, characterize periods of chemical weathering, whereas the time intervals between these groups of ages correspond to episodes of mechanical denudation that reflect physical shaping of the paleosurfaces. For the last 45 Ma, the denudation rate estimates (3 to 8 m Ma-1) are comparable with those derived on shorter time scales (10^3 to 10^6 y) in the same region by the cosmogenic radionuclide method [2]. Combined with the geomorphological data set, these age ranges allow the visualization of the regional
Operational constraints and hydrologic variability limit hydropower in supporting wind integration
International Nuclear Information System (INIS)
Fernandez, Alisha R; Blumsack, Seth A; Reed, Patrick M
2013-01-01
Climate change mitigation will require rapid adoption of low-carbon energy resources. The integration of large-scale wind energy in the United States (US) will require controllable assets to balance the variability of wind energy production. Previous work has identified hydropower as an advantageous asset, due to its flexibility and low-carbon emissions production. While many dams currently provide energy and environmental services in the US and globally, we find that multi-use hydropower facilities would face significant policy conflicts if asked to store and release water to accommodate wind integration. Specifically, we develop a model simulating hydroelectric operational decisions when the electric facility is able to provide wind integration services through a mechanism that we term ‘flex reserves’. We use Kerr Dam in North Carolina as a case study, simulating operations under two alternative reservoir policies, one reflecting current policies and the other regulating flow levels to promote downstream ecosystem conservation. Even under perfect information and significant pricing incentives, Kerr Dam faces operational conflicts when providing any substantial levels of flex reserves while also maintaining releases consistent with other river management requirements. These operational conflicts are severely exacerbated during periods of drought. Increase of payments for flex reserves does not resolve these operational and policy conflicts. (letter)
Relating Linear and Volumetric Variables Through Body Scanning to Improve Human Interfaces in Space
Margerum, Sarah E.; Ferrer, Mike A.; Young, Karen S.; Rajulu, Sudhakar
2010-01-01
Designing space suits and vehicles for the diverse human population presents unique challenges for the methods of traditional anthropometry. Space suits are bulky, allow the operator to shift position within the suit, and inhibit the ability to identify body landmarks. Limited suit sizing options also cause variability in fit and performance between similarly sized individuals. Space vehicles are restrictive in volume, constraining both fit and the ability to collect data. NASA's Anthropometric and Biomechanics Facility (ABF) has utilized 3D scanning to shift from traditional linear anthropometry toward volumetric capabilities that provide anthropometric solutions for design. Overall, the key goals are to improve human-system performance and to develop new processes to aid in the design and evaluation of space systems. Four case studies are presented that illustrate the shift from purely linear analyses to an augmented volumetric toolset to predict and analyze the human within the space suit and vehicle. The first case study involves the calculation of maximal head volume to estimate the total free volume in the helmet for proper air exchange. Traditional linear measurements resulted in an inaccurate representation of the head shape, yet limited data exist for the determination of a large head volume. Steps were first taken to identify and classify a maximum head volume, and the resulting comparisons to the estimate are presented in this paper. This study illustrates the gap between linear components of anthropometry and the need for overall volume metrics in order to provide solutions. A second case study examines the overlay of space suit scans and components onto scanned individuals to quantify fit and clearance to aid in sizing the suit to the individual. Restrictions in space suit size availability present unique challenges to optimally fitting the individual within a limited sizing range while maintaining performance. Quantification of the clearance and
Hippotherapy acute impact on heart rate variability non-linear dynamics in neurological disorders.
Cabiddu, Ramona; Borghi-Silva, Audrey; Trimer, Renata; Trimer, Vitor; Ricci, Paula Angélica; Italiano Monteiro, Clara; Camargo Magalhães Maniglia, Marcela; Silva Pereira, Ana Maria; Rodrigues das Chagas, Gustavo; Carvalho, Eliane Maria
2016-05-15
Neurological disorders are associated with autonomic dysfunction. Hippotherapy (HT) is a treatment strategy that uses a horse in an interdisciplinary approach to the physical and mental rehabilitation of people with physical, mental and/or psychological disabilities. However, no studies have evaluated the effects of HT on autonomic control in these patients. Therefore, the objective of the present study was to investigate the effects of a single HT session on cardiovascular autonomic control by time domain and non-linear analysis of heart rate variability (HRV). The HRV signal was recorded continuously in twelve children affected by neurological disorders during a HT session, consisting of a 10-minute sitting-position rest (P1), a 15-minute preparatory phase sitting on the horse (P2), a 15-minute HT session (P3) and a final 10-minute sitting-position recovery (P4). Time domain and non-linear HRV indices, including Sample Entropy (SampEn), Lempel-Ziv Complexity (LZC) and Detrended Fluctuation Analysis (DFA), were calculated for each treatment phase. We observed that SampEn increased during P3 (SampEn=0.56±0.10) with respect to P1 (SampEn=0.40±0.14, p<0.05), while DFA decreased during P3 (DFA=1.10±0.10) with respect to P1 (DFA=1.26±0.14, p<0.05). A significant SDRR increase (p<0.05) was observed during the recovery period P4 (SDRR=50±30ms) with respect to the HT session period P3 (SDRR=30±10ms). Our results suggest that HT might benefit children with disabilities attributable to neurological disorders by eliciting an acute autonomic response during the therapy and during the recovery period. Copyright © 2016 Elsevier Inc. All rights reserved.
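Sample entropy, one of the non-linear HRV indices used above, can be sketched in a few lines. This is a generic textbook implementation (m = 2 templates, tolerance r = 0.2 of the series standard deviation), not the authors' code, applied here to synthetic RR intervals:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -log(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A counts the same for
    length m + 1.  Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def pair_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        total = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(dist <= r))
        return total

    return -np.log(pair_count(m + 1) / pair_count(m))

rng = np.random.default_rng(1)
rr = 800.0 + 50.0 * rng.standard_normal(300)  # synthetic RR intervals (ms)
en = sample_entropy(rr)
```

Higher values indicate a less regular, more complex signal, which is the direction of change the study reports during the HT phase.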
Significant and variable linear polarization during the prompt optical flash of GRB 160625B.
Troja, E.; Lipunov, V. M.; Mundell, C. G.; Butler, N. R.; Watson, A. M.; Kobayashi, S.; Cenko, S. B.; Marshall, F. E.; Ricci, R.; Fruchter, A.; Wieringa, M. H.; Gorbovskoy, E. S.; Kornilov, V.; Kutyrev, A.; Lee, W. H.; Toy, V.; Tyurina, N. V.; Budnev, N. M.; Buckley, D. A. H.; González, J.; Gress, O.; Horesh, A.; Panasyuk, M. I.; Prochaska, J. X.; Ramirez-Ruiz, E.; Rebolo Lopez, R.; Richer, M. G.; Roman-Zuniga, C.; Serra-Ricart, M.; Yurkov, V.; Gehrels, N.
2017-07-01
Newly formed black holes of stellar mass launch collimated outflows (jets) of ionized matter that approach the speed of light. These outflows power prompt, brief and intense flashes of γ-rays known as γ-ray bursts (GRBs), followed by longer-lived afterglow radiation that is detected across the electromagnetic spectrum. Measuring the polarization of the observed GRB radiation provides a direct probe of the magnetic fields in the collimated jets. Rapid-response polarimetric observations of newly discovered bursts have probed the initial afterglow phase, and show that, minutes after the prompt emission has ended, the degree of linear polarization can be as high as 30 per cent - consistent with the idea that a stable, globally ordered magnetic field permeates the jet at large distances from the central source. By contrast, optical and γ-ray observations during the prompt phase have led to discordant and often controversial results, and no definitive conclusions have been reached regarding the origin of the prompt radiation or the configuration of the magnetic field. Here we report the detection of substantial (8.3 ± 0.8 per cent from our most conservative simulation), variable linear polarization of a prompt optical flash that accompanied the extremely energetic and long-lived prompt γ-ray emission from GRB 160625B. Our measurements probe the structure of the magnetic field at an early stage of the jet, closer to its central black hole, and show that the prompt phase is produced via fast-cooling synchrotron radiation in a large-scale magnetic field that is advected from the black hole and distorted by dissipation processes within the jet.
Variability of in vivo linear microcrack accumulation in the cortex of elderly human ribs
Directory of Open Access Journals (Sweden)
Amanda M. Agnew
2017-06-01
Full Text Available Excessive accumulation of microdamage in the skeleton in vivo is believed to contribute to fragility and risk of fracture, particularly in the elderly. Current knowledge of how much in vivo damage accrual varies between individuals, if at all, is lacking. In this study, paired sixth ribs from five male and five female elderly individuals (76–92 years, mean age = 84.7 years) were examined using en bloc staining and fluorescence microscopy to quantify linear microcracks present at the time of death (i.e. in vivo microdamage). Crack number, crack length, crack density, and crack surface density were measured for each complete cross-section, with densities calculated using the variable of bone area (which accounts for the influence of porosity on the cortex, unlike the more frequently used cortical area), and analyzed using a two-way mixed model analysis of variance. Results indicate that while microcracks between individuals differ significantly, differences between the left and right corresponding pairs within individuals and between the pleural and cutaneous cortices within each rib did not. These results suggest that systemic influences, such as differential metabolic activity, affect the accumulation of linear microcracks. Furthermore, variation in remodeling rates between individuals may be a major factor contributing to differential fracture risk in the elderly. Future work should expand to include a wider age range to examine differences in in vivo microdamage accumulation across the lifespan, as well as consider the influence of bisphosphonates on microdamage accumulation in the context of compromised remodeling rates in the elderly.
High Altitude Affects Nocturnal Non-linear Heart Rate Variability: PATCH-HA Study
Directory of Open Access Journals (Sweden)
Christopher J. Boos
2018-04-01
Full Text Available Background: High altitude (HA) exposure can lead to changes in resting heart rate variability (HRV), which may be linked to acute mountain sickness (AMS) development. Compared with traditional HRV measures, non-linear HRV appears to offer incremental and prognostic data, yet its utility and relationship to AMS have barely been examined at HA. This study sought to examine this relationship at terrestrial HA. Methods: Sixteen healthy British military servicemen were studied at baseline (800 m, first night) and over eight consecutive nights, at a sleeping altitude of up to 3600 m. A disposable cardiac patch monitor was used to record the nocturnal cardiac inter-beat interval data, over 1 h (0200–0300 h), for offline HRV assessment. Non-linear HRV measures included sample entropy (SampEn), the short- (α1, 4–12 beats) and long-term (α2, 13–64 beats) detrended fluctuation analysis slopes and the correlation dimension (D2). The maximal rating of perceived exertion (RPE) during daily exercise was assessed using the Borg 6–20 RPE scale. Results: All subjects completed the HA exposure. The average age of included subjects was 31.4 ± 8.1 years. HA led to a significant fall in SpO2 and increases in heart rate, Lake Louise score (LLS) and RPE. There were no significant changes in the ECG-derived respiratory rate or in any of the time domain measures of HRV during sleep. The only notable changes in frequency domain measures of HRV were an increase in LF power and a fall in HFnu power at the highest altitude. Conversely, SampEn, SD1/SD2 and D2 all fell, whereas α1 and α2 increased (p < 0.05). RPE inversely correlated with SD1/SD2 (r = -0.31; p = 0.002), SampEn (r = -0.22; p = 0.03) and HFnu (r = -0.27; p = 0.007), and positively correlated with LF (r = 0.24; p = 0.02), LF/HF (r = 0.24; p = 0.02), α1 (r = 0.32; p = 0.002) and α2 (r = 0.21; p = 0.04). AMS occurred in 7/16 subjects (43.8%) and was very mild in 85.7% of cases. HRV failed to predict AMS. Conclusion: Non-linear HRV is more sensitive to the
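The short- and long-term slopes α1 and α2 reported above come from detrended fluctuation analysis. A minimal numpy sketch of the estimator, checked on white noise, for which the slope should be near 0.5; this is a generic implementation, not the study's pipeline:

```python
import numpy as np

def dfa_slope(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n, using
    linear detrending within each non-overlapping window of length n."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())      # integrated (profile) series
    fluct = []
    for n in scales:
        k = len(profile) // n
        windows = profile[:k * n].reshape(k, n)
        t = np.arange(n)
        sq = []
        for w in windows:
            coeffs = np.polyfit(t, w, 1)   # linear trend in this window
            sq.append(np.mean((w - np.polyval(coeffs, t)) ** 2))
        fluct.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(2)
alpha = dfa_slope(rng.standard_normal(4000), [4, 8, 16, 32, 64])
```

In HRV work the same slope is computed separately over short windows (α1) and long windows (α2), as in the ranges quoted in the abstract.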
Non-linear indices of heart rate variability during endodontic treatment.
Santana, Milana Drumond Ramos; Pita Neto, Ivo Cavalcante; Martiniano, Eli Carlos; Monteiro, Larissa Raylane Lucas; Ramos, José Lucas Souza; Garner, David M; Valenti, Vitor Engácia; Abreu, Luiz Carlos de
2016-01-01
Dental treatment promotes psychosomatic changes that can influence the procedure and compromise the general well-being of the patient. In this context, it is important to evaluate the function of the autonomic nervous system in individuals undergoing endodontic treatment. Thus, this manuscript aimed to analyse cardiac autonomic modulation, through non-linear indices of heart rate variability (HRV), during endodontic treatment. Analysis of 50 subjects of either sex, aged between 18 and 40 years, diagnosed with irreversible pulp necrosis of lower molars and undergoing endodontic treatment, was undertaken. We carried out fractal and symbolic analysis of HRV, which was recorded in the first session of the endodontic treatment at four intervals: T1: 0-10 min before the onset of the treatment session; T2: 0-10 min after the application of anaesthesia; T3: throughout the period of treatment; and T4: 0-30 min after the end of the treatment session. There was a reduction of α1 in T2 compared to T1 and T4 (p … endodontic treatment, and after applying local anaesthetic the parasympathetic component of HRV increases. These data indicate that endodontic treatment acutely overloads the heart, supporting the stress involved in this situation.
Basic design of radiation-resistant LVDTs: Linear Variable Differential Transformer
Energy Technology Data Exchange (ETDEWEB)
Sohn, J. M.; Park, S. J.; Kang, Y. H. (and others)
2008-02-15
A LVDT (Linear Variable Differential Transformer) for measuring the pressure level was used to measure the pressure of a nuclear fuel rod during a neutron irradiation test in a research reactor. A LVDT for measuring elongation was also used to measure the elongation of nuclear fuels, and the creep and fatigue of materials, during a neutron irradiation test in a research reactor. In this report, the basic design of two radiation-resistant LVDTs, for measuring the pressure level and the elongation, is described. These LVDTs are used under radiation environments such as those in a research reactor. In the basic design step, we analyzed the domestic and foreign technical status of radiation-resistant LVDTs, made part and assembly drawings, and established simple procedures for their assembly. Only a few companies in the world can produce radiation-resistant LVDTs. Not only are these extremely expensive, but their prices are continuously rising. Also, it takes a long time to procure a LVDT, as it can only be obtained by made-to-order production. The localization of radiation-resistant LVDTs is necessary in order to provide them quickly and at low cost. These radiation-resistant LVDTs will be used in neutron irradiation devices such as instrumented fuel capsules, special purpose capsules and a fuel test loop in research reactors. We expect that the use of neutron irradiation tests will be revitalized by the localization of radiation-resistant LVDTs.
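As a sketch of how such a sensor is read out (generic LVDT practice, not the report's design): the two secondary coils produce voltages whose difference is proportional to the core displacement, and dividing by their sum cancels drift in the excitation amplitude. The calibration factor below is hypothetical.

```python
def lvdt_displacement(v_a, v_b, scale_mm=10.0):
    """Ratiometric LVDT read-out: displacement is proportional to
    (Va - Vb) / (Va + Vb); scale_mm is a hypothetical calibration factor."""
    return scale_mm * (v_a - v_b) / (v_a + v_b)

# Example: secondary voltages of 2.6 V and 1.4 V give a 3 mm core offset
offset = lvdt_displacement(2.6, 1.4)
```

The ratiometric form matters under reactor conditions, where cable attenuation and excitation drift would otherwise bias a single-voltage reading.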
Sukarno; Law, Cheryl Suwen; Santos, Abel
2017-06-08
We present the first realisation of linear variable bandpass filters in nanoporous anodic alumina (NAA-LVBPFs) photonic crystal structures. NAA gradient-index filters (NAA-GIFs) are produced by sinusoidal pulse anodisation and used as photonic crystal platforms to generate NAA-LVBPFs. The anodisation period of NAA-GIFs is modified from 650 to 850 s to systematically tune the characteristic photonic stopband of these photonic crystals across the UV-visible-NIR spectrum. Then, the nanoporous structure of NAA-GIFs is gradually widened along the surface under controlled conditions by wet chemical etching, using a dip-coating approach, aiming to create NAA-LVBPFs with finely engineered optical properties. We demonstrate that the characteristic photonic stopband and the iridescent interferometric colour displayed by these photonic crystals can be tuned with precision across the surface of NAA-LVBPFs by adjusting the fabrication and etching conditions. Here, we envisage, for the first time, the combination of anodisation period and etching conditions as a cost-competitive, facile, and versatile nanofabrication approach that enables the generation of a broad range of unique LVBPFs covering these spectral regions. These photonic crystal structures open new opportunities for multiple applications, including adaptive optics, hyperspectral imaging, fluorescence diagnostics, spectroscopy, and sensing.
Directory of Open Access Journals (Sweden)
Alexander W. Koch
2013-09-01
Full Text Available This paper presents a low-cost hyperspectral measurement setup in a new application based on fluorescence detection in the visible (Vis) wavelength range. The aim of the setup is to take hyperspectral fluorescence images of viscous materials. Based on these images, fluorescent and non-fluorescent impurities in the viscous materials can be detected. For the illumination of the measurement object, a narrow-band high-power light-emitting diode (LED) with a center wavelength of 370 nm was used. The low-cost acquisition unit for the imaging consists of a linear variable filter (LVF) and a complementary metal oxide semiconductor (CMOS) 2D sensor array. The transmission wavelength range of the LVF is from 400 nm to 700 nm. To confirm the concept, static measurements of fluorescent viscous materials with a non-fluorescent impurity have been performed and analyzed. With the presented setup, measurement surfaces in the micrometer range can be resolved. The minimum measurable particle size of the impurities is in the nanometer range. The recording rate for the measurements depends on the exposure time of the used CMOS 2D sensor array and has been found to be in the microsecond range.
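The LVF geometry gives each pixel column of the CMOS array its own passband center, so spectral calibration is a linear map from column index to wavelength. A sketch; the 640-column sensor width is an assumption for illustration, while the 400-700 nm range comes from the text:

```python
def column_to_wavelength(col, n_cols=640, lam_min=400.0, lam_max=700.0):
    """Map a pixel column to the LVF passband center, assuming the
    passband varies linearly across the sensor."""
    return lam_min + (lam_max - lam_min) * col / (n_cols - 1)

center = column_to_wavelength(320)  # passband near mid-sensor
```

One 2D frame therefore encodes one spatial axis plus one spectral axis, which is what makes the single-sensor hyperspectral acquisition possible.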
Kim, T. W.; Park, G. H.
2014-12-01
Seasonal variation of the aragonite saturation state (Ωarag) in the North Pacific Ocean (NPO) was investigated using multiple linear regression (MLR) models produced from the PACIFICA (Pacific Ocean interior carbon) dataset. Data within the depth range of 50-1200 m were used to derive the MLR models, and three parameters (potential temperature, nitrate, and apparent oxygen utilization (AOU)) were chosen as predictor variables because these parameters are associated with vertical mixing and with DIC (dissolved inorganic carbon) removal and release, which all affect Ωarag in the water column directly or indirectly. The PACIFICA dataset was divided into 5° × 5° grids, and a MLR model was produced in each grid, giving a total of 145 independent MLR models over the NPO. The mean RMSE (root mean square error) and r2 (coefficient of determination) of all derived MLR models were approximately 0.09 and 0.96, respectively. The obtained MLR coefficients for each of the predictor variables and an intercept were then interpolated over the study area, thereby making it possible to allocate MLR coefficients to data-sparse ocean regions. Predictability from the interpolated coefficients was evaluated using Hawaiian time-series data; the resulting mean residual between measured and predicted Ωarag values was approximately 0.08, which is less than the mean RMSE of our MLR models. The interpolated MLR coefficients were combined with the seasonal climatology of World Ocean Atlas 2013 (1° × 1°) to produce seasonal Ωarag distributions over various depths. Large seasonal variability in Ωarag was manifested in the mid-latitude Western NPO (24-40°N, 130-180°E) and the low-latitude Eastern NPO (0-12°N, 115-150°W). In the Western NPO, seasonal fluctuations of water column stratification appeared to be responsible for the seasonal variation in Ωarag (~0.5 at 50 m) because it closely followed temperature variations in the 0-75 m layer. In contrast, remineralization of organic matter was the main cause for the seasonal
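The per-grid MLR construction above (Ωarag regressed on potential temperature, nitrate, and AOU plus an intercept) can be sketched with ordinary least squares on synthetic data. The coefficients below are invented to generate the example, not the PACIFICA fits:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
theta = rng.uniform(2.0, 20.0, n)   # potential temperature (deg C)
no3 = rng.uniform(0.0, 40.0, n)     # nitrate (umol/kg)
aou = rng.uniform(0.0, 250.0, n)    # apparent oxygen utilization (umol/kg)

# Invented "true" relationship, used only to generate synthetic data:
omega = (1.5 + 0.08 * theta - 0.02 * no3 - 0.004 * aou
         + 0.05 * rng.standard_normal(n))

# Ordinary least squares fit of omega on the three predictors
X = np.column_stack([np.ones(n), theta, no3, aou])
coef, *_ = np.linalg.lstsq(X, omega, rcond=None)
rmse = np.sqrt(np.mean((omega - X @ coef) ** 2))
```

Fitting one such coefficient vector per 5° × 5° grid cell, then interpolating the coefficients spatially, is the mechanism the abstract describes for extending predictions into data-sparse regions.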
International Nuclear Information System (INIS)
Agnor, Craig B.; Lin, D. N. C.
2012-01-01
We examine how the late divergent migration of Jupiter and Saturn may have perturbed the terrestrial planets. Using a modified secular model we have identified six secular resonances between the ν5 frequency of Jupiter and Saturn and the four apsidal eigenfrequencies of the terrestrial planets (g1-g4). We derive analytic upper limits on the eccentricity and orbital migration timescale of Jupiter and Saturn when these resonances were encountered to avoid perturbing the eccentricities of the terrestrial planets to values larger than the observed ones. Because of the small amplitudes of the j = 2, 3 terrestrial eigenmodes, the g2-ν5 and g3-ν5 resonances provide the strongest constraints on giant planet migration. If Jupiter and Saturn migrated with eccentricities comparable to their present-day values, smooth migration with exponential timescales characteristic of planetesimal-driven migration (τ ∼ 5-10 Myr) would have perturbed the eccentricities of the terrestrial planets to values greatly exceeding the observed ones. This excitation may be mitigated if the eccentricity of Jupiter was small during the migration epoch, if migration was very rapid (e.g., τ ≲ 0.5 Myr, perhaps via planet-planet scattering or instability-driven migration), or if the observed small eccentricity amplitudes of the j = 2, 3 terrestrial modes result from low-probability cancellation of several large-amplitude contributions. Results of orbital integrations show that very short migration timescales (τ < 0.5 Myr), characteristic of instability-driven migration, may also perturb the terrestrial planets' eccentricities by amounts comparable to their observed values. We discuss the implications of these constraints for the relative timing of terrestrial planet formation, giant planet migration, and the origin of the so-called Late Heavy Bombardment of the Moon 3.9 ± 0.1 Ga ago. We suggest that the simplest way to satisfy these dynamical constraints may be for the bulk of any giant
Christman, Stephen D; Weaver, Ryan
2008-05-01
The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
Chen, Hui-Ya; Wing, Alan M; Pratt, David
2006-04-01
Stepping in time with a metronome has been reported to improve pathological gait. Although there have been many studies of finger tapping synchronisation tasks with a metronome, the specific details of the influences of metronome timing on walking remain unknown. As a preliminary to studying pathological control of gait timing, we designed an experiment with four synchronisation tasks, unilateral heel tapping in sitting, bilateral heel tapping in sitting, bilateral heel tapping in standing, and stepping on the spot, in order to examine the influence of biomechanical constraints on metronome timing. These four conditions allow study of the effects of bilateral co-ordination and maintenance of balance on timing. Eight neurologically normal participants made heel tapping and stepping responses in synchrony with a metronome producing 500 ms interpulse intervals. In each trial comprising 40 intervals, one interval, selected at random between intervals 15 and 30, was lengthened or shortened, which resulted in a shift in phase of all subsequent metronome pulses. Performance measures were the speed of compensation for the phase shift, in terms of the temporal difference between the response and the metronome pulse, i.e. asynchrony, and the standard deviation of the asynchronies and interresponse intervals of steady state synchronisation. The speed of compensation decreased with increase in the demands of maintaining balance. The standard deviation varied across conditions but was not related to the compensation speed. The implications of these findings for metronome assisted gait are discussed in terms of a first-order linear correction account of synchronisation.
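The "first-order linear correction account" mentioned in the closing sentence has a compact form: each interresponse interval equals the internal period minus a fraction α of the last observed asynchrony, so a phase shift in the metronome is compensated geometrically. A sketch with illustrative parameter values (α = 0.5, 500 ms period, a 60 ms lengthened interval), mirroring the perturbation design described above:

```python
def simulate_correction(metronome, alpha=0.5, period=500.0):
    """First-order linear phase correction: the next response is advanced
    or delayed by a fraction alpha of the previous asynchrony."""
    asynchronies = [0.0]
    t_resp = metronome[0]
    for pulse in metronome[1:]:
        t_resp = t_resp + period - alpha * asynchronies[-1]
        asynchronies.append(t_resp - pulse)
    return asynchronies

# Metronome at 500 ms; one interval lengthened by 60 ms at pulse 15,
# shifting the phase of all subsequent pulses (as in the experiment).
pulses = [500.0 * k for k in range(30)]
pulses = [p + 60.0 if k >= 15 else p for k, p in enumerate(pulses)]
asyn = simulate_correction(pulses)
```

After the shift the asynchrony decays by a factor (1 - α) per response, so a smaller fitted α corresponds to the slower compensation observed under higher balance demands.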
Directory of Open Access Journals (Sweden)
A. Aminataei
2014-05-01
Full Text Available In this paper, a new and efficient approach is applied for the numerical approximation of linear differential equations with variable coefficients, based on operational matrices with respect to Hermite polynomials. Explicit formulae which express the Hermite expansion coefficients for the moments of derivatives of any differentiable function in terms of the original expansion coefficients of the function itself are given in matrix form. The main importance of this scheme is that using this approach reduces solving the linear differential equations to solving a system of linear algebraic equations, thus greatly simplifying the problem. In addition, two experiments are given to demonstrate the validity and applicability of the method
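The operational-matrix idea can be illustrated for differentiation: since H_n'(x) = 2n H_{n-1}(x) for the (physicists') Hermite polynomials, the derivative of a Hermite series is a fixed matrix acting on its coefficient vector. A small numpy check of this generic construction (not the paper's code), verified against numpy's own Hermite-series derivative:

```python
import numpy as np
from numpy.polynomial import hermite

N = 6
# Differentiation matrix D: if c holds the Hermite-series coefficients
# of f, then D @ c holds the coefficients of f'.  Built from H_n' = 2n H_{n-1}.
D = np.zeros((N, N))
for n in range(1, N):
    D[n - 1, n] = 2.0 * n

c = np.array([1.0, -2.0, 0.5, 3.0, 0.0, 1.0])  # arbitrary coefficients
dc = D @ c  # coefficients of the derivative (last entry is zero)
```

With such matrices, a linear differential operator applied to the unknown's coefficient vector becomes matrix multiplication, which is exactly how the ODE collapses to a linear algebraic system.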
Non-linear properties of R-R distributions as a measure of heart rate variability
International Nuclear Information System (INIS)
Irurzun, I.M.; Bergero, P.; Cordero, M.C.; Defeo, M.M.; Vicente, J.L.; Mola, E.E.
2003-01-01
We analyze the dynamic quality of the R-R interbeat intervals of electrocardiographic signals from healthy people and from patients with premature ventricular contractions (PVCs) by applying different measure algorithms to standardised public domain data sets of heart rate variability. Our aim is to assess the utility of these algorithms for the above mentioned purposes. Long and short time series, 24 and 0.50 h respectively, of interbeat intervals of healthy and PVC subjects were compared with the aim of developing a fast method to investigate their temporal organization. Two different methods were used: power spectral analysis and the correlation integral method. Power spectral analysis has proven to be a powerful tool for detecting long-range correlations. If it is applied to a short time series, however, the power spectra of healthy and PVC subjects show a similar behavior, which disqualifies power spectral analysis as a fast method to distinguish healthy from PVC subjects. The correlation integral method allows us to study the fractal properties of interbeat intervals of electrocardiographic signals. The cardiac activity of healthy and PVC people stems from dynamics of a chaotic nature characterized by correlation dimensions df equal to 3.40±0.50 and 5.00±0.80 for healthy and PVC subjects, respectively. The methodology presented in this article bridges the gap between theoretical and experimental studies of non-linear phenomena. From our results we conclude that the minimum number of coupled differential equations to describe cardiac activity must be six and seven for healthy and PVC individuals, respectively. From the present analysis we conclude that the correlation integral method is particularly suitable, in comparison with power spectral analysis, for the early detection of arrhythmias in short time (0.5 h) series
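The correlation integral behind this method estimates the correlation dimension from the scaling C(r) ∝ r^d of the fraction of point pairs closer than r (the Grassberger-Procaccia approach). A generic sketch on points uniformly filling a plane, where the estimated dimension should be close to 2; real HRV analyses would first embed the interbeat series in a delay-coordinate space:

```python
import numpy as np

def correlation_integral(points, r):
    """C(r): fraction of distinct point pairs within Euclidean distance r."""
    n = len(points)
    close = 0
    for i in range(n - 1):
        d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
        close += int(np.sum(d < r))
    return 2.0 * close / (n * (n - 1))

rng = np.random.default_rng(4)
pts = rng.uniform(size=(1500, 2))         # points filling a 2-D region
c1 = correlation_integral(pts, 0.05)
c2 = correlation_integral(pts, 0.1)
dim_est = np.log(c2 / c1) / np.log(2.0)   # slope of log C vs log r
```

The slope of log C(r) versus log r over a suitable range of r is the correlation dimension, the df quantity quoted in the abstract.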
Abramov, R. V.
2011-12-01
Chaotic multiscale dynamical systems are common in many areas of science, one example being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate, both theoretically and through numerical simulations, that linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos at the slow variables through rapid mixing at the fast variables. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts common intuition: naturally, one would expect that coupling a slow, weakly chaotic system with a much faster and more strongly chaotic system would generally increase chaos at the slow variables.
International Nuclear Information System (INIS)
Zhang Yunong; Li Zhan
2009-01-01
In this Letter, by following Zhang et al.'s method, a recurrent neural network (termed as Zhang neural network, ZNN) is developed and analyzed for solving online the time-varying convex quadratic-programming problem subject to time-varying linear-equality constraints. Different from conventional gradient-based neural networks (GNN), such a ZNN model makes full use of the time-derivative information of time-varying coefficient. The resultant ZNN model is theoretically proved to have global exponential convergence to the time-varying theoretical optimal solution of the investigated time-varying convex quadratic program. Computer-simulation results further substantiate the effectiveness, efficiency and novelty of such ZNN model and method.
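The ZNN design principle can be sketched on a simpler cousin of the quadratic program above: solving a time-varying linear system A(t)x = b(t). Define the error E(t) = A(t)x - b(t), impose the decay law Ė = -γE, and solve for ẋ, which uses the time derivatives of the coefficients exactly as the abstract emphasizes. All matrices and gains below are invented for illustration:

```python
import numpy as np

gamma, dt = 50.0, 1.0e-3  # ZNN gain and Euler step (illustrative values)

def A(t):
    return np.array([[2.0 + np.sin(t), 0.3], [0.3, 2.0 + np.cos(t)]])

def b(t):
    return np.array([np.cos(t), np.sin(t)])

def dA(t):
    return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

def db(t):
    return np.array([-np.sin(t), np.cos(t)])

x = np.zeros(2)
for k in range(5000):
    t = k * dt
    err = A(t) @ x - b(t)
    # ZNN: impose d/dt (A x - b) = -gamma (A x - b) and solve for x'
    xdot = np.linalg.solve(A(t), db(t) - dA(t) @ x - gamma * err)
    x = x + dt * xdot

t_end = 5000 * dt
residual = np.linalg.norm(A(t_end) @ x - b(t_end))
```

A gradient-based network driven only by the instantaneous error would lag the moving solution; feeding in Ȧ and ḃ is what lets the ZNN track it with exponentially decaying error.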
Christiansen, Bo
2015-04-01
Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic (Bayesian with flat priors) treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well-known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
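The choice of noise model matters in exactly the way described: ordinary least squares with a noisy independent variable attenuates the slope toward zero, while an errors-in-variables estimator (here the Deming point estimate with a known error-variance ratio, a simpler device than the paper's marginalized Bayesian treatment) recovers it. A synthetic illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
x_true = rng.standard_normal(n)
x = x_true + 0.5 * rng.standard_normal(n)   # observed x carries noise
y = 2.0 * x_true + 0.5 * rng.standard_normal(n)

sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y)[0, 1]

# OLS slope is attenuated by var(x_true)/(var(x_true)+var(noise)) = 0.8
b_ols = sxy / sxx

# Deming (errors-in-variables) slope with known variance ratio delta = 1
delta = 1.0
b_eiv = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
```

With the invented variances above the OLS slope settles near 1.6 rather than the true 2, a bias no amount of data removes, which is the practical force of the abstract's point about choosing the error model before regressing.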
Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size
Hadjimichael, Yiannis; Ketcheson, David I.; Lóczi, Lajos; Németh, Adrián
2016-01-01
Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order
van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C
1994-01-05
Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others may or may not be calculable from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented in a computer program. (c) 1994 John Wiley & Sons
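The matrix-algebra flavour of this classification can be sketched on a toy system. Writing the conservation relations as E r = 0 and splitting the columns of E into measured and unmeasured rates, the redundancy matrix is the part of the measured block not absorbable by the unmeasured rates; its rank counts the independent consistency checks, and the unmeasured rates follow by a pseudoinverse. The stoichiometric numbers below are invented for illustration, not taken from the article:

```python
import numpy as np

# Two linear conservation relations (carbon and electron balances) over
# four conversion rates: substrate, biomass, CO2, O2 (illustrative values).
E = np.array([[1.0, 1.0, 1.0, 0.0],
              [4.0, 4.2, 0.0, -4.0]])
measured, unmeasured = [0, 1, 2], [3]
Em, Eu = E[:, measured], E[:, unmeasured]

# Redundancy matrix R: part of Em not absorbable by the unmeasured rates;
# rank(R) is the number of independent consistency checks on the data.
R = Em - Eu @ np.linalg.pinv(Eu) @ Em
n_checks = np.linalg.matrix_rank(R)

rm = np.array([-10.0, 4.0, 6.0])    # measured rates satisfying the C balance
residual = R @ rm                   # ~0 when the measurements are consistent
ru = -np.linalg.pinv(Eu) @ Em @ rm  # calculable unmeasured rate (O2 uptake)
```

A nonzero residual would flag a gross error in the measured rates; here the O2 column has full rank in Eu, so the O2 rate is calculable from the electron balance.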
Warren, Dana R.; Dunham, Jason B.; Hockman-Wert, David
2014-01-01
Understanding local and geographic factors influencing species distributions is a prerequisite for conservation planning. Our objective in this study was to model local and geographic variability in elevations occupied by native and nonnative trout in the northwestern Great Basin, USA. To this end, we analyzed a large existing data set of trout presence (5,156 observations) to evaluate two fundamental factors influencing occupied elevations: climate-related gradients in geography and local constraints imposed by topography. We applied quantile regression to model upstream and downstream distribution elevation limits for each trout species commonly found in the region (two native and two nonnative species). With these models in hand, we simulated an upstream shift in elevation limits of trout distributions to evaluate potential consequences of habitat loss. Downstream elevation limits were inversely associated with latitude, reflecting regional gradients in temperature. Upstream limits were positively related to maximum stream elevation as expected. Downstream elevation limits were constrained topographically by valley bottom elevations in northern streams but not in southern streams, where limits began well above valley bottoms. Elevation limits were similar among species. Upstream shifts in elevation limits for trout would lead to more habitat loss in the north than in the south, a result attributable to differences in topography. Because downstream distributions of trout in the north extend into valley bottoms with reduced topographic relief, trout in more northerly latitudes are more likely to experience habitat loss associated with an upstream shift in lower elevation limits. By applying quantile regression to relatively simple information (species presence, elevation, geography, topography), we were able to identify elevation limits for trout in the Great Basin and explore the effects of potential shifts in these limits that could occur in response to changing
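Quantile regression, the tool used above to model upper and lower elevation limits, fits a line by minimizing the pinball (check) loss rather than squared error. A self-contained subgradient-descent sketch on synthetic data (the data-generating numbers are invented; production analyses typically use a linear-programming solver instead):

```python
import numpy as np

def fit_quantile_line(x, y, tau, steps=30000, lr=0.05):
    """Fit y ~ a + b*x at quantile tau by subgradient descent on the
    pinball loss rho_tau(r) = r * (tau - [r < 0])."""
    a, b = 0.0, 0.0
    for k in range(steps):
        r = y - (a + b * x)
        g = np.where(r > 0, -tau, 1.0 - tau)   # d(loss)/d(prediction)
        step = lr / np.sqrt(k + 1.0)           # decaying step size
        a -= step * g.mean()
        b -= step * (g * x).mean()
    return a, b

rng = np.random.default_rng(6)
x = rng.uniform(-5.0, 5.0, 800)
y = 1.0 + 2.0 * x + rng.uniform(0.0, 1.0, 800)  # one-sided noise in [0, 1]
a50, b50 = fit_quantile_line(x, y, tau=0.5)      # median line: a ~ 1.5, b ~ 2
```

Fitting high and low tau values (e.g. 0.95 and 0.05) to presence data yields upper and lower envelope lines, the analogue of the upstream and downstream elevation limits modelled in the study.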
Symmetry of the homogeneous linear partial differential equations and separation of variables
International Nuclear Information System (INIS)
Gegelia, D.T.; Markovski, B.L.
1990-01-01
The general interplay between the dynamical symmetry of LPDEs and the problem of variable splitting is analyzed. The existence of a symmetry is only a necessary condition for separation of variables. The necessary and sufficient conditions for two-dimensional second-order LPDEs are explicitly found in an appropriate coordinate system. The proposed construction can be straightforwardly extended to higher dimensions. 8 refs
International Nuclear Information System (INIS)
Knudson, D.L.; Rempe, J.L.; Daw, J.E.
2009-01-01
The United States (U.S.) Department of Energy (DOE) designated the Advanced Test Reactor (ATR) as a National Scientific User Facility (NSUF) in April 2007 to promote nuclear science and technology in the U.S. Given this designation, the ATR is supporting new users from universities, laboratories, and industry as they conduct basic and applied nuclear research and development to advance the nation's energy security needs. A fundamental component of the ATR NSUF program is to develop in-pile instrumentation capable of providing real-time measurements of key parameters during irradiation experiments. Dimensional change is a key parameter that must be monitored during irradiation of new materials being considered for fuel, cladding, and structures in next generation and existing nuclear reactors. Such materials can experience significant changes during high temperature irradiation. Currently, dimensional changes are determined by repeatedly irradiating a specimen for a defined period of time in the ATR and then removing it from the reactor for evaluation. The time and labor to remove, examine, and return irradiated samples for each measurement makes this approach very expensive. In addition, such techniques provide limited data (i.e., only characterizing the end state when samples are removed from the reactor) and may disturb the phenomena of interest. To address these issues, the Idaho National Laboratory (INL) recently initiated efforts to evaluate candidate linear variable displacement transducers (LVDTs) for use during high temperature irradiation experiments in typical ATR test locations. Two nuclear-grade LVDT vendor designs were identified for consideration: a smaller-diameter design qualified for temperatures up to 350 °C and a larger design with capabilities to 500 °C. Initial evaluation efforts include collecting calibration data as a function of temperature, long duration testing of LVDT response while held at high temperature, and the assessment of changes
Analysis of Design Variables of Annular Linear Induction Electromagnetic Pump using an MHD Model
Energy Technology Data Exchange (ETDEWEB)
Kwak, Jae Sik; Kim, Hee Reyoung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)
2015-05-15
The generated force is affected by many factors, including the electrical input, the hydrodynamic flow, and the geometrical shape. These factors, which are the design variables of an ALIP, should be suitably analyzed to optimally design an ALIP. Analysis of the developed pressure and efficiency of the ALIP as the design variables change is required for an ALIP satisfying the requirements. In this study, the design variables of the ALIP are analyzed using an ideal MHD model. The developed pressure and efficiency are derived and analyzed as functions of the main design variables: pump core length, inner core diameter, flow gap, and number of coil turns.
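As a rough illustration of how such a parametric analysis can proceed, the sketch below sweeps the slip of an idealized induction pump using the simplified slip-based force density f ≈ σ·s·v_s·B² (σ: sodium conductivity, s: slip, v_s: synchronous speed of the travelling field, B: effective radial flux density). The formula's form and every numerical value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
sigma = 2.4e6      # electrical conductivity of liquid sodium, S/m
v_s   = 10.0       # synchronous speed of the travelling field, m/s
B     = 0.1        # effective radial magnetic flux density, T
L     = 0.5        # active pump core length, m

def developed_pressure(slip):
    """Idealized developed pressure: force density sigma*s*v_s*B^2 times core length."""
    return sigma * slip * v_s * B**2 * L

slips = np.linspace(0.0, 1.0, 11)
for s, p in zip(slips, developed_pressure(slips)):
    print(f"slip={s:.1f}  developed pressure={p/1e3:.1f} kPa")
```

A real design study would replace this one-line force model with the full MHD field solution and add the efficiency trade-off against ohmic losses in the fluid and coils.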
Linear and Weakly Nonlinear Instability of Shallow Mixing Layers with Variable Friction
Directory of Open Access Journals (Sweden)
Irina Eglite
2018-01-01
Full Text Available Linear and weakly nonlinear instability of shallow mixing layers is analysed in the present paper. It is assumed that the resistance force varies in the transverse direction. The linear stability problem is solved numerically using a collocation method. It is shown that an increase in the ratio of the friction coefficient in the main channel to that in the floodplain has a stabilizing influence on the flow. The amplitude evolution equation for the most unstable mode (the complex Ginzburg–Landau equation) is derived from the shallow water equations under the rigid-lid assumption. Results of numerical calculations are presented.
Directory of Open Access Journals (Sweden)
MENKA PETKOVSKA
2000-12-01
Full Text Available The concept of higher-order frequency response functions (FRFs) is used for the analysis of non-linear adsorption kinetics on a particle scale, for the case of non-isothermal micropore diffusion with variable diffusivity. Six series of FRFs are defined for the general non-isothermal case. A non-linear mathematical model is postulated and the first- and second-order FRFs derived and simulated. A variable diffusivity significantly influences the shapes of the second-order FRFs relating the sorbate concentration in the solid phase and the gas pressure, but they still keep their characteristics, which can be used to discriminate this mechanism from other kinetic mechanisms. It is also shown that the first- and second-order particle FRFs offer sufficient information for an easy and fast estimation of all model parameters, including those defining the system non-linearity.
International Nuclear Information System (INIS)
Chen, Haixia; Zhang, Jing
2007-01-01
We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
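The log-linear analogue of ANCOVA described above can be illustrated with a short simulation: fit a main-terms Poisson working model by Newton-Raphson and compare the treatment coefficient with the marginal log rate ratio, which is 0.5 by construction here. The data-generating process and all numbers are invented for illustration; this is a sketch of the phenomenon, not the paper's estimator code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
a = rng.integers(0, 2, n)              # randomized treatment indicator
w = rng.normal(size=n)                 # baseline covariate
# True model is NOT log-linear in (a, w): the main-terms working model below
# is deliberately misspecified. The marginal log rate ratio is 0.5 by construction.
y = rng.poisson(np.exp(0.2 + 0.5*a + 0.3*np.sin(w)))

# Fit the Poisson working model log E[Y] = b0 + b1*a + b2*w by Newton-Raphson.
X = np.column_stack([np.ones(n), a, w])
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))

print(f"treatment coefficient {beta[1]:.3f} vs marginal log rate ratio 0.5")
```

Despite the misspecification, the fitted treatment coefficient lands near 0.5, consistent with the paper's asymptotic-unbiasedness result for randomized treatment.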
Effect of milk proteins on linear growth and IGF variables in overweight adolescents
DEFF Research Database (Denmark)
Larnkjær, Anni; Arnberg, Karina; Michaelsen, Kim F
2014-01-01
Milk may stimulate growth, acting via insulin-like growth factor-I (IGF-I) secretion, but the effect in adolescents is less well examined. This study investigates the effect of milk proteins on linear growth, IGF-I, IGF binding protein-3 (IGFBP-3) and the IGF-I/IGFBP-3 ratio in overweight adolescents....
Abramov, Rafail V.
2011-01-01
Chaotic multiscale dynamical systems are common in many areas of science, one of the examples being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that the linear slow-fast coupling with the total energy conservation prop...
International Nuclear Information System (INIS)
Christiansen, E.L.; Thompson, J.R.; Kopp, S.
1986-01-01
The observer variability and accuracy of linear and angular computed tomography (CT) software measurements in the transaxial plane were investigated for the temporomandibular joint with the General Electric 8800 CT/N Scanner. A dried and measured human mandible was embedded in plastic and scanned in vitro. Sixteen observers participated in the study. The following measurements were tested: inter- and extra-condylar distances, transverse condylar dimension, condylar angulation, and the plastic base of the specimen. Three frozen cadaveric heads were similarly scanned and measured in situ. Intra- and inter-observer variabilities were lowest for the specimen base and highest for condylar angulation. Neuroradiologists had the lowest variability as a group, and the radiology residents and paramedical personnel had the highest, but the differences were small. No significant difference was found between CT and macroscopic measurement of the mandible. In situ measurement by CT of condyles with structural changes in the transaxial plane was, however, subject to substantial error. It was concluded that transaxial linear measurements of the condylar processes free of significant structural changes had an error and an accuracy well within acceptable limits. The error for angular measurements was significantly greater than the error for linear measurements
Development of a new linearly variable edge filter (LVEF)-based compact slit-less mini-spectrometer
Mahmoud, Khaled; Park, Seongchong; Lee, Dong-Hoon
2018-02-01
This paper presents the development of a compact charge-coupled device (CCD) spectrometer. We describe the design, concept and characterization of a VNIR linear variable edge filter (LVEF)-based mini-spectrometer. The new instrument has been realized for operation in the 300 nm to 850 nm wavelength range and consists of a linear variable edge filter in front of a CCD array. Small size, light weight and low cost are achieved because the linearly variable filter requires no moving parts for wavelength selection, in contrast to commercially available scanning spectrometers. This overview discusses the characteristics of the main components and the overall concept, together with the principal advantages and limitations. Experimental characteristics of the LVEFs are described. The mathematical approach used to obtain the position-dependent slit function of the presented prototype spectrometer, and its numerical deconvolution solution for spectrum reconstruction, are also described. The performance of the prototype instrument is demonstrated by measuring the spectrum of a reference light source.
A linear-encoding model explains the variability of the target morphology in regeneration
Lobo, Daniel; Solano, Mauricio; Bubenik, George A.; Levin, Michael
2014-01-01
A fundamental assumption of today's molecular genetics paradigm is that complex morphology emerges from the combined activity of low-level processes involving proteins and nucleic acids. An inherent characteristic of such nonlinear encodings is the difficulty of creating the genetic and epigenetic information that will produce a given self-assembling complex morphology. This ‘inverse problem’ is vital not only for understanding the evolution, development and regeneration of bodyplans, but also for synthetic biology efforts that seek to engineer biological shapes. Importantly, the regenerative mechanisms in deer antlers, planarian worms and fiddler crabs can solve an inverse problem: their target morphology can be altered specifically and stably by injuries in particular locations. Here, we discuss the class of models that use pre-specified morphological goal states and propose the existence of a linear encoding of the target morphology, making the inverse problem easy for these organisms to solve. Indeed, many model organisms such as Drosophila, hydra and Xenopus also develop according to nonlinear encodings producing linear encodings of their final morphologies. We propose the development of testable models of regeneration regulation that combine emergence with a top-down specification of shape by linear encodings of target morphology, driving transformative applications in biomedicine and synthetic bioengineering. PMID:24402915
A critical oscillation constant as a variable of time scales for half-linear dynamic equations
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel
2010-01-01
Roč. 60, č. 2 (2010), s. 237-256 ISSN 0139-9918 R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : dynamic equation * time scale * half-linear equation * (non)oscillation criteria * Hille-Nehari criteria * Kneser criteria * critical constant * oscillation constant * Hardy inequality Subject RIV: BA - General Mathematics Impact factor: 0.316, year: 2010 http://link.springer.com/article/10.2478%2Fs12175-010-0009-7
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
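A minimal sketch of this kind of frequency-space variability subtraction (schematic, not the authors' actual pipeline): locate the dominant peak in the periodogram of the flux and subtract a least-squares sinusoid at that frequency. The synthetic light curve below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)            # time, days
f_star = 2.3                                 # stellar oscillation frequency, 1/day
flux = 1.0 + 0.004*np.sin(2*np.pi*f_star*t + 0.7) + 0.001*rng.normal(size=t.size)

# 1. Find the dominant oscillation frequency from the FFT of the mean-subtracted flux.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
power = np.abs(np.fft.rfft(flux - flux.mean()))
f_peak = freqs[np.argmax(power[1:]) + 1]     # skip the zero-frequency bin

# 2. Fit and subtract a sinusoid at that frequency (linear least squares in sin/cos).
A = np.column_stack([np.sin(2*np.pi*f_peak*t), np.cos(2*np.pi*f_peak*t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
cleaned = flux - A @ coef + coef[2]          # keep the baseline flux level

print(f"peak at {f_peak:.2f}/day, residual scatter {np.std(cleaned):.5f}")
```

In practice one would iterate over all significant peaks and fit only out-of-transit points, but the core move of the method, suppressing each oscillation in frequency space before transit fitting, is already visible here.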
A High-Linearity Low-Noise Amplifier with Variable Bandwidth for Neural Recoding Systems
Yoshida, Takeshi; Sueishi, Katsuya; Iwata, Atsushi; Matsushita, Kojiro; Hirata, Masayuki; Suzuki, Takafumi
2011-04-01
This paper describes a low-noise amplifier with multiple adjustable parameters for neural recording applications. An adjustable pseudo-resistor implemented with cascaded metal-oxide-semiconductor field-effect transistors (MOSFETs) is proposed to achieve low signal distortion and a wide variable bandwidth range. The amplifier has been implemented in a 0.18 µm standard complementary metal-oxide-semiconductor (CMOS) process and occupies 0.09 mm² on chip. The amplifier achieved selectable voltage gains of 28 and 40 dB, a variable bandwidth from 0.04 to 2.6 Hz, total harmonic distortion (THD) of 0.2% with 200 mV output swing, input-referred noise of 2.5 µVrms over 0.1-100 Hz and 18.7 µW power consumption at a supply voltage of 1.8 V.
Directory of Open Access Journals (Sweden)
Olga Yu. Aleshkina
2017-05-01
Results and Conclusion ― The greatest height was found at the levels of the incisors and the 3rd molar, and the smallest at the levels of the 1st and 2nd molars; maximum mandible thickness was found at the level of the 2nd molar, and minimum thickness at the levels of the canine and the 1st–2nd premolars on both sides of the mandible; average thickness was found at the levels of the incisors and the 1st and 2nd molars, with the same statistical values. Bilateral variability of thickness was significantly dominant on the right side, and only at the levels of the 1st–2nd premolars and the 1st molar. Average values of height and thickness on both sides of the mandible and at all levels had a medium degree of variability.
Multivariate linear regression of high-dimensional fMRI data with multiple target variables.
Valente, Giancarlo; Castellanos, Agustin Lage; Vanacore, Gianluca; Formisano, Elia
2014-05-01
Multivariate regression is increasingly used to study the relation between fMRI spatial activation patterns and experimental stimuli or behavioral ratings. With linear models, informative brain locations are identified by mapping the model coefficients. This is a central aspect in neuroimaging, as it provides the sought-after link between the activity of neuronal populations and the subject's perception, cognition or behavior. Here, we show that mapping of informative brain locations using multivariate linear regression (MLR) may lead to incorrect conclusions and interpretations. MLR algorithms for high dimensional data are designed to deal with targets (stimuli or behavioral ratings, in fMRI) separately, and the predictive map of a model integrates information deriving from both neural activity patterns and experimental design. Not accounting explicitly for the presence of other targets whose associated activity spatially overlaps with the one of interest may lead to predictive maps that are difficult to interpret. We propose a new model that can correctly identify the spatial patterns associated with a target while achieving good generalization. For each target, the training is based on an augmented dataset, which includes all remaining targets. The estimation on such datasets produces both maps and interaction coefficients, which are then used to generalize. The proposed formulation is independent of the regression algorithm employed. We validate this model on simulated fMRI data and on a publicly available dataset. Results indicate that our method achieves high spatial sensitivity and good generalization and that it helps disentangle specific neural effects from interaction with predictive maps associated with other targets. Copyright © 2013 Wiley Periodicals, Inc.
Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe
2017-09-01
We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is based on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely in the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method based on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrease of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
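The pole-based, band-limited irregularity index can be prototyped in a few lines: fit an AR model via the Yule-Walker equations, keep the poles whose frequencies fall in the assigned band, and take their mean distance from the unit circle (small distance = regular rhythm in that band). This is a schematic reading of the method with an invented signal; the model order, sampling convention and test data are all assumptions.

```python
import numpy as np

def band_pole_distance(x, fs, band, order=8):
    """Mean distance from the unit circle of AR poles whose frequencies lie in `band`."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode='full')[x.size - 1:] / x.size   # autocorrelation
    lags = np.arange(order)
    R = r[np.abs(lags[:, None] - lags[None, :])]                # Toeplitz matrix
    a = np.linalg.solve(R, r[1:order + 1])                      # Yule-Walker AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))
    f = np.angle(poles) * fs / (2 * np.pi)                      # pole frequencies, Hz
    sel = (f >= band[0]) & (f <= band[1])
    return np.mean(1.0 - np.abs(poles[sel])) if sel.any() else float('nan')

rng = np.random.default_rng(2)
fs = 1.0                                   # one sample per beat (illustrative)
t = np.arange(600)
x = np.sin(2*np.pi*0.1*t) + 0.3*rng.normal(size=t.size)   # regular LF rhythm + noise
lf = band_pole_distance(x, fs, (0.04, 0.15))
hf = band_pole_distance(x, fs, (0.15, 0.5))
print(f"LF pole distance {lf:.3f}, HF pole distance {hf:.3f}")
```

For the strongly regular 0.1 Hz rhythm, the LF pole sits close to the unit circle, so the LF distance is small, mirroring the paper's interpretation of regularization in a band.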
DEFF Research Database (Denmark)
Hansen, Niels Chr.; Sadakata, Makiko; Pearce, Marcus
It is a long-held belief in historical musicology that the prosody of composers’ native languages is reflected in the rhythmic and melodic properties of their music. Applying the normalised Pairwise Variability Index (nPVI) to speech alongside musical scores, research has established quantitative...... music up until the mid-19th century, after which French music diverged into an Austro-German school and a French nationalist school. In sum, using musical nPVI analysis, we provide quantitative support for music-historical descriptions of an Italian-dominated Baroque (composer birth years: 1600...
Quantization of spin-two field in terms of Fierz variables the linear case
International Nuclear Information System (INIS)
Novello, M.; Freitas, L.R. de; Neto, N.P.; Svaiter, N.F.
1991-01-01
We give a complete self-contained presentation of the description of spin-two fields using Fierz variables A_{αβμ} instead of the conventional standard approach, which deals with a second-order symmetric tensor φ_{μν}. After a short review of the classical properties of the Fierz field we present the quantization procedure. The theory presents a striking similarity with electrodynamics, which led us to follow the analogy with the Fermi-Gupta-Bleuler scheme of quantization. (author)
Graphical constraints: a graphical user interface for constraint problems
Vieira, Nelson Manuel Marques
2015-01-01
A constraint satisfaction problem is a classical artificial intelligence paradigm characterized by a set of variables (each variable with an associated domain of possible values), and a set of constraints that specify relations among subsets of these variables. Solutions are assignments of values to all variables that satisfy all the constraints. Many real world problems may be modelled by means of constraints. The range of problems that can use this representation is very diverse and embrace...
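The paradigm described above, variables with domains plus constraints over subsets of them, can be demonstrated with a minimal backtracking solver on a toy map-colouring instance. The instance, variable names and solver design below are invented for illustration; real CSP tools add propagation and heuristics on top of this skeleton.

```python
def solve_csp(domains, constraints, assignment=None):
    """Backtracking search. `domains` maps variable -> list of values;
    `constraints` is a list of (variables, predicate) pairs."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check every constraint whose variables are all assigned so far.
        ok = all(pred(*(assignment[v] for v in vs))
                 for vs, pred in constraints
                 if all(v in assignment for v in vs))
        if ok:
            result = solve_csp(domains, constraints, assignment)
            if result:
                return result
        del assignment[var]
    return None

# Toy map-colouring instance (invented): three mutually adjacent regions.
domains = {"A": ["red", "green"], "B": ["red", "green"], "C": ["red", "green", "blue"]}
neq = lambda x, y: x != y
constraints = [(("A", "B"), neq), (("B", "C"), neq), (("A", "C"), neq)]
print(solve_csp(domains, constraints))
```

A solution is an assignment of one value per variable satisfying every constraint, exactly as defined in the abstract.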
International Nuclear Information System (INIS)
Hernandez-Walls, R; Martín-Atienza, B; Salinas-Matus, M; Castillo, J
2017-01-01
When solving the linear inviscid shallow water equations with variable depth in one dimension using finite differences, a tridiagonal system of equations must be solved. Here we present an approach, which is more efficient than the commonly used numerical method, to solve this tridiagonal system of equations using a recursion formula. We illustrate this approach with an example in which we solve for a rectangular channel to find the resonance modes. Our numerical solution agrees very well with the analytical solution. This new method is easy for undergraduate students to use and understand, so it can be implemented in undergraduate courses such as Numerical Methods, Linear Algebra or Differential Equations. (paper)
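The paper's particular recursion formula is not reproduced in this record, but the classical recursive way to solve such a tridiagonal system is the Thomas algorithm (forward elimination followed by back substitution), sketched below on an invented diagonally dominant example system.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Invented example: a small diagonally dominant tridiagonal system.
n = 6
a = np.full(n, -1.0); a[0] = 0.0                # sub-diagonal (a[0] unused)
b = np.full(n, 4.0)                             # main diagonal
c = np.full(n, -1.0); c[-1] = 0.0               # super-diagonal (c[-1] unused)
d = np.arange(1.0, n + 1.0)
x = thomas(a, b, c, d)
print(x)
```

The cost is O(n), versus O(n³) for general dense elimination, which is why tridiagonal solvers are the standard choice in one-dimensional finite-difference schemes like the shallow water discretization above.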
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
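The three-step workflow can be mimicked outside SPSS. The sketch below, which is not the program's actual algorithm, splits invented data, ranks predictors by a simple relative-importance proxy (absolute standardized coefficients), and assesses a reduced model on the held-out part.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = 2.0*X[:, 0] + 0.5*X[:, 1] + rng.normal(size=n)   # predictors 2 and 3 are pure noise

# (1) Split the data for cross-validation.
idx = rng.permutation(n); train, test = idx[:150], idx[150:]

# (2) Rank predictors by a crude relative-importance proxy: |standardized coefficients|.
Xs = (X - X.mean(0)) / X.std(0)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]),
                           (y - y.mean()) / y.std(), rcond=None)
order = np.argsort(-np.abs(beta[1:]))
print("predictors ranked by importance:", order)

# (3) Assess a model keeping the top-2 predictors on held-out data (R^2).
keep = order[:2]
A_tr = np.column_stack([np.ones(train.size), X[np.ix_(train, keep)]])
coef, *_ = np.linalg.lstsq(A_tr, y[train], rcond=None)
pred = np.column_stack([np.ones(test.size), X[np.ix_(test, keep)]]) @ coef
r2 = 1 - np.sum((y[test] - pred)**2) / np.sum((y[test] - y[test].mean())**2)
print(f"held-out R^2 = {r2:.2f}")
```

Standardized coefficients are only one of several relative-importance measures; with correlated predictors, dominance analysis or relative weights (as the SPSS program favours) are more defensible.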
The structure of solutions of the matrix linear unilateral polynomial equation with two variables
Directory of Open Access Journals (Sweden)
N. S. Dzhaliuk
2017-07-01
Full Text Available We investigate the structure of solutions of the matrix linear polynomial equation $A(\\lambdaX(\\lambda+B(\\lambdaY(\\lambda=C(\\lambda,$ in particular, possible degrees of the solutions. The solving of this equation is reduced to the solving of the equivalent matrix polynomial equation with matrix coefficients in triangular forms with invariant factors on the main diagonals, to which the matrices $A (\\lambda, B(\\lambda$ \\ and \\ $C(\\lambda$ are reduced by means of semiscalar equivalent transformations. On the basis of it, we have pointed out the bounds of the degrees of the matrix polynomial equation solutions. Necessary and sufficient conditions for the uniqueness of a solution with a minimal degree are established. An effective method for constructing minimal degree solutions of the equations is suggested. In this article, unlike well-known results about the estimations of the degrees of the solutions of the matrix polynomial equations in which both matrix coefficients are regular or at least one of them is regular, we have considered the case when the matrix polynomial equation has arbitrary matrix coefficients $A(\\lambda$ and $B(\\lambda.$
Santos, Carlos; Espinosa, Felipe; Santiso, Enrique; Mazo, Manuel
2015-05-27
One of the main challenges in wireless cyber-physical systems is to reduce the load of the communication channel while preserving the control performance. In this way, communication resources are liberated for other applications sharing the channel bandwidth. The main contribution of this work is the design of a remote control solution based on an aperiodic and adaptive triggering mechanism considering the current network delay of multiple robotics units. Working with the actual network delay instead of the maximum one leads to abandoning this conservative assumption, since the triggering condition is fixed depending on the current state of the network. This way, the controller manages the usage of the wireless channel in order to reduce the channel delay and to improve the availability of the communication resources. The communication standard under study is the widespread IEEE 802.11g, whose channel delay is clearly uncertain. First, the adaptive self-triggered control is validated through the TrueTime simulation tool configured for the mentioned WiFi standard. Implementation results applying the aperiodic linear control laws on four P3-DX robots are also included. Both of them demonstrate the advantage of this solution in terms of network accessing and control performance with respect to periodic and non-adaptive self-triggered alternatives.
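A toy version of such an adaptive self-triggered rule (schematic, not the authors' controller): the state is transmitted only when its deviation from the last transmitted value exceeds a threshold that relaxes as the measured network delay grows, reducing channel load when the network is congested. Dynamics, gains and the delay model are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
x, x_sent = 1.0, 1.0          # scalar plant state and last transmitted state
a, b, k = 1.02, 1.0, 0.25     # slightly unstable open-loop pole, input gain, feedback gain
sends = 0
for step in range(200):
    delay = rng.uniform(0.01, 0.1)            # measured channel delay, s (invented model)
    # Adaptive threshold: transmit less often when the current delay is large,
    # trading some control performance for reduced channel usage.
    threshold = 0.05 * (1.0 + 10.0 * delay)
    if abs(x - x_sent) > threshold:           # self-triggered transmission condition
        x_sent = x
        sends += 1
    u = -k * x_sent                           # controller acts on last transmitted state
    x = a * x + b * u + 0.005 * rng.normal()  # plant update with small process noise
print(f"transmissions: {sends}/200, final |x| = {abs(x):.3f}")
```

The simulation shows the intended trade-off: far fewer than 200 transmissions, while the state stays regulated in a small neighbourhood of the origin.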
Origins of Total-Dose Response Variability in Linear Bipolar Microcircuits
International Nuclear Information System (INIS)
Barnaby, H.J.; Cirba, C.R.; Schrimpf, R.D.; Fleetwood, D.M.; Pease, R.L.; Shaneyfelt, Marty R.; Turflinger, T.; Krieg, J.F.; Maher, M.C.
2000-01-01
LM111 voltage comparators exhibit a wide range of total-dose-induced degradation. Simulations show this variability may be a natural consequence of the low base doping of the substrate PNP (SPNP) input transistors. Low base doping increases the SPNP's collector-to-base breakdown voltage, current gain, and sensitivity to small fluctuations in the radiation-induced oxide defect densities. The build-up of oxide-trapped charge (N_ot) and interface traps (N_it) is shown to be a function of pre-irradiation bakes. Experimental data indicate that, despite its structural similarities to the LM111, irradiated input transistors of the LM124 operational amplifier do not exhibit the same sensitivity to variations in pre-irradiation thermal cycles. Further disparities in LM111 and LM124 responses may result from a difference in the oxide defect build-up in the two part types. Variations in processing, packaging, and circuit effects are suggested as potential explanations.
A 7MeV S-Band 2998MHz Variable Pulse Length Linear Accelerator System
Hernandez, Michael; Mishin, Andrey V; Saverskiy, Aleksandr J; Skowbo, Dave; Smith, Richard
2005-01-01
American Science and Engineering High Energy Systems Division (AS&E HESD) has designed and commissioned a variable pulse length 7 MeV electron accelerator system. The system is capable of delivering a 7 MeV electron beam with a pulse length of 10 ns FWHM and a peak current of 1 ampere. The system can also produce electron pulses with lengths of 20, 50, 100, 200 and 400 ns and 3 µs FWHM, with correspondingly lower peak currents. The accelerator system consists of a gridded electron gun, a focusing coil, an electrostatic deflector system, Helmholtz coils, a standing-wave side-coupled S-band linac, a 2.6 MW peak power magnetron, an RF circulator, a fast toroid, a vacuum system and a PLC/PC control system. The system has been operated at repetition rates up to 250 pps. The design, simulations and experimental results from the accelerator system are presented in this paper.
Oguntunde, Philip G.; Lischeid, Gunnar; Dietrich, Ottfried
2018-03-01
This study examines the variations of climate variables and rice yield and quantifies the relationships among them using multiple linear regression, principal component analysis, and support vector machine (SVM) analysis in southwest Nigeria. The climate and yield data used were for a period of 36 years, between 1980 and 2015. Similar to the observed decrease ( P 1 and explained 83.1% of the total variance of predictor variables. The SVM regression function using the scores of the first principal component explained about 75% of the variance in rice yield data and linear regression about 64%. SVM regression between annual solar radiation values and yield explained 67% of the variance. Only the first component of the principal component analysis (PCA) exhibited a clear long-term trend and sometimes short-term variance similar to that of rice yield. Short-term fluctuations of the scores of the PC1 are closely coupled to those of rice yield during the 1986-1993 and the 2006-2013 periods, thereby revealing the inter-annual sensitivity of rice production to climate variability. Solar radiation stands out as the climate variable of highest influence on rice yield, and the influence was especially strong during the monsoon and post-monsoon periods, which correspond to the vegetative, booting, flowering, and grain filling stages in the study area. The outcome is expected to provide a more in-depth, region-specific climate-rice linkage for screening of better cultivars that can positively respond to future climate fluctuations, as well as information that may help optimize planting dates for improved radiation use efficiency in the study area.
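The PCA-then-regression pipeline described above can be sketched with synthetic data. All numbers below are hypothetical stand-ins for the study's climate series, and ordinary least squares on the PC1 scores stands in for the SVM regression step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 36 years x 5 climate variables, with one
# pair of correlated predictors so PC1 carries most of the variance.
X = rng.normal(size=(36, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=36)
Xc = X - X.mean(axis=0)

# PCA via SVD: rows of Vt are component loadings; Xc @ Vt[0] gives PC1 scores.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance fraction per component
pc1 = Xc @ Vt[0]

# Regress a synthetic "yield" on the PC1 scores (linear stand-in for SVM).
y = 2.0 * pc1 + 0.5 * rng.normal(size=36)
A = np.column_stack([np.ones(36), pc1])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
```

The explained-variance vector plays the role of the abstract's "83.1% of the total variance" figure; with real data one would keep all components with eigenvalues above one.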
van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V
2017-03-21
Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates.
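The final four-predictor model is an ordinary multiple linear regression; a minimal sketch with synthetic fall data (the predictor values and coefficients below are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # hypothetical number of fall trials

# Hypothetical standardised predictors mirroring the model's four inputs:
# max upper-body deceleration, body mass, shoulder angle, max hip deceleration.
X = rng.normal(size=(n, 4))
true_beta = np.array([800.0, 30.0, -15.0, 400.0])   # invented effect sizes
force = 3500.0 + X @ true_beta + rng.normal(scale=300.0, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, force, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((force - pred) ** 2) / np.sum((force - force.mean()) ** 2)
```

With the study's data, the same fit would yield the reported explained variances of 46-63%; here the noise level was chosen arbitrarily.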
Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O
2016-06-01
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h⁻¹ on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single PTV, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
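The allometric adjustment simply raises each raw measure to a fixed exponent before it enters the regression; a sketch with synthetic runner data (all values and generated coefficients are hypothetical; only the exponents 0.72 and 0.60 come from the abstract):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 18  # same sample size as the study; the values below are synthetic

# Hypothetical raw measures: PTV (km/h) and an RE-like cost measure.
ptv = rng.uniform(16, 21, n)
re = rng.uniform(180, 220, n)

# Allometric adjustment: fixed power-law exponents from the abstract.
ptv_adj = ptv ** 0.72
re_adj = re ** 0.60

# 10 km time (min) generated to depend on the adjusted variables.
t10k = 90 - 2.5 * ptv_adj + 0.05 * re_adj + rng.normal(scale=0.8, size=n)

# Multiple regression of race time on the adjusted predictors.
A = np.column_stack([np.ones(n), ptv_adj, re_adj])
beta, *_ = np.linalg.lstsq(A, t10k, rcond=None)
resid = t10k - A @ beta
see = np.sqrt(np.sum(resid ** 2) / (n - 3))  # standard error of the estimate
```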
Directory of Open Access Journals (Sweden)
Nurbaiti
2017-03-01
Science and technology have evolved rapidly in many fields of knowledge, including mathematics. Such development can contribute to improvements in the learning process that encourage students and teachers to enhance their abilities and performance. In delivering the material on the system of linear equations in two variables (SPLDV), the conventional teaching method, in which the teacher is the center of the learning process, is still widely practiced. This method can cause students to get bored and have difficulty understanding the concepts they are learning. Therefore, to make learning SPLDV easier, an interesting, interactive medium that students and teachers can use is necessary. This medium was designed using the MATLAB GUI and named students' electronic worksheets (e-LKS). The program is intended to help students find and understand SPLDV concepts more easily, and it is also expected to improve students' motivation and creativity in learning the material. Based on testing with the System Usability Scale (SUS), the design of this interactive mathematics learning medium for the system of linear equations in two variables (SPLDV) received a grade of B (excellent), meaning that it is suitable for use by Junior High School students of grade VIII.
Effects of breathing patterns and light exercise on linear and nonlinear heart rate variability.
Weippert, Matthias; Behrens, Kristin; Rieger, Annika; Kumar, Mohit; Behrens, Martin
2015-08-01
Despite their use in cardiac risk stratification, the physiological meaning of nonlinear heart rate variability (HRV) measures is not well understood. The aim of this study was to elucidate effects of breathing frequency, tidal volume, and light exercise on nonlinear HRV and to determine associations with traditional HRV indices. R-R intervals, blood pressure, minute ventilation, breathing frequency, and respiratory gas concentrations were measured in 24 healthy male volunteers during 7 conditions: voluntary breathing at rest, and metronome guided breathing (0.1, 0.2 and 0.4 Hz) during rest, and cycling, respectively. The effect of physical load was significant for heart rate (HR; p < 0.001) and traditional HRV indices SDNN, RMSSD, lnLFP, and lnHFP (p < 0.01 for all). It approached significance for sample entropy (SampEn) and correlation dimension (D2) (p < 0.1 for both), while HRV detrended fluctuation analysis (DFA) measures DFAα1 and DFAα2 were not affected by load condition. Breathing did not affect HR but affected all traditional HRV measures. D2 was not affected by breathing; DFAα1 was moderately affected by breathing; and DFAα2, approximate entropy (ApEn), and SampEn were strongly affected by breathing. DFAα1 was strongly increased, whereas DFAα2, ApEn, and SampEn were decreased by slow breathing. No interaction effect of load and breathing pattern was evident. Correlations to traditional HRV indices were modest (r from -0.14 to -0.67, p < 0.05 to <0.01). In conclusion, while light exercise does not significantly affect short-time HRV nonlinear indices, respiratory activity has to be considered as a potential contributor at rest and during light dynamic exercise.
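Sample entropy (SampEn), one of the nonlinear indices discussed above, can be computed directly from a tachogram-like series; a compact O(n²) sketch (the tolerance r = 0.2 × SD and template length m = 2 are conventional defaults, not parameters taken from this study):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): -ln of the ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count_matches(mm):
        # All overlapping templates of length mm; count pairs whose
        # Chebyshev distance is below the tolerance r.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d < r)
        return c

    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))  # highly regular signal
noisy = rng.normal(size=400)                       # irregular signal
```

A regular signal yields a lower SampEn than an irregular one, which is the direction of the breathing effects reported above.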
Linearly constrained minimax optimization
DEFF Research Database (Denmark)
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
2013-01-01
Background In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This process raises a multiple testing problem and requires a correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to that associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probab Lett, 71:33-38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
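The resampling correction amounts to comparing the observed maximum statistic over all candidate codings against the permutation null distribution of that maximum; a minimal sketch with synthetic data and hypothetical dichotomous codings (the paper's own implementation is the CPMCGLM R package, not this code):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Hypothetical data: a continuous exposure with no true effect on y,
# so the corrected test should usually not reject.
x = rng.normal(size=n)
y = rng.normal(size=n)

def coding_stats(x, y, cuts):
    """Squared correlation of y with each dichotomisation of x (one per cut)."""
    stats = []
    for c in cuts:
        z = (x > c).astype(float)
        stats.append(np.corrcoef(z, y)[0, 1] ** 2)
    return np.array(stats)

cuts = np.quantile(x, [0.25, 0.5, 0.75])   # three candidate codings
t_obs = coding_stats(x, y, cuts).max()

# Permute y to approximate the null distribution of the *maximum*
# statistic over all candidate codings, then compute a corrected p-value.
t_null = np.array([coding_stats(x, rng.permutation(y), cuts).max()
                   for _ in range(500)])
p_corrected = (1 + np.sum(t_null >= t_obs)) / (1 + len(t_null))
```

Taking the maximum inside each resample is what accounts for having tried several codings, mirroring the correction the abstract describes.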
Directory of Open Access Journals (Sweden)
Pivatelli Flávio
2012-10-01
Background Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated the linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods We studied 77 unselected patients scheduled for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD groups. For analysis of HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals with a difference of duration greater than 50 ms] and RMSSD [root-mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤0.003 Hz), very low frequency (VLF, 0.003-0.04 Hz), low frequency (LF, 0.04-0.15 Hz), and high frequency (HF, 0.15-0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). For the nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and correlation dimension. The cutoff point of each variable for predictive tests was obtained from the Receiver Operating Characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, assuming as relevant areas under the curve ≥ 0.650. Results Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ -0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion We suggest the use of heart rate variability analysis, in both linear and nonlinear domains, for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
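Several of the indices above follow directly from the RR-interval series; a sketch computing RMSSD, NN50, and the Poincaré descriptors SD1 and SD2 from a synthetic tachogram (the signal below is hypothetical, not patient data):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical RR-interval series (ms): slow drift plus beat-to-beat noise.
rr = (800
      + np.cumsum(rng.normal(scale=5, size=600)) * 0.1
      + rng.normal(scale=20, size=600))

diff = np.diff(rr)
rmssd = np.sqrt(np.mean(diff ** 2))        # time-domain index (ms)
nn50 = np.sum(np.abs(diff) > 50)           # adjacent intervals differing > 50 ms

# Poincaré descriptors: SD1 reflects short-term variability (and equals
# RMSSD / sqrt(2) up to the variance estimator), SD2 long-term variability.
sd1 = np.sqrt(np.var(diff, ddof=1) / 2)
sd2 = np.sqrt(2 * np.var(rr, ddof=1) - np.var(diff, ddof=1) / 2)
```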
Directory of Open Access Journals (Sweden)
Ouanani Mouloud
2018-01-01
This paper summarizes the main results on the effect of the incoherence component of spatial variability of ground motion (SVGM) on the non-linear dynamic behavior of the Mila cable-stayed bridge. The Hindy and Novak coherence model is adopted in the present study in order to examine the effect of SVGM on bridge responses. Nonlinear bridge responses are investigated in terms of transverse displacements and bending moments along the superstructure and substructure of the studied bridge, as well as temporal variations of rotational ductility demands at the ends of the bridge piers, under the incoherence component of SVGM. The results are systematically compared with those obtained assuming uniform ground motion. As a general trend, it may be concluded that the incoherence component of SVGM should be considered in earthquake response assessments of cable-stayed bridges.
Donges, J. F.; Donner, R. V.; Marwan, N.; Breitenbach, S. F. M.; Rehfeld, K.; Kurths, J.
2015-05-01
The Asian monsoon system is an important tipping element in Earth's climate with a large impact on human societies in the past and present. In light of the potentially severe impacts of present and future anthropogenic climate change on Asian hydrology, it is vital to understand the forcing mechanisms of past climatic regime shifts in the Asian monsoon domain. Here we use novel recurrence network analysis techniques for detecting episodes with pronounced non-linear changes in Holocene Asian monsoon dynamics recorded in speleothems from caves distributed throughout the major branches of the Asian monsoon system. A newly developed multi-proxy methodology explicitly considers dating uncertainties with the COPRA (COnstructing Proxy Records from Age models) approach and allows for detection of continental-scale regime shifts in the complexity of monsoon dynamics. Several epochs are characterised by non-linear regime shifts in Asian monsoon variability, including the periods around 8.5-7.9, 5.7-5.0, 4.1-3.7, and 3.0-2.4 ka BP. The timing of these regime shifts is consistent with known episodes of Holocene rapid climate change (RCC) and high-latitude Bond events. Additionally, we observe a previously rarely reported non-linear regime shift around 7.3 ka BP, a timing that matches the typical 1.0-1.5 ky return intervals of Bond events. A detailed review of previously suggested links between Holocene climatic changes in the Asian monsoon domain and the archaeological record indicates that, in addition to previously considered longer-term changes in mean monsoon intensity and other climatic parameters, regime shifts in monsoon complexity might have played an important role as drivers of migration, pronounced cultural changes, and the collapse of ancient human societies.
Galster, Matthias; Avgeriou, Paris; Tofan, Dan
Context: Service-oriented architecture has become a widely used concept in software industry. However, we currently lack support for designing variability-intensive service-oriented systems. Such systems could be used in different environments, without the need to design them from scratch. To
Souza, Naiara M; Giacon, Thais R; Pacagnelli, Francis L; Barbosa, Marianne P C R; Valenti, Vitor E; Vanderlei, Luiz C M
2016-10-01
Autonomic diabetic neuropathy is one of the most common complications of type 1 diabetes mellitus, and studies using heart rate variability to investigate these individuals have shown inconclusive results regarding autonomic nervous system activation. Aims: To investigate the dynamics of heart rate in young subjects with type 1 diabetes mellitus through nonlinear and linear methods of heart rate variability. We evaluated 20 subjects with type 1 diabetes mellitus and 23 healthy control subjects. We obtained the following nonlinear indices from the recurrence plot: recurrence rate (REC), determinism (DET), and Shannon entropy (ES), and we analysed indices in the frequency domain (LF and HF in ms² and normalised units [nu], and the LF/HF ratio) and time domain (SDNN and RMSSD), through analysis of 1000 R-R intervals captured by a heart rate monitor. There were reduced values (p<0.05) for individuals with type 1 diabetes mellitus compared with healthy subjects in the following indices: DET, REC, ES, RMSSD, SDNN, LF (ms²), and HF (ms²). In relation to the recurrence plot, subjects with type 1 diabetes mellitus demonstrated lower recurrence and greater variation in their plot, inter-group and intra-group, respectively. Young subjects with type 1 diabetes mellitus have autonomic nervous system behaviour that tends to randomness compared with healthy young subjects. Moreover, this behaviour is related to reduced sympathetic and parasympathetic activity of the autonomic nervous system.
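The recurrence-plot indices REC and DET can be computed from a thresholded distance matrix; a simple scalar-series sketch (phase-space embedding is omitted, and the 0.2 × SD threshold is an assumption, not this study's setting):

```python
import numpy as np

def recurrence_indices(x, eps_frac=0.2):
    """Recurrence rate (REC) and a simple determinism (DET) from a
    distance-threshold recurrence plot of a scalar series."""
    x = np.asarray(x, dtype=float)
    eps = eps_frac * x.std()
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    np.fill_diagonal(R, 0)                      # exclude self-matches
    rec = R.sum() / (len(x) ** 2 - len(x))

    # DET: fraction of recurrent points on diagonal lines of length >= 2
    # (counted in the upper triangle, doubled by symmetry).
    diag_pts = 0
    for k in range(1, len(x)):
        idx = np.flatnonzero(np.diag(R, k))
        if idx.size == 0:
            continue
        runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
        diag_pts += sum(len(r) for r in runs if len(r) >= 2)
    det = 2 * diag_pts / R.sum() if R.sum() else 0.0
    return rec, det

rng = np.random.default_rng(6)
periodic = np.sin(np.linspace(0, 12 * np.pi, 300))  # deterministic signal
random_sig = rng.normal(size=300)                   # random signal
```

A deterministic signal produces a higher DET than a random one, which is the direction of the group difference reported above.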
MHD flow of Powell-Eyring nanofluid over a non-linear stretching sheet with variable thickness
Directory of Open Access Journals (Sweden)
T. Hayat
This research explores the magnetohydrodynamic (MHD) boundary layer flow of a Powell-Eyring nanofluid past a non-linear stretching sheet of variable thickness. An electrically conducting fluid is considered under a magnetic field applied transverse to the sheet. The mathematical formulation accounts for the boundary layer approach and the Brownian motion and thermophoresis phenomena. The flow analysis is subjected to a recently established condition requiring zero nanoparticle mass flux. Adequate transformations are implemented to reduce the partial differential systems to ordinary differential systems. Series solutions for the governing nonlinear flow of momentum, temperature and nanoparticle concentration have been computed. Physical interpretation of the various parameters is given through graphical illustrations and tabular values. Moreover, numerical data for the drag coefficient and local heat transfer rate are computed and discussed. It is found that a higher wall thickness parameter results in a reduction of the velocity distribution. Effects of the thermophoresis parameter on the temperature and concentration profiles are qualitatively similar: both profiles are enhanced for higher values of the thermophoresis parameter. Keywords: MHD, Variable thickness surface, Powell-Eyring nanofluid, Zero mass flux conditions
Directory of Open Access Journals (Sweden)
Wei Chen
2017-11-01
Full Text Available A landslide susceptibility map plays an essential role in urban and rural planning. The main purpose of this study is to establish a variable-weighted linear combination model (VWLC and assess its potential for landslide susceptibility mapping. Firstly, different objective methods are employed for data processing rather than the frequently-used subjective judgments: K-means clustering is used for classification; binarization is introduced to determine buffer length thresholds for locational elements (road, river, and fault; landslide area density is adopted as the contribution index; and a correlation analysis is conducted for suitable factor selection. Secondly, considering the dimension changes of the preference matrix varying with the different locations of the mapping cells, the variable weights of each optimal factor are determined based on the improved analytic hierarchy process (AHP. On this basis, the VWLC model is established and applied to regional landslide susceptibility mapping for the Shennongjia Forestry District, China, where shallow landslides frequently occur. The obtained map is then compared with a map using the traditional WLC, and the results of the comparison show that VWLC is more reasonable, with a higher accuracy, and can be used anywhere that has the same or similar geological and topographical conditions.
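The AHP step assigns factor weights from the principal eigenvector of a pairwise-comparison matrix; a minimal sketch (the 3 × 3 matrix and factor names are hypothetical, not the study's preference matrices):

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three factors
# (e.g. slope, lithology, distance to road) on Saaty's 1-9 scale.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: principal eigenvector of P, normalised to sum to one.
vals, vecs = np.linalg.eig(P)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with random index RI = 0.58 for n = 3; CR < 0.1 is acceptable.
lam = vals.real[k]
ci = (lam - 3) / 2
cr = ci / 0.58
```

In the variable-weight scheme described above, such a matrix (and hence the weights) would be rebuilt as its dimension changes with the location of the mapping cell.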
Constraints on the Within Season and Between Year Variability of the North Residual Cap from MGS-TES
Calvin, W. M.; Titus, T. N.; Mahoney, S. A.
2003-01-01
There is a long history of telescopic and spacecraft observations of the polar regions of Mars. The finely laminated ice deposits and surrounding layered terrains are commonly thought to contain a record of past climate conditions and change. Understanding the basic nature of the deposits and their mineral and ice constituents is a continued focus of current and future orbited missions. Unresolved issues in Martian polar science include a) the unusual nature of the CO2 ice deposits ("Swiss Cheese", "slab ice" etc.) b) the relationship of the ice deposits to underlying layered units (which differs from the north to the south), c) understanding the seasonal variations and their connections to the finely laminated units observed in high-resolution images and d) the relationship of dark materials in the wind-swept lanes and reentrant valleys to the surrounding dark dune and surface materials. Our work focuses on understanding these issues in relationship to the north residual ice cap. Recent work using Mars Global Surveyor (MGS) data sets have described evolution of the seasonal CO2 frost deposits. In addition, the north polar residual ice cap exhibits albedo variations between Mars years and within the summer season. The Thermal Emission Spectrometer (TES) data set can augment these observations providing additional constraints such as temperature evolution and spectral properties associated with ice and rocky materials. Exploration of these properties is the subject of our current study.
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
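The KS function replaces a set of constraints g_i(x) ≤ 0 with a single smooth, conservative envelope; a sketch (the aggregation parameter ρ = 50 is an arbitrary choice, not a value from the report):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_i:
    a smooth upper bound on max(g) that tightens as rho grows."""
    g = np.asarray(g, dtype=float)
    m = g.max()  # shift for numerical stability of the exponentials
    return m + np.log(np.sum(np.exp(rho * (g - m)))) / rho

g = np.array([-0.5, -0.1, -0.3])  # three satisfied constraints (g_i <= 0)
ks = ks_aggregate(g)
```

The envelope satisfies max(g) ≤ KS ≤ max(g) + ln(m)/ρ for m constraints, so enforcing KS ≤ 0 conservatively enforces all of them with one constraint, which is the substitution the study evaluates.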
Stueve, Kirk M; Isaacs, Rachel E; Tyrrell, Lucy E; Densmore, Roseann V
2011-02-01
Throughout interior Alaska (U.S.A.), a gradual warming trend in mean monthly temperatures occurred over the last few decades (approximately 2-4 °C). The accompanying increases in woody vegetation at many alpine treeline (hereafter treeline) locations provided an opportunity to examine how biotic and abiotic local site conditions interact to control tree establishment patterns during warming. We devised a landscape ecological approach to investigate these relationships at an undisturbed treeline in the Alaska Range. We identified treeline changes between 1953 (aerial photography) and 2005 (satellite imagery) in a geographic information system (GIS) and linked them with corresponding local site conditions derived from digital terrain data, ancillary climate data, and distance to 1953 trees. Logistic regressions enabled us to rank the importance of local site conditions in controlling tree establishment. We discovered a spatial transition in the importance of tree establishment controls. The biotic variable (proximity to 1953 trees) was the most important tree establishment predictor below the upper tree limit, providing evidence of response lags with the abiotic setting and suggesting that tree establishment is rarely in equilibrium with the physical environment or responding directly to warming. Elevation and winter sun exposure were important predictors of tree establishment at the upper tree limit, but proximity to trees persisted as an important tertiary predictor, indicating that tree establishment may achieve equilibrium with the physical environment. However, even here, influences from the biotic variable may obscure unequivocal correlations with the abiotic setting (including temperature). Future treeline expansion will likely be patchy and challenging to predict without considering the spatial variability of influences from biotic and abiotic local site conditions.
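The logistic regressions used to rank establishment controls can be sketched with synthetic site data (the predictor names follow the abstract, but all values and coefficients below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Hypothetical site predictors: distance to 1953 trees (m), elevation (m),
# winter sun exposure (index); establishment odds fall with distance.
X = np.column_stack([rng.uniform(0, 200, n),     # distance to 1953 trees
                     rng.uniform(800, 1200, n),  # elevation
                     rng.uniform(0, 1, n)])      # winter sun exposure
Xs = (X - X.mean(0)) / X.std(0)                  # standardise predictors

true_b = np.array([-1.5, -0.8, 0.5])             # invented effect sizes
p = 1 / (1 + np.exp(-(0.2 + Xs @ true_b)))
y = (rng.uniform(size=n) < p).astype(float)      # 1 = tree established

# Plain gradient-ascent fit of the logistic model.
b = np.zeros(3)
b0 = 0.0
for _ in range(2000):
    mu = 1 / (1 + np.exp(-(b0 + Xs @ b)))
    b += 0.1 * Xs.T @ (y - mu) / n
    b0 += 0.1 * np.mean(y - mu)
```

With standardised predictors, the magnitudes of the fitted coefficients give the kind of importance ranking the study used to separate biotic from abiotic controls.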
Design with Nonlinear Constraints
Tang, Chengcheng
2015-12-10
Most modern industrial and architectural designs need to satisfy the requirements of their targeted performance and respect the limitations of available fabrication technologies. At the same time, they should reflect the artistic considerations and personal taste of the designers, which cannot be simply formulated as optimization goals with single best solutions. This thesis aims at a general, flexible yet efficient computational framework for interactive creation, exploration and discovery of serviceable, constructible, and stylish designs. By formulating nonlinear engineering considerations as linear or quadratic expressions through the introduction of auxiliary variables, the constrained space can be efficiently accessed by the proposed algorithm, Guided Projection, under the guidance of aesthetic formulations. The approach is introduced through applications in different scenarios, and its effectiveness is demonstrated by examples that were difficult or even impossible to design computationally before. The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh-based and spline-based representations, the application is extended to developable surfaces, including origami with curved creases. Finally, general approaches to extend hard constraints and soft energies are discussed, followed by a concluding remark with an outlook on possible future studies.
DEFF Research Database (Denmark)
Mödersheim, Sebastian Alexander; Basin, David; Viganò, Luca
2010-01-01
We introduce constraint differentiation, a powerful technique for reducing search when model-checking security protocols using constraint-based methods. Constraint differentiation works by eliminating certain kinds of redundancies that arise in the search space when using constraints to represent...... results show that constraint differentiation substantially reduces search and considerably improves the performance of OFMC, enabling its application to a wider class of problems....
Discamps, Emmanuel; Jaubert, Jacques; Bachellerie, François
2011-09-01
The evolution in the selection of prey made by past humans, especially the Neandertals and the first anatomically modern humans, has been widely debated. Between Marine Isotope Stages (MIS) 5 and 3, the accuracy of absolute dating is still insufficient to precisely correlate paleoclimatic and archaeological data. It is often difficult, therefore, to estimate to what extent changes in species procurement are correlated with either climate fluctuations or deliberate cultural choices in terms of subsistence behavior. Here, the full development of archeostratigraphy and Bayesian statistical analysis of absolute dates allows the archeological and paleoclimatic chronologies to be compared. The variability in hunted fauna is investigated using multivariate statistical analysis of quantitative faunal lists of 148 assemblages from 39 archeological sequences from MIS 5 through MIS 3. Despite significant intra-technocomplex variability, it is possible to identify major shifts in the human diet during these stages. The integration of archeological data, paleoclimatic proxies and the ecological characteristics of the different species of prey shows that the shifts in large game hunting can be explained by an adaptation of the human groups to climatic fluctuations. However, even if Middle and Early Upper Paleolithic men adapted to changes in their environment and to contrasting landscapes, they ultimately belonged to the ecosystems of the past and were limited by environmental constraints.
NP-Hardness of optimizing the sum of Rational Linear Functions over an Asymptotic-Linear-Program
Chermakani, Deepak Ponvel
2012-01-01
We convert, within polynomial-time and sequential processing, an NP-Complete Problem into a real-variable problem of minimizing a sum of Rational Linear Functions constrained by an Asymptotic-Linear-Program. The coefficients and constants in the real-variable problem are 0, 1, -1, K, or -K, where K is the time parameter that tends to positive infinity. The number of variables, constraints, and rational linear functions in the objective, of the real-variable problem is bounded by a polynomial ...
Ebrahimnejad, Ali
2015-08-01
There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
International Nuclear Information System (INIS)
Knutson, Heather A.; Madhusudhan, Nikku; Cowan, Nicolas B.; Christiansen, Jessie L.; Agol, Eric; Deming, Drake; Desert, Jean-Michel; Charbonneau, David; Henry, Gregory W.; Homeier, Derek; Laughlin, Gregory; Langton, Jonathan; Seager, Sara
2011-01-01
activity, in which case it may not be feasible to characterize the planet's transmission spectrum using broadband photometry obtained over multiple epochs. These observations serve to illustrate the challenges associated with transmission spectroscopy of planets orbiting late-type stars; we expect that other systems, such as GJ 1214, may display comparably variable transit depths. We compare the limb-darkening coefficients predicted by PHOENIX and ATLAS stellar atmosphere models and discuss the effect that these coefficients have on the measured planet-star radius ratios given GJ 436b's near-grazing transit geometry. Our measured 8 μm secondary eclipse depths are consistent with a constant value, and we place a 1σ upper limit of 17% on changes in the planet's dayside flux in this band. These results are consistent with predictions from general circulation models for this planet, which find that the planet's dayside flux varies by a few percent or less in the 8 μm band. Averaging over the eleven visits gives us an improved estimate of 0.0452% ± 0.0027% for the secondary eclipse depth; we also examine residuals from the eclipse ingress and egress and place an upper limit on deviations caused by a non-uniform surface brightness for GJ 436b. We combine timing information from our observations with previously published data to produce a refined orbital ephemeris and determine that the best-fit transit and eclipse times are consistent with a constant orbital period. We find that the secondary eclipse occurs at a phase of 0.58672 ± 0.00017, corresponding to e cos(ω) = 0.13754 ± 0.00027, where e is the planet's orbital eccentricity and ω is the longitude of pericenter. We also present improved estimates for other system parameters, including the orbital inclination, a/R*, and the planet-star radius ratio.
International Nuclear Information System (INIS)
Levine, R.D.
1979-01-01
The reaction rate constant is expressed as k(T) = Z exp(-G_a/RT), where Z is the binary collision frequency. G_a, the free energy of activation, is shown to be the difference between the free energy of the reactive reactants and the free energy of all reactants. The results are derived from both a statistical mechanical and a collision theoretic point of view. While the latter is more suitable for an ab initio computation of the reaction rate, it is the former that lends itself to the search for systematics and correlations and to the compaction of data. Different thermodynamic-like routes to the characterization of G_a are thus explored. The two most promising appear to be the use of thermodynamic-type cycles and changes of dependent variables using the Legendre transform technique. The dependence of G_a on ΔG⁰, the standard free energy change in the reaction, is examined from the latter point of view. It is shown that one can rigorously express this dependence as G_a = αΔG⁰ + G_a⁰M(α). Here α is the Bronsted slope, α = -∂ln k(T)/∂(ΔG⁰/RT), G_a⁰ is independent of ΔG⁰, and M(α), the Legendre transform of G_a, is a function only of α. For small changes in ΔG⁰, the general result reduces to the familiar 'linear' free energy relation δG_a = α δΔG⁰. It is concluded from general considerations that M(α) is a symmetric, convex function of α and hence that α is a monotonically increasing function of ΔG⁰. Experimental data appear to conform well to the form α = 1/[1 + exp(-ΔG⁰/G_s⁰)]. A simple interpretation of the ΔG⁰ dependence of G_a, based on an interpolation of the free energy from that of the reagents to that of the products, is offered. 4 figures, 69 references
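The abstract's closing empirical form for the Bronsted slope can be checked numerically. A minimal sketch (the energy scale G_s⁰ = 10 is hypothetical, chosen only for illustration; ΔG⁰ and G_s⁰ just need the same units):

```python
import math

def bronsted_slope(dG0, Gs0):
    """Empirical Bronsted slope alpha = 1/[1 + exp(-dG0/Gs0)] from the abstract."""
    return 1.0 / (1.0 + math.exp(-dG0 / Gs0))

# Hypothetical energy scale Gs0, in the same units as dG0.
Gs0 = 10.0
alphas = [bronsted_slope(dG0, Gs0) for dG0 in range(-50, 51, 10)]

# alpha is a monotonically increasing function of dG0, bounded in (0, 1),
# and equals 1/2 for a thermoneutral reaction (dG0 = 0).
assert all(a2 > a1 for a1, a2 in zip(alphas, alphas[1:]))
assert abs(bronsted_slope(0.0, Gs0) - 0.5) < 1e-12
```

The assertions mirror the abstract's two qualitative conclusions: monotonicity of α in ΔG⁰ and the symmetric midpoint at ΔG⁰ = 0.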
Directory of Open Access Journals (Sweden)
V.C. Kunz
2012-05-01
Full Text Available The objectives of this study were to evaluate and compare the use of linear and nonlinear methods for the analysis of heart rate variability (HRV) in healthy subjects and in patients after acute myocardial infarction (AMI). Heart rate (HR) was recorded for 15 min in the supine position in 10 patients with AMI taking β-blockers (aged 57 ± 9 years) and in 11 healthy subjects (aged 53 ± 4 years). HRV was analyzed in the time domain (RMSSD and RMSM), in the frequency domain using low- and high-frequency bands in normalized units (nu; LFnu and HFnu) and the LF/HF ratio, and by approximate entropy (ApEn). There was a correlation (P < 0.05) of the RMSSD, RMSM, LFnu, HFnu, and LF/HF ratio indexes with the ApEn of the AMI group on the 2nd (r = 0.87, 0.65, 0.72, 0.72, and 0.64) and 7th day (r = 0.88, 0.70, 0.69, 0.69, and 0.87) and of the healthy group (r = 0.63, 0.71, 0.63, 0.63, and 0.74), respectively. The median HRV indexes of the AMI group on the 2nd and 7th day differed from those of the healthy group (P < 0.05): RMSSD = 10.37, 19.95, 24.81; RMSM = 23.47, 31.96, 43.79; LFnu = 0.79, 0.79, 0.62; HFnu = 0.20, 0.20, 0.37; LF/HF ratio = 3.87, 3.94, 1.65; ApEn = 1.01, 1.24, 1.31, respectively. There was agreement between the methods, suggesting that they have the same power to evaluate autonomic modulation of HR in both AMI patients and healthy subjects. AMI contributed to a reduction in cardiac signal irregularity, higher sympathetic modulation and lower vagal modulation.
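The time-domain RMSSD index used above is straightforward to compute from an RR-interval series. A minimal sketch (the RR values are hypothetical, not taken from the study):

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Short synthetic RR series (ms); values are illustrative only.
rr = [800, 810, 790, 805]
print(round(rmssd(rr), 2))  # successive diffs: +10, -20, +15 → 15.55
```

In practice the index would be computed over the full 15 min recording rather than four beats.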
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
Hansen, Keir T; Cronin, John B; Newton, Michael J
2011-03-01
The purpose of this study was to determine the between-day reliability of power-time measures calculated from data collected using a linear position transducer or a force plate independently, or a combination of the two technologies. Twenty-five male rugby union players performed three jump squats on two occasions one week apart. Ground reaction forces were measured via a force plate and position data were collected using a linear position transducer. From these data, a number of power-time variables were calculated for each method. The force plate, linear position transducer and combined methods were all found to be reliable means of measuring peak power (ICC = 0.87-0.95, CV = 3.4%-8.0%). The absolute consistency of power-time measures varied between methods (CV = 8.0%-53.4%). Relative consistency of power-time measures was generally comparable between methods and measures, and for many variables was at an acceptable level (ICC = 0.77-0.94). Although a number of time-dependent power variables can be reliably calculated from data acquired by the three methods investigated, the reliability of several of these measures is below that which is acceptable for use in research and for practical applications.
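The combined method described above derives velocity from the position-transducer data and multiplies it by the force-plate force to obtain instantaneous power. A toy sketch under a hypothetical constant-force, constant-velocity movement (the 1 s sampling interval is chosen only so the arithmetic is exact):

```python
def peak_power(force_n, position_m, dt):
    """Peak power (W) from force-plate force and transducer position.

    Velocity is estimated by central finite differences of position;
    power is the instantaneous product P = F * v.
    """
    v = [(position_m[i + 1] - position_m[i - 1]) / (2 * dt)
         for i in range(1, len(position_m) - 1)]
    p = [f * vi for f, vi in zip(force_n[1:-1], v)]
    return max(p)

# Toy data: constant 1000 N force, position rising at 2 m/s, 1 s sampling.
pos = [0.0, 2.0, 4.0, 6.0]
force = [1000.0] * 4
print(peak_power(force, pos, 1.0))  # → 2000.0
```

A real jump-squat analysis would sample at hundreds of hertz and filter the signals before differentiating; this only shows the combination of the two data streams.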
Condensation with two constraints and disorder
Barré, J.; Mangeolle, L.
2018-04-01
We consider a set of positive random variables obeying two additive constraints, a linear and a quadratic one; these constraints mimic the conservation laws of a dynamical system. In the simplest setting, without disorder, it is known that such a system may undergo a 'condensation' transition, whereby one random variable becomes much larger than the others; this transition has been related to the spontaneous appearance of nonlinear localized excitations, called breathers, in certain nonlinear chains. Motivated by the study of breathers in a disordered discrete nonlinear Schrödinger equation, we study different instances of this problem in the presence of quenched disorder. Unless the disorder is too strong, the phase diagram looks like the one without disorder, with a transition separating a fluid phase, where all variables have the same order of magnitude, and a condensed phase, where one variable is much larger than the others. We then show that the condensed phase exhibits various degrees of 'intermediate symmetry breaking': the site hosting the condensate is chosen neither uniformly at random, nor is it fixed by the disorder realization. Throughout the article, our heuristic arguments are complemented with direct Monte Carlo simulations.
A Hamiltonian structure for the linearized Einstein vacuum field equations
International Nuclear Information System (INIS)
Torres del Castillo, G.F.
1991-01-01
By considering the Einstein vacuum field equations linearized about the Minkowski metric, the evolution equations for the gauge-invariant quantities characterizing the gravitational field are written in a Hamiltonian form. A Poisson bracket between functionals of the field, compatible with the constraints satisfied by the field variables, is obtained (Author)
Jung, Jeki; Oak, Jeong-Jung; Kim, Yong-Hwan; Cho, Yi Je; Park, Yong Ho
2017-11-01
The aim of this study was to investigate the transition of wear behavior for pure aluminum and extruded aluminum alloy 2024-T4 (AA2024-T4). The wear test was carried out using a ball-on-disc wear testing machine at various vertical loads and linear speeds. The transition of wear behaviors was analyzed based on the microstructure, wear tracks, wear cross-sections, and wear debris. The critical wear rate for each material occurred at a lower linear speed for each vertical load. A transition of wear behavior was observed in which abrasive wear with the generation of an oxide layer, fracture of the oxide layer, adhesive wear, severe adhesive wear, and the onset of seizure occurred in sequence. In the case of pure aluminum, the wear debris changed in the order of blocky, flake, and needle-like debris. Cutting-chip, flake-like, and coarse flake-like debris occurred in sequence for the extruded AA2024-T4. The transition in the wear behavior of extruded AA2024-T4 occurred more slowly than in pure aluminum.
Directory of Open Access Journals (Sweden)
Gabriel Amador
2016-05-01
Full Text Available In this work, after reviewing two different ways to solve Riccati systems, we are able to present an extensive list of families of integrable nonlinear Schrödinger (NLS) equations with variable coefficients. Using Riccati equations and similarity transformations, we are able to reduce them to the standard NLS models. Consequently, we can construct bright-, dark- and Peregrine-type soliton solutions for NLS with variable coefficients. As an important application of solutions for the Riccati equation with parameters, it is shown by means of computer algebra systems that the parameters change the dynamics of the solutions. Finally, we test numerical approximations for the inhomogeneous paraxial wave equation by the Crank-Nicolson scheme against analytical solutions found using Riccati systems. These solutions include oscillating laser beams and Laguerre and Gaussian beams.
Chandler, T L; Pralle, R S; Dórea, J R R; Poock, S E; Oetzel, G R; Fourdraine, R H; White, H M
2018-03-01
Although cowside testing strategies for diagnosing hyperketonemia (HYK) are available, many are labor intensive and costly, and some lack sufficient accuracy. Predicting milk ketone bodies by Fourier transform infrared spectrometry during routine milk sampling may offer a more practical monitoring strategy. The objectives of this study were to (1) develop linear and logistic regression models using all available test-day milk and performance variables for predicting HYK and (2) compare prediction methods (Fourier transform infrared milk ketone bodies, linear regression models, and logistic regression models) to determine which is the most predictive of HYK. Given the data available, a secondary objective was to evaluate differences in test-day milk and performance variables (continuous measurements) between Holsteins and Jerseys and between cows with or without HYK within breed. Blood samples were collected on the same day as milk sampling from 658 Holstein and 468 Jersey cows between 5 and 20 d in milk (DIM). Diagnosis of HYK was at a serum β-hydroxybutyrate (BHB) concentration ≥1.2 mmol/L. Concentrations of milk BHB and acetone were predicted by Fourier transform infrared spectrometry (Foss Analytical, Hillerød, Denmark). Thresholds of milk BHB and acetone were tested for diagnostic accuracy, and logistic models were built from continuous variables to predict HYK in primiparous and multiparous cows within breed. Linear models were constructed from continuous variables for primiparous and multiparous cows within breed that were 5 to 11 DIM or 12 to 20 DIM. Milk ketone body thresholds diagnosed HYK with 64.0 to 92.9% accuracy in Holsteins and 59.1 to 86.6% accuracy in Jerseys. Logistic models predicted HYK with 82.6 to 97.3% accuracy. Internally cross-validated multiple linear regression models diagnosed HYK of Holstein cows with 97.8% accuracy for primiparous and 83.3% accuracy for multiparous cows. Accuracy of Jersey models was 81.3% in primiparous and 83
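The diagnostic accuracy of a milk ketone-body threshold, as evaluated above, reduces to counting true and false positives and negatives against the serum BHB diagnosis. A sketch with hypothetical data (the values and the 0.10 mmol/L cutoff are illustrative, not thresholds from the study):

```python
def threshold_diagnostics(values, labels, cutoff):
    """Sensitivity, specificity and accuracy of a simple cutoff classifier.

    `values` are predicted milk ketone concentrations, `labels` the true
    hyperketonemia status (True = HYK); prediction is value >= cutoff.
    """
    tp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y)
    tn = sum(1 for v, y in zip(values, labels) if v < cutoff and not y)
    fp = sum(1 for v, y in zip(values, labels) if v >= cutoff and not y)
    fn = sum(1 for v, y in zip(values, labels) if v < cutoff and y)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(labels)
    return sens, spec, acc

# Hypothetical milk BHB values (mmol/L) and true HYK status.
milk_bhb = [0.05, 0.08, 0.15, 0.20, 0.09, 0.18]
hyk      = [False, False, True, True, True, True]
sens, spec, acc = threshold_diagnostics(milk_bhb, hyk, cutoff=0.10)
```

Sweeping the cutoff over a grid and keeping the most accurate value is how the milk-BHB and acetone thresholds in the study would be tested for diagnostic accuracy.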
Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin
2016-01-15
Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. Variable-weighted support vector machine (VW-SVM) is a demonstrated robust modeling technique with flexible and rational variable selection. Optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods. Copyright © 2015 Elsevier B.V. All rights reserved.
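PSO, the optimizer used above to tune the VW-SVM variable weights, can be sketched in a few lines. This is a generic global-best PSO minimizing a toy objective, not the authors' implementation; the inertia and acceleration coefficients are common textbook defaults:

```python
import random

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimiser (sketch).

    f: objective to minimise; dim: number of parameters being tuned
    (for VW-SVM these would be the per-variable weights).
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f

# Toy objective: the sphere function, minimised at the origin.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting the objective would instead be a cross-validated VW-SVM error as a function of the variable weights.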
Design and simulation of a short, variable-energy 4 to 10 MV S-band linear accelerator waveguide.
Baillie, Devin; Fallone, B Gino; Steciw, Stephen
2017-06-01
To modify a previously designed, short, 10 MV linac waveguide so that it can produce any energy from 4 to 10 MV. The modified waveguide is designed to be a drop-in replacement for the 6 MV waveguide used in the authors' current linear accelerator-magnetic resonance imager (Linac-MR). Using our group's previously designed short 10 MV linac as a starting point, the port was moved to the fourth cavity, the shift to the first coupling cavity was removed, and a tuning cylinder was added to the first coupling cavity. Each cavity was retuned using finite element method (FEM) simulations to resonate at the desired frequency. FEM simulations were used to determine the RF field distributions for various tuning cylinder depths, and electron trajectories were computed using a particle-in-cell model to determine the required RF power level and tuning cylinder depth to produce electron energy distributions for 4, 6, 8, and 10 MV photon beams. Monte Carlo simulations were then used to compare the depth dose profiles with those produced by published electron beam characteristics for Varian linacs. For each desired photon energy, the electron beam energy was within 0.5% of the target mean energy, the depth of maximum dose was within 1.5 mm of that produced by the Varian linac, and the ratio of dose at 10 cm depth to 20 cm depth was within 1%. A new 27.5 cm linear accelerator waveguide design capable of producing any photon energy between 4 and 10 MV has been simulated; however, coupling port design and the implications of increased electron beam current at 10 MV remain to be investigated. For the specific cases of 4, 6, and 10 MV, this linac produces depth dose profiles similar to those produced by published spectra for Varian linacs. © 2017 American Association of Physicists in Medicine.
Deeken, Corey R; Thompson, Dominic M; Castile, Ryan M; Lake, Spencer P
2014-10-01
Over the past 60 years, the soft tissue repair market has grown to include over 50 types of hernia repair materials. Surgeons typically implant these materials in the orientation that provides maximum overlap of the mesh over the defect, with little regard for the mechanical properties of the mesh material. If the characteristics of the meshes were better understood, an appropriate material could be identified for each patient, and meshes could be placed to optimize integration with neighboring tissue and avoid the mechanical mismatch that can lead to impaired graft fixation. The purpose of this study was to fully characterize and compare the mechanical properties of thirteen types of hernia repair materials via planar biaxial tensile testing. Equibiaxial (i.e., equal simultaneous loading in both directions) and strip biaxial (i.e., loading in one direction with the other direction held fixed) tests were utilized as physiologically relevant loading regimes. After applying a 0.1 N pre-load on each arm, samples were subjected to equibiaxial cyclic loading using a triangular waveform to 2.5 mm displacement on each arm at 0.1 Hz for 10 cycles. Samples were then subjected to two strip biaxial tests (using the same cyclic loading protocol), where extension was applied along a single axis with the other axis held fixed. The thirteen evaluated mesh types exhibited a wide range of mechanical properties. Some were nearly isotropic (C-QUR™, DUALMESH®, PHYSIOMESH™, and PROCEED®), while others were highly anisotropic (Ventralight™ ST, Bard™ Mesh, and Bard™ Soft Mesh). Some displayed nearly linear behavior (Bard™ Mesh), while others were non-linear with a long toe region followed by a sharp rise in tension (INFINIT®). These materials are currently utilized in clinical settings as if they are uniform and interchangeable, and clearly this is not the case. The mechanical properties most advantageous for successful hernia repairs are currently only vaguely described.
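The cyclic loading protocol above (triangular waveform, 2.5 mm peak displacement, 0.1 Hz) can be written down explicitly. A sketch assuming the displacement ramps linearly from zero to the peak and back within each cycle:

```python
def triangular_wave(t, amplitude=2.5, freq=0.1):
    """Displacement (mm) of a triangular loading waveform at time t (s).

    Rises linearly from 0 to `amplitude` over the first half of each
    cycle, then returns linearly to 0 (no negative excursion, matching
    displacement-controlled loading from a pre-loaded state).
    """
    period = 1.0 / freq
    phase = (t % period) / period  # fraction of the current cycle completed
    if phase < 0.5:
        return amplitude * (2.0 * phase)    # loading ramp
    return amplitude * (2.0 - 2.0 * phase)  # unloading ramp

# Peak displacement is reached mid-cycle (t = 5 s at 0.1 Hz).
print(triangular_wave(5.0))  # → 2.5
print(triangular_wave(0.0))  # → 0.0
```

The ramp rate and whether the cycle starts at zero or at the pre-load position are assumptions; the abstract specifies only the amplitude, frequency, and cycle count.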
Relaxations of semiring constraint satisfaction problems
CSIR Research Space (South Africa)
Leenen, L
2007-03-01
Full Text Available The Semiring Constraint Satisfaction Problem (SCSP) framework is a popular approach for the representation of partial constraint satisfaction problems. In this framework preferences can be associated with tuples of values of the variable domains...
Wang, Haohan; Aragam, Bryon; Xing, Eric P
2018-04-26
A fundamental and important challenge in modern datasets of ever-increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and humans, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
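The closed-form ordinary least squares fit underlying multiple linear regression is easiest to see in the single-predictor case. A minimal sketch (the data are chosen to be exactly linear so the fit recovers the true coefficients):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x (closed-form estimates)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# Exact linear data y = 1 + 2x, so the fit recovers a = 1, b = 2.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The response plot the text builds on is simply fitted values a + b*x against observed y; outliers show up as points far from the identity line.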
International Nuclear Information System (INIS)
Provost, J.
1984-01-01
Accurate tests of the theory of stellar structure and evolution are available from observations of the Sun. The solar constraints are reviewed, with special attention to recent progress in observing global solar oscillations. Each constraint is sensitive to a given region of the Sun. The present solar models (standard, low Z, mixed) are discussed with respect to neutrino flux, low- and high-degree five-minute oscillations and low-degree internal gravity modes. It appears that at present no solar model is able to fully account for all the observed quantities. (Auth.)
O'Brien, Ricky T; Stankovic, Uros; Sonke, Jan-Jakob; Keall, Paul J
2017-06-07
Four dimensional cone beam computed tomography (4DCBCT) uses a constant gantry speed and imaging frequency that are independent of the patient's breathing rate. Using a technique called respiratory motion guided 4DCBCT (RMG-4DCBCT), we have previously demonstrated that by varying the gantry speed and imaging frequency in response to changes in the patient's real-time respiratory signal, the imaging dose can be reduced by 50-70%. RMG-4DCBCT optimally computes a patient-specific gantry trajectory to eliminate the streaking artefacts and projection clustering that are inherent in 4DCBCT imaging. The gantry trajectory is continuously updated as projection data are acquired and the patient's breathing changes. The aim of this study was to realise RMG-4DCBCT for the first time on a linear accelerator. To change the gantry speed in real time, a potentiometer under microcontroller control was used to adjust the current supplied to an Elekta Synergy's gantry motor. A real-time feedback loop was developed on the microcontroller to modulate the gantry speed and projection acquisition in response to the real-time respiratory signal so that either 40 (RMG-4DCBCT40) or 60 (RMG-4DCBCT60) uniformly spaced projections were acquired in 10 phase bins. Images of the CIRS dynamic Thorax phantom were acquired with sinusoidal breathing periods ranging from 2 s to 8 s together with two breathing traces from lung cancer patients. Image quality was assessed using the contrast to noise ratio (CNR) and edge response width (ERW). For the average patient, with a 3.8 s breathing period, the imaging time and imaging dose were reduced by 37% and 70% respectively. Across all respiratory rates, RMG-4DCBCT40 had a CNR in the range of 6.5 to 7.5, and RMG-4DCBCT60 had a CNR between 8.7 and 9.7, indicating that RMG-4DCBCT allows consistent and controllable CNR. In comparison, the CNR for conventional 4DCBCT drops from 20.4 to 6.2 as the breathing rate increases from 2 s to 8 s. With RMG-4DCBCT
Directory of Open Access Journals (Sweden)
M Taki
2017-05-01
Full Text Available Introduction: Controlling the greenhouse microclimate not only influences the growth of plants, but is also critical for the spread of diseases inside the greenhouse. The microclimate parameters were inside air, greenhouse roof and soil temperature, relative humidity and solar radiation intensity. Predicting the microclimate conditions inside a greenhouse and enabling the use of automatic control systems are the two main objectives of a greenhouse climate model. The microclimate inside a greenhouse can be predicted by conducting experiments or by using simulation. Static and dynamic models are used for this purpose as a function of the meteorological conditions and the parameters of the greenhouse components. Several studies up to 2015 simulated and predicted the inside variables in different greenhouse structures. Simulation often has difficulty predicting the inside climate of a greenhouse, and the errors reported in the literature are high. The main objective of this paper is a comparison between heat transfer and regression models, evaluating their ability to predict inside air and roof temperature in a semi-solar greenhouse at Tabriz University. Materials and Methods: In this study, a semi-solar greenhouse was designed and constructed in the North-West of Iran in Azerbaijan Province (geographical location 38°10′ N and 46°18′ E, elevation 1364 m above sea level). The shape and orientation of the greenhouse were selected from among common greenhouse shapes so as to receive maximum solar radiation throughout the year. An internal thermal screen and a cement north wall were also used to store heat and prevent heat loss during the cold period of the year; hence we call this structure a 'semi-solar' greenhouse. It was covered with glass (4 mm thickness) and occupies a surface of approximately 15.36 m² and a volume of 26.4 m³. The orientation of this greenhouse was East-West, perpendicular to the direction of the prevailing wind
International Nuclear Information System (INIS)
Jones, P.M.S.
1987-01-01
There are considerable incentives for the use of nuclear power in preference to other sources for base-load electricity generation in most of the developed world. These are economic, strategic, environmental and climatic. However, there are two potential constraints which could hinder the development of nuclear power to its full economic potential: public opinion, and financial regulations which distort the nuclear economic advantage. The concerns of the anti-nuclear lobby are over safety (especially following the Chernobyl accident), the management of radioactive waste, the potential effects of large-scale exposure of the population to radiation, and weapons proliferation. These are discussed. The financial constraint concerns two factors, the availability of funds and the perception of cost, both of which are discussed. (U.K.)
Oh, Jihoon; Chae, Jeong-Ho
2018-04-01
Although heart rate variability (HRV) may be a crucial marker of mental health, how it is related to positive psychological factors (i.e. attitude to life and positive thinking) is largely unknown. Here we investigated the correlation of HRV linear and nonlinear dynamics with psychological scales that measured degree of optimism and happiness in patients with anxiety disorders. Results showed that the low- to high-frequency HRV ratio (LF/HF) was increased and the HRV HF parameter was decreased in subjects who were more optimistic and who felt happier in daily living. Nonlinear analysis also showed that HRV dispersion and regulation were significantly correlated with the subjects' optimism and purpose in life. Our findings showed that HRV properties might be related to the degree of optimistic perspective on life and suggest that HRV markers of autonomic nervous system function could reflect positive human mind states.
Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain
2013-12-30
There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain's processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer.
Lototzis, M.; Papadopoulos, G. K.; Droulia, F.; Tseliou, A.; Tsiros, I. X.
2018-04-01
There are several cases where a circular variable is associated with a linear one. A typical example is wind direction, which is often associated with linear quantities such as air temperature and air humidity. A statistical relationship of this kind can be tested by the use of parametric and non-parametric methods, each of which has its own advantages and drawbacks. This work deals with correlation analysis using both the parametric and the non-parametric procedure on a small set of meteorological data of air temperature and wind direction during a summer period in a Mediterranean climate. Correlations were examined between hourly, daily and maximum-prevailing values, under typical and non-typical meteorological conditions. Both tests indicated a strong correlation between mean hourly wind direction and mean hourly air temperature, whereas mean daily wind direction and mean daily air temperature do not seem to be correlated. In some cases, however, the two procedures were found to give quite dissimilar levels of significance for the rejection or not of the null hypothesis of no correlation. The simple statistical analysis presented in this study, appropriately extended to large sets of meteorological data, may be a useful tool for estimating the effects of wind in local climate studies.
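One standard parametric procedure for circular-linear association is Mardia's correlation coefficient, built from the Pearson correlations of the linear variable with the sine and cosine of the angle. A sketch (the evenly spaced directions and cosine-shaped 'temperatures' are synthetic, and whether this particular coefficient matches the procedure used in the study is an assumption):

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def circular_linear_r(theta, x):
    """Mardia's circular-linear correlation of angles theta (rad) with x."""
    c = [math.cos(t) for t in theta]
    s = [math.sin(t) for t in theta]
    rxc, rxs, rcs = pearson(x, c), pearson(x, s), pearson(c, s)
    r2 = (rxc ** 2 + rxs ** 2 - 2 * rxc * rxs * rcs) / (1 - rcs ** 2)
    return math.sqrt(r2)

# Wind directions evenly spaced on the circle; a linear variable that is
# an exact function of direction gives perfect association (r = 1).
angles = [2 * math.pi * k / 8 for k in range(8)]
temps = [math.cos(t) for t in angles]
```

Unlike Pearson's r, this coefficient lies in [0, 1] and carries no sign, since "direction of association" has no meaning for a circular variable.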
Constraints, Trade-offs and the Currency of Fitness.
Acerenza, Luis
2016-03-01
Understanding evolutionary trajectories remains a difficult task. This is because natural evolutionary processes are simultaneously affected by various types of constraints acting at the different levels of biological organization. Of particular importance are constraints where correlated changes occur in opposite directions, called trade-offs. Here we review and classify the main evolutionary constraints and trade-offs, operating at all levels of trait hierarchy. Special attention is given to life history trade-offs and the conflict between the survival and reproduction components of fitness. Cellular mechanisms underlying fitness trade-offs are described. At the metabolic level, a linear trade-off between growth and flux variability was found using bacterial genome-scale metabolic reconstructions. Its analysis indicates that flux variability can be considered as the currency of fitness. This currency is used for fitness transfer between fitness components during adaptations. Finally, we discuss the constraints that limit the increase in the amount of fitness currency during evolution, suggesting that occupancy constraints are probably the main restriction.
Directory of Open Access Journals (Sweden)
Juliana Petrini
2012-12-01
Full Text Available The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data on birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03-70.20 for RM and 1.03-60.70 for R. Collinearity increased with the increase in the number of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy, because the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, then we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
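As a toy illustration of the constrained-filtering idea (not the authors' turbofan model), the scalar filter below runs one predict/update step for a random-walk state and then enforces an interval constraint by projecting the estimate onto [lo, hi]; the noise variances and bounds are assumptions for the example, and the returned residual and innovation variance are the quantities a residual-based confidence measure would compare.

```python
import numpy as np

def constrained_kf_step(x, P, z, q, r, lo, hi):
    """One step of a scalar Kalman filter with an interval constraint.

    State model: random walk x_k = x_{k-1} + w, Var(w) = q.
    Measurement: z = x + v, Var(v) = r.
    The updated estimate is projected onto [lo, hi]; tuning how strongly
    the constraint is applied (as in the paper) would blend the clipped
    and unclipped estimates based on residual agreement.
    """
    P = P + q                 # predict covariance
    S = P + r                 # innovation variance
    K = P / S                 # Kalman gain
    resid = z - x             # measurement residual
    x = x + K * resid         # update
    P = (1.0 - K) * P
    x = min(max(x, lo), hi)   # enforce the interval constraint by projection
    return x, P, resid, S
```

Running it against noisy measurements of a state sitting at the lower bound keeps every estimate inside the interval while the filter converges.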
DEFF Research Database (Denmark)
Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian
2014-01-01
In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focuses on individual plant operation differences and differences between the solutions found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used as a benchmark, as this type is frequently used, and has the lowest number of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables increases operation…
Forkuor, Gerald; Hounkpatin, Ozias K L; Welp, Gerhard; Thiel, Michael
2017-01-01
Accurate and detailed spatial soil information is essential for environmental modelling, risk assessment and decision making. The use of Remote Sensing data as secondary sources of information in digital soil mapping has been found to be cost effective and less time consuming compared to traditional soil mapping approaches. But the potentials of Remote Sensing data in improving knowledge of local scale soil information in West Africa have not been fully explored. This study investigated the use of high spatial resolution satellite data (RapidEye and Landsat), terrain/climatic data and laboratory analysed soil samples to map the spatial distribution of six soil properties-sand, silt, clay, cation exchange capacity (CEC), soil organic carbon (SOC) and nitrogen-in a 580 km2 agricultural watershed in south-western Burkina Faso. Four statistical prediction models-multiple linear regression (MLR), random forest regression (RFR), support vector machine (SVM), stochastic gradient boosting (SGB)-were tested and compared. Internal validation was conducted by cross validation while the predictions were validated against an independent set of soil samples considering the modelling area and an extrapolation area. Model performance statistics revealed that the machine learning techniques performed marginally better than the MLR, with the RFR providing in most cases the highest accuracy. The inability of MLR to handle non-linear relationships between dependent and independent variables was found to be a limitation in accurately predicting soil properties at unsampled locations. Satellite data acquired during ploughing or early crop development stages (e.g. May, June) were found to be the most important spectral predictors while elevation, temperature and precipitation came up as prominent terrain/climatic variables in predicting soil properties. The results further showed that shortwave infrared and near infrared channels of Landsat8 as well as soil specific indices of redness
Directory of Open Access Journals (Sweden)
Charlotte Fiskum
2018-05-01
Full Text Available Background: Internalizing psychopathology and dysregulated negative affect are characterized by dysregulation in the autonomic nervous system and reduced heart rate variability (HRV) due to increases in sympathetic activity alongside reduced vagal tone. The neurovisceral system is, however, a complex nonlinear system, and nonlinear indices related to psychopathology are so far less studied in children. Essential nonlinear properties of a system can be found in two main domains: the informational domain and the invariant domain. Sample entropy (SampEn) is a much-used method from the informational domain, while detrended fluctuation analysis (DFA) represents a widely-used method from the invariant domain. To see if nonlinear HRV can provide information beyond linear indices of autonomic activation, this study investigated SampEn and DFA as discriminators of internalizing psychopathology and negative affect alongside measures of vagally-mediated HRV and sympathetic activation. Material and Methods: Thirty-two children with internalizing difficulties and 25 healthy controls (aged 9–13) were assessed with the Child Behavior Checklist and the Early Adolescent Temperament Questionnaire, Revised, giving an estimate of internalizing psychopathology, negative affect and effortful control, a protective factor against psychopathology. Five-minute electrocardiogram and impedance cardiography recordings were collected during a resting baseline, giving estimates of SampEn, the DFA short-term scaling exponent α1, the root mean square of successive differences (RMSSD), and the pre-ejection period (PEP). Between-group differences and correlations were assessed with parametric and non-parametric tests, and the relationships between cardiac variables, psychopathology and negative affect were assessed using generalized linear modeling. Results: SampEn and DFA were not significantly different between the groups. SampEn was weakly negatively related to heart rate (HR) in the controls
Zhang, Da; She, Jin; Yang, Jun; Yu, Mengsun
2015-06-01
Acute hypoxia activates several autonomic mechanisms, mainly in the cardiovascular and respiratory systems. The influence of acute hypoxia on linear and nonlinear heart rate variability (HRV) has been studied, but how the parameters evolve during the process of hypoxia is still unclear. Although the changes of HRV in the frequency domain are related to autonomic responses, how the nonlinear dynamics change with the decrease of ambient atmospheric pressure is also unknown. Eight healthy male subjects were exposed to simulated altitude from sea level to 3600 m in 10 min. HRV parameters in the frequency domain were analyzed by wavelet packet transform (Daubechies 4, 4 level) followed by Hilbert transform to assess the spectral power of the modified low frequency (0.0625-0.1875 Hz, LFmod), the modified high frequency (0.1875-0.4375 Hz, HFmod), and the LFmod/HFmod ratio in every 1 min. Nonlinear parameters were also quantified by sample entropy (SampEn) and the short-term fractal correlation exponent (α1) in the process. Hypoxia was associated with the depression of both the LFmod and HFmod components, which were significantly lower than at sea level at 3600 m and 2880 m, respectively (both p < 0.05). Monitoring nonlinear HRV parameters continuously in the process of hypoxia would be an effective way to evaluate the different regulatory mechanisms of the autonomic nervous system.
Differential constraints and exact solutions of nonlinear diffusion equations
International Nuclear Information System (INIS)
Kaptsov, Oleg V; Verevkin, Igor V
2003-01-01
The differential constraints are applied to obtain explicit solutions of nonlinear diffusion equations. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the determining equations used in the search for classical Lie symmetries
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Stability Constraints for Robust Model Predictive Control
Directory of Open Access Journals (Sweden)
Amanda G. S. Ottoni
2015-01-01
Full Text Available This paper proposes an approach for the robust stabilization of systems controlled by MPC strategies. Uncertain SISO linear systems with box-bounded parametric uncertainties are considered. The proposed approach delivers some constraints on the control inputs which impose sufficient conditions for the convergence of the system output. These stability constraints can be included in the set of constraints dealt with by existing MPC design strategies, in this way leading to the “robustification” of the MPC.
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds for large parameter values of the linear codes to be computed efficiently.
A Hamiltonian functional for the linearized Einstein vacuum field equations
International Nuclear Information System (INIS)
Rosas-Rodríguez, R
2005-01-01
By considering the Einstein vacuum field equations linearized about the Minkowski metric, the evolution equations for the gauge-invariant quantities characterizing the gravitational field are written in a Hamiltonian form by using a conserved functional as Hamiltonian; this Hamiltonian is not the analog of the energy of the field. A Poisson bracket between functionals of the field, compatible with the constraints satisfied by the field variables, is obtained. The generator of spatial translations associated with such bracket is also obtained
Berdyugin, A.; Piirola, V.; Sakanoi, T.; Kagitani, M.; Yoneda, M.
2018-03-01
Aim. To study the binary geometry of the classic Algol-type triple system λ Tau, we have searched for polarization variations over the orbital cycle of the inner semi-detached binary, arising from light scattering in the circumstellar material formed from ongoing mass transfer. Phase-locked polarization curves provide an independent estimate for the inclination i, orientation Ω, and the direction of the rotation for the inner orbit. Methods: Linear polarization measurements of λ Tau in the B, V , and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained on the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and Tohoku 60 cm (Haleakala, Hawaii, USA) remotely controlled telescopes over 69 observing nights. Analytic and numerical modelling codes are used to interpret the data. Results: Optical polarimetry revealed small intrinsic polarization in λ Tau with 0.05% peak-to-peak variation over the orbital period of 3.95 d. The variability pattern is typical for binary systems showing strong second harmonic of the orbital period. We apply a standard analytical method and our own light scattering models to derive parameters of the inner binary orbit from the fit to the observed variability of the normalized Stokes parameters. From the analytical method, the average for three passband values of orbit inclination i = 76° + 1°/-2° and orientation Ω = 15°(195°) ± 2° are obtained. Scattering models give similar inclination values i = 72-76° and orbit orientation ranging from Ω = 16°(196°) to Ω = 19°(199°), depending on the geometry of the scattering cloud. The rotation of the inner system, as seen on the plane of the sky, is clockwise. We have found that with the scattering model the best fit is obtained for the scattering cloud located between the primary and the secondary, near the inner Lagrangian point or along the Roche lobe surface of the secondary facing the primary. The inclination i
Handbook on linear motor application
International Nuclear Information System (INIS)
1988-10-01
This book is a guide to the application of linear motors. It covers the classification and special features of linear motors; the terminology and operating principle of the linear induction motor; single-sided and double-sided linear induction motors; linear DC motors, including moving-coil, permanent-magnet moving, and non-utility-supply types; linear pulse motors, including variable-reluctance and permanent-magnet types; linear vibration actuators, including the moving-coil type; linear synchronous motors; linear electromagnetic motors; linear electromagnetic solenoids; technical organization; and magnetic levitation, linear motors and sensors.
International Nuclear Information System (INIS)
Mo, S.C.
1991-01-01
The successive linear programming technique is applied to obtain the optimum thermal flux in the reflector region of a high flux reactor using LEU fuel. The design variables are the reactor power, core radius and coolant channel thickness. The constraints are the cycle length, average heat flux and peak/average power density ratio. The characteristics of the optimum solutions with various constraints are discussed
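The successive-linear-programming idea in this record can be sketched in a few lines: at each iterate the nonlinear objective is replaced by its first-order Taylor model and minimized over a box trust region. With only the box constraint, the LP solution is simply a step of size `step` against the gradient sign in each coordinate; the test function and tolerances below are illustrative assumptions, not the reactor design problem.

```python
import numpy as np

def slp_minimize(f, grad, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=200):
    """Successive linear programming with a box trust region.

    At each iterate the objective is replaced by its linearization
    g . d over the box |d_i| <= step, whose minimizer is
    d = -step * sign(g).  Steps that fail to decrease the true
    objective shrink the trust region until it falls below tol.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        d = -step * np.sign(g)      # LP minimizer over the box trust region
        if f(x + d) < f(x):         # accept only improving steps
            x = x + d
        else:
            step *= shrink          # shrink the trust region
        if step < tol:
            break
    return x
```

On a smooth convex test function the iterates walk to the minimizer in coordinate steps and then contract the box until convergence.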
Modern linear control design a time-domain approach
Caravani, Paolo
2013-01-01
This book offers a compact introduction to modern linear control design. The simplified overview presented of linear time-domain methodology paves the road for the study of more advanced non-linear techniques. Only rudimentary knowledge of linear systems theory is assumed - no use of Laplace transforms or frequency design tools is required. Emphasis is placed on assumptions and logical implications, rather than abstract completeness; on interpretation and physical meaning, rather than theoretical formalism; on results and solutions, rather than derivation or solvability. The topics covered include transient performance and stabilization via state or output feedback; disturbance attenuation and robust control; regional eigenvalue assignment and constraints on input or output variables; asymptotic regulation and disturbance rejection. Lyapunov theory and Linear Matrix Inequalities (LMI) are discussed as key design methods. All methods are demonstrated with MATLAB to promote practical use and comprehension.
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
Ciancio, V.; Turrisi, E.; Kluitenberg, G.A.
1986-01-01
In a previous paper the propagation of linear longitudinal acoustic waves in isotropic media with shear and volume viscosity and a tensorial internal variable was considered and the expressions for the velocity and attenuation of the waves were obtained. In the present paper we investigate the
DEFF Research Database (Denmark)
Atamtürk, Alper; Muller, Laurent Flindt; Pisinger, David
2013-01-01
Motivated by addressing probabilistic 0-1 programs we study the conic quadratic knapsack polytope with generalized upper bound (GUB) constraints. In particular, we investigate separating and extending GUB cover inequalities. We show that, unlike in the linear case, determining whether a cover can be extended with a single variable is NP-hard. We describe and compare a number of exact and heuristic separation and extension algorithms which make use of the structure of the constraints. Computational experiments are performed for comparing the proposed separation and extension algorithms…
Constraint satisfaction problems CSP formalisms and techniques
Ghedira, Khaled
2013-01-01
A Constraint Satisfaction Problem (CSP) consists of a set of variables, a domain of values for each variable and a set of constraints. The objective is to assign a value for each variable such that all constraints are satisfied. CSPs continue to receive increased attention because of both their high complexity and their omnipresence in academic, industrial and even real-life problems. This is why they are the subject of intense research in both artificial intelligence and operations research. This book introduces the classic CSP and details several extensions/improvements of both formalisms a
Data assimilation with inequality constraints
Thacker, W. C.
If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
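The two- or three-variable examples described above can be made concrete: when an active inequality constraint is treated as error-free data, the constrained analysis is the optimal-interpolation update with a perfect observation of the bounded variable, so background-error correlations propagate the correction to the other variables. The sketch below implements that single-constraint update; the background vector and error covariance in the usage note are illustrative assumptions.

```python
import numpy as np

def enforce_active_constraint(xb, B, idx, bound):
    """Enforce one active bound as error-free data in variational analysis.

    Minimizes (x - xb)^T B^{-1} (x - xb) subject to x[idx] = bound.
    The Lagrange-multiplier solution is the optimal-interpolation update
    with a perfect observation of component idx:
        x = xb + B[:, idx] * (bound - xb[idx]) / B[idx, idx],
    so correlated background errors also correct the other variables.
    """
    xb = np.asarray(xb, dtype=float)
    B = np.asarray(B, dtype=float)
    return xb + B[:, idx] * (bound - xb[idx]) / B[idx, idx]
```

With a background of (-1, 2), a correlation of 0.8, and the constraint x1 >= 0 active, moving x1 to 0 drags x2 up to 2.8 through the correlated error.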
Toward an automaton Constraint for Local Search
Directory of Open Access Journals (Sweden)
Jun He
2009-10-01
Full Text Available We explore the idea of using finite automata to implement new constraints for local search (this is already a successful technique in constraint-based global search). We show how it is possible to maintain incrementally the violations of a constraint and its decision variables from an automaton that describes a ground checker for that constraint. We establish the practicality of our approach on real-life personnel rostering problems, and show that it is competitive with the approach of [Pralong, 2007].
Notes on Timed Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Valencia, Frank D.
2004-01-01
A constraint is a piece of (partial) information on the values of the variables of a system. Concurrent constraint programming (ccp) is a model of concurrency in which agents (also called processes) interact by telling and asking information (constraints) to and from a shared store (a constraint store); timed ccp (tccp) extends this model to specify and program reactive systems. This note provides a comprehensive introduction to the background for and central notions from the theory of tccp. Furthermore, it surveys recent results on a particular tccp calculus, ntcc, and it provides a classification of the expressive power of various tccp languages.
International Nuclear Information System (INIS)
Avila, Ruben; Cabello-González, Ares; Ramos, Eduardo
2013-01-01
Highlights: • The Tau-Chebyshev method solves the linear fluid flow equations in spherical shells. • The fluid motion is driven by a central force proportional to the radial position. • The full Navier–Stokes equations are solved by the spectral element method. • The linear results are verified with the solution of the Navier–Stokes equations. • The solution of the linear problems is used to initiate non-linear calculations. -- Abstract: The onset of thermal convection in a non-rotating spherical shell is investigated using linear theory. The Tau-Chebyshev spectral method is used to integrate the linearized equations. We investigate the onset of thermal convection by considering two cases of the radial gravitational field: (i) a local acceleration, acting radially inward, that is proportional to the distance from the center r, and (ii) a radial gravitational central force that is proportional to r⁻ⁿ. The former case has been widely analyzed in the literature, because it constitutes a simplified model usually used in astrophysics and geophysics, and is studied here to validate the numerical method. The latter case was analyzed because the case n = 5 has been experimentally realized (by means of the dielectrophoretic effect) under microgravity conditions, in the experimental container called GeoFlow, inside the International Space Station. Our study aims to clarify the role of (i) a radially inward central force (either proportional to r or to r⁻ⁿ), (ii) a base conductive temperature distribution provided by either a uniform heat source or an imposed temperature difference between the outer and inner spheres, and (iii) the aspect ratio η (ratio of the radii of the inner and outer spheres), on the critical Rayleigh number. In all cases the surface of the spheres has been assumed to be rigid. The results obtained with the linear theory based on the Tau-Chebyshev spectral method are compared with those of the integration of the full non-linear
ON Integrated Chance Constraints in ALM for Pension Funds
Youssouf A. F. Toukourou; François Dufresne
2015-01-01
We discuss the role of integrated chance constraints (ICC) as quantitative risk constraints in asset and liability management (ALM) for pension funds. We define two types of ICC: the one period integrated chance constraint (OICC) and the multiperiod integrated chance constraint (MICC). As their names suggest, the OICC covers only one period whereas several periods are taken into account with the MICC. A multistage stochastic linear programming model is therefore developed for this purpose and...
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
Aihong Ren
2016-01-01
This paper is concerned with a class of fully fuzzy bilevel linear programming problems where all the coefficients and decision variables of both objective functions and the constraints are fuzzy numbers. A new approach based on deviation degree measures and a ranking function method is proposed to solve these problems. We first introduce concepts of the feasible region and the fuzzy optimal solution of a fully fuzzy bilevel linear programming problem. In order to obtain a fuzzy optimal solut...
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar
2005-01-01
The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
Gregg, Robert D; Lenzi, Tommaso; Hargrove, Levi J; Sensinger, Jonathon W
2014-12-01
Recent powered (or robotic) prosthetic legs independently control different joints and time periods of the gait cycle, resulting in control parameters and switching rules that can be difficult to tune by clinicians. This challenge might be addressed by a unifying control model used by recent bipedal robots, in which virtual constraints define joint patterns as functions of a monotonic variable that continuously represents the gait cycle phase. In the first application of virtual constraints to amputee locomotion, this paper derives exact and approximate control laws for a partial feedback linearization to enforce virtual constraints on a prosthetic leg. We then encode a human-inspired invariance property called effective shape into virtual constraints for the stance period. After simulating the robustness of the partial feedback linearization to clinically meaningful conditions, we experimentally implement this control strategy on a powered transfemoral leg. We report the results of three amputee subjects walking overground and at variable cadences on a treadmill, demonstrating the clinical viability of this novel control approach.
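To make the virtual-constraint idea concrete, here is a heavily simplified single-joint sketch: the desired joint angle is a function h(s) of a monotonic phase variable s in [0, 1], and a PD output-zeroing law drives the output y = q − h(s) to zero. The sinusoidal pattern and the gains are illustrative assumptions, a stand-in for the paper's exact and approximate feedback-linearizing control laws.

```python
import math

def virtual_constraint_torque(q, dq, s, ds, kp=100.0, kd=20.0):
    """PD output-zeroing torque for one joint under a virtual constraint.

    q, dq : joint angle and velocity
    s, ds : monotonic phase variable (e.g. normalized hip position) and its rate
    The virtual constraint is y = q - h(s) = 0; the torque drives y to zero.
    """
    def h(s):
        # Illustrative joint pattern over the gait cycle (an assumption).
        return 0.3 * math.sin(2.0 * math.pi * s)

    dh = 0.6 * math.pi * math.cos(2.0 * math.pi * s)  # dh/ds
    y = q - h(s)                 # constraint output
    dy = dq - dh * ds            # output velocity along the phase
    return -kp * y - kd * dy     # PD feedback on the output
```

On the constraint surface (q = h(s) with matching velocity) the commanded torque vanishes; off the surface it pushes the joint back toward the pattern.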
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
An important technique in linear programming is the modelling and solving of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with them when solving linear programming problems. Many methods have been proposed for identifying redundant constraints. This paper presents a comparison of the heuristic method and Llewellyn’s rules for the identification of redundant constraints.
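The core test behind such methods can be sketched directly: constraint k of {x ≥ 0, Ax ≤ b} is redundant if the maximum of aₖ·x over the region defined by the remaining constraints still satisfies aₖ·x ≤ bₖ. The sketch below performs that test in two dimensions by vertex enumeration instead of an LP solver, under the assumption that each reduced feasible region stays bounded; the example system is an illustrative assumption.

```python
import numpy as np
from itertools import combinations

def redundant_rows_2d(A, b):
    """Flag redundant rows of {x >= 0, A x <= b} in two dimensions.

    Row k is redundant if max a_k . x over the region defined by the
    remaining rows (plus x >= 0) does not exceed b_k.  In 2-D that
    maximum is attained at a vertex, so vertices are enumerated by
    intersecting pairs of the remaining constraints (assuming the
    reduced region is bounded).
    """
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    # Append the nonnegativity bounds -x <= 0, -y <= 0 as fixed rows.
    Afull = np.vstack([A, [[-1.0, 0.0], [0.0, -1.0]]])
    bfull = np.concatenate([b, [0.0, 0.0]])
    redundant = []
    for k in range(len(b)):
        keep = [i for i in range(len(bfull)) if i != k]
        best = -np.inf
        for i, j in combinations(keep, 2):
            M = Afull[[i, j]]
            if abs(np.linalg.det(M)) < 1e-12:
                continue                      # parallel pair, no vertex
            v = np.linalg.solve(M, bfull[[i, j]])
            if np.all(Afull[keep] @ v <= bfull[keep] + 1e-9):
                best = max(best, Afull[k] @ v)  # feasible vertex value
        if best <= bfull[k] + 1e-9:
            redundant.append(k)
    return redundant
```

For instance, with x + 2y ≤ 4 and 2x + y ≤ 4, the constraint x + y ≤ 3 never binds (its maximum over the rest of the region is 8/3) and is reported as redundant.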
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we
Meaney, Christopher; Moineddin, Rahim
2014-01-24
In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
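The two-sample Monte Carlo design described above can be sketched as follows. Only the difference-in-means estimator (equivalent to the OLS slope on a group indicator) is shown rather than the full set of beta, variable-dispersion beta, and fractional logit models compared in the study, and the parameter values in the usage note are illustrative assumptions.

```python
import numpy as np

def simulate_mean_difference(mu0, mu1, phi, n, reps, seed=0):
    """Monte Carlo study of a two-sample design with beta responses.

    Draws n observations per group from Beta(mu * phi, (1 - mu) * phi),
    whose mean is mu and whose dispersion is governed by phi, and
    estimates the group difference with the difference in sample means
    (the OLS coefficient of a group indicator).  Returns the estimator's
    bias and variance over `reps` replications.
    """
    rng = np.random.default_rng(seed)
    diffs = np.empty(reps)
    for r in range(reps):
        y0 = rng.beta(mu0 * phi, (1.0 - mu0) * phi, size=n)
        y1 = rng.beta(mu1 * phi, (1.0 - mu1) * phi, size=n)
        diffs[r] = y1.mean() - y0.mean()   # OLS slope on the group indicator
    return diffs.mean() - (mu1 - mu0), diffs.var()
```

With constant dispersion across groups the estimator is unbiased, matching the article's finding for that setting.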
Directory of Open Access Journals (Sweden)
Christophe Coupé
2018-04-01
Full Text Available As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships
Cyclic labellings with constraints at two distances
Leese, R; Noble, S D
2004-01-01
Motivated by problems in radio channel assignment, we consider the vertex-labelling of graphs with non-negative integers. The objective is to minimise the span of the labelling, subject to constraints imposed at graph distances one and two. We show that the minimum span is (up to rounding) a piecewise linear function of the constraints, and give a complete specification, together with associated optimal assignments, for trees and cycles.
Directory of Open Access Journals (Sweden)
Paweł Sitek
2016-01-01
Full Text Available This paper presents a hybrid method for modeling and solving supply chain optimization problems with soft, hard, and logical constraints. Ability to implement soft and logical constraints is a very important functionality for supply chain optimization models. Such constraints are particularly useful for modeling problems resulting from commercial agreements, contracts, competition, technology, safety, and environmental conditions. Two programming and solving environments, mathematical programming (MP) and constraint logic programming (CLP), were combined in the hybrid method. This integration, hybridization, and the adequate multidimensional transformation of the problem (as a presolving method) helped to substantially reduce the search space of combinatorial models for supply chain optimization problems. The operations research MP and declarative CLP, where constraints are modeled in different ways and different solving procedures are implemented, were linked together to use the strengths of both. This approach is particularly important for decision and combinatorial optimization models in which the objective function and constraints involve many decision variables that are summed (common in manufacturing, supply chain management, project management, and logistic problems). The ECLiPSe system with the Eplex library was proposed to implement the hybrid method. Additionally, the proposed hybrid transformed model is compared with a mixed integer linear programming (MILP) model on the same data instances. For illustrative models, its use allowed finding optimal solutions eight to one hundred times faster and reducing the size of the combinatorial problem to a significant extent.
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
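The core claim, that an irreversible linear chain with equal per-state contributions grows more reliable as states are added, can be checked with a toy simulation (the rates and trial counts are illustrative): the total transit time through an n-state irreversible chain with per-state rate n is Erlang-distributed, so its coefficient of variation shrinks like 1/sqrt(n).

```python
import random
import statistics

def completion_cv(n_states, trials=20000, seed=7):
    """Coefficient of variation of the total transit time through an
    irreversible n-state chain; each state's dwell time is exponential
    with rate n_states, so the mean total time is always 1."""
    rng = random.Random(seed)
    times = [sum(rng.expovariate(n_states) for _ in range(n_states))
             for _ in range(trials)]
    return statistics.stdev(times) / statistics.fmean(times)

for n in (1, 5, 25):
    print(n, round(completion_cv(n), 3))  # CV shrinks roughly like 1/sqrt(n)
```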
Ghavami, Raoof; Najafi, Amir; Sajadi, Mohammad; Djannaty, Farhad
2008-09-01
In order to accurately simulate (13)C NMR spectra of hydroxy, polyhydroxy and methoxy substituted flavonoids, a quantitative structure-property relationship (QSPR) model, relating atom-based calculated descriptors to (13)C NMR chemical shifts (ppm, TMS=0), is developed. A dataset consisting of 50 flavonoid derivatives was employed for the present analysis. A set of 417 topological, geometrical, and electronic descriptors representing various structural characteristics was calculated, and separate multilinear QSPR models were developed between each carbon atom of the flavonoids and the calculated descriptors. Genetic algorithm (GA) and multiple linear regression analysis (MLRA) were used to select the descriptors and to generate the correlation models. Analysis of the results revealed a correlation coefficient and root mean square error (RMSE) of 0.994 and 2.53 ppm, respectively, for the prediction set.
Discrete-time BAM neural networks with variable delays
Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi
2007-07-01
This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.
Sartabanov, Zhaishylyk A.
2017-09-01
A new approach is proposed to the study of solutions, periodic in all independent variables, of a system of equations with a differentiation operator along the direction of the main diagonal and with delayed arguments. The essence of the approach is to reduce the study of the multi-periodic solution of a linear inhomogeneous system to the construction of a solution of a simpler linear differential-difference system, on the basis of the method of variation of arbitrary constants applied to the complete integral of the homogeneous system. An integral representation of the unique multi-periodic solution of the inhomogeneous system is presented, expressed by a functional series of terms given by multiple repeated integrals. An estimate is given for the norm of the multi-periodic solution.
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
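The key step, writing a template point as an affine combination of its neighbors with weights summing to one, reduces, for exactly d+1 neighbors in d dimensions, to solving a small linear system. A minimal 2-D sketch (the point coordinates are invented for illustration):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Template point p and three neighbors (illustrative coordinates).
p = (1.0, 1.0)
nbrs = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
# Constraints: sum(w_i*x_i) = p_x, sum(w_i*y_i) = p_y, sum(w_i) = 1.
A = [[n[0] for n in nbrs], [n[1] for n in nbrs], [1.0, 1.0, 1.0]]
w = solve3(A, list(p) + [1.0])
print([round(x, 6) for x in w])  # affine weights reconstructing p exactly
```

Because the weights are defined by an affine system, they are unchanged when the same affine map is applied to the point and its neighbors, which is the invariance the constraint exploits.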
Diffusion Processes Satisfying a Conservation Law Constraint
Directory of Open Access Journals (Sweden)
J. Bakosi
2014-01-01
Full Text Available We investigate coupled stochastic differential equations governing N nonnegative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires a set of fluctuating variables to be nonnegative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the nonnegativity and the unit-sum conservation law constraints are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
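For the bivariate case (N = 2, with variables x and 1 − x), the constraints force a diffusion coefficient that vanishes at both boundaries, as in the beta/Wright–Fisher (Jacobi-type) process sketched below. The drift and diffusion parameters and the Euler step with clipping are illustrative numerical choices, not part of the paper.

```python
import random
import math

def jacobi_path(x0=0.5, mu=0.3, a=2.0, b=1.0, dt=1e-3, steps=5000, seed=3):
    """Euler-Maruyama path of dx = a*(mu - x) dt + sqrt(b*x*(1 - x)) dW.

    The diffusion factor x*(1 - x) vanishes at 0 and 1, which keeps the
    pair (x, 1 - x) nonnegative and summing to one; the clipping below is
    only a safeguard against discretization overshoot."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(steps):
        x += a * (mu - x) * dt + math.sqrt(b * x * (1 - x) * dt) * rng.gauss(0, 1)
        x = min(max(x, 0.0), 1.0)  # numerical safeguard only
        path.append(x)
    return path

path = jacobi_path()
print(min(path) >= 0.0 and max(path) <= 1.0)  # True: sample space respected
```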
Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M
2018-05-11
The justified continuous emerging of new β-lactam antibiotics provokes the need for developing suitable analytical methods that accelerate and facilitate their analysis. A face central composite experimental design was adopted using different levels of phosphate buffer pH, acetonitrile percentage at zero time and after 15 min in a gradient program to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models utilizing the conventional forward selection and the advanced nature-inspired firefly algorithm for descriptor selection, coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. Williams-Hotelling test and Student's t-test showed that there is no statistically significant difference between the models' results. Y-randomization validation showed that the obtained models are due to significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models show comparable quality on both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We can conclude that in some cases a simple conventional feature selection algorithm can be used to generate robust and predictive models comparable to those generated using advanced ones. Copyright © 2018 Elsevier B.V. All rights reserved.
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
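The method of least squares described above has a closed form for a single predictor: the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the means. A minimal sketch with made-up data:

```python
def least_squares(xs, ys):
    """Closed-form simple linear regression: minimize sum (y - (a + b*x))^2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of (x, y) over variance of x.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # line passes through the point of means
    return a, b      # intercept, slope

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 * x + 1 for x in xs]   # exactly linear data, so the fit is exact
a, b = least_squares(xs, ys)
print(a, b)  # 1.0 2.0
```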
International Nuclear Information System (INIS)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
International Nuclear Information System (INIS)
Eckl, P.
1981-01-01
This study was initiated to investigate whether there are any radiation-induced changes in DNA content and whether these changes can be repaired. Seeds of Vicia faba L. were grown in glass culture vessels. After 10 to 20 days the seedlings were irradiated using a 1 Ci 60Co gamma source (90 mrad/h and 33 rad/h) and a 5 mCi 252Cf neutron source (90 mrad/h). Both neutron and gamma irradiation cause a reduction in nuclear DNA content even after low doses (1 to 10 rad). The extent of depression depends only on the linear energy transfer. Parallel to the induced minimum in DNA content, but shifted to higher doses, the mitotic activity also reaches a minimum. Whereas neutron irradiation results in a total stop after doses of 8 rad, gamma irradiation only induces a depression of 80 %. With higher doses the mitotic activity increases again. The neutron-induced changes in DNA content seem to be restored within 90 minutes after irradiation. No continuous increase could be found after low gamma doses. Gamma irradiation with higher dose rates (60Co, 33 rad/h) causes a general decrease over the dose range studied (100 to 1600 rad). Following doses of 100 rad the mitotic activity increases significantly. With higher doses the decrease is exponential. A dose-dependent mitotic delay could also be observed. As described by many authors, unscheduled DNA synthesis (UDS) could not be detected in nuclei of Vicia faba. This indicates that another system, perhaps acting in situ - at the damaged place - is responsible for the repair of radiation-induced thymine damage. (Author)
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG) because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate using the output sequence, an initial value of the PRNG influences the resultant value of linear complexity. Therefore, a linear complexity is generally given as an estimate value. On the other hand, since a linearization method calculates from the algorithm of the PRNG, it can determine the lower bound of linear complexity.
Financing Constraints and Entrepreneurship
William R. Kerr; Ramana Nanda
2009-01-01
Financing constraints are one of the biggest concerns impacting potential entrepreneurs around the world. Given the important role that entrepreneurship is believed to play in the process of economic growth, alleviating financing constraints for would-be entrepreneurs is also an important goal for policymakers worldwide. We review two major streams of research examining the relevance of financing constraints for entrepreneurship. We then introduce a framework that provides a unified perspecti...
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Valencia Posso, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus.
Energy Technology Data Exchange (ETDEWEB)
Colin, G.
2006-10-15
Spark ignition engine control has become a major issue for the compliance with emissions legislation while ensuring driving comfort. Engine down-sizing is one of the promising ways to reduce fuel consumption and resulting CO2 emissions. Combining several existing technologies such as supercharging and variable valve actuation, down-sizing is a typical example of the problems encountered in Spark Ignited (SI) engine control: nonlinear systems with saturation of actuators; numerous major physical phenomena not measurable; limited computing time; control objectives (consumption, pollution, performance) often competing. A methodology of modelling and model-based control (internal model and predictive control) for these systems is also proposed and applied to the air path of the down-sized engine. Models, physical and generic, are built to estimate in-cylinder air mass, residual burned gases mass and air scavenged mass from the intake to the exhaust. The complete and generic engine torque control architecture for the turbo-charged SI engine with variable cam-shaft timing was tested in simulation and experimentally (on engine and vehicle). These tests show that new possibilities are offered in order to decrease pollutant emissions and optimize engine efficiency. (author)
Directory of Open Access Journals (Sweden)
Silke Dornieden
Full Text Available Alzheimer's disease (AD) is a progressive neurodegenerative disorder with devastating effects. Currently, therapeutic options are limited to symptomatic treatment. For more than a decade, research focused on immunotherapy for the causal treatment of AD. However, clinical trials with active immunization using Aβ encountered severe complications, for example, meningoencephalitis. Consequently, attention focused on passive immunization using antibodies. As an alternative to large immunoglobulins (IgGs), Aβ binding single-chain variable fragments (scFvs) were used for diagnostic and therapeutic research approaches. scFvs can be expressed in E. coli and may provide improved pharmacokinetic properties like increased blood-brain barrier permeability or reduced side-effects in vivo. In this study, we constructed an scFv from an Aβ binding IgG, designated IC16, which binds the N-terminal region of Aβ (Aβ(1-8)). scFv-IC16 was expressed in E. coli, purified and characterized with respect to its interaction with different Aβ species and its influence on Aβ fibril formation. We were able to show that scFv-IC16 strongly influenced the aggregation behavior of Aβ and could be applied as an Aβ detection probe for plaque staining in the brains of transgenic AD model mice. The results indicate potential for therapy and diagnosis of AD.
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
On the canonical treatment of Lagrangian constraints
International Nuclear Information System (INIS)
Barbashov, B.M.
2001-01-01
The canonical treatment of dynamic systems with manifest Lagrangian constraints proposed by Berezin is applied to concrete examples: a special Lagrangian linear in velocities, relativistic particles in proper time gauge, a relativistic string in orthonormal gauge, and the Maxwell field in the Lorentz gauge
Momota, Yukihiro; Takano, Hideyuki; Kani, Koichi; Matsumoto, Fumihiro; Motegi, Katsumi; Aota, Keiko; Yamamura, Yoshiko; Omori, Mayuko; Tomioka, Shigemasa; Azuma, Masayuki
2013-03-01
Burning mouth syndrome (BMS) is characterized by the following subjective complaints without distinct organic changes: burning sensation in mouth or chronic pain of tongue. BMS is also known as glossodynia; both terms are used equivalently in Japan. Although the real cause of BMS is still unknown, it has been pointed out that BMS is related to some autonomic abnormality, and that stellate ganglion near-infrared irradiation (SGR) corrects the autonomic abnormality. Frequency analysis of heart rate variability (HRV) is expected to be useful for assessing autonomic abnormality. This study investigated whether frequency analysis of HRV could reveal autonomic abnormality associated with BMS, and whether autonomic changes were corrected after SGR. Eight subjects received SGR; the response to SGR was assessed by frequency analysis of HRV. No significant difference of autonomic activity concerning low-frequency (LF) norm, high-frequency (HF) norm, and low-frequency/high-frequency (LF/HF) was found between SGR effective and ineffective groups. Therefore, we proposed new parameters: differential normalized low frequency (D LF norm), differential normalized high frequency (D HF norm), and differential low-frequency/high-frequency (D LF/HF), which were defined as differentials between original parameters just before and after SGR. These parameters as indexes of responsiveness of autonomic nervous system (ANS) revealed autonomic changes in BMS, and BMS seems to be related to autonomic instability rather than autonomic imbalance. Frequency analysis of HRV revealed the autonomic instability associated with BMS and enabled tracing of autonomic changes corrected with SGR. It is suggested that frequency analysis of HRV is very useful in follow up of BMS and for determination of the therapeutic efficacy of SGR. Wiley Periodicals, Inc.
Pralle, R S; Weigel, K W; White, H M
2018-05-01
Prediction of postpartum hyperketonemia (HYK) using Fourier transform infrared (FTIR) spectrometry analysis could be a practical diagnostic option for farms because these data are now available from routine milk analysis during Dairy Herd Improvement testing. The objectives of this study were to (1) develop and evaluate blood β-hydroxybutyrate (BHB) prediction models using multivariate linear regression (MLR), partial least squares regression (PLS), and artificial neural network (ANN) methods and (2) evaluate whether milk FTIR spectrum (mFTIR)-based models are improved with the inclusion of test-day variables (mTest; milk composition and producer-reported data). Paired blood and milk samples were collected from multiparous cows 5 to 18 d postpartum at 3 Wisconsin farms (3,629 observations from 1,013 cows). Blood BHB concentration was determined by a Precision Xtra meter (Abbot Diabetes Care, Alameda, CA), and milk samples were analyzed by a privately owned laboratory (AgSource, Menomonie, WI) for components and FTIR spectrum absorbance. Producer-recorded variables were extracted from farm management software. A blood BHB ≥1.2 mmol/L was considered HYK. The data set was divided into a training set (n = 3,020) and an external testing set (n = 609). Model fitting was implemented with JMP 12 (SAS Institute, Cary, NC). A 5-fold cross-validation was performed on the training data set for the MLR, PLS, and ANN prediction methods, with square root of blood BHB as the dependent variable. Each method was fitted using 3 combinations of variables: mFTIR, mTest, or mTest + mFTIR variables. Models were evaluated based on coefficient of determination, root mean squared error, and area under the receiver operating characteristic curve. Four models (PLS-mTest + mFTIR, ANN-mFTIR, ANN-mTest, and ANN-mTest + mFTIR) were chosen for further evaluation in the testing set after fitting to the full training set. In the cross-validation analysis, model fit was greatest for ANN, followed
Directory of Open Access Journals (Sweden)
Sukhpreet Kaur Sidhu
2014-01-01
Full Text Available The drawbacks of the existing methods to obtain the fuzzy optimal solution of such linear programming problems, in which coefficients of the constraints are represented by real numbers and all the other parameters as well as variables are represented by symmetric trapezoidal fuzzy numbers, are pointed out, and to resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing methods, is proposed to deal with the sensitivity analysis of the same type of linear programming problems.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied…
Evaluating Distributed Timing Constraints
DEFF Research Database (Denmark)
Kristensen, C.H.; Drejer, N.
1994-01-01
In this paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.
DEFF Research Database (Denmark)
Michelsen, Aage U.
2004-01-01
The reasoning behind the Theory of Constraints, together with the planning principle Drum-Buffer-Rope. Also includes a sketch of the Thinking Process.
Seismological Constraints on Geodynamics
Lomnitz, C.
2004-12-01
Earth is an open thermodynamic system radiating heat energy into space. A transition from geostatic earth models such as PREM to geodynamical models is needed. We discuss possible thermodynamic constraints on the variables that govern the distribution of forces and flows in the deep Earth. In this paper we assume that the temperature distribution is time-invariant, so that all flows vanish at steady state except for the heat flow J_q per unit area (Kuiken, 1994). Superscript 0 refers to the steady state, while x denotes the excited state of the system. We may write σ^0 = (J_q^0 · X_q^0)/T, where X_q is the conjugate force corresponding to J_q, and σ is the rate of entropy production per unit volume. Consider now what happens after the occurrence of an earthquake at time t = 0 and location (0,0,0). The earthquake introduces a stress drop ΔP(x,y,z) at all points of the system. Response flows are directed along the gradients toward the epicentral area, and the entropy production will increase with time as (Prigogine, 1947) σ^x(t) = σ^0 + α_1/(t+β) + α_2/(t+β)^2 + … A seismological constraint on the parameters may be obtained from Omori's empirical relation N(t) = p/(t+q), where N(t) is the number of aftershocks at time t following the main shock. It may be assumed that p/q ~ α_1/β times a constant. Another useful constraint is the Mexican-hat geometry of the seismic transient as obtained e.g. from InSAR radar interferometry. For strike-slip events such as Landers the distribution of ΔP is quadrantal, and an oval-shaped seismicity gap develops about the epicenter. A weak outer triggering maximum is found at a distance of about 17 fault lengths. Such patterns may be extracted from earthquake catalogs by statistical analysis (Lomnitz, 1996). Finally, the energy of the perturbation must be at least equal to the recovery energy. The total energy expended in an aftershock sequence can be found approximately by integrating the local contribution over
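Omori's relation N(t) = p/(t+q) can be fitted by noting that 1/N(t) = t/p + q/p is linear in t, so ordinary least squares on the reciprocal counts recovers p and q. A sketch on synthetic aftershock counts (the values and true parameters are invented for illustration):

```python
def fit_omori(ts, Ns):
    """Fit N(t) = p / (t + q) by least squares on z = 1/N = (t + q)/p."""
    zs = [1.0 / N for N in Ns]
    n = len(ts)
    mt = sum(ts) / n
    mz = sum(zs) / n
    slope = sum((t - mt) * (z - mz) for t, z in zip(ts, zs)) / \
            sum((t - mt) ** 2 for t in ts)
    intercept = mz - slope * mt
    p = 1.0 / slope       # slope of 1/N versus t is 1/p
    q = intercept * p     # intercept is q/p
    return p, q

ts = [0.0, 1.0, 2.0, 5.0, 10.0, 20.0]
Ns = [100.0 / (t + 5.0) for t in ts]   # synthetic data with p=100, q=5
p, q = fit_omori(ts, Ns)
print(round(p, 3), round(q, 3))  # 100.0 5.0
```

Real aftershock counts are noisy and often follow the modified Omori law with an exponent on (t+q); maximum-likelihood fitting is then preferred over this reciprocal trick.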
Teaching Australian Football in Physical Education: Constraints Theory in Practice
Pill, Shane
2013-01-01
This article outlines a constraints-led process of exploring, modifying, experimenting, adapting, and developing game appreciation known as Game Sense (Australian Sports Commission, 1997; den Duyn, 1996, 1997) for the teaching of Australian football. The game acts as teacher in this constraints-led process. Rather than a linear system that…
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Directory of Open Access Journals (Sweden)
Arnaud Gotlieb
2013-02-01
Full Text Available Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging as it requires dealing with an infinite number of states under standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, array and memory manipulations with the fundamental notion of constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in Constraint Programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
Route constraints model based on polychromatic sets
Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu
2018-03-01
With the development of unmanned aerial vehicle (UAV) technology, the fields of its application are constantly expanding. The mission planning of UAV is especially important, and the planning result directly influences whether the UAV can accomplish the task. In order to make the results of mission planning for unmanned aerial vehicle more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, constraints among the equipment of UAV are complex, and the equipment has strong diversity and variability, which makes these constraints difficult to be described. In order to solve the above problem, this paper, referring to the polychromatic sets theory used in the advanced manufacturing field to describe complex systems, presents a mission constraint model of UAV based on polychromatic sets.
Pey, Jon; Rubio, Angel; Theodoropoulos, Constantinos; Cascante, Marta; Planes, Francisco J
2012-07-01
Constraints-based modeling is an emergent area in Systems Biology that includes an increasing set of methods for the analysis of metabolic networks. In order to refine its predictions, the development of novel methods integrating high-throughput experimental data is currently a key challenge in the field. In this paper, we present a novel set of constraints that integrate tracer-based metabolomics data from Isotope Labeling Experiments and metabolic fluxes in a linear fashion. These constraints are based on Elementary Carbon Modes (ECMs), a recently developed concept that generalizes Elementary Flux Modes at the carbon level. To illustrate the effect of our ECMs-based constraints, a Flux Variability Analysis approach was applied to a previously published metabolic network involving the main pathways in the metabolism of glucose. The addition of our ECMs-based constraints substantially reduced the under-determination resulting from a standard application of Flux Variability Analysis, which shows a clear progress over the state of the art. In addition, our approach is adjusted to deal with combinatorial explosion of ECMs in genome-scale metabolic networks. This extension was applied to infer the maximum biosynthetic capacity of non-essential amino acids in human metabolism. Finally, as linearity is the hallmark of our approach, its importance is discussed at a methodological, computational and theoretical level and illustrated with a practical application in the field of Isotope Labeling Experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
Solution of underdetermined systems of equations with gridded a priori constraints.
Stiros, Stathis C; Saltogianni, Vasso
2014-01-01
The TOPINV (Topological Inversion) algorithm, also called TGS (Topological Grid Search), initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that this algorithm can be used for the solution of certain integer ambiguity problems in Geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints on the n unknown variables, leading to a grid in R^n containing an approximation of the real solution. The TOPINV algorithm does not focus on point solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in R^n containing the real solution. The centre of gravity of the grid points defining this space corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
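A minimal sketch of the grid-search idea described above: scan an a-priori grid of the unknowns, keep the points that satisfy all observation equations within a tolerance, and return the centre of gravity of the retained set. The test system, bounds and tolerance below are illustrative; the real TOPINV algorithm is considerably more elaborate.

```python
import itertools

def topological_grid_search(equations, bounds, steps, tol):
    """equations: list of residual functions f(pt) expected ~ 0.
    bounds: per-variable (lo, hi); steps: per-variable grid size."""
    axes = []
    for (lo, hi), n in zip(bounds, steps):
        axes.append([lo + (hi - lo) * i / (n - 1) for i in range(n)])
    # Keep grid points consistent with every equation within tolerance.
    kept = [pt for pt in itertools.product(*axes)
            if all(abs(f(pt)) <= tol for f in equations)]
    if not kept:
        return None
    # Centre of gravity of the retained region (minimum-norm estimate).
    dim = len(bounds)
    return tuple(sum(p[k] for p in kept) / len(kept) for k in range(dim))

# Underdetermined toy system: one equation, two unknowns (x + y = 1).
sol = topological_grid_search(
    [lambda v: v[0] + v[1] - 1.0],
    bounds=[(0.0, 1.0), (0.0, 1.0)], steps=[21, 21], tol=1e-9)
print(sol)  # centroid of the feasible line segment, near (0.5, 0.5)
```

Because the system is underdetermined, the feasible set is a whole segment of the grid; the centroid returned here is the analogue of the minimum-norm solution mentioned in the abstract.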
International Nuclear Information System (INIS)
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Correlation and simple linear regression.
Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G
2003-06-01
In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
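The two techniques reviewed above, the Pearson correlation coefficient and simple linear regression, can be computed from first principles. The data set below is made up for illustration and is not the one from the tutorial:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two continuous variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def simple_linear_regression(x, y):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Illustrative predictor/outcome pairs, roughly y = 2x.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.9]
r = pearson_r(x, y)
slope, intercept = simple_linear_regression(x, y)
print(round(slope, 2), round(intercept, 2))  # → 1.97 0.11
```

For a monotonic but nonlinear relationship, the Spearman rho mentioned in the abstract would simply apply the same Pearson formula to the ranks of x and y.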
Simulating non-holonomic constraints within the LCP-based simulation framework
DEFF Research Database (Denmark)
Ellekilde, Lars-Peter; Petersen, Henrik Gordon
2006-01-01
In this paper, we will extend the linear complementarity problem-based rigid-body simulation framework with non-holonomic constraints. We consider three different types of such constraints, namely equality, inequality and contact constraints. We show how non-holonomic equality and inequality constraints can be incorporated directly, and derive a formalism for how the non-holonomic contact constraints can be modelled as a combination of non-holonomic equality constraints and ordinary contact constraints. For each of these three we are able to guarantee solvability when using Lemke's algorithm. A number of examples are included to demonstrate the non-holonomic constraints.
Latin hypercube sampling with inequality constraints
International Nuclear Information System (INIS)
Iooss, B.; Petelet, M.; Asserin, O.; Loredo, A.
2010-01-01
In some studies requiring predictive and CPU-time consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has shown its robustness capabilities. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists of performing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature. (authors)
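A plain Latin hypercube sampler illustrating the starting point of the cLHS technique described above. The permutation step that repairs constraint violations is not shown; the key observation is that any within-column permutation preserves the one-point-per-stratum property checked here, which is why cLHS can permute an initial LHS without losing its stratification.

```python
import random

def latin_hypercube(n, dims, rng):
    """n samples in [0,1)^dims with exactly one point per stratum
    in every dimension (the Latin property)."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)          # random stratum order for this variable
        cols.append([(k + rng.random()) / n for k in perm])
    return list(zip(*cols))

rng = random.Random(0)
pts = latin_hypercube(8, 2, rng)
# Check the Latin property: each of the 8 strata is hit exactly once
# per dimension, so every marginal is evenly stratified.
for d in range(2):
    assert sorted(int(p[d] * 8) for p in pts) == list(range(8))
```

Swapping two entries within a single column only reorders which row receives which stratum, so the assertion above would still pass after any sequence of cLHS-style permutations.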
Linear Logistic Test Modeling with R
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Resources, constraints and capabilities
Dhondt, S.; Oeij, P.R.A.; Schröder, A.
2018-01-01
Human and financial resources as well as organisational capabilities are needed to overcome the manifold constraints social innovators are facing. To unlock the potential of social innovation for the whole society new (social) innovation friendly environments and new governance structures
Design with Nonlinear Constraints
Tang, Chengcheng
2015-01-01
. The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh based and spline based representations, the application
Constraints on stellar evolution from pulsations
International Nuclear Information System (INIS)
Cox, A.N.
1984-01-01
Consideration of the many types of intrinsic variable stars, that is, those that pulsate, reveals that perhaps a dozen classes can indicate some constraints that affect the results of stellar evolution calculations, or some interpretations of observations. Many of these constraints are not very strong or may not even be well defined yet. The author discusses the case for six classes: classical Cepheids with their measured Wesselink radii, the observed surface effective temperatures of the known eleven double-mode Cepheids, the pulsation periods and measured surface effective temperatures of three R CrB variables, the delta Scuti variable VZ Cnc with a very large ratio of its two observed periods, the nonradial oscillations of the Sun, and the period ratios of the newly discovered double-mode RR Lyrae variables. (Auth.)
Dynamics and causality constraints
International Nuclear Information System (INIS)
Sousa, Manoelito M. de
2001-04-01
The physical meaning and the geometrical interpretation of causality implementation in classical field theories are discussed. Causality in field theories consists of kinematical constraints dynamically implemented via solutions of the field equation, but in the limit of zero distance from the field sources part of these constraints carries a dynamical content that explains away old problems of classical electrodynamics, with deep implications for the nature of physical interactions. (author)
Variable linear motion cycle monitoring device
International Nuclear Information System (INIS)
Ezekoye, L.I.; Cavada, D.R.
1989-01-01
A pressurized water reactor nuclear power plant is described, including a feedwater flow control valve having a valve member which can be moved varying amounts in discrete movements in either direction between an open and a closed position to control feedwater flow, and apparatus for recording cycles of reciprocal movement of the valve member. The apparatus consists of: a travel translator member; means connecting the travel translator member to the valve member for reciprocal rectilinear movement in directions corresponding to, and for a distance proportional to, the movement of the valve member; a shaft member for rotation about its longitudinal axis; means for limiting angular rotation of the shaft member in each direction; means for counting cycles of reciprocal rotation of the shaft member between rotation limits set by the limiting means; a wheel mounted on the shaft member and in engagement with the travel translator member, the wheel translating movement of the travel translator into rotation of the shaft member between the limits of rotation of the shaft member set by the limiting means; and means permitting relative movement between the wheel and one of the members once the limits of rotation of the shaft member are reached and the travel translator member continues movement in the same direction.
Momentum constraint relaxation
International Nuclear Information System (INIS)
Marronetti, Pedro
2006-01-01
Full relativistic simulations in three dimensions invariably develop runaway modes that grow exponentially and are accompanied by violations of the Hamiltonian and momentum constraints. Recently, we introduced a numerical method (Hamiltonian relaxation) that greatly reduces the Hamiltonian constraint violation and helps improve the quality of the numerical model. We present here a method that controls the violation of the momentum constraint. The method is based on the addition of a longitudinal component to the traceless extrinsic curvature Ã^ij, generated by a vector potential w_i, as outlined by York. The components of w_i are relaxed to solve approximately the momentum constraint equations, slowly pushing the evolution towards the space of solutions of the constraint equations. We test this method with simulations of binary neutron stars in circular orbits and show that it effectively controls the growth of the aforementioned violations. We also show that a full numerical enforcement of the constraints, as opposed to the gentle correction of the momentum relaxation scheme, results in the development of instabilities that stop the runs shortly
International Nuclear Information System (INIS)
Sugier, A.
2003-01-01
The selected new constraints should be consistent with the scale of concern, i.e. be expressed roughly as fractions or multiples of the average annual background. They should take into account risk considerations and include the values of the current limits, constraints and other action levels. The recommendation is to select four leading values for the new constraints: 500 mSv (single event or in a decade) as a maximum value, 0.01 mSv/year as a minimum value, and two intermediate values: 20 mSv/year and 0.3 mSv/year. This new set of dose constraints, representing basic minimum standards of protection for individuals taking into account the specificity of the exposure situations, is thus coherent with the current values which can be found in ICRP Publications. A few warnings need, however, to be noted: There is no longer a multi-source limit set by ICRP. The coherence between the proposed value of the dose constraint (20 mSv/year) and the current occupational dose limit of 20 mSv/year is valid only if the workers are exposed to one single source. When there is more than one source, it will be necessary to apportion. The value of 1000 mSv lifetime used for relocation can be expressed as an annual dose, which gives approximately 10 mSv/year and is coherent with the proposed dose constraint. (N.C.)
Reinforcement, Behavior Constraint, and the Overjustification Effect.
Williams, Bruce W.
1980-01-01
Four levels of the behavior constraint-reinforcement variable were manipulated: attractive reward, unattractive reward, request to perform, and a no-reward control. Only the unattractive reward and request groups showed the performance decrements that suggest the overjustification effect. It is concluded that reinforcement does not cause the…
Optimal traffic control in highway transportation networks using linear programming
Li, Yanning
2014-06-01
This article presents a framework for the optimal control of boundary flows on transportation networks. The state of the system is modeled by a first order scalar conservation law (Lighthill-Whitham-Richards PDE). Based on an equivalent formulation of the Hamilton-Jacobi PDE, the problem of controlling the state of the system on a network link in a finite horizon can be posed as a Linear Program. Assuming all intersections in the network are controllable, we show that the optimization approach can be extended to an arbitrary transportation network, preserving linear constraints. Unlike previously investigated transportation network control schemes, this framework leverages the intrinsic properties of the Hamilton-Jacobi equation, and does not require any discretization or boolean variables on the link. Hence this framework is computationally very efficient and provides the globally optimal solution. The feasibility of this framework is illustrated by an on-ramp metering control example.
Granato, Gregory E.
2006-01-01
The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
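The slope and intercept rules described above are straightforward to sketch: the slope is the median of all pairwise slopes, and the intercept places the line through the medians of the inputs. The single-segment case is shown below on a toy data set with one gross outlier; the data are illustrative, not from the KTRLine documentation.

```python
from statistics import median

def kendall_theil_line(x, y):
    """Kendall-Theil (Theil-Sen) robust line: median pairwise slope,
    intercept chosen so the line runs through (median x, median y)."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x))
              for j in range(i + 1, len(x))
              if x[j] != x[i]]
    m = median(slopes)
    b = median(y) - m * median(x)
    return m, b

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 100]   # last point is a gross outlier
m, b = kendall_theil_line(x, y)
print(m, b)  # → 2.0 0.0
```

Ordinary least squares would be pulled far off the y = 2x trend by the outlier; the median-based estimator recovers the trend exactly, which is the resistance property the abstract emphasizes.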
Directory of Open Access Journals (Sweden)
Animesh Biswas
2016-04-01
Full Text Available This paper deals with a fuzzy goal programming approach to solve fuzzy linear bilevel integer programming problems with fuzzy probabilistic constraints following the Pareto distribution and the Frechet distribution. In the proposed approach a new chance-constrained programming methodology is developed from the viewpoint of managing those probabilistic constraints in a hybrid fuzzy environment. A method of defuzzification of fuzzy numbers using the α-cut has been adopted to reduce the problem to a linear bilevel integer programming problem. The individual optimal value of the objective of each DM is found in isolation to construct the fuzzy membership goals. Finally, the fuzzy goal programming approach is used to achieve the maximum degree of each of the membership goals by minimizing the under-deviational variables in the decision-making environment. To demonstrate the efficiency of the proposed approach, a numerical example is provided.
The MarCon Algorithm: A Systematic Market Approach to Distributed Constraint Problems
National Research Council Canada - National Science Library
Parunak, H. Van Dyke
1998-01-01
.... Each variable integrates this information from the constraints interested in it and provides feedback that enables the constraints to shrink their sets of assignments until they converge on a solution...
Misconceptions and constraints
International Nuclear Information System (INIS)
Whitten, M.; Mahon, R.
2005-01-01
In theory, the sterile insect technique (SIT) is applicable to a wide variety of invertebrate pests. However, in practice, the approach has been successfully applied to only a few major pests. Chapters in this volume address possible reasons for this discrepancy, e.g. Klassen, Lance and McInnis, and Robinson and Hendrichs. The shortfall between theory and practice is partly due to the persistence of some common misconceptions, but it is mainly due to one constraint, or a combination of constraints, that are biological, financial, social or political in nature. This chapter's goal is to dispel some major misconceptions, and view the constraints as challenges to overcome, seeing them as opportunities to exploit. Some of the common misconceptions include: (1) released insects retain residual radiation, (2) females must be monogamous, (3) released males must be fully sterile, (4) eradication is the only goal, (5) the SIT is too sophisticated for developing countries, and (6) the SIT is not a component of an area-wide integrated pest management (AW-IPM) strategy. The more obvious constraints are the perceived high costs of the SIT, and the low competitiveness of released sterile males. The perceived high up-front costs of the SIT, their visibility, and the lack of private investment (compared with alternative suppression measures) emerge as serious constraints. Failure to appreciate the true nature of genetic approaches, such as the SIT, may pose a significant constraint to the wider adoption of the SIT and other genetically-based tactics, e.g. transgenic genetically modified organisms (GMOs). Lack of support for the necessary underpinning strategic research also appears to be an important constraint. Hence the case for extensive strategic research in ecology, population dynamics, genetics, and insect behaviour and nutrition is a compelling one. Raising the competitiveness of released sterile males remains the major research objective of the SIT. (author)
Directory of Open Access Journals (Sweden)
Luis Eduardo Gallego Vega
2010-05-01
Full Text Available This paper presents the results of research about the effect of transmission constraints on both expected electrical energy to be dispatched and power generation companies' bidding strategies in the Colombian electrical power market. The proposed model simulates the national transmission grid and economic dispatch by means of optimal power flows. The proposed methodology allows structural problems in the power market to be analyzed due to the exclusive effect of transmission constraints and the mixed effect of bidding strategies and transmission networks. A new set of variables is proposed for quantifying the impact of each generation company on system operating costs and the change in expected dispatched energy. A correlation analysis of these new variables is presented, revealing some interesting linearities in some generation companies' bidding patterns.
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Finding Deadlocks of Event-B Models by Constraint Solving
DEFF Research Database (Denmark)
Hallerstede, Stefan; Leuschel, Michael
We propose a constraint-based approach to finding deadlocks, employing the ProB constraint solver to find values for the constants and variables of formal models that describe a deadlocking state. We discuss the principles of the technique implemented in ProB's Prolog kernel and present some results...
Parametric Linear Dynamic Logic
Directory of Open Access Journals (Sweden)
Peter Faymonville
2014-08-01
Full Text Available We introduce Parametric Linear Dynamic Logic (PLDL, which extends Linear Dynamic Logic (LDL by temporal operators equipped with parameters that bound their scope. LDL was proposed as an extension of Linear Temporal Logic (LTL that is able to express all ω-regular specifications while still maintaining many of LTL's desirable properties like an intuitive syntax and a translation into non-deterministic Büchi automata of exponential size. But LDL lacks capabilities to express timing constraints. By adding parameterized operators to LDL, we obtain a logic that is able to express all ω-regular properties and that subsumes parameterized extensions of LTL like Parametric LTL and PROMPT-LTL. Our main technical contribution is a translation of PLDL formulas into non-deterministic Büchi word automata of exponential size via alternating automata. This yields a PSPACE model checking algorithm and a realizability algorithm with doubly-exponential running time. Furthermore, we give tight upper and lower bounds on optimal parameter values for both problems. These results show that PLDL model checking and realizability are not harder than LTL model checking and realizability.
Perspectives on large linear colliders
International Nuclear Information System (INIS)
Richter, B.
1987-11-01
Three main items in the design of large linear colliders are presented. The first is the interrelation of energy and luminosity requirements. These two items impose severe constraints on the accelerator builder, who must design a machine to meet the needs of experimental high energy physics rather than designing a machine for its own sake. An introduction is also given to linear collider design, concentrating on what goes on at the collision point, for still another constraint comes here from the beam-beam interaction, which further restricts the choices available to the accelerator builder. The author also gives his impressions of the state of the technology available for building these kinds of machines within the next decade. The paper concludes with a brief recommendation for how we can all get on with the work faster, and hope to realize these machines sooner by working together. 10 refs., 9 figs
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.
International Nuclear Information System (INIS)
Heilbron Filho, Paulo Fernando Lavalle; Xavier, Ana Maria
2005-01-01
The revision process of the international radiological protection regulations has resulted in the adoption of new concepts, such as practice, intervention, avoidable dose and restriction of dose (dose constraint). The latter deserves special mention, since it may involve an a priori reduction of the dose limits established both for the public and for occupationally exposed individuals, values that can be further reduced depending on the application of the principle of optimization. This article aims to present clearly, starting from the criteria adopted to define dose constraint values for the public, a methodology to establish dose constraint values for occupationally exposed individuals, as well as an example of the application of this methodology to the practice of industrial radiography
Psychological constraints on egalitarianism
DEFF Research Database (Denmark)
Kasperbauer, Tyler Joshua
2015-01-01
Debates over egalitarianism for the most part are not concerned with constraints on achieving an egalitarian society, beyond discussions of the deficiencies of egalitarian theory itself. This paper looks beyond objections to egalitarianism as such and investigates the relevant psychological processes motivating people to resist various aspects of egalitarianism. I argue for two theses, one normative and one descriptive. The normative thesis holds that egalitarians must take psychological constraints into account when constructing egalitarian ideals. I draw from non-ideal theories in political philosophy, which aim to construct moral goals with current social and political constraints in mind, to argue that human psychology must be part of a non-ideal theory of egalitarianism. The descriptive thesis holds that the most fundamental psychological challenge to egalitarian ideals comes from what...
Reduction Of Constraints For Coupled Operations
International Nuclear Information System (INIS)
Raszewski, F.; Edwards, T.
2009-01-01
The homogeneity constraint was implemented in the Defense Waste Processing Facility (DWPF) Product Composition Control System (PCCS) to help ensure that the current durability models would be applicable to the glass compositions being processed during DWPF operations. While the homogeneity constraint is typically an issue at lower waste loadings (WLs), it may impact the operating windows for DWPF operations, where the glass forming systems may be limited to lower waste loadings based on fissile or heat load limits. In the sludge batch 1b (SB1b) variability study, application of the homogeneity constraint at the measurement acceptability region (MAR) limit eliminated much of the potential operating window for DWPF. As a result, Edwards and Brown developed criteria that allowed DWPF to relax the homogeneity constraint from the MAR to the property acceptance region (PAR) criterion, which opened up the operating window for DWPF operations. These criteria are defined as: (1) use the alumina constraint as currently implemented in PCCS (Al2O3 ≥ 3 wt%) and add a sum of alkali constraint with an upper limit of 19.3 wt% (ΣM2O ≤ 19.3 wt%), or (2) increase the lower limit of the alumina constraint to 4 wt% (Al2O3 ≥ 4 wt%). Herman et al. previously demonstrated that these criteria could be used to replace the homogeneity constraint for future sludge-only batches. The compositional region encompassing coupled operations flowsheets could not be bounded as these flowsheets were unknown at the time. With the initiation of coupled operations at DWPF in 2008, the need to revisit the homogeneity constraint was realized. This constraint was specifically addressed through the variability study for SB5, where it was shown that the homogeneity constraint could be ignored if the alumina and alkali constraints were imposed. Additional benefit could be gained if the homogeneity constraint could be replaced by the Al2O3 and sum of alkali constraints for future coupled operations processing based on projections from Revision 14 of
Banach, S
1987-01-01
This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a1x1 + a2x2 + ... + anxn of algebra. The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series. A new fifty-page section ("Some Aspects of the Present Theory of Banach Spaces") complements this important monograph.
Least Squares Problems with Absolute Quadratic Constraints
Directory of Open Access Journals (Sweden)
R. Schöne
2012-01-01
Full Text Available This paper analyzes linear least squares problems with absolute quadratic constraints. We develop a generalized theory following Bookstein's conic-fitting and Fitzgibbon's direct ellipse-specific fitting. Under simple preconditions, it can be shown that a minimum always exists and can be determined by a generalized eigenvalue problem. This problem is numerically reduced to an eigenvalue problem by multiplications of Givens' rotations. Finally, four applications of this approach are presented.
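The core reduction described above (a quadratically constrained least squares problem becoming a generalized eigenvalue problem) can be sketched in a few lines. This is an illustrative toy instance, not the paper's Givens-rotation algorithm; the matrices `S` and `C` are made-up examples, and we use scipy's symmetric generalized eigensolver:

```python
import numpy as np
from scipy.linalg import eigh

# Minimize x^T S x subject to the quadratic constraint x^T C x = 1.
# The stationarity condition is S x = lambda * C x, a generalized
# eigenproblem; the constrained minimum equals the smallest eigenvalue.
S = np.diag([1.0, 4.0])      # illustrative "scatter"-type matrix
C = np.diag([1.0, 2.0])      # illustrative constraint matrix

vals, vecs = eigh(S, C)      # generalized symmetric-definite eigenproblem
lam_min = vals[0]            # minimum of the constrained objective
x = vecs[:, 0]               # minimizer, normalized so x^T C x = 1

print(lam_min)
print(x @ C @ x)             # constraint value, 1 by scipy's normalization
```

For these diagonal matrices the generalized eigenvalues are 1/1 and 4/2, so the constrained minimum is 1.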
Faddeev-Jackiw quantization and constraints
International Nuclear Information System (INIS)
Barcelos-Neto, J.; Wotzasek, C.
1992-01-01
In a recent Letter, Faddeev and Jackiw have shown that the reduction of constrained systems to canonical, first-order form can bring some new insight into research in this field. For symplectic manifolds the geometrical structure, called the Dirac or generalized bracket, is obtained directly from the inverse of the nonsingular symplectic two-form matrix. In the case of nonsymplectic manifolds, this two-form is degenerate and cannot be inverted to provide the generalized brackets. This singular behavior of the symplectic matrix is indicative of the presence of constraints that have to be carefully considered to yield consistent results. One has two possible routes to treat this problem: Dirac has taught us how to implement the constraints into the potential part (Hamiltonian) of the canonical Lagrangian, leading to the well-known Dirac brackets, which are consistent with the constraints and can be mapped into quantum commutators (modulo ordering terms). The second route, suggested by Faddeev and Jackiw, and followed in this paper, is to implement the constraints directly into the canonical part of the first-order Lagrangian, using the fact that the consistency condition for the stability of the constrained manifold is linear in the time derivative. This algorithm may lead to an invertible symplectic two-form matrix from which the Dirac brackets are readily obtained. This algorithm is used in this paper to investigate some aspects of the quantization of constrained systems with first- and second-class constraints in the symplectic approach
Métodos do tipo dual simplex para problemas de otimização linear canalizados
Directory of Open Access Journals (Sweden)
Ricardo Silveira Sousa
2005-12-01
Full Text Available Neste artigo estudamos o problema de otimização linear canalizado (restrições e variáveis canalizadas, chamado formato geral e desenvolvemos métodos do tipo dual simplex explorando o problema dual, o qual é linear por partes, num certo sentido não-linear. Várias alternativas de busca unidimensional foram examinadas. Experimentos computacionais revelam que a busca unidimensional exata na direção dual simplex apresenta melhor desempenho.In this paper we study the linear optimization problem lower and upper constrained (i.e., there are lower and upper bounds on constraints and variables and develop dual simplex methods that explore the dual problem, which is piecewise linear, in some sense nonlinear. Different one-dimensional searches were examined. Computational experiments showed that the exact one-dimensional search in the dual simplex direction has the best performance.
Constraint-based scheduling applying constraint programming to scheduling problems
Baptiste, Philippe; Nuijten, Wim
2001-01-01
Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
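The well-known direction of this equivalence can be made concrete: a Chebyshev (minimax) approximation problem becomes a linear program by introducing one extra variable bounding all residuals. The data points below and the use of scipy's `linprog` are our own illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev (minimax) line fit as an LP:
#   minimize t  subject to  -t <= y_i - (c0 + c1*x_i) <= t.
# Decision vector z = [c0, c1, t].
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 0.0])          # a "tent" that no line fits exactly

A = np.column_stack([xs**0, xs])        # design matrix rows [1, x_i]
# (A z) - t <= y  and  -(A z) - t <= -y together encode |residual| <= t.
A_ub = np.block([[A, -np.ones((3, 1))],
                 [-A, -np.ones((3, 1))]])
b_ub = np.concatenate([ys, -ys])
c = np.array([0.0, 0.0, 1.0])           # objective: minimize t only

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
c0, c1, t = res.x
print(c0, c1, t)                        # best minimax line and its error
```

By the equioscillation property, the optimal line here is the constant 0.5 with minimax error 0.5 (residuals alternate -0.5, +0.5, -0.5).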
Linear quadratic optimization for positive LTI system
Muhafzan, Yenti, Syafrida Wirma; Zulakmal
2017-05-01
Nowadays, linear quadratic optimization subject to a positive linear time invariant (LTI) system constitutes an interesting study, since it can serve as a mathematical model for a variety of real problems whose variables, and the trajectories generated by these variables, must be nonnegative. In this paper we propose a method to generate an optimal control for linear quadratic optimization subject to a positive LTI system. A sufficient condition that guarantees the existence of such an optimal control is discussed.
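For orientation, a generic discrete-time LQ regulator on a positive system can be sketched with scipy's Riccati solver. This is a standard LQR, not the paper's method, and it does not by itself enforce nonnegative trajectories (which is exactly what the paper's sufficient condition addresses); the system matrices are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# A positive (entrywise nonnegative) discrete-time system x+ = A x + B u.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weight
R = np.array([[1.0]])    # input weight

# Solve the discrete algebraic Riccati equation and form u = -K x.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Simulate the closed loop; standard LQR guarantees stability only,
# not that the trajectory stays in the nonnegative orthant.
x = np.array([1.0, 1.0])
traj = [x]
for _ in range(50):
    x = (A - B @ K) @ x
    traj.append(x)
traj = np.array(traj)
print(traj[-1])          # should have decayed toward the origin
```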
International Nuclear Information System (INIS)
Alwis, S.P. de
2016-01-01
We discuss constraints on KKLT/KKLMMT and LVS scenarios that use anti-branes to get an uplift to a deSitter vacuum, coming from requiring the validity of an effective field theory description of the physics. We find these are not always satisfied or are hard to satisfy.
Ecosystems emerging. 5: Constraints
Czech Academy of Sciences Publication Activity Database
Patten, B. C.; Straškraba, Milan; Jorgensen, S. E.
2011-01-01
Roč. 222, č. 16 (2011), s. 2945-2972 ISSN 0304-3800 Institutional research plan: CEZ:AV0Z50070508 Keywords : constraint * epistemic * ontic Subject RIV: EH - Ecology, Behaviour Impact factor: 2.326, year: 2011 http://www.sciencedirect.com/science/article/pii/S0304380011002274
DEFF Research Database (Denmark)
Dove, Graham; Biskjær, Michael Mose; Lundqvist, Caroline Emilie
2017-01-01
groups of students building three models each. We studied groups building with traditional plastic bricks and also using a digital environment. The building tasks students undertake, and our subsequent analysis, are informed by the role constraints and ambiguity play in creative processes. Based...
Linear programming: an alternative approach for developing formulations for emergency food products.
Sheibani, Ershad; Dabbagh Moghaddam, Arasb; Sharifan, Anousheh; Afshari, Zahra
2018-03-01
To minimize the mortality rates of individuals affected by disasters, providing high-quality food relief during the initial stages of an emergency is crucial. The goal of this study was to develop a formulation for a high-energy, nutrient-dense prototype using a linear programming (LP) model as a novel method for developing formulations for food products. The model consisted of the objective function and the decision variables, which were the formulation costs and weights of the selected commodities, respectively. The LP constraints were the Institute of Medicine and World Health Organization specifications for the nutrient content of the product. Other constraints related to the product's sensory properties were also introduced to the model. Nonlinear constraints for energy ratios of nutrients were linearized to allow their use in the LP. Three focus group studies were conducted to evaluate the palatability and other aspects of the optimized formulation. New constraints were introduced to the LP model based on the focus group evaluations to improve the formulation. LP is an appropriate tool for designing formulations of food products to meet a set of nutritional requirements. This method is an excellent alternative to the traditional 'trial and error' method in designing formulations. © 2017 Society of Chemical Industry.
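The structure of such a least-cost formulation LP can be sketched with a tiny diet-style instance. The ingredients, costs, and nutrient numbers below are entirely hypothetical, and scipy's `linprog` stands in for whatever solver the study used:

```python
from scipy.optimize import linprog

# Minimize ingredient cost subject to minimum nutrient levels.
# Two hypothetical ingredients; decision variables are their weights (kg).
cost = [2.0, 3.0]                # cost per kg of ingredient 1 and 2

# "At least" nutrient constraints, written as <= by negating both sides:
#   10*x1 +  5*x2 >= 20    (protein, g)
#  300*x1 + 400*x2 >= 1200 (energy, kcal)
A_ub = [[-10.0, -5.0],
        [-300.0, -400.0]]
b_ub = [-20.0, -1200.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)            # optimal weights and minimum cost
```

The linearization of an energy-ratio constraint mentioned in the abstract works the same way: a requirement such as fat_energy / total_energy <= r is multiplied through to the linear form fat_energy - r * total_energy <= 0, which can be added as another `A_ub` row.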
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...
An adaptive ES with a ranking based constraint handling strategy
Directory of Open Access Journals (Sweden)
Kusakci Ali Osman
2014-01-01
Full Text Available To solve a constrained optimization problem, equality constraints can be used to eliminate a problem variable. If that is not feasible, the relations imposed implicitly by the constraints can still be exploited. Most conventional constraint handling methods in Evolutionary Algorithms (EAs) do not consider the correlations between problem variables imposed by the constraints. This paper relies on the idea that a proper search operator, which captures the mentioned implicit correlations, can improve the performance of evolutionary constrained optimization algorithms. To realize this, an Evolution Strategy (ES) along with a simplified Covariance Matrix Adaptation (CMA) based mutation operator is used with a ranking-based constraint-handling method. The proposed algorithm is tested on 13 benchmark problems as well as on a real-life design problem. The algorithm significantly outperforms conventional ES-based methods.
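To make the constrained-ES setting concrete, here is a deliberately minimal (1+1)-ES using a Deb-style feasibility comparison, which is a much simpler stand-in for the paper's CMA-based mutation and ranking scheme. The test problem (a sphere with one linear constraint) and all parameters are our own choices:

```python
import numpy as np

# Minimize f(x) = x1^2 + x2^2 subject to x1 + x2 >= 1.
# The constrained optimum is f = 0.5 at (0.5, 0.5).
rng = np.random.default_rng(0)

def f(x):
    return float(x @ x)

def violation(x):
    return max(0.0, 1.0 - x[0] - x[1])

def better(a, b):
    """Deb's feasibility rules: feasible beats infeasible;
    otherwise compare objective (both feasible) or violation."""
    va, vb = violation(a), violation(b)
    if va == 0.0 and vb == 0.0:
        return f(a) < f(b)
    if va == 0.0 or vb == 0.0:
        return va == 0.0
    return va < vb

x, sigma = np.array([2.0, 2.0]), 0.5     # feasible start
for _ in range(3000):
    child = x + sigma * rng.standard_normal(2)
    if better(child, x):
        x, sigma = child, sigma * 1.1    # crude 1/5th-rule-style adaptation
    else:
        sigma *= 0.98

print(x, f(x))   # should end near (0.5, 0.5)
```

The incumbent starts feasible and, under these rules, is only ever replaced by a point that is at least as good under the feasibility ordering, so the final point is feasible; what the CMA machinery in the paper adds is a mutation distribution shaped to the correlations the constraints induce.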
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
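A minimal multiple-regression fit can be shown with numpy's least squares on noise-free synthetic data, where the known generating coefficients should be recovered exactly. The data and coefficients below are fabricated for illustration:

```python
import numpy as np

# Synthetic data from a known model: y = 1 + 2*x1 + 3*x2 (no noise),
# so the fitted coefficients should recover (intercept, b1, b2) = (1, 2, 3).
rng = np.random.default_rng(42)
X = rng.standard_normal((50, 2))          # two predictor variables
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1]

design = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(coef)                               # approximately [1. 2. 3.]
```

An interaction effect of the kind the article discusses would enter the same way, as an extra column `X[:, 0] * X[:, 1]` in the design matrix.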
Einstein constraints in the Yang-Mills form
International Nuclear Information System (INIS)
Ashtekar, A.
1987-01-01
It is pointed out that constraints of Einstein's theory play a powerful role in both classical and quantum theory because they generate motions in spacetime, rather than in an internal space. New variables are then introduced on the Einstein phase space in terms of which constraints simplify considerably. In particular, the use of these variables enables one to imbed the constraint surface of Einstein's theory into that of Yang-Mills. The imbedding suggests new lines of attack to a number of problems in classical and quantum gravity and provides new concepts and tools to investigate the microscopic structure of space-time geometry
Constraints on stellar evolution from pulsations
International Nuclear Information System (INIS)
Cox, A.N.
1983-01-01
Consideration of the many types of intrinsic variable stars, that is, those that pulsate, reveals that perhaps a dozen classes can indicate some constraints that affect the results of stellar evolution calculations, or some interpretations of observations. Many of these constraints are not very strong or may not even be well defined yet. In this review we discuss only the case for six classes: classical Cepheids with their measured Wesselink radii, the observed surface effective temperatures of the known eleven double-mode Cepheids, the pulsation periods and measured surface effective temperatures of three R CrB variables, the delta Scuti variable VZ Cnc with a very large ratio of its two observed periods, the nonradial oscillations of our sun, and the period ratios of the newly discovered double-mode RR Lyrae variables. Unfortunately, the present state of knowledge about the exact compositions; mass loss and its dependence on the mass, radius, luminosity, and composition; and internal mixing processes, as well as sometimes the more basic parameters such as luminosities and surface effective temperatures, prevents us from applying strong constraints in every case where currently the possibility exists
Level-Set Topology Optimization with Aeroelastic Constraints
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2015-01-01
Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
Primordial black holes in linear and non-linear regimes
Energy Technology Data Exchange (ETDEWEB)
Allahyari, Alireza; Abolhasani, Ali Akbar [Department of Physics, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)
2017-06-01
We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a density fluctuation with a high density does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric spacetimes in the linear regime. Thereby, we introduce two gauges. This allows us to introduce a well-defined gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. Our approach differs in that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for turn-around time and radius. It is important that we take the initial conditions from the linear perturbation theory. Additionally, instead of using the uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition will impose an important constraint on the initial velocity perturbations δ^h_0 = −δ_0/2. This can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at turn-around time. The corrections also appear from the third order. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should be larger than δ_th > 0.7.
Directory of Open Access Journals (Sweden)
Tanwiwat Jaikuna
2017-02-01
Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with pair t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS): in Oncentra, 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significant (0.00%) at p-values 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
Distance Constraint Satisfaction Problems
Bodirsky, Manuel; Dalmau, Victor; Martin, Barnaby; Pinsker, Michael
We study the complexity of constraint satisfaction problems for templates Γ that are first-order definable in (ℤ; suc), the integers with the successor relation. Assuming a widely believed conjecture from finite domain constraint satisfaction (we require the tractability conjecture by Bulatov, Jeavons and Krokhin in the special case of transitive finite templates), we provide a full classification for the case that Γ is locally finite (i.e., the Gaifman graph of Γ has finite degree). We show that one of the following is true: The structure Γ is homomorphically equivalent to a structure with a certain majority polymorphism (which we call modular median) and CSP(Γ) can be solved in polynomial time, or Γ is homomorphically equivalent to a finite transitive structure, or CSP(Γ) is NP-complete.
An approach for solving linear fractional programming problems ...
African Journals Online (AJOL)
The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebraically using the concept of duality ...
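A standard alternative route to the paper's dual-based algebraic approach is the Charnes-Cooper transformation, which turns a linear fractional program into an ordinary LP. The problem data below are invented for illustration, and scipy's `linprog` is our choice of solver:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize (c.x)/(d.x + beta) subject to A x <= b, x >= 0, assuming
# d.x + beta > 0 on the feasible set. With y = t*x and t = 1/(d.x + beta)
# this becomes the LP:
#   maximize c.y  s.t.  A y - b t <= 0,  d.y + beta t = 1,  y >= 0, t >= 0.
c = np.array([2.0, 1.0])          # numerator coefficients
d = np.array([1.0, 1.0])          # denominator coefficients
beta = 1.0
A = np.array([[1.0, 1.0]])        # constraint x1 + x2 <= 3
b = np.array([3.0])

obj = np.concatenate([-c, [0.0]])              # linprog minimizes
A_ub = np.column_stack([A, -b])                # A y - b t <= 0
A_eq = np.concatenate([d, [beta]])[None, :]    # d.y + beta t = 1
res = linprog(obj, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * 3)

y, t = res.x[:2], res.x[2]
x = y / t                                      # recover original variables
print(x, -res.fun)                             # optimizer and ratio value
```

For this instance the ratio (2x1 + x2)/(x1 + x2 + 1) is maximized at x = (3, 0) with value 1.5.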
Zweben, Monte
1993-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
Quantum centipedes with strong global constraint
Grange, Pascal
2017-06-01
A centipede made of N quantum walkers on a one-dimensional lattice is considered. The distance between two consecutive legs is either one or two lattice spacings, and a global constraint is imposed: the maximal distance between the first and last leg is N + 1. This is the strongest global constraint compatible with walking. For an initial value of the wave function corresponding to a localized configuration at the origin, the probability law of the first leg of the centipede can be expressed in closed form in terms of Bessel functions. The dispersion relation and the group velocities are worked out exactly. Their maximal group velocity goes to zero when N goes to infinity, which is in contrast with the behaviour of group velocities of quantum centipedes without global constraint, which were recently shown by Krapivsky, Luck and Mallick to give rise to ballistic spreading of extremal wave-front at non-zero velocity in the large-N limit. The corresponding Hamiltonians are implemented numerically, based on a block structure of the space of configurations corresponding to compositions of the integer N. The growth of the maximal group velocity when the strong constraint is gradually relaxed is explored, and observed to be linear in the density of gaps allowed in the configurations. Heuristic arguments are presented to infer that the large-N limit of the globally constrained model can yield finite group velocities provided the allowed number of gaps is a finite fraction of N.
Deepening Contractions and Collateral Constraints
DEFF Research Database (Denmark)
Jensen, Henrik; Ravn, Søren Hove; Santoro, Emiliano
and occasionally non-binding credit constraints. Easier credit access increases the likelihood that constraints become slack in the face of expansionary shocks, while contractionary shocks are further amplified due to tighter constraints. As a result, busts gradually become deeper than booms. Based...
New constraints for canonical general relativity
International Nuclear Information System (INIS)
Reisenberger, M.P.
1995-01-01
Ashtekar's canonical theory of classical complex Euclidean GR (no Lorentzian reality conditions) is found to be invariant under the full algebra of infinitesimal 4-diffeomorphisms, but non-invariant under some finite proper 4-diffeos when the densitized dreibein, E^a_i, is degenerate. The breakdown of 4-diffeo invariance appears to be due to the inability of the Ashtekar Hamiltonian to generate births and deaths of E flux loops (leaving open the possibility that a new 'causality condition' forbidding the birth of flux loops might justify the non-invariance of the theory). A fully 4-diffeo invariant canonical theory in Ashtekar's variables, derived from Plebanski's action, is found to have constraints that are stronger than Ashtekar's for rank E < 2. The corresponding Hamiltonian generates births and deaths of E flux loops. It is argued that this implies a finite amplitude for births and deaths of loops in the physical states of quantum GR in the loop representation, thus modifying this (partly defined) theory substantially. Some of the new constraints are second class, leading to difficulties in quantization in the connection representation. This problem might be overcome in a very nice way by transforming to the classical loop variables, or the 'Faraday line' variables of Newman and Rovelli, and then solving the offending constraints. Note that, though motivated by quantum considerations, the present paper is classical in substance. (orig.)
Linear Temporal Logic-based Mission Planning
Anil Kumar; Rahul Kala
2016-01-01
In this paper, we describe the Linear Temporal Logic-based reactive motion planning. We address the problem of motion planning for mobile robots, wherein the goal specification of planning is given in complex environments. The desired task specification may consist of complex behaviors of the robot, including specifications for environment constraints, need of task optimality, obstacle avoidance, rescue specifications, surveillance specifications, safety specifications, etc. We use Linear Tem...
A Primal-Dual Interior Point-Linear Programming Algorithm for MPC
DEFF Research Database (Denmark)
Edlund, Kristian; Sokoler, Leo Emil; Jørgensen, John Bagterp
2009-01-01
Constrained optimal control problems for linear systems with linear constraints and an objective function consisting of linear and l1-norm terms can be expressed as linear programs. We develop an efficient primal-dual interior point algorithm for solution of such linear programs. The algorithm...
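The l1-term-to-LP reduction underlying this formulation can be shown in miniature: each |u_i| in the objective is replaced by a slack variable s_i with -s_i <= u_i <= s_i, and the sum of slacks is minimized. The tiny equality constraint below is a made-up stand-in for the MPC dynamics, and scipy's `linprog` stands in for the paper's specialized interior point solver:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize ||u||_1 subject to a linear constraint, as an LP in (u, s):
#   minimize sum(s)  s.t.  u - s <= 0,  -u - s <= 0,  (linear constraints on u).
n = 2
c = np.concatenate([np.zeros(n), np.ones(n)])    # objective: sum of slacks

I = np.eye(n)
A_ub = np.block([[I, -I],                        #  u - s <= 0
                 [-I, -I]])                      # -u - s <= 0
b_ub = np.zeros(2 * n)

# Illustrative linear constraint on u (here: u1 + u2 = 1).
A_eq = np.concatenate([np.ones(n), np.zeros(n)])[None, :]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * n + [(0, None)] * n)
print(res.fun)    # minimum l1 norm subject to u1 + u2 = 1
```

At the optimum each s_i equals |u_i|, so the LP value is exactly the minimum l1 norm, here 1.0.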
Directory of Open Access Journals (Sweden)
Aamir Hussain
2016-06-01
Full Text Available This paper presents the design optimization of a linear permanent magnet (PM) generator for wave energy conversion using the finite element method (FEM). A linear PM generator with a triangular-shaped magnet is proposed, which has higher electromagnetic characteristics, superior performance and low weight as compared to a conventional linear PM generator with a rectangular-shaped magnet. The Individual Parameter (IP) optimization technique is employed in order to optimize and achieve optimum performance of the linear PM generator. The objective function, the optimization variables (the magnet angle M_θ, the pole-width ratio P_w = τ_p/τ_mz, and the split ratio between translator and stator δ_a = R_m/R_e), and the constraints are defined. The efficiency and its main parts, copper and iron loss, are computed using time-stepping FEM. The optimal values after optimization, which yield the highest efficiency, are presented.
Large-scale linear programs in planning and prediction.
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Optimal Stopping with Information Constraint
International Nuclear Information System (INIS)
Lempa, Jukka
2012-01-01
We study the optimal stopping problem proposed by Dupuis and Wang (Adv. Appl. Probab. 34:141–157, 2002). In this maximization problem of the expected present value of the exercise payoff, the underlying dynamics follow a linear diffusion. The decision maker is not allowed to stop at any time she chooses but rather on the jump times of an independent Poisson process. Dupuis and Wang (Adv. Appl. Probab. 34:141–157, 2002), solve this problem in the case where the underlying is a geometric Brownian motion and the payoff function is of American call option type. In the current study, we propose a mild set of conditions (covering the setup of Dupuis and Wang in Adv. Appl. Probab. 34:141–157, 2002) on both the underlying and the payoff and build and use a Markovian apparatus based on the Bellman principle of optimality to solve the problem under these conditions. We also discuss the interpretation of this model as optimal timing of an irreversible investment decision under an exogenous information constraint.
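The information constraint can be made concrete with a Monte Carlo sketch: the decision maker observes a geometric Brownian motion but may only stop at the arrival times of an independent Poisson process. All parameters and the threshold policy below are illustrative, not taken from the paper.

```python
import numpy as np

# Value a threshold policy for stopping a GBM with payoff (S-K)^+, where
# stopping is only allowed at jump times of a Poisson process of rate lam,
# as in the Dupuis-Wang setup. Parameters are hypothetical.
rng = np.random.default_rng(0)
S0, K, r, sigma, mu = 1.0, 1.0, 0.05, 0.2, 0.03
lam, threshold, horizon, n_paths = 2.0, 1.3, 50.0, 5000

payoffs = np.zeros(n_paths)
for i in range(n_paths):
    t, S = 0.0, S0
    while t < horizon:
        dt = rng.exponential(1.0 / lam)      # wait for the next Poisson arrival
        t += dt
        if t >= horizon:
            break
        # exact GBM transition between arrivals
        S *= np.exp((mu - 0.5 * sigma**2) * dt
                    + sigma * np.sqrt(dt) * rng.normal())
        if S >= threshold:                   # stop at first allowed time above threshold
            payoffs[i] = np.exp(-r * t) * max(S - K, 0.0)
            break

print(f"estimated value of threshold policy: {payoffs.mean():.4f}")
```

Sweeping the threshold and maximizing the estimate approximates the optimal policy the paper characterizes analytically.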
Introducing Linear Functions: An Alternative Statistical Approach
Nolan, Caroline; Herbert, Sandra
2015-01-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…
General Constraints on Sampling Wildlife on FIA Plots
Larissa L. Bailey; John R. Sauer; James D. Nichols; Paul H. Geissler
2005-01-01
This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species...
Learning With Mixed Hard/Soft Pointwise Constraints.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-09-01
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
Searching for genomic constraints
Energy Technology Data Exchange (ETDEWEB)
Liò, P. [Cambridge Univ. (United Kingdom). Genetics Dept.]; Ruffo, S. [Florence Univ. (Italy). Fac. di Ingegneria, Dipt. di Energetica 'S. Stecco']
1998-01-01
The authors have analyzed general properties of very long DNA sequences belonging to simple and complex organisms, using different correlation methods. They have distinguished the base compositional rules that concern the entire genome, which they call 'genomic constraints', from the rules that depend on the 'external natural selection' acting on single genes, i.e. protein-centered constraints. They show that G + C content, purine/pyrimidine distributions and the biological complexity of the organism are the most important factors determining base compositional rules and genome complexity. Three main facts are reported here: bacteria with high G + C content have more restrictions on base composition than those with low G + C content; at constant G + C content, more complex organisms, ranging from prokaryotes to higher eukaryotes (e.g. human), display an increase of repeats 10-20 nucleotides long, which are also partly responsible for long-range correlations; word selection of length 3 to 10 is stronger in human and in bacteria, for two distinct reasons. With respect to previous studies, they have also compared the genomic sequence of the archaeon Methanococcus jannaschii with those of bacteria and eukaryotes: it sometimes shows an intermediate statistical behaviour.
Analysis of Linear Hybrid Systems in CLP
DEFF Research Database (Denmark)
Banda, Gourinath; Gallagher, John Patrick
2009-01-01
In this paper we present a procedure for representing the semantics of linear hybrid automata (LHAs) as constraint logic programs (CLP); flexible and accurate analysis and verification of LHAs can then be performed using generic CLP analysis and transformation tools. LHAs provide an expressive...
Non-linear Capital Taxation Without Commitment
Emmanuel Farhi; Christopher Sleet; Iván Werning; Sevin Yeltekin
2012-01-01
We study efficient non-linear taxation of labour and capital in a dynamic Mirrleesian model incorporating political economy constraints. Policies are chosen sequentially over time, without commitment. Our main result is that the marginal tax on capital income is progressive, in the sense that richer agents face higher marginal tax rates. Copyright , Oxford University Press.
Linear contextual modal type theory
DEFF Research Database (Denmark)
Schack-Nielsen, Anders; Schürmann, Carsten
Abstract. When one implements a logical framework based on linear type theory, for example the Celf system [?], one is immediately confronted with questions about its equational theory and how to deal with logic variables. In this paper, we propose linear contextual modal type theory that gives...... a mathematical account of the nature of logic variables. Our type theory is conservative over intuitionistic contextual modal type theory proposed by Nanevski, Pfenning, and Pientka. Our main contributions include a mechanically checked proof of soundness and a working implementation....
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
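The round-then-explore idea can be sketched as follows. The toy problem, the floor-based rounding, and the ±1 unit neighbourhood are illustrative simplifications, not the original TURBO Pascal implementation.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Sketch of the IESIP idea: solve the LP relaxation, round to a feasible
# integer point, then make unit exploratory moves (Hooke-Jeeves style)
# until no neighbour improves the objective.
# Hypothetical problem: minimize c@x  s.t.  A@x <= b,  x >= 0 integer.
c = np.array([-3.0, -2.0])
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([10.0, 14.0])

def feasible(x):
    return np.all(x >= 0) and np.all(A @ x <= b + 1e-9)

# 1. continuous relaxation
rel = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# 2. round down to get a feasible integer starting point (valid here
#    because A and b are nonnegative; a real code needs a repair step)
x = np.floor(rel.x)
assert feasible(x)

# 3. unit-neighbourhood exploratory search
improved = True
while improved:
    improved = False
    for step in itertools.product([-1, 0, 1], repeat=len(x)):
        y = x + np.array(step)
        if any(step) and feasible(y) and c @ y < c @ x:
            x, improved = y, True
            break

print("integer solution:", x, "objective:", c @ x)
```

For this data the relaxation optimum is (3.2, 3.6); rounding gives (3, 3) and the exploratory search then reaches the integer optimum (4, 2).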
An Approach for Solving Linear Fractional Programming Problems
Andrew Oyakhobo Odior
2012-01-01
Linear fractional programming problems are useful tools in production planning, financial and corporate planning, health care and hospital planning and as such have attracted considerable research interest. The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebr...
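The paper's own algebraic approach is not reproduced here; as a point of comparison, the classical Charnes-Cooper substitution reduces such a problem to an ordinary LP (the data below are hypothetical).

```python
import numpy as np
from scipy.optimize import linprog

# Charnes-Cooper transformation: turn
#   maximize (c@x) / (d@x + d0)  s.t.  A@x <= b, x >= 0   (denominator > 0)
# into an LP in (y, t) with y = t*x, t = 1/(d@x + d0), and recover x = y/t.
c = np.array([2.0, 3.0])                 # numerator coefficients
d = np.array([1.0, 1.0]); d0 = 1.0       # denominator coefficients
A = np.array([[1.0, 1.0]]); b = np.array([3.0])
n = len(c)

# Variables z = [y, t]: maximize c@y  ->  minimize -c@y
c_z = np.concatenate([-c, [0.0]])
A_ub = np.hstack([A, -b.reshape(-1, 1)])          # A y - b t <= 0
A_eq = np.concatenate([d, [d0]]).reshape(1, -1)   # d@y + d0*t = 1
res = linprog(c_z, A_ub=A_ub, b_ub=np.zeros(len(b)),
              A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1), method="highs")
y, t = res.x[:n], res.x[n]
x = y / t
print("optimal x:", x, "ratio:", (c @ x) / (d @ x + d0))
```

For this data the optimum is x = (0, 3) with ratio 9/4.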
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Constraints on Gauge Field Production during Inflation
DEFF Research Database (Denmark)
Nurmi, Sami; Sloth, Martin Snoager
2014-01-01
In order to gain new insights into the gauge field couplings in the early universe, we consider the constraints on gauge field production during inflation imposed by requiring that their effect on the CMB anisotropies is subdominant. In particular, we calculate systematically the bispectrum...... of the primordial curvature perturbation induced by the presence of vector gauge fields during inflation. Using a model independent parametrization in terms of magnetic non-linearity parameters, we calculate for the first time the contribution to the bispectrum from the cross correlation between the inflaton...
Linear polarization of BY Draconis
International Nuclear Information System (INIS)
Koch, R.H.; Pfeiffer, R.J.
1976-01-01
Linear polarization measurements are reported in four bandpasses for the flare star BY Dra. The red polarization is intrinsically variable at a confidence level greater than 99 percent. On a time scale of many months, the variability is not phase-locked to either a rotational or a Keplerian ephemeris. The observations of the three other bandpasses are useful principally to indicate a polarization spectrum rising toward shorter wavelengths
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under uncertain conditions of coexisting fuzzy, stochastic, and interval information, conventional linear programming approaches that integrate the fuzzy method with the other two have been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, requiring fewer constraints and significantly less computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions were generated. The solution can help quantify the relationship between changes in system cost and the uncertainties, which could support further analysis of the tradeoffs between waste management cost and system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
Understanding Brown Dwarf Variability
Marley, Mark S.
2013-01-01
Surveys of brown dwarf variability continue to find that roughly half of all brown dwarfs are variable. While variability is observed amongst all types of brown dwarfs, amplitudes are typically greatest for L-T transition objects. In my talk I will discuss the possible physical mechanisms that are responsible for the observed variability. I will particularly focus on comparing and contrasting the effects of changes in atmospheric thermal profile and cloud opacity. The two different mechanisms will produce different variability signatures and I will discuss the extent to which the current datasets constrain both mechanisms. By combining constraints from studies of variability with existing spectral and photometric datasets we can begin to construct and test self-consistent models of brown dwarf atmospheres. These models not only aid in the interpretation of existing objects but also inform studies of directly imaged giant planets.
Supergravity constraints on monojets
International Nuclear Information System (INIS)
Nandi, S.
1986-01-01
In the standard model, supplemented by N = 1 minimal supergravity, all the supersymmetric particle masses can be expressed in terms of a few unknown parameters. The resulting mass relations, and the laboratory and cosmological bounds on these superpartner masses, are used to put constraints on the supersymmetric origin of the CERN monojets. The latest MAC data at PEP exclude scalar quarks with masses up to 45 GeV as the origin of these monojets. The cosmological bounds, for a stable photino, exclude the mass range necessary for the light gluino-heavy squark production interpretation. These difficulties can be avoided by going beyond the minimal supergravity theory. Irrespective of the monojets, the importance of the stable photino as a source of the cosmological dark matter is emphasized
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Valencia, Frank Dan
Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies...... temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronous timed reactive systems. The calculus is built upon few basic ideas but it captures several aspects of timed systems. As tcc, ntcc...... structures, robotic devices, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully-abstract for a substantial fragment...
Algorithms and ordering heuristics for distributed constraint satisfaction problems
Wahbi , Mohamed
2013-01-01
DisCSP (Distributed Constraint Satisfaction Problem) is a general framework for solving distributed problems arising in Distributed Artificial Intelligence.A wide variety of problems in artificial intelligence are solved using the constraint satisfaction problem paradigm. However, there are several applications in multi-agent coordination that are of a distributed nature. In this type of application, the knowledge about the problem, that is, variables and constraints, may be logically or geographically distributed among physical distributed agents. This distribution is mainly due to p
Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint
Rodrigues, José Francisco; Santos, Lisa
2012-08-01
We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.
Linear programming based on neural networks for radiotherapy treatment planning
International Nuclear Information System (INIS)
Xingen Wu; Limin Luo
2000-01-01
In this paper, we propose a neural network model for linear programming that is designed to optimize radiotherapy treatment planning (RTP). This kind of neural network can be easily implemented by using a kind of 'neural' electronic system in order to obtain an optimization solution in real time. We first give an introduction to the RTP problem and construct a non-constraint objective function for the neural network model. We adopt a gradient algorithm to minimize the objective function and design the structure of the neural network for RTP. Compared to traditional linear programming methods, this neural network model can reduce the time needed for convergence, the size of problems (i.e., the number of variables to be searched) and the number of extra slack and surplus variables needed. We obtained a set of optimized beam weights that result in a better dose distribution as compared to that obtained using the simplex algorithm under the same initial condition. The example presented in this paper shows that this model is feasible in three-dimensional RTP. (author)
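The idea of trading constraints for a penalized objective minimized by gradient descent can be sketched as follows. The problem data, penalty weight and step size are illustrative; the paper's actual network structure and radiotherapy data are not reproduced.

```python
import numpy as np

# Fold LP constraints into a quadratic penalty and run plain gradient
# descent -- the computation a "neural" analog circuit would perform:
#   minimize  c@x + (mu/2)*( ||relu(A@x - b)||^2 + ||relu(-x)||^2 )
# Hypothetical LP: min x1 + x2  s.t.  x1 + 2*x2 >= 4,  3*x1 + x2 >= 6, x >= 0,
# written below in the form A@x <= b.
c = np.array([1.0, 1.0])
A = np.array([[-1.0, -2.0], [-3.0, -1.0]])
b = np.array([-4.0, -6.0])
mu, lr = 100.0, 5e-4

x = np.zeros(2)
for _ in range(20000):
    viol = np.maximum(A @ x - b, 0.0)                  # constraint violations
    grad = c + mu * (A.T @ viol) - mu * np.maximum(-x, 0.0)
    x -= lr * grad
print("approximate solution:", x)
```

The exact LP optimum here is (1.6, 1.2); the penalty method converges to a point slightly inside the violated region, with bias of order 1/mu.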
Controlling attribute effect in linear regression
Calders, Toon; Karim, Asim A.; Kamiran, Faisal; Ali, Wasif Mohammad; Zhang, Xiangliang
2013-01-01
In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.
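The constrained least-squares construction can be sketched on synthetic data. The specific constraint below (equal mean predictions across the two groups of the attribute) is one illustrative choice, not necessarily the paper's exact formulation.

```python
import numpy as np

# Fit least squares subject to a linear constraint C@w = 0 that removes the
# effect of a binary attribute on the mean outcome, via the KKT system
#   [2 X^T X   C^T] [w  ]   [2 X^T y]
#   [  C        0 ] [lam] = [   0   ]
rng = np.random.default_rng(1)
n = 200
g = rng.integers(0, 2, n)                    # protected/batch attribute
X = np.column_stack([rng.normal(size=n), g.astype(float), np.ones(n)])
y = X @ np.array([1.5, 2.0, 0.3]) + rng.normal(scale=0.1, size=n)

# Constraint row: mean prediction in group 1 minus mean prediction in group 0
C = (X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)).reshape(1, -1)

K = np.block([[2 * X.T @ X, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([2 * X.T @ y, [0.0]])
w = np.linalg.solve(K, rhs)[:3]

pred = X @ w
gap = pred[g == 1].mean() - pred[g == 0].mean()
print("weights:", w, "group gap:", gap)
```

By construction the fitted model has zero mean-outcome gap between the groups, at the cost of a small increase in squared error relative to the unconstrained fit.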
Ostebee, Heath Michael; Ziminsky, Willy Steve; Johnson, Thomas Edward; Keener, Christopher Paul
2017-01-17
The present application provides a variable volume combustor for use with a gas turbine engine. The variable volume combustor may include a liner, a number of micro-mixer fuel nozzles positioned within the liner, and a linear actuator so as to maneuver the micro-mixer fuel nozzles axially along the liner.
Minimal Flavor Constraints for Technicolor
DEFF Research Database (Denmark)
Sakuma, Hidenori; Sannino, Francesco
2010-01-01
We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and masses...
Social Constraints on Animate Vision
National Research Council Canada - National Science Library
Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian
2000-01-01
.... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...
Perspectives on large Linear Colliders
International Nuclear Information System (INIS)
Richter, B.
1987-01-01
The accelerator community now generally agrees that the linear collider is the most cost-effective technology for reaching much higher center-of-mass energies than can be attained in the largest of the e+e- storage rings, LEP. Indeed, even as the first linear collider, the SLC at SLAC, is getting ready to begin operations, groups at SLAC, Novosibirsk, CERN and KEK are doing R and D and conceptual design studies on a next-generation machine in the 1 TeV energy region. In this perspectives talk I do not want to restrict my comments to any particular design, and so I will talk about a high-energy machine as the NLC, which is shorthand for the Next Linear Collider, taken to mean a machine with a center-of-mass energy somewhere in the 0.5 to 2 TeV range with sufficient luminosity to carry out a meaningful experimental program. I want to discuss three main items with you. The first is the interrelation of energy and luminosity requirements; these two items impose severe constraints on the accelerator builder. Next, I will give an introduction to linear collider design, concentrating on what goes on at the collision point, for still another constraint comes here from the beam-beam interaction, which further restricts the choices available to the accelerator builder. Then, I want to give my impressions of the state of the technology available for building these kinds of machines within the next decade
Modifier constraint in alkali borophosphate glasses using topological constraint theory
Energy Technology Data Exchange (ETDEWEB)
Li, Xiang [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)]; Zeng, Huidan, E-mail: hdzeng@ecust.edu.cn [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)]; Jiang, Qi [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)]; Zhao, Donghui [Unifrax Corporation, Niagara Falls, NY 14305 (United States)]; Chen, Guorong [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)]; Wang, Zhaofeng; Sun, Luyi [Department of Chemical & Biomolecular Engineering and Polymer Program, Institute of Materials Science, University of Connecticut, Storrs, CT 06269 (United States)]; Chen, Jianding [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)]
2016-12-01
In recent years, composition-dependent properties of glasses have been successfully predicted using the topological constraint theory. The constraints of the glass network derive from two main parts: network formers and network modifiers. The constraints of the network formers can be calculated on the basis of the topological structure of the glass. However, the latter cannot be accurately calculated in this way, because of the existence of ionic bonds. In this paper, the constraints of the modifier ions in phosphate glasses were thoroughly investigated using the topological constraint theory. The results show that the constraints of the modifier ions increase gradually with the addition of alkali oxides. Furthermore, an improved topological constraint theory for borophosphate glasses is proposed by taking the composition-dependent constraints of the network modifiers into consideration. The proposed theory is subsequently evaluated by analyzing the composition dependence of the glass transition temperature in alkali borophosphate glasses. This method is expected to be extensible to other similar glass systems containing alkali ions.
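The constraint-counting arithmetic underlying the theory can be illustrated generically (Phillips-Thorpe counting for a simple covalent network, not the paper's borophosphate model):

```python
# An atom of coordination r contributes r/2 bond-stretching and 2r - 3
# bond-bending constraints; the network turns rigid when the constraint
# count per atom reaches the 3 translational degrees of freedom.
def constraints_per_atom(r):
    """Bond-stretching (r/2) plus bond-bending (2r - 3) constraints."""
    return r / 2 + (2 * r - 3)

# Rigidity threshold: solve r/2 + 2r - 3 = 3  ->  2.5 r = 6
r_c = 6.0 / 2.5
assert abs(constraints_per_atom(r_c) - 3.0) < 1e-12
print("rigidity threshold <r> =", r_c)   # 2.4, the Phillips-Thorpe value
```

The paper's refinement amounts to making the modifier-ion contribution to this count composition-dependent rather than fixed.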
Source Coding in Networks with Covariance Distortion Constraints
DEFF Research Database (Denmark)
Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt
2016-01-01
results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...
Linearized motion estimation for articulated planes.
Datta, Ankur; Sheikh, Yaser; Kanade, Takeo
2011-04-01
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
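The equality-constrained least-squares step described here can be sketched generically; the plane/homography parametrization is omitted and the tiny system below is synthetic.

```python
import numpy as np

# Minimize ||J p - r||^2 subject to articulation constraints G p = h by
# solving the Karush-Kuhn-Tucker system directly.
def constrained_lsq(J, r, G, h):
    m = G.shape[0]
    K = np.block([[J.T @ J, G.T], [G, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([J.T @ r, h]))
    return sol[:J.shape[1]]

# Tiny synthetic check: two scalar "plane motions" p = (p0, p1) forced equal,
# standing in for parameters shared across an articulation.
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
r = np.array([1.0, 3.0, 4.0])
G = np.array([[1.0, -1.0]]); h = np.array([0.0])
p = constrained_lsq(J, r, G, h)
print(p)   # both entries equal by construction
```

Solving one joint KKT system for all planes is what lets textureless or partially visible planes borrow information from their articulated neighbours.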
Constraints based analysis of extended cybernetic models.
Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M
2015-11-01
The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Robust synergetic control design under inputs and states constraints
Rastegar, Saeid; Araújo, Rui; Sadati, Jalil
2018-03-01
In this paper, a novel robust-constrained control methodology for discrete-time linear parameter-varying (DT-LPV) systems is proposed based on a synergetic control theory (SCT) approach. It is shown that in DT-LPV systems without uncertainty, and for any unmeasured bounded additive disturbance, the proposed controller accomplishes the goal of stabilising the system by asymptotically driving the error of the controlled variable to a bounded set containing the origin and then maintaining it there. Moreover, given an uncertain DT-LPV system jointly subject to unmeasured and constrained additive disturbances, and constraints on states, input commands and reference signals (set points), invariant set theory is used to find an appropriate polyhedral robust invariant region in which the proposed control framework is guaranteed to robustly stabilise the closed-loop system. Furthermore, this is achieved even in the case of varying non-zero control set points in such uncertain DT-LPV systems. The controller is characterised by a simple structure leading to an easy implementation and a non-complex design process. The effectiveness of the proposed method and the implications of the controller design for feasibility and closed-loop performance are demonstrated through application examples: temperature control of a continuous stirred-tank reactor plant, control of a real coupled DC motor plant, and an open-loop unstable system example.
Ashtekar formalism with real variables
International Nuclear Information System (INIS)
Kalau, W.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica
1990-12-01
A new approach to canonical gravity is presented which is based on the Ashtekar formalism. In contrast to Ashtekar's variables, however, this formulation neither requires complex quantities nor leads to second-class constraints. This is achieved by using SO(3,1) as the gauge group instead of complexified SO(3). Because of the larger group, additional first-class constraints are needed, which turn out to be cubic and quartic in the momenta. (author). 13 refs
Electrometry - constraints and benefits
International Nuclear Information System (INIS)
Sabol, J.
1980-01-01
The main parameters are defined and described of an electrometer, including input resistance, input quiescent current, current noise equivalent, voltage and current stability, minimum input capacity, response time, time constant, range, accuracy, linearity, a-c component suppression, and zero drift. The limiting factors in measurement mainly include temperature noise, insulator quality, radioactivity background, electrostatic and electromagnetic interference, contact potential difference, and resistor stability. Electrometers are classified into three basic groups, viz., electrostatic electrometers, d-c amplifier-based electrometers (electron tube electrometers and FET electrometers), electrometers with modulation of measured signal (electrometers using vibration capacitors, electrometers with varactors). Diagrams and specifications are presented for selected electrometers. (J.B.)
SUSY Without Prejudice at Linear Colliders
International Nuclear Information System (INIS)
Rizzo, T.
2008-01-01
We explore the physics of the general CP-conserving MSSM with Minimal Flavor Violation, the pMSSM. The 19 soft SUSY breaking parameters are chosen so as to satisfy all existing experimental and theoretical constraints, assuming that the WIMP is the lightest neutralino. We scan this parameter space twice, using both flat and log priors, and compare the results, which yield similar conclusions. Constraints from both LEP and the Tevatron play an important role in obtaining our final model samples. Implications for future TeV-scale e+e− linear colliders (LC) are discussed
Hamiltonian analysis for linearly acceleration-dependent Lagrangians
Energy Technology Data Exchange (ETDEWEB)
Cruz, Miguel, E-mail: miguelcruz02@uv.mx, E-mail: roussjgc@gmail.com, E-mail: molgado@fc.uaslp.mx, E-mail: efrojas@uv.mx; Gómez-Cortés, Rosario, E-mail: miguelcruz02@uv.mx, E-mail: roussjgc@gmail.com, E-mail: molgado@fc.uaslp.mx, E-mail: efrojas@uv.mx; Rojas, Efraín, E-mail: miguelcruz02@uv.mx, E-mail: roussjgc@gmail.com, E-mail: molgado@fc.uaslp.mx, E-mail: efrojas@uv.mx [Facultad de Física, Universidad Veracruzana, 91000 Xalapa, Veracruz, México (Mexico); Molgado, Alberto, E-mail: miguelcruz02@uv.mx, E-mail: roussjgc@gmail.com, E-mail: molgado@fc.uaslp.mx, E-mail: efrojas@uv.mx [Facultad de Ciencias, Universidad Autónoma de San Luis Potosí, Avenida Salvador Nava S/N Zona Universitaria, CP 78290 San Luis Potosí, SLP, México (Mexico)
2016-06-15
We study the constrained Ostrogradski-Hamilton framework for the equations of motion of mechanical systems described by second-order derivative actions with a linear dependence on the accelerations. We stress the peculiar features introduced by the surface terms arising for this type of theory, and we discuss some important properties of this kind of action in order to pave the way for the construction of a well-defined quantum counterpart by means of canonical methods. In particular, we analyse in detail the constraint structure of these theories and its relation to the inherent conserved quantities, where the associated energies together with a Noether charge may be identified. The constraint structure is fully analyzed without the introduction of auxiliary variables, as proposed in recent works involving higher-order Lagrangians. Finally, we also provide some examples where our approach is explicitly applied, and we emphasize the way in which our original arrangement proves propitious for the Hamiltonian formulation of covariant field theories.
Observational constraints on interstellar chemistry
International Nuclear Information System (INIS)
Winnewisser, G.
1984-01-01
The author points out presently existing observational constraints in the detection of interstellar molecular species and the limits they may cast on our knowledge of interstellar chemistry. The constraints which arise from the molecular side are summarised and some technical difficulties encountered in detecting new species are discussed. Some implications for our understanding of molecular formation processes are considered. (Auth.)
Market segmentation using perceived constraints
Jinhee Jun; Gerard Kyle; Andrew Mowen
2008-01-01
We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...
Fixed Costs and Hours Constraints
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
An Introduction to 'Creativity Constraints'
DEFF Research Database (Denmark)
Onarheim, Balder; Biskjær, Michael Mose
2013-01-01
Constraints play a vital role as both restrainers and enablers in innovation processes by governing what the creative agent/s can and cannot do, and what the output can and cannot be. Notions of constraints are common in creativity research, but current contributions are highly dispersed due to n...
Constraint Programming for Context Comprehension
DEFF Research Database (Denmark)
Christiansen, Henning
2014-01-01
A close similarity is demonstrated between context comprehension, such as discourse analysis, and constraint programming. The constraint store takes the role of a growing knowledge base learned throughout the discourse, and a suitable constraint solver does the job of incorporating new pieces...
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated... The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions...
Correlation-based decimation in constraint satisfaction problems
International Nuclear Information System (INIS)
Higuchi, Saburo; Mezard, Marc
2010-01-01
We study hard constraint satisfaction problems using decimation algorithms based on mean-field approximations. The message-passing approach is used to estimate, besides the usual one-variable marginals, the pair correlation functions. The identification of strongly correlated pairs makes it possible to use a new decimation procedure, in which the relative orientation of a pair of variables is fixed. We apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems for which the usual belief-propagation-guided decimation performs poorly. The pair-decimation approach provides a significant improvement.
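As a toy illustration of correlation-guided decimation, the sketch below enumerates all solutions of a small binary constraint satisfaction problem exactly (no message passing), computes one-variable marginals and pair correlations, and fixes the relative orientation of the most correlated pair. The constraint set is made up for illustration; a real solver would estimate these quantities with belief propagation on a large instance.

```python
from itertools import product

# Toy CSP over binary variables x0..x3 (hypothetical constraints):
# x0 XOR x1, x1 OR x2, x2 == x3.
constraints = [
    lambda x: x[0] ^ x[1] == 1,
    lambda x: x[1] or x[2],
    lambda x: x[2] == x[3],
]

solutions = [x for x in product((0, 1), repeat=4)
             if all(c(x) for c in constraints)]
n = len(solutions)

# One-variable marginals <x_i> and pair correlations <x_i x_j> - <x_i><x_j>.
mean = [sum(s[i] for s in solutions) / n for i in range(4)]
corr = {}
for i in range(4):
    for j in range(i + 1, 4):
        m_ij = sum(s[i] * s[j] for s in solutions) / n
        corr[(i, j)] = m_ij - mean[i] * mean[j]

# Decimate the most strongly correlated pair: fix its relative orientation
# (equal values if the correlation is positive, opposite if negative).
(i, j), c = max(corr.items(), key=lambda kv: abs(kv[1]))
print(f"fix pair ({i},{j}), orientation {'equal' if c > 0 else 'opposite'}")
```

Fixing only the *relative* orientation of a pair, rather than the value of a single variable, is what distinguishes pair decimation from ordinary belief-propagation-guided decimation.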
Constraint-preserving boundary treatment for a harmonic formulation of the Einstein equations
Energy Technology Data Exchange (ETDEWEB)
Seiler, Jennifer; Szilagyi, Bela; Pollney, Denis; Rezzolla, Luciano [Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, Golm (Germany)
2008-09-07
We present a set of well-posed constraint-preserving boundary conditions for a first-order in time, second-order in space, harmonic formulation of the Einstein equations. The boundary conditions are tested using robust stability, linear and nonlinear waves, and are found to be both less reflective and constraint preserving than standard Sommerfeld-type boundary conditions.
International Nuclear Information System (INIS)
Le Duff, J.
1987-12-01
The basic philosophy, performance, and technical constraints of linear e+e− colliders at TeV energies are summarized. Collider luminosity, pinch effects due to beam interaction, beam-beam bremsstrahlung, and typical parameters for an e+e− linear collider are discussed. Accelerating structures, HF power sources, electron guns, positron production, and storage rings are considered [fr]
Vocabulary Constraint on Texts
Directory of Open Access Journals (Sweden)
C. Sutarsyah
2008-01-01
Full Text Available This case study was carried out in the English Education Department of the State University of Malang. The aim of the study was to identify and describe the vocabulary in the reading texts and to determine whether the texts are useful for reading skill development. A descriptive qualitative design was applied to obtain the data. For this purpose, some available computer programs were used to find the description of vocabulary in the texts. It was found that the 20 texts, containing 7,945 words, are dominated by low-frequency words, which account for 16.97% of the words in the texts. The high-frequency words occurring in the texts were dominated by function words. In the case of word levels, it was found that the texts have a very limited number of words from the GSL (General Service List of English Words; West, 1953): the proportion of the first 1,000 words of the GSL only accounts for 44.6%. The data also show that the texts contain too large a proportion of words which are not in the three levels (the first 2,000 and the UWL); these words account for 26.44% of the running words in the texts. It is believed that these constraints are due to the selection of the texts, which are made up of a series of short, unrelated texts. This kind of text is subject to the accumulation of low-frequency words, especially content words, and a limited number of words from the GSL. It could also defeat the development of students' reading skills and vocabulary enrichment.
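The coverage profiling described in the study amounts to counting running words and measuring what share is covered by a high-frequency word list. A minimal sketch, where the corpus and the word list are tiny stand-ins rather than the GSL:

```python
from collections import Counter
import re

# Stand-in corpus and a hypothetical "high-frequency" word list
# (the study used the General Service List; this list is illustrative only).
text = "The cat sat on the mat. The mat was flat and the cat was fat."
high_freq = {"the", "on", "was", "and", "a", "of", "to", "in"}

tokens = re.findall(r"[a-z]+", text.lower())
counts = Counter(tokens)

# Coverage: share of running words drawn from the high-frequency list.
covered = sum(n for w, n in counts.items() if w in high_freq)
coverage = 100.0 * covered / len(tokens)
print(f"{len(tokens)} running words, high-frequency coverage {coverage:.1f}%")
```

The same per-list tallies, computed against the first and second 1,000 GSL words and the UWL, would reproduce the coverage percentages the study reports.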
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
International Nuclear Information System (INIS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-01-01
We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and a Fokker-Planck equation which naturally lead to the correct path-integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model, and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation covariant and vielbein-rotation invariant formalism. (orig.)
A balancing domain decomposition method by constraints for advection-diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Tu, Xuemin; Li, Jing
2008-12-10
The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A pre-conditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.
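At the heart of such methods sits a GMRES iteration on a nonsymmetric system. The sketch below is a minimal unpreconditioned GMRES (Arnoldi plus a small least-squares solve) applied to a 1D upwind advection-diffusion matrix; it omits the domain decomposition and preconditioning entirely and the problem sizes are made up.

```python
import numpy as np

def gmres(A, b, tol=1e-10, max_iter=None):
    # Minimal unpreconditioned GMRES: build an Arnoldi basis Q and Hessenberg
    # matrix H, then solve the small least-squares problem for the iterate.
    n = len(b)
    m = max_iter or n
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(m):
        v = A @ Q[:, k]
        for i in range(k + 1):              # modified Gram-Schmidt
            H[i, k] = Q[:, i] @ v
            v -= H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(A @ x - b) < tol * beta:
            break
    return x

# 1D advection-diffusion -u'' + c u' = f with upwind differences:
# the resulting matrix is nonsymmetric, as in the abstract's setting.
n, c, h = 50, 10.0, 1.0 / 51
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), -1)
     - np.diag(np.ones(n - 1), 1)) / h**2
A += c * (np.diag(np.ones(n)) - np.diag(np.ones(n - 1), -1)) / h
b = np.ones(n)
x = gmres(A, b)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

In the paper's method, this iteration runs on the Schur complement for the subdomain interface variables, with a partially sub-assembled problem solved as the preconditioner at each step.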
Sparse Linear Identifiable Multivariate Modeling
DEFF Research Database (Denmark)
Henao, Ricardo; Winther, Ole
2011-01-01
In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully... The method is benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable...
Correlations and Non-Linear Probability Models
DEFF Research Database (Denmark)
Breen, Richard; Holm, Anders; Karlson, Kristian Bernt
2014-01-01
Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
A Direct Heuristic Algorithm for Linear Programming
Indian Academy of Sciences (India)
Abstract. An O(n³), mathematically non-iterative heuristic procedure that needs no artificial variable is presented for solving linear programming problems. An optimality test is included. Numerical experiments depict the utility/scope of such a procedure.
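In the same non-iterative spirit, a tiny linear program can be solved directly by enumerating the vertices of its feasible polygon (intersections of constraint pairs) and keeping the best feasible one. This is a sketch with made-up numbers and does not reproduce the paper's algorithm; vertex enumeration scales poorly beyond toy sizes.

```python
import numpy as np
from itertools import combinations

# Toy LP: maximize c^T x subject to A x <= b and x >= 0 (two variables).
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# Fold the nonnegativity constraints -x <= 0 into the constraint list.
A_all = np.vstack([A, -np.eye(2)])
b_all = np.concatenate([b, np.zeros(2)])

best, best_val = None, -np.inf
for i, j in combinations(range(len(b_all)), 2):
    M = A_all[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                          # parallel constraints, no vertex
    x = np.linalg.solve(M, b_all[[i, j]])
    # Keep the vertex if it is feasible and improves the objective.
    if np.all(A_all @ x <= b_all + 1e-9) and c @ x > best_val:
        best, best_val = x, c @ x

print("optimum", best, "value", best_val)
```

Because an LP optimum (when one exists) is attained at a vertex, checking every constraint-pair intersection is a correct, if brute-force, direct procedure for two variables.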
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,
Machine tongues. X. Constraint languages
Energy Technology Data Exchange (ETDEWEB)
Levitt, D.
Constraint languages and programming environments will help the designer produce a lucid description of a problem domain, and then of particular situations and problems in it. Early versions of these languages were given descriptions of real-world domain constraints, like the operation of electrical and mechanical parts. More recently, the author has automated a vocabulary for describing musical jazz phrases, using a constraint language as a jazz improviser. General constraint languages will handle all of these domains. Once the model is in place, the system will connect built-in code fragments and algorithms to answer questions about situations; that is, to help solve problems. Bugs will surface not in code, but in designs themselves. 15 references.
DEFF Research Database (Denmark)
Stolpe, Mathias; Bendsøe, Martin P.
2007-01-01
This paper present some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single load minimum compliance topology design problem with a fixed...... finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities......) and cuts....
Orthogonal sparse linear discriminant analysis
Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun
2018-03-01
Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention. On the basis of LDA, researchers have done a great deal of work, and many variant versions of LDA have been proposed. However, the inherent problems of LDA are not solved well by these variants. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k-nearest-neighbour graph is first constructed to preserve the locality discriminant information of sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss function, which makes the proposed method robust to outliers in data points. Extensive experiments have been performed on several standard public image databases, and the experimental results demonstrate the performance of the proposed OSLDA algorithm.
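For reference, the classical two-class Fisher LDA baseline whose outlier sensitivity motivates the paper fits in a few lines of NumPy. The data here are synthetic and the setup is a sketch, not the OSLDA algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated Gaussian classes in 2-D (synthetic data).
X0 = rng.normal([0, 0], 0.5, size=(100, 2))
X1 = rng.normal([2, 1], 0.5, size=(100, 2))

# Classical Fisher LDA direction: w = Sw^{-1} (mu1 - mu0),
# where Sw is the within-class scatter.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2           # midpoint decision rule

pred0 = X0 @ w > threshold                # True would mean "class 1"
pred1 = X1 @ w > threshold
accuracy = (np.sum(~pred0) + np.sum(pred1)) / 200
print("training accuracy:", accuracy)
```

OSLDA replaces this global scatter formulation with a k-nearest-neighbour locality graph and an L2,1-norm term, which is what buys robustness to the outliers that distort Sw here.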
Quadratic Interpolation and Linear Lifting Design
Directory of Open Access Journals (Sweden)
Joel Solé
2007-03-01
Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR and also, coding results are given for the new update lifting steps.
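The core optimization step in such designs, minimizing a quadratic function subject to linear equality constraints, reduces to solving one linear KKT system. A minimal sketch with made-up numbers (not the paper's wavelet formulation):

```python
import numpy as np

# Minimize 0.5 x^T Q x - p^T x  subject to  A x = b,
# via the KKT system [[Q, A^T], [A, 0]] [x; lam] = [p; b].
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
p = np.array([2.0, 4.0])
A = np.array([[1.0, 1.0]])     # single equality constraint x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([p, b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print("x =", x, "multiplier =", lam)
```

In the lifting context, Q and A would encode the detail-signal energy (or approximation-gradient norm) and the wavelet-coefficient relations, and x the prediction or update filter taps.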
Determination of regression laws: Linear and nonlinear
International Nuclear Information System (INIS)
Onishchenko, A.M.
1994-01-01
A detailed mathematical determination of regression laws is presented in the article. Particular emphasis is placed on determining the laws of X_j on X_l to account for source nuclei decay and detector errors in nuclear physics instrumentation. Both linear and nonlinear relations are presented. Linearization of 19 functions is tabulated, including graph, relation, variable substitution, obtained linear function, and remarks. 6 refs., 1 tab.
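A typical entry in such a linearization table is the exponential law y = a·e^(bx), which the substitution Y = ln y turns into the linear relation Y = ln a + b x, fitted by ordinary least squares. A sketch with invented parameter values:

```python
import numpy as np

# Linearize y = a * exp(b x): with Y = ln y, we get Y = ln a + b x.
a_true, b_true = 2.0, -0.5
x = np.linspace(0.0, 4.0, 20)
y = a_true * np.exp(b_true * x)           # noiseless data for illustration

coeffs = np.polyfit(x, np.log(y), 1)      # degree-1 fit: [slope, intercept]
b_fit, a_fit = coeffs[0], np.exp(coeffs[1])
print(f"a = {a_fit:.4f}, b = {b_fit:.4f}")
```

With noisy counting data the substitution also reweights the errors, which is one reason the article tabulates remarks alongside each linearizing transformation.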
Fluid convection, constraint and causation
Bishop, Robert C.
2012-01-01
Complexity—nonlinear dynamics for my purposes in this essay—is rich with metaphysical and epistemological implications but is receiving sustained philosophical analysis only recently. I will explore some of the subtleties of causation and constraint in Rayleigh–Bénard convection as an example of a complex phenomenon, and extract some lessons for further philosophical reflection on top-down constraint and causation particularly with respect to causal foundationalism. PMID:23386955
An efficient formulation for linear and geometric non-linear membrane elements
Directory of Open Access Journals (Sweden)
Mohammad Rezaiee-Pajand
Full Text Available Utilizing the strain gradient notation process and the free formulation, an efficient way of constructing membrane elements is proposed. This strategy can be utilized for linear and geometric non-linear problems. In the suggested formulation, the optimization constraints of insensitivity to distortion, rotational invariance and absence of parasitic shear error are employed. In addition, the equilibrium equations are established based on some constraints among the strain states. The authors' technique can easily separate the rigid body motions from the deformational motions. In this article, a novel triangular element, named SST10, is formulated. This element is used in several plane problems having irregular meshes and complicated geometry, with linear and geometrically nonlinear behavior. The numerical outcomes clearly demonstrate the efficiency of the new formulation.
Linear ideal MHD stability calculations for ITER
International Nuclear Information System (INIS)
Hogan, J.T.
1988-01-01
A survey of MHD stability limits has been made to address issues arising from the MHD--poloidal field design task of the US ITER project. This is a summary report on the results obtained to date. The study evaluates the dependence of ballooning, Mercier and low-n ideal linear MHD stability on key system parameters to estimate overall MHD constraints for ITER. 17 refs., 27 figs
Linear and integer programming made easy
Hu, T C
2016-01-01
Linear and integer programming are fundamental toolkits for data and information science and technology, particularly in the context of today’s megatrends toward statistical optimization, machine learning, and big data analytics. Drawn from over 30 years of classroom teaching and applied research experience, this textbook provides a crisp and practical introduction to the basics of linear and integer programming. The authors’ approach is accessible to students from all fields of engineering, including operations research, statistics, machine learning, control system design, scheduling, formal verification, and computer vision. Readers will learn to cast hard combinatorial problems as mathematical programming optimizations, understand how to achieve formulations where the objective and constraints are linear, choose appropriate solution methods, and interpret results appropriately. •Provides a concise introduction to linear and integer programming, appropriate for undergraduates, graduates, a short cours...
Cut elimination in multifocused linear logic
DEFF Research Database (Denmark)
Guenot, Nicolas; Brock-Nannestad, Taus
2015-01-01
We study cut elimination for a multifocused variant of full linear logic in the sequent calculus. The multifocused normal form of proofs yields problems that do not appear in a standard focused system, related to the constraints in grouping rule instances in focusing phases. We show that cut...... elimination can be performed in a sensible way even though the proof requires some specific lemmas to deal with multifocusing phases, and discuss the difficulties arising with cut elimination when considering normal forms of proofs in linear logic....
Linear Regression Based Real-Time Filtering
Directory of Open Access Journals (Sweden)
Misel Batmend
2013-01-01
Full Text Available This paper introduces a real-time filtering method based on a linear least-squares fitted line. The method can be used when the filtered signal is linear; this constraint narrows the band of potential applications. Its advantage over the Kalman filter is that it is computationally less expensive. The paper further deals with the application of the introduced method to filtering data used to evaluate the position of engraved material with respect to the engraving machine. The filter was implemented in the CNC engraving machine control system. Experiments showing its performance are included.
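A minimal sketch of such a filter, assuming evenly sampled data and a sliding window over the most recent samples (window length, signal, and noise level are made up):

```python
import numpy as np

def linear_fit_filter(samples, window=8):
    # Filter each new sample by least-squares fitting a line to the last
    # `window` samples and evaluating that line at the newest time index.
    out = []
    for k in range(len(samples)):
        lo = max(0, k - window + 1)
        t = np.arange(lo, k + 1)
        if len(t) < 2:
            out.append(samples[k])        # not enough points to fit a line
            continue
        slope, intercept = np.polyfit(t, samples[lo:k + 1], 1)
        out.append(slope * k + intercept)
    return np.array(out)

# Noisy linear signal, e.g. position readings in arbitrary units.
rng = np.random.default_rng(1)
t = np.arange(50)
signal = 0.3 * t + 5.0
noisy = signal + rng.normal(0, 0.5, size=50)
filtered = linear_fit_filter(noisy)
print("rms error raw:     ", np.sqrt(np.mean((noisy - signal) ** 2)))
print("rms error filtered:", np.sqrt(np.mean((filtered - signal) ** 2)))
```

Each output uses only past samples, so the filter is causal and suitable for real-time use; unlike a Kalman filter it needs no process or measurement noise model, only the linearity assumption.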
Rezapour, Ehsan; Pettersen, Kristin Y; Liljebäck, Pål; Gravdahl, Jan T; Kelasidi, Eleni
This paper considers path following control of planar snake robots using virtual holonomic constraints. In order to present a model-based path following control design for the snake robot, we first derive the Euler-Lagrange equations of motion of the system. Subsequently, we define geometric relations among the generalized coordinates of the system, using the method of virtual holonomic constraints. These appropriately defined constraints shape the geometry of a constraint manifold for the system, which is a submanifold of the configuration space of the robot. Furthermore, we show that the constraint manifold can be made invariant by a suitable choice of feedback. In particular, we analytically design a smooth feedback control law to exponentially stabilize the constraint manifold. We show that enforcing the appropriately defined virtual holonomic constraints for the configuration variables implies that the robot converges to and follows a desired geometric path. Numerical simulations and experimental results are presented to validate the theoretical approach.
Mokhtarian, Patricia L.; Bagley, Michael N.; Salomon, Ilan
1998-01-01
Accurate forecasts of the adoption and impacts of telecommuting depend on an understanding of what motivates individuals to adopt telecommuting and what constraints prevent them from doing so, since these motivations and constraints offer insight into who is likely to telecommute under what circumstances. Telecommuting motivations and constraints are likely to differ by various segments of society. In this study, we analyze differences in these variables due to gender, occupation, and presenc...
Hyperbolicity and constrained evolution in linearized gravity
International Nuclear Information System (INIS)
Matzner, Richard A.
2005-01-01
Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations
New variables for classical and quantum gravity
Ashtekar, Abhay
1986-01-01
A Hamiltonian formulation of general relativity based on certain spinorial variables is introduced. These variables simplify the constraints of general relativity considerably and enable one to imbed the constraint surface in the phase space of Einstein's theory into that of Yang-Mills theory. The imbedding suggests new ways of attacking a number of problems in both classical and quantum gravity. Some illustrative applications are discussed.
Crab cavities for linear colliders
Burt, G; Carter, R; Dexter, A; Tahir, I; Beard, C; Dykes, M; Goudket, P; Kalinin, A; Ma, L; McIntosh, P; Shulte, D; Jones, Roger M; Bellantoni, L; Chase, B; Church, M; Khabouline, T; Latina, A; Adolphsen, C; Li, Z; Seryi, Andrei; Xiao, L
2008-01-01
Crab cavities have been proposed for a wide number of accelerators, and interest in crab cavities has recently increased after the successful operation of a pair of crab cavities in KEK-B. In particular, crab cavities are required for both the ILC and CLIC linear colliders for bunch alignment. Consideration of bunch structure and size constraints favours a 3.9 GHz superconducting multi-cell cavity as the solution for the ILC, whilst bunch structure and beam-loading considerations suggest an X-band copper travelling-wave structure for CLIC. These two cavity solutions are very different in design but share complex design issues. Phase stabilisation, beam loading, wakefields and mode damping are fundamental issues for these crab cavities. Requirements and potential design solutions are discussed for both colliders.
Cosmological Constraints on Mirror Matter Parameters
International Nuclear Information System (INIS)
Wallemacq, Quentin; Ciarcelluti, Paolo
2014-01-01
Up-to-date estimates of the cosmological parameters are presented as a result of numerical simulations of the cosmic microwave background and large scale structure, considering a flat Universe in which the dark matter is made entirely or partly of mirror matter and the primordial perturbations are scalar, adiabatic and in the linear regime. A statistical analysis using the Markov Chain Monte Carlo method makes it possible to obtain constraints on the cosmological parameters. As a result, we show that a Universe with pure mirror dark matter is statistically equivalent to the case of an admixture with cold dark matter. The upper limits for the ratio of the temperatures of the ordinary and mirror sectors are around 0.3 for both cosmological models, which show the presence of a dominant fraction of mirror matter, 0.06 ≲ Ω_mirror h² ≲ 0.12.
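The Markov Chain Monte Carlo machinery behind such parameter constraints can be illustrated with a one-parameter Metropolis sampler on toy Gaussian data. All numbers here are invented; a real analysis samples many cosmological parameters against CMB and large-scale-structure likelihoods.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy inference: recover the mean of Gaussian observations
# (flat prior, known sigma = 1) with a Metropolis sampler.
data = rng.normal(0.3, 1.0, size=200)

def log_post(mu):
    # Log posterior up to a constant: Gaussian likelihood, flat prior.
    return -0.5 * np.sum((data - mu) ** 2)

chain, mu = [], 0.0
for _ in range(5000):
    prop = mu + rng.normal(0, 0.2)        # symmetric random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop                         # accept; otherwise keep current mu
    chain.append(mu)

samples = np.array(chain[1000:])          # discard burn-in
print(f"posterior mean {samples.mean():.3f} +/- {samples.std():.3f}")
```

The chain's histogram approximates the posterior, so upper limits like the temperature-ratio bound quoted above correspond to quantiles of such samples.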
Comments on the nilpotent constraint of the goldstino superfield
Ghilencea, D M
2016-01-01
Superfield constraints were often used in the past, in particular to describe the Akulov-Volkov action of the goldstino by a superfield formulation with $L=(\Phi^\dagger \Phi)_D + [(f\Phi)_F + h.c.]$ endowed with the nilpotent constraint $\Phi^2=0$ for the goldstino superfield ($\Phi$). Inspired by this, such a constraint is often used to define the goldstino superfield even in the presence of additional superfields, for example in models of "nilpotent inflation". In this review we show that the nilpotent property is not valid in general, under the assumption of a microscopic (ultraviolet) description of the theory with linear supermultiplets. Sometimes only weaker versions of the nilpotent relation hold, such as $\Phi^3=0$ or $\Phi^4=0$.
Design of optimal linear antennas with maximally flat radiation patterns
Minkovich, B. M.; Mints, M. Ia.
1990-02-01
The paper presents an explicit solution to the problem of maximizing the aperture area utilization coefficient and obtaining the best approximation in the mean of the sectorial U-shaped radiation pattern of a linear antenna, when Butterworth flattening constraints are imposed on the approximating pattern. Constraints are established on the choice of the smallest and largest antenna dimensions that make it possible to obtain maximally flat patterns, having a low sidelobe level and free from ripples within the main lobe.
Self-scheduling and bidding strategies of thermal units with stochastic emission constraints
International Nuclear Information System (INIS)
Laia, R.; Pousinho, H.M.I.; Melíco, R.; Mendes, V.M.F.
2015-01-01
Highlights: • The management of thermal power plants is considered for different emission allowance levels. • The uncertainty on electricity price is considered by a set of scenarios. • A stochastic MILP approach allows devising the bidding strategies and hedging against price uncertainty and emission allowances. - Abstract: This paper addresses the self-scheduling problem for a thermal power producer taking part in a pool-based electricity market as a price-taker, with bilateral contracts and emission constraints. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and the proficiency of the proposed approach in supporting bidding strategies.
Directory of Open Access Journals (Sweden)
Hiroyuki Goto
2013-07-01
Full Text Available A model predictive control-based scheduler for a class of discrete event systems is designed and developed. We focus on repetitive, multiple-input, multiple-output, and directed acyclic graph structured systems on which capacity constraints can be imposed. The target system's behaviour is described by linear equations in max-plus algebra, referred to as the state-space representation. Assuming that the system's performance can be improved by paying additional cost, we adjust the system parameters and determine control inputs for which the reference output signals can be observed. The main contribution of this research is twofold: (1) for systems with capacity constraints, we derive an output prediction equation as a function of the adjustable variables in a recursive form; (2) regarding the construct for the system's representation, we improve the structure to accomplish general operations which are essential for adjusting the system parameters. The result of a numerical simulation in a later section demonstrates the effectiveness of the developed controller.
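In max-plus algebra, conventional addition is replaced by maximization and multiplication by addition. A minimal sketch of the state-space recursion x(k) = A ⊗ x(k-1) ⊕ B ⊗ u(k) described above (the matrices are illustrative event-time data, not taken from the paper):

```python
import numpy as np

NEG_INF = -np.inf  # the max-plus "zero" element

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)[i, j] = max_k (A[i, k] + B[k, j])."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

# hypothetical 2-state system: x(k) = A ⊗ x(k-1) ⊕ B ⊗ u(k)
A = np.array([[3.0, NEG_INF],
              [5.0, 2.0]])       # processing/transfer times
B = np.array([[0.0], [1.0]])

def step(x, u):
    return np.maximum(mp_mul(A, x), mp_mul(B, u))  # ⊕ is entrywise max

x0 = np.array([[0.0], [0.0]])
u1 = np.array([[1.0]])
x1 = step(x0, u1)   # event times after one step: [[3.], [5.]]
```

Iterating `step` over a horizon gives exactly the kind of output prediction equation, linear in the max-plus sense, that the scheduler builds on.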
Application of linear logic to simulation
Clarke, Thomas L.
1998-08-01
Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. Linear logic has provided natural encodings of Turing machines, Petri nets and other computational models. Linear logic is also capable of naturally modeling resource dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources; two instances of the same variable are considered differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction, two kinds of disjunction, and also introduces a modal storage operator that explicitly indicates propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc. All of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation will be discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.
Developmental constraints on behavioural flexibility.
Holekamp, Kay E; Swanson, Eli M; Van Meter, Page E
2013-05-19
We suggest that variation in mammalian behavioural flexibility not accounted for by current socioecological models may be explained in part by developmental constraints. From our own work, we provide examples of constraints affecting variation in behavioural flexibility, not only among individuals, but also among species and higher taxonomic units. We first implicate organizational maternal effects of androgens in shaping individual differences in aggressive behaviour emitted by female spotted hyaenas throughout the lifespan. We then compare carnivores and primates with respect to their locomotor and craniofacial adaptations. We inquire whether antagonistic selection pressures on the skull might impose differential functional constraints on evolvability of skulls and brains in these two orders, thus ultimately affecting behavioural flexibility in each group. We suggest that, even when carnivores and primates would theoretically benefit from the same adaptations with respect to behavioural flexibility, carnivores may nevertheless exhibit less behavioural flexibility than primates because of constraints imposed by past adaptations in the morphology of the limbs and skull. Phylogenetic analysis consistent with this idea suggests greater evolutionary lability in relative brain size within families of primates than carnivores. Thus, consideration of developmental constraints may help elucidate variation in mammalian behavioural flexibility.
The Impact of Credit Constraints on Housing Demand: Assessed with Endogenous Price and Expenditure
Li, Yarui; Leatham, David J.
2013-01-01
This article assesses the impact of credit constraints on housing demand with price and expenditure treated as endogenous variables. Using an Almost Ideal Demand System (AIDS) model, we find that the model without controlling for endogeneity tends to underestimate the impact of credit constraints on the budget shares, and that the estimates are less significant.
Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A
2016-08-01
The influence of the experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis -PCA- and Cluster Analysis -CA-) as well as a new algorithm based on linear regression was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed phase liquid chromatography/UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extractions in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were directly applied to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assignment of any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by the linear regression analysis applied on pairs of very large experimental data series successfully retain information resulting from high frequency instrumental acquisition rates, obviously better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in the Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between compared data. The instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods. Mass spectra produced through ionization from liquid state in atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origins provided an excellent data
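The regression-based comparison described above can be sketched as follows, with synthetic data standing in for real chromatograms (all values are illustrative assumptions):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# two hypothetical high-rate acquisitions of similar sample profiles
reference = rng.random(5000)
replicate = 0.95 * reference + 0.02 + rng.normal(0.0, 0.01, 5000)

# slope, intercept and correlation coefficient of the pairwise fit
fit = linregress(reference, replicate)
point = (fit.slope, fit.intercept, fit.rvalue)
# each comparison yields one such point; repeated comparisons trace out
# the ellipsoidal volumes in (slope, intercept, r) space
```

Distances between the resulting volumes can then be used to quantify (dis)similarity, as the abstract describes.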
Constraint programming and decision making
Kreinovich, Vladik
2014-01-01
In many application areas, it is necessary to make effective decisions under constraints. Several area-specific techniques are known for such decision problems; however, because these techniques are area-specific, it is not easy to apply each technique to other application areas. Cross-fertilization between different application areas is one of the main objectives of the annual International Workshops on Constraint Programming and Decision Making. Those workshops, held in the US (El Paso, Texas), in Europe (Lyon, France), and in Asia (Novosibirsk, Russia), from 2008 to 2012, have attracted researchers and practitioners from all over the world. This volume presents extended versions of selected papers from those workshops. These papers deal with all stages of decision making under constraints: (1) formulating the problem of multi-criteria decision making in precise terms, (2) determining when the corresponding decision problem is algorithmically solvable; (3) finding the corresponding algorithms, and making...
Directory of Open Access Journals (Sweden)
Deyin Yao
2014-01-01
Full Text Available This paper deals with the problem of robust model predictive control (RMPC for a class of linear time-varying systems with constraints and data losses. We take the polytopic uncertainties into account to describe the uncertain systems. First, we design a robust state observer by using the linear matrix inequality (LMI constraints so that the original system state can be tracked. Second, the MPC gain is calculated by minimizing the upper bound of infinite horizon robust performance objective in terms of linear matrix inequality conditions. The method of robust MPC and state observer design is illustrated by a numerical example.
A Planar Quasi-Static Constraint Mode Tire Model
2015-07-10
Ma, Rui; Ferris, John B.
The proposed model strikes a balance between heuristic tire models (such as a linear point-follower) that lack the fidelity to make accurate chassis load predictions and computationally intensive models.
Gauge transformations in relativistic two-particle constraint theory
International Nuclear Information System (INIS)
Jallouli, H.; Sazdjian, H.
1996-01-01
The forms of the local potentials in linear covariant gauges are investigated and relationships are found between them. The gauge transformation properties of the Green's function and of the Bethe-Salpeter wave function are reviewed. The infinitesimal gauge transformation laws of the constraint theory wave functions and potentials are determined. The case of the local approximation of potentials is considered. The general properties of the gauge transformations in the local approximation are studied. (K.A.)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
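As a short illustration of the kind of computer solution outlined above, a classic textbook LP solved with SciPy (the numbers are an arbitrary example, not from the report):

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 5*x2  subject to  x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18
c = np.array([-3.0, -5.0])          # linprog minimizes, so negate
A_ub = np.array([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")
# res.x ≈ [2, 6], objective value 36; the dual values reported by the
# solver are what dual-problem and reduced-cost analysis are built on
```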
Mancarella, P.; Terreni, G.; Sadri, F.; Toni, F.; Endriss, U.
2009-01-01
We present the CIFF proof procedure for abductive logic programming with constraints, and we prove its correctness. CIFF is an extension of the IFF proof procedure for abductive logic programming, relaxing the original restrictions over variable quantification (allowedness conditions) and
Energy Technology Data Exchange (ETDEWEB)
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
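SciPy exposes these FORTRAN routines directly, which gives a feel for the Level-1 and Level-3 operations BLAS standardized (the arrays below are arbitrary examples):

```python
import numpy as np
from scipy.linalg import blas

# Level-1 axpy: y <- a*x + y
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 1.0])
z = blas.daxpy(x, y, a=2.0)          # [3., 5., 7.]

# Level-3 gemm: C <- alpha * A @ B
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.eye(2)
C = blas.dgemm(alpha=1.0, a=A, b=B)
```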
Directory of Open Access Journals (Sweden)
Xin-Jia Meng
2015-01-01
Full Text Available Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of calculation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and then the process of searching for the MPP of multidisciplinary inverse reliability is performed based on the framework of CLA-CO. This method improves the MPP searching process through two elements. One is treating the discipline analyses as the equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline, and it also has a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with non-normally distributed variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
Linearity of holographic entanglement entropy
Energy Technology Data Exchange (ETDEWEB)
Almheiri, Ahmed [Stanford Institute for Theoretical Physics, Department of Physics,Stanford University, Stanford, CA 94305 (United States); Dong, Xi [School of Natural Sciences, Institute for Advanced Study,Princeton, NJ 08540 (United States); Swingle, Brian [Stanford Institute for Theoretical Physics, Department of Physics,Stanford University, Stanford, CA 94305 (United States)
2017-02-14
We consider the question of whether the leading contribution to the entanglement entropy in holographic CFTs is truly given by the expectation value of a linear operator as is suggested by the Ryu-Takayanagi formula. We investigate this property by computing the entanglement entropy, via the replica trick, in states dual to superpositions of macroscopically distinct geometries and find it consistent with evaluating the expectation value of the area operator within such states. However, we find that this fails once the number of semi-classical states in the superposition grows exponentially in the central charge of the CFT. Moreover, in certain such scenarios we find that the choice of surface on which to evaluate the area operator depends on the density matrix of the entire CFT. This nonlinearity is enforced in the bulk via the homology prescription of Ryu-Takayanagi. We thus conclude that the homology constraint is not a linear property in the CFT. We also discuss the existence of ‘entropy operators’ in general systems with a large number of degrees of freedom.
Stochastic population dynamics under resource constraints
Energy Technology Data Exchange (ETDEWEB)
Gavane, Ajinkya S., E-mail: ajinkyagavane@gmail.com; Nigam, Rahul, E-mail: rahul.nigam@hyderabad.bits-pilani.ac.in [BITS Pilani Hyderabad Campus, Shameerpet, Hyd - 500078 (India)
2016-06-02
This paper investigates the population growth of a certain species in which every generation reproduces thrice over a period of predefined time, under certain constraints of resources needed for survival of the population. We study the survival period of a species by randomizing the reproduction probabilities within a window at the same predefined ages, while the resources are produced by the working force of the population at a variable rate. This randomness in the reproduction rate makes the population growth stochastic in nature and one cannot predict the exact form of evolution. Hence we study the growth by running simulations for such a population and taking an ensemble average over 500 to 5000 such simulations as per the need. While the population reproduces in a stochastic manner, we have implemented a constraint on the amount of resources available for the population. This is important to make the simulations more realistic. The rate of resource production is then tuned to find the rate which suits the survival of the species. We also compute the mean life time of the species corresponding to different resource production rates. A study of these outcomes in the parameter space defined by the reproduction probabilities and the rate of resource production is carried out.
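A toy version of such an ensemble-averaged simulation (all rates and rules here are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(42)

def survival_time(p_repro=0.5, p_death=0.3, resource_rate=1.5,
                  consumption=1.0, steps=50):
    """One stochastic run: resources are produced by the population and
    consumed each step; the run ends when resources or population die out."""
    pop, resources = 10, 100.0
    for t in range(steps):
        resources += (resource_rate - consumption) * pop
        if resources <= 0 or pop <= 0:
            return t
        births = rng.binomial(pop, p_repro)   # stochastic reproduction
        deaths = rng.binomial(pop, p_death)
        pop += births - deaths
    return steps

# ensemble average, analogous to the paper's 500-5000 runs
lifetimes = [survival_time() for _ in range(500)]
mean_lifetime = float(np.mean(lifetimes))
```

Sweeping `resource_rate` and the reproduction probability over a grid and recording `mean_lifetime` reproduces the kind of parameter-space study the abstract describes.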
A Novel Methodology to Estimate Metabolic Flux Distributions in Constraint-Based Models
Directory of Open Access Journals (Sweden)
Francesco Alessandro Massucci
2013-09-01
Full Text Available Quite generally, constraint-based metabolic flux analysis describes the space of viable flux configurations for a metabolic network as a high-dimensional polytope defined by the linear constraints that enforce the balancing of production and consumption fluxes for each chemical species in the system. In some cases, the complexity of the solution space can be reduced by performing an additional optimization, while in other cases, knowing the range of variability of fluxes over the polytope provides a sufficient characterization of the allowed configurations. There are cases, however, in which the thorough information encoded in the individual distributions of viable fluxes over the polytope is required. Obtaining such distributions is known to be a highly challenging computational task when the dimensionality of the polytope is sufficiently large, and the problem of developing cost-effective ad hoc algorithms has recently seen a major surge of interest. Here, we propose a method that allows us to perform the required computation heuristically in a time scaling linearly with the number of reactions in the network, overcoming some limitations of similar techniques employed in recent years. As a case study, we apply it to the analysis of the human red blood cell metabolic network, whose solution space can be sampled by different exact techniques, like Hit-and-Run Monte Carlo (scaling roughly like the third power of the system size). Remarkably accurate estimates for the true distributions of viable reaction fluxes are obtained, suggesting that, although further improvements are desirable, our method enhances our ability to analyze the space of allowed configurations for large biochemical reaction networks.
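A compact sketch of the Hit-and-Run sampler mentioned as the exact baseline, with the unit square standing in for a real flux polytope:

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_and_run(A, b, x0, n_samples=1000):
    """Sample (asymptotically uniformly) from the polytope {x : A @ x <= b}.
    x0 must be a strictly interior starting point."""
    x, samples = x0.astype(float), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)            # random direction
        # feasible segment x + t*d: t * (A @ d) <= b - A @ x
        Ad, slack = A @ d, b - A @ x
        t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
        t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d
        samples.append(x.copy())
    return np.array(samples)

# unit square as a toy "flux polytope": 0 <= x <= 1, 0 <= y <= 1
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
samples = hit_and_run(A, b, np.array([0.5, 0.5]))
```

Per-flux marginal distributions are then just histograms over the columns of `samples`; the cubic cost referenced in the abstract comes from the mixing time of this chain on large polytopes.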
Constraint elimination in dynamical systems
Singh, R. P.; Likins, P. W.
1989-01-01
Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
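The core of the SVD-based constraint elimination can be sketched as follows (a toy two-constraint, three-coordinate system, not one of the LSS examples):

```python
import numpy as np

# velocity constraints J @ qdot = 0 for a toy three-coordinate system
J = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# the rows of Vt beyond rank(J) span the null space of J
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T                 # basis with J @ N ≈ 0

# every admissible velocity is qdot = N @ v, with v of minimum dimension;
# substituting qdot = N @ v into the dynamics yields equations of
# minimum size, the constraints having been eliminated
v = np.array([2.0])
qdot = N @ v
```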
Constraint Programming versus Mathematical Programming
DEFF Research Database (Denmark)
Hansen, Jesper
2003-01-01
Constraint Logic Programming (CLP) is a relatively new technique from the 80's with origins in Computer Science and Artificial Intelligence. Lately, much research has focused on ways of using CLP within the paradigm of Operations Research (OR) and vice versa. The purpose of this paper...
Sterile neutrino constraints from cosmology
DEFF Research Database (Denmark)
Hamann, Jan; Hannestad, Steen; Raffelt, Georg G.
2012-01-01
The presence of light particles beyond the standard model's three neutrino species can profoundly impact the physics of decoupling and primordial nucleosynthesis. I review the observational signatures of extra light species, present constraints from recent data, and discuss the implications of possible sterile neutrinos with O(eV) masses for cosmology.
Intertemporal consumption and credit constraints
DEFF Research Database (Denmark)
Leth-Petersen, Søren
2010-01-01
There is continuing controversy over the importance of credit constraints. This paper investigates whether total household expenditure and debt are affected by an exogenous increase in access to credit provided by a credit market reform that enabled Danish house owners to use housing equity
Financial Constraints: Explaining Your Position.
Cargill, Jennifer
1988-01-01
Discusses the importance of educating library patrons about the library's finances and the impact of budget constraints and the escalating cost of serials on materials acquisition. Steps that can be taken in educating patrons by interpreting and publicizing financial information are suggested. (MES)
Optimal placement of capacitors in a radial network using conic and mixed integer linear programming
Energy Technology Data Exchange (ETDEWEB)
Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)
2008-06-15
This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)
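The phase-II idea of rounding the continuous phase-I sizes to discrete banks by minimizing an L1-norm of deviations can be sketched as a tiny MILP (the two buses and the 150 kvar bank size below are invented data, not from the paper):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

q = np.array([420.0, 310.0])   # phase-I continuous capacitor sizes (kvar)
bank = 150.0                   # discrete bank size

# variables: [x1, x2, t1, t2]; minimize t1 + t2 with t_i >= |bank*x_i - q_i|
c = np.array([0.0, 0.0, 1.0, 1.0])
A = np.array([
    [ bank, 0.0, -1.0,  0.0],   #  bank*x1 - t1 <= q1
    [-bank, 0.0, -1.0,  0.0],   # -bank*x1 - t1 <= -q1
    [0.0,  bank,  0.0, -1.0],
    [0.0, -bank,  0.0, -1.0],
])
b = np.array([q[0], -q[0], q[1], -q[1]])

res = milp(c, constraints=LinearConstraint(A, ub=b),
           integrality=[1, 1, 0, 0],       # x integer, t continuous
           bounds=Bounds(0, np.inf))
x = res.x[:2]   # optimal integer bank counts: 3 and 2
```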
Constraints on the evolution of phenotypic plasticity
DEFF Research Database (Denmark)
Murren, Courtney J; Auld, Josh R.; Callahan, Hilary S
2015-01-01
Phenotypic plasticity is ubiquitous and generally regarded as a key mechanism for enabling organisms to survive in the face of environmental change. Because no organism is infinitely or ideally plastic, theory suggests that there must be limits (for example, the lack of ability to produce an optimal trait) to the evolution of phenotypic plasticity, or that plasticity may have inherent significant costs. Yet numerous experimental studies have not detected widespread costs. Explicitly differentiating plasticity costs from phenotype costs, we re-evaluate fundamental questions of the limits to the evolution of plasticity and of generalists vs specialists. We advocate for the view that relaxed selection and variable selection intensities are likely more important constraints to the evolution of plasticity than the costs of plasticity. Some forms of plasticity, such as learning, may be inherently...
Linearizing feedforward/feedback attitude control
Paielli, Russell A.; Bach, Ralph E.
1991-01-01
An approach to attitude control theory is introduced in which a linear form is postulated for the closed-loop rotation error dynamics, then the exact control law required to realize it is derived. The nonminimal (four-component) quaternion form is used for attitude because it is globally nonsingular, but the minimal (three-component) quaternion form is used for attitude error because it has no nonlinear constraints to prevent the rotational error dynamics from being linearized, and the definition of the attitude error is based on quaternion algebra. This approach produces an attitude control law that linearizes the closed-loop rotational error dynamics exactly, without any attitude singularities, even if the control errors become large.
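Forming a minimal attitude error from two quaternions can be sketched as below; taking the vector part of the error quaternion is a common convention and an assumption here, not necessarily the paper's exact definition:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def attitude_error(q_cmd, q_est):
    """Error quaternion q_e = q_cmd* ⊗ q_est; its vector part is a
    minimal three-component attitude error with no nonlinear constraint."""
    q_e = quat_mul(quat_conj(q_cmd), q_est)
    return q_e[1:]            # ≈ half the rotation vector for small errors

q_cmd = np.array([1.0, 0.0, 0.0, 0.0])        # identity attitude
ang = 0.1                                      # small rotation about x
q_est = np.array([np.cos(ang / 2), np.sin(ang / 2), 0.0, 0.0])
err = attitude_error(q_cmd, q_est)             # ≈ [sin(0.05), 0, 0]
```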
Final Focus Systems in Linear Colliders
International Nuclear Information System (INIS)
Raubenheimer, Tor
1998-01-01
In colliding beam facilities, the "final focus system" must demagnify the beams to attain the very small spot sizes required at the interaction points. The first final focus system with local chromatic correction was developed for the Stanford Linear Collider where very large demagnifications were desired. This same conceptual design has been adopted by all the future linear collider designs as well as the SuperConducting Supercollider, the Stanford and KEK B-Factories, and the proposed Muon Collider. In this paper, the over-all layout, physics constraints, and optimization techniques relevant to the design of final focus systems for high-energy electron-positron linear colliders are reviewed. Finally, advanced concepts to avoid some of the limitations of these systems are discussed
Search strategies in practice: Influence of information and task constraints.
Pacheco, Matheus M; Newell, Karl M
2018-01-01
The practice of a motor task has been conceptualized as a process of search through a perceptual-motor workspace. The present study investigated the influence of information and task constraints on the search strategy as reflected in the sequential relations of the outcome in a discrete movement virtual projectile task. The results showed that the relation of trial-to-trial changes in movement outcome to performance level was dependent on the landscape of the task dynamics and the influence of inherent variability. Furthermore, the search was in a constrained parameter region of the perceptual-motor workspace that depended on the task constraints. These findings show that there is not a single function of trial-to-trial change over practice but rather that local search strategies (proportional, discontinuous, constant) adapt to the level of performance and the confluence of constraints to action. Copyright © 2017 Elsevier B.V. All rights reserved.
Stochastic linear programming models, theory, and computation
Kall, Peter
2011-01-01
This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICCs and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors' SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...
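A one-line example of the deterministic equivalents used for chance constraints in this setting (a textbook construction; the numbers are arbitrary):

```python
from scipy.stats import norm

# A chance constraint  P(a @ x <= b) >= alpha  with b ~ Normal(mu, sigma^2)
# reduces to the deterministic linear constraint
#   a @ x <= mu + sigma * norm.ppf(1 - alpha)
mu, sigma, alpha = 100.0, 10.0, 0.95
rhs = mu + sigma * norm.ppf(1 - alpha)   # ≈ 83.55, tighter than mu alone
```

The reformulated right-hand side can then be fed to any ordinary LP solver, which is why chance-constrained models remain computationally tractable.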
Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots
Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.
2013-01-01
Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
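The piecewise linear-linear mean trajectory with an unknown knot can be illustrated with a simple profile/grid search (a sketch of the functional form only, not the paper's mixture estimator; all data are synthetic):

```python
import numpy as np

def linear_linear(t, b0, b1, b2, knot):
    """Piecewise linear-linear trajectory: slope b1 before the knot,
    slope b2 after, continuous at the knot."""
    return b0 + b1 * np.minimum(t, knot) + b2 * np.maximum(t - knot, 0.0)

def fit_unknown_knot(t, y, candidates):
    """Profile the knot over a grid; least squares for the linear part."""
    best = None
    for k in candidates:
        X = np.column_stack([np.ones_like(t),
                             np.minimum(t, k),
                             np.maximum(t - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, k, beta)
    return best[1], best[2]

t = np.arange(0.0, 10.0)
y = linear_linear(t, 1.0, 2.0, -0.5, 4.0)      # noiseless example
knot, beta = fit_unknown_knot(t, y, np.arange(1.0, 9.0, 0.5))
# recovers knot = 4.0 and (b0, b1, b2) ≈ (1, 2, -0.5)
```

In the LGMM, each latent class has its own such trajectory and the knot is estimated jointly with the class memberships rather than by grid search.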
From linear to generalized linear mixed models: A case study in repeated measures
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
PWR control system design using advanced linear and non-linear methodologies
International Nuclear Information System (INIS)
Rabindran, N.; Whitmarsh-Everiss, M.J.
2004-01-01
Consideration is here given to the methodology deployed for non-linear heuristic analysis in the time domain supported by multi-variable linear control system design methods for the purposes of operational dynamics and control system analysis. This methodology is illustrated by the application of structural singular value μ analysis to Pressurised Water Reactor control system design. (author)
DEFF Research Database (Denmark)
Melo, Jean
Although many researchers suggest that preprocessor-based variability amplifies maintenance problems, there is little to no hard evidence on how variability actually affects programs and programmers. Specifically, how does variability affect programmers during maintenance tasks (bug finding in particular)? How much harder is it to debug a program as variability increases? How do developers debug programs with variability? In what ways does variability affect bugs? In this Ph.D. thesis, I set off to address such issues from different perspectives using empirical research (based on controlled experiments) in order to understand, quantitatively and qualitatively, the impact of variability on programmers at bug finding and on buggy programs. From the program (and bug) perspective, the results show that variability is ubiquitous. There appears to be no specific nature of variability bugs that could…
Directory of Open Access Journals (Sweden)
Aihong Ren
2016-01-01
Full Text Available This paper is concerned with a class of fully fuzzy bilevel linear programming problems where all the coefficients and decision variables of both objective functions and the constraints are fuzzy numbers. A new approach based on deviation degree measures and a ranking function method is proposed to solve these problems. We first introduce concepts of the feasible region and the fuzzy optimal solution of a fully fuzzy bilevel linear programming problem. In order to obtain a fuzzy optimal solution of the problem, we apply deviation degree measures to deal with the fuzzy constraints and use a ranking function method of fuzzy numbers to rank the upper and lower level fuzzy objective functions. Then the fully fuzzy bilevel linear programming problem can be transformed into a deterministic bilevel programming problem. Considering the overall balance between improving objective function values and decreasing allowed deviation degrees, the computational procedure for finding a fuzzy optimal solution is proposed. Finally, a numerical example is provided to illustrate the proposed approach. The results indicate that the proposed approach gives a better optimal solution in comparison with the existing method.
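The ranking-function step described above can be illustrated with a minimal sketch. The centroid ranking below is a common defuzzification for triangular fuzzy numbers, used here as a hypothetical stand-in; it is not necessarily the ranking function adopted in the paper, and the fuzzy values are invented for illustration.

```python
def rank_triangular(tfn):
    """Centroid ranking of a triangular fuzzy number (a, b, c).

    A common defuzzification used to compare fuzzy objective values;
    not necessarily the measure used in the paper above.
    """
    a, b, c = tfn
    return (a + b + c) / 3.0

# Compare two hypothetical fuzzy objective values.
f1 = (2.0, 3.0, 4.0)   # roughly "about 3"
f2 = (1.0, 2.0, 5.0)   # roughly "about 2", skewed right
better = f1 if rank_triangular(f1) >= rank_triangular(f2) else f2
```

Once every fuzzy coefficient is ranked this way, the fuzzy comparison of upper- and lower-level objectives reduces to ordinary real-valued comparisons, which is what allows the transformation into a deterministic bilevel program.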
Non-linear system becomes linear system
Directory of Open Access Journals (Sweden)
Petre Bucur
2007-01-01
Full Text Available The present paper addresses the theory and practice of non-linear systems and their applications. We aim to integrate these systems in order to derive their response, as well as to highlight some of their outstanding features.
Linear motor coil assembly and linear motor
2009-01-01
An ironless linear motor (5) comprising a magnet track (53) and a coil assembly (50) operating in cooperation with said magnet track (53) and having a plurality of concentrated multi-turn coils (31 a-f, 41 a-d, 51 a-k), wherein the end windings (31E) of the coils (31 a-f, 41 a-e) are substantially
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error (LMMSE) estimator, when the elements of x are statistically white.
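The regularization idea in this abstract can be sketched generically: solve a Tikhonov-regularized LS problem and pick the regularization parameter iteratively from an assumed perturbation bound on the measurement matrix. The update rule below is a hypothetical stand-in for the authors' BDU iteration, and the function names and bound are assumptions, not the paper's algorithm.

```python
import numpy as np

def regularized_ls(A, y, lam):
    # Solve (A^T A + lam * I) x = A^T y  (Tikhonov-regularized LS).
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def ils_estimate(A, y, bound, iters=50):
    # Iteratively choose the regularization parameter from an assumed
    # perturbation bound on A; this update is illustrative only.
    lam = 1e-3
    for _ in range(iters):
        x = regularized_ls(A, y, lam)
        r = y - A @ x
        # Larger residual relative to the solution norm suggests
        # stronger regularization (hypothetical fixed-point update).
        lam = bound * np.linalg.norm(r) / max(np.linalg.norm(x), 1e-12)
    return regularized_ls(A, y, lam)
```

At low noise the iteration drives the regularization parameter toward zero and the estimate approaches plain LS, which matches the qualitative behavior the abstract describes.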
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig; Al-Naffouri, Tareq Y.
2015-01-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error (LMMSE) estimator, when the elements of x are statistically white.
Linear accelerator for radioisotope production
International Nuclear Information System (INIS)
Hansborough, L.D.; Hamm, R.W.; Stovall, J.E.
1982-02-01
A 200- to 500-μA source of 70- to 90-MeV protons would be a valuable asset to the nuclear medicine program. A linear accelerator (linac) can achieve this performance, and it can be extended to even higher energies and currents. Variable energy and current options are available. A 70-MeV linac is described, based on recent innovations in linear accelerator technology; it would be 27.3 m long and cost approx. $6 million. By operating the radio-frequency (rf) power system at a level necessary to produce a 500-μA beam current, the cost of power deposited in the radioisotope-production target is comparable with existing cyclotrons. If the rf-power system is operated at full power, the same accelerator is capable of producing an 1140-μA beam, and the cost per beam watt on the target is less than half that of comparable cyclotrons
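The cost figures quoted above can be checked with back-of-envelope arithmetic: beam power equals beam energy (expressed as an accelerating voltage) times beam current, so the cost per beam watt at the two operating points follows directly from the quoted $6 million price.

```python
# Beam power = beam energy (as an equivalent accelerating voltage) x current.
cost = 6e6            # quoted accelerator cost, USD
energy_ev = 70e6      # 70-MeV protons

for current_a in (500e-6, 1140e-6):
    power_w = energy_ev * current_a
    print(f"{current_a * 1e6:.0f} uA -> {power_w / 1e3:.1f} kW beam, "
          f"${cost / power_w:.0f} per beam watt")
```

At 500 μA the beam power is 35 kW; at the full-power 1140 μA it is about 79.8 kW, so the cost per beam watt drops to less than half, consistent with the comparison to cyclotrons made in the abstract.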
Linear regression in astronomy. II
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
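Class (1) above, an unweighted regression line with bootstrap resampling, can be sketched in a few lines. This is a generic illustration assuming NumPy, with synthetic data; it is not the authors' code, and the pair-resampling scheme shown is one of several bootstrap variants.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_slope(x, y):
    # Unweighted least-squares slope (class 1 in the taxonomy above).
    return np.polyfit(x, y, 1)[0]

def bootstrap_slope_err(x, y, n_boot=1000):
    # Resample (x, y) pairs with replacement and take the spread of the
    # refit slopes as the slope uncertainty.
    n = len(x)
    slopes = [ols_slope(x[idx], y[idx])
              for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.std(slopes)
```

Pair resampling makes no assumption about the error distribution, which is why bootstrap (and jackknife) errors are attractive for the heterogeneous data sets common in cosmic distance scale work.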
Variability through the Eyes of the Programmer
DEFF Research Database (Denmark)
Melo, Jean; Batista Narcizo, Fabricio; Hansen, Dan Witzner
2017-01-01
Preprocessor directives (#ifdefs) are often used to implement compile-time variability, despite the critique that they increase complexity, hamper maintainability, and impair code comprehensibility. Previous studies have shown that the time of bug finding increases linearly with variability. Howe...
General quadratic gauge theory: constraint structure, symmetries and physical functions
Energy Technology Data Exchange (ETDEWEB)
Gitman, D M [Institute of Physics, University of Sao Paulo (Brazil); Tyutin, I V [Lebedev Physics Institute, Moscow (Russian Federation)
2005-06-17
How can we relate the constraint structure and constraint dynamics of the general gauge theory in the Hamiltonian formulation to specific features of the theory in the Lagrangian formulation, especially relating the constraint structure to the gauge-transformation structure of the Lagrangian action? How can we construct the general expression for the gauge charge if the constraint structure in the Hamiltonian formulation is known? Can we identify the physical functions, defined as those commuting with the first-class constraints in the Hamiltonian formulation, with the physical functions defined as gauge-invariant functions in the Lagrangian formulation? The aim of the present paper is to consider the general quadratic gauge theory and to answer the above questions for such a theory in terms of strict assertions. To fulfil such a programme, we demonstrate the existence of so-called superspecial phase-space variables in terms of which the quadratic Hamiltonian action takes a simple canonical form. On the basis of such a representation, we analyse the functional arbitrariness in the solutions of the equations of motion of the quadratic gauge theory and derive the general structure of symmetries by analysing a symmetry equation. We then use these results to identify the two definitions of physical functions and thus prove the Dirac conjecture.
Finding the optimal Bayesian network given a constraint graph
Directory of Open Access Journals (Sweden)
Jacob M. Schreiber
2017-07-01
Full Text Available Despite recent algorithmic improvements, learning the optimal structure of a Bayesian network from data is typically infeasible past a few dozen variables. Fortunately, domain knowledge can frequently be exploited to achieve dramatic computational savings, and in many cases domain knowledge can even make structure learning tractable. Several methods have previously been described for representing this type of structural prior knowledge, including global orderings, super-structures, and constraint rules. While super-structures and constraint rules are flexible in terms of what prior knowledge they can encode, they achieve savings in memory and computational time simply by avoiding considering invalid graphs. We introduce the concept of a “constraint graph” as an intuitive method for incorporating rich prior knowledge into the structure learning task. We describe how this graph can be used to reduce the memory cost and computational time required to find the optimal graph subject to the encoded constraints, beyond merely eliminating invalid graphs. In particular, we show that a constraint graph can break the structure learning task into independent subproblems even in the presence of cyclic prior knowledge. These subproblems are well suited to being solved in parallel on a single machine or distributed across many machines without excessive communication cost.
de Mendoza, Guillermo; Ventura, Marc; Catalan, Jordi
2015-07-01
Aiming to elucidate whether large-scale dispersal factors or environmental species sorting prevail in determining patterns of Trichoptera species composition in mountain lakes, we analyzed the distribution and assembly of the most common Trichoptera (Plectrocnemia laetabilis, Polycentropus flavomaculatus, Drusus rectus, Annitella pyrenaea, and Mystacides azurea) in the mountain lakes of the Pyrenees (Spain, France, Andorra) based on a survey of 82 lakes covering the geographical and environmental extremes of the lake district. Spatial autocorrelation in species composition was determined using Moran's eigenvector maps (MEM). Redundancy analysis (RDA) was applied to explore the influence of MEM variables and in-lake, and catchment environmental variables on Trichoptera assemblages. Variance partitioning analysis (partial RDA) revealed the fraction of species composition variation that could be attributed uniquely to either environmental variability or MEM variables. Finally, the distribution of individual species was analyzed in relation to specific environmental factors using binomial generalized linear models (GLM). Trichoptera assemblages showed spatial structure. However, the most relevant environmental variables in the RDA (i.e., temperature and woody vegetation in-lake catchments) were also related with spatial variables (i.e., altitude and longitude). Partial RDA revealed that the fraction of variation in species composition that was uniquely explained by environmental variability was larger than that uniquely explained by MEM variables. GLM results showed that the distribution of species with longitudinal bias is related to specific environmental factors with geographical trend. The environmental dependence found agrees with the particular traits of each species. We conclude that Trichoptera species distribution and composition in the lakes of the Pyrenees are governed predominantly by local environmental factors, rather than by dispersal constraints. …
Creativity from Constraints in Engineering Design
DEFF Research Database (Denmark)
Onarheim, Balder
2012-01-01
This paper investigates the role of constraints in limiting and enhancing creativity in engineering design. Based on a review of literature relating constraints to creativity, the paper presents a longitudinal participatory study from Coloplast A/S, a major international producer of disposable… and ownership of formal constraints played a crucial role in defining their influence on creativity, along with the tacit constraints held by the designers. The designers were found to be highly constraint-focused, and four main creative strategies for constraint manipulation were observed: blackboxing…