Evaluating Distributed Timing Constraints
DEFF Research Database (Denmark)
Kristensen, C.H.; Drejer, N.
1994-01-01
In this paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.
Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software
DEFF Research Database (Denmark)
Drejer, N.; Kristensen, C.H.
1993-01-01
This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes...
Bayesian Model Selection under Time Constraints
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial-differential-equation-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another factor relevant to model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that, under time constraints, more expensive models can be sampled far less often than faster models (in inverse proportion to their runtime). The evidence computed in favor of a more expensive model is therefore statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of the model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
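The budget argument in this abstract lends itself to a short sketch. The following Python fragment (illustrative names and toy numbers, not the authors' code) allocates Monte Carlo samples to each model in inverse proportion to its runtime and attaches a bootstrap error to a sampling-based BME estimate:

```python
import random
import statistics

def bme_with_bootstrap(likelihoods, n_boot=200, seed=0):
    """Monte Carlo BME estimate (mean likelihood over prior samples)
    plus a cheap bootstrap estimate of its statistical sampling error."""
    rng = random.Random(seed)
    bme = statistics.mean(likelihoods)
    boot = [
        statistics.mean(rng.choices(likelihoods, k=len(likelihoods)))
        for _ in range(n_boot)
    ]
    return bme, statistics.stdev(boot)

def budgeted_samples(runtimes, budget):
    """Samples affordable per model under a shared time budget:
    each model may be run roughly budget/runtime times."""
    return {m: max(1, int(budget / t)) for m, t in runtimes.items()}

# Toy example: a fast model affords many samples, a slow one very few.
runtimes = {"AR(1)": 0.5, "PDE": 3600.0}   # seconds per run (hypothetical)
print(budgeted_samples(runtimes, budget=7200.0))  # → {'AR(1)': 14400, 'PDE': 2}
```

Folding the bootstrap error into the model weights, as the abstract suggests, would down-weight evidence whose sampling error is large relative to the BME differences between models.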
Notes on Timed Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Valencia, Frank D.
2004-01-01
A constraint is a piece of (partial) information on the values of the variables of a system. Concurrent constraint programming (ccp) is a model of concurrency in which agents (also called processes) interact by telling and asking information (constraints) to and from a shared store (a constraint). Timed ccp (tccp) extends this model in order to specify and program reactive systems. This note provides a comprehensive introduction to the background for and central notions from the theory of tccp. Furthermore, it surveys recent results on a particular tccp calculus, ntcc, and it provides a classification of the expressive power of various tccp languages.
Implementing Run-Time Evaluation of Distributed Timing Constraints in a Real-Time Environment
DEFF Research Database (Denmark)
Kristensen, C. H.; Drejer, N.
1994-01-01
In this paper we describe a solution to the problem of implementing run-time evaluation of timing constraints in distributed real-time environments.
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban
2008-01-01
of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, these metrics have normally been utilized only within the LDPC decoding process to assess whether or not the variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can also be used to determine the accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation, as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations were comparable to those obtained in the ideal case of perfect timing in the receiver.
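The first-stage metric described above is inexpensive to state in code. A minimal sketch (toy parity-check structure and a hypothetical demodulator callback, not the reported receiver):

```python
def satisfied_checks(H, bits):
    """Count parity-check equations satisfied by hard-decision bits.
    H is a list of checks, each a list of variable indices; a check is
    satisfied when the bits it touches sum to even parity."""
    return sum(1 for check in H if sum(bits[i] for i in check) % 2 == 0)

def coarse_timing(H, demod, offsets):
    """First-stage estimate: pick the timing offset whose hard decisions
    satisfy the most parity checks (the metric peaks near correct timing)."""
    return max(offsets, key=lambda d: satisfied_checks(H, demod(d)))

# Toy example: 3 checks over 6 bits; offset 0 yields the all-zero codeword,
# any other offset yields alternating bit errors.
H = [[0, 1, 3], [1, 2, 4], [0, 2, 5]]
demod = lambda d: [0] * 6 if d == 0 else [i % 2 for i in range(6)]
print(coarse_timing(H, demod, offsets=[-2, -1, 0, 1, 2]))  # → 0
```

The successively-narrower-window search of the first stage would simply call `coarse_timing` repeatedly with refined `offsets` grids around the previous peak.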
On quantization of time-dependent systems with constraints
International Nuclear Information System (INIS)
Gadjiev, S A; Jafarov, R G
2007-01-01
The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from the physically equivalent theory with trivial non-stationarity.
On quantization of time-dependent systems with constraints
International Nuclear Information System (INIS)
Hadjialieva, F.G.; Jafarov, R.G.
1993-07-01
The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from the physically equivalent theory with trivial non-stationarity. (author). 4 refs
On quantization of time-dependent systems with constraints
Energy Technology Data Exchange (ETDEWEB)
Gadjiev, S A; Jafarov, R G [Institute for Physical Problems, Baku State University, AZ11 48 Baku (Azerbaijan)
2007-03-30
The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from the physically equivalent theory with trivial non-stationarity.
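As background shared by the three records above: in the standard Dirac scheme, second-class constraints φ_a are eliminated by replacing Poisson brackets with Dirac brackets. Schematically (a textbook summary for orientation, not the authors' modified rules):

```latex
\{A,B\}_D = \{A,B\} - \{A,\varphi_a\}\,(C^{-1})^{ab}\,\{\varphi_b,B\},
\qquad C_{ab} = \{\varphi_a,\varphi_b\}.
```

When the constraints depend explicitly on time, the naive evolution law fails, and the time derivative of the constraints enters the evolution; in the Gitman-Tyutin treatment it takes the schematic form

```latex
\frac{dA}{dt} = \{A,H\}_D + \frac{\partial A}{\partial t}
- \{A,\varphi_a\}\,(C^{-1})^{ab}\,\frac{\partial\varphi_b}{\partial t}.
```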
Constraint Logic Programming for Resolution of Relative Time Expressions
DEFF Research Database (Denmark)
Christiansen, Henning
2014-01-01
Translating time expressions into absolute time points or durations is a challenge for natural language processing tasks such as text mining and text understanding in general. We present a constraint logic language CLP(Time) tailored to text usages concerned with time and calendar. It provides a simple and flexible formalism to express relationships between different time expressions in a text, thereby giving a recipe for resolving them into absolute time. A constraint solver is developed which, as opposed to some earlier approaches, is independent of the order in which temporal information is introduced...
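The order-independence property claimed for the constraint solver can be illustrated with a toy resolver (plain Python, not CLP(Time); the names and the single-hop resolution strategy are illustrative assumptions):

```python
import datetime

class TimeRefs:
    """Resolve relative day-offset constraints between time expressions,
    independently of the order in which they are told."""
    def __init__(self):
        self.offsets = {}   # (a, b) -> b occurs `days` after a
        self.anchors = {}   # name -> absolute date

    def tell_offset(self, a, b, days):
        self.offsets[(a, b)] = days

    def tell_date(self, name, date):
        self.anchors[name] = date

    def resolve(self, name):
        """Walk one offset constraint from any anchored expression."""
        if name in self.anchors:
            return self.anchors[name]
        for (a, b), days in self.offsets.items():
            if b == name and a in self.anchors:
                return self.anchors[a] + datetime.timedelta(days=days)
            if a == name and b in self.anchors:
                return self.anchors[b] - datetime.timedelta(days=days)
        return None

# Constraints in "wrong" order: the relative fact arrives before its anchor.
t = TimeRefs()
t.tell_offset("the meeting", "the deadline", 7)  # deadline = meeting + 7 days
t.tell_date("the meeting", datetime.date(2014, 3, 3))
print(t.resolve("the deadline"))  # → 2014-03-10
```

A real solver would propagate over chains of constraints and detect inconsistencies; the point here is only that telling the relative constraint before the absolute date causes no trouble.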
Composing Synchronisation and Real-Time Constraints
Bergmans, Lodewijk; Aksit, Mehmet
There have been a number of publications illustrating the successes of object-oriented techniques in creating highly reusable software systems. Several concurrent languages have been proposed for specifying reusable synchronization specifications. Recently, a number of real-time object-oriented...
Periodic capacity management under a lead-time performance constraint
Büyükkaramikli, N.C.; Bertrand, J.W.M.; Ooijen, van H.P.G.
2013-01-01
In this paper, we study a production system that operates under a lead-time performance constraint which guarantees the completion of an order before a pre-determined lead-time with a certain probability. The demand arrival times and the service requirements for the orders are random. To reduce the...
Gangadharan, Sridhar
2013-01-01
This book serves as a hands-on guide to timing constraints in integrated circuit design. Readers will learn to maximize performance of their IC designs, by specifying timing requirements correctly. Coverage includes key aspects of the design flow impacted by timing constraints, including synthesis, static timing analysis and placement and routing. Concepts needed for specifying timing requirements are explained in detail and then applied to specific stages in the design flow, all within the context of Synopsys Design Constraints (SDC), the industry-leading format for specifying constraints. · Provides a hands-on guide to synthesis and timing analysis, using Synopsys Design Constraints (SDC), the industry-leading format for specifying constraints; · Includes key topics of interest to a synthesis, static timing analysis or place and route engineer; · Explains which constraints command to use for ease of maintenance and reuse, given several options pos...
Implementing Run-time Evaluation of Distributed Timing Constraints in a Micro Kernel
DEFF Research Database (Denmark)
Kristensen, C.H.; Drejer, N.; Nielsen, Jens Frederik Dalsgaard
In the present paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.
Time constraints and autonomy at work in the European Union
Dhondt, S.
1998-01-01
Time constraints and job autonomy are seen as two major dimensions of work content. These two dimensions play a major role in controlling psychosocial stress at work. The European Foundation for the Improvement of Living and Working Conditions (EFILWC) has asked NIA TNO to prepare a report on time...
Departure time choice: Modelling individual preferences, intention and constraints
DEFF Research Database (Denmark)
Thorhauge, Mikkel
...by nearly all studies within departure time. More importantly, it shows that the underlying psychological processes are more complex than simply accounting for attitudes and perceptions, which are typically used in other areas. The work in this PhD thesis accounts for the full Theory of Planned Behaviour..., but can also be perceived by the individuals as barriers towards participating in activities. Perceived constraints affect the departure time choice through the individual's intention of being on time. This PhD thesis also contributes to the departure time literature by discussing the problem of collecting... whether they are constrained. The thesis also provides empirical evidence of the policy implications of not accounting for other activities and their constraints. Thirdly, the thesis shows that the departure time choice can be partly explained by psychological factors, which have previously been neglected...
Including Overweight or Obese Students in Physical Education: A Social Ecological Constraint Model
Li, Weidong; Rukavina, Paul
2012-01-01
In this review, we propose a social ecological constraint model to study inclusion of overweight or obese students in physical education by integrating key concepts and assumptions from ecological constraint theory in motor development and social ecological models in health promotion and behavior. The social ecological constraint model proposes…
Automatic Verification of Timing Constraints for Safety Critical Space Systems
Fernandez, Javier; Parra, Pablo; Sanchez Prieto, Sebastian; Polo, Oscar; Bernat, Guillem
2015-09-01
This paper presents an automatic verification process, focusing on the verification of scheduling-analysis parameters. The proposal is part of a process based on Model Driven Engineering to automate the verification and validation of software on board satellites. The process is implemented in the software control unit of the energetic particle detector, a payload of the Solar Orbiter mission. From the design model, a scheduling-analysis model and its verification model are generated. The verification conditions are defined as constraints in the form of finite timed automata. When the system is deployed on target, verification evidence is extracted at instrumentation points. The constraints are fed with this evidence; if any constraint is not satisfied by the on-target evidence, the scheduling analysis is not valid.
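The final step of the process — feed the on-target evidence to the constraints and reject the scheduling analysis on any violation — can be sketched as a plain trace check (hypothetical event format; the paper encodes constraints as finite timed automata):

```python
def check_deadlines(trace, constraints):
    """trace: list of (task, release_time, finish_time) tuples gathered at
    instrumentation points; constraints: task -> deadline (same time units).
    Returns the list of violations; the analysis is valid iff it is empty."""
    violations = []
    for task, release, finish in trace:
        deadline = constraints.get(task)
        if deadline is not None and finish - release > deadline:
            violations.append((task, finish - release, deadline))
    return violations

# On-target evidence (hypothetical): two jobs of a detector readout task.
trace = [("readout", 0.0, 4.5), ("readout", 10.0, 16.0)]
constraints = {"readout": 5.0}
print(check_deadlines(trace, constraints))  # → [('readout', 6.0, 5.0)]
```

A timed-automaton encoding generalizes this: instead of a single deadline per task, each constraint is an automaton that consumes the instrumented events and rejects on a timing violation.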
Thinking aloud in the presence of interruptions and time constraints
DEFF Research Database (Denmark)
Hertzum, Morten; Holmegaard, Kristin Due
2013-01-01
Thinking aloud is widely used for usability evaluation and its reactivity is therefore important to the quality of evaluation results. This study investigates whether thinking aloud (i.e., verbalization at levels 1 and 2) affects the behaviour of users who perform tasks that involve interruptions and time constraints, two frequent elements of real-world activities. We find that the presence of auditory, visual, audiovisual, or no interruptions interacts with thinking aloud for task solution rate, task completion time, and participants' fixation rate. Thinking-aloud participants also spend longer responding to interruptions than control participants. Conversely, the absence or presence of time constraints does not interact with thinking aloud, suggesting that time pressure is less likely to make thinking aloud reactive than previously assumed. Our results inform practitioners faced with the decision...
Geometry and dynamics with time-dependent constraints
Evans, Jonathan M.; Jonathan M Evans; Philip A Tuckey
1995-01-01
We describe how geometrical methods can be applied to a system with explicitly time-dependent second-class constraints so as to cast it in Hamiltonian form on its physical phase space. Examples of particular interest are systems which require time-dependent gauge fixing conditions in order to reduce them to their physical degrees of freedom. To illustrate our results we discuss the gauge-fixing of relativistic particles and strings moving in arbitrary background electromagnetic and antisymmetric tensor fields.
The Cherenkov correlated timing detector: materials, geometry and timing constraints
International Nuclear Information System (INIS)
Aronstein, D.; Bergfeld, T.; Horton, D.; Palmer, M.; Selen, M.; Thayer, G.; Boyer, V.; Honscheid, K.; Kichimi, H.; Sugaya, Y.; Yamaguchi, H.; Yoshimura, Y.; Kanda, S.; Olsen, S.; Ueno, K.; Tamura, N.; Yoshimura, K.; Lu, C.; Marlow, D.; Mindas, C.; Prebys, E.; Pomianowski, P.
1996-01-01
The key parameters of Cherenkov correlated timing (CCT) detectors are discussed. Measurements of radiator geometry, optical properties of radiator and coupling materials, and photon detector timing performance are presented. (orig.)
Observational constraint on the interacting dark energy models including the Sandage-Loeb test
Zhang, Ming-Jian; Liu, Wen-Biao
2014-05-01
Two types of interacting dark energy models are investigated using the type Ia supernova (SNIa), observational H(z) data (OHD), cosmic microwave background shift parameter, and the secular Sandage-Loeb (SL) test. In the investigation, we have used two sets of parameter priors including WMAP-9 and Planck 2013. They have shown some interesting differences. We find that the inclusion of the SL test can obviously provide a more stringent constraint on the parameters in both models. For the constant coupling model, the interaction term has been improved to be only a half of the original scale on corresponding errors. Comparing with only SNIa and OHD, we find that the inclusion of the SL test almost reduces the best-fit interaction to zero, which indicates that the higher-redshift observation including the SL test is necessary to track the evolution of the interaction. For the varying coupling model, data with the inclusion of the SL test show that the parameter at C.L. in Planck priors is , where the constant is characteristic for the severity of the coincidence problem. This indicates that the coincidence problem will be less severe. We then reconstruct the interaction term, and we find that the best-fit interaction is also negative, similar to the constant coupling model. However, for a high redshift, the interaction generally vanishes at infinity. We also find that the phantom-like dark energy is favored over the ΛCDM model.
Constraints on a parity-even/time-reversal-odd interaction
International Nuclear Information System (INIS)
Oers, Willem T.H. van
2000-01-01
Time-Reversal-Invariance non-conservation has for the first time been unequivocally demonstrated in a direct measurement, one of the results of the CPLEAR experiment. What is the situation then with regard to time-reversal-invariance non-conservation in systems other than the neutral kaon system? Two classes of tests of time-reversal-invariance need to be distinguished: the first one deals with parity violating (P-odd)/time-reversal-invariance non-conserving (T-odd) interactions, while the second one deals with P-even/T-odd interactions (assuming CPT conservation this implies C-conjugation non-conservation). Limits on a P-odd/T-odd interaction follow from measurements of the electric dipole moment of the neutron. This in turn provides a limit on a P-odd/T-odd pion-nucleon coupling constant which is 10⁻⁴ times the weak interaction strength. Limits on a P-even/T-odd interaction are much less stringent. The better constraint stems also from the measurement of the electric dipole moment of the neutron. Of all the other tests, measurements of charge-symmetry breaking in neutron-proton elastic scattering provide the next better constraint. The latter experiments were performed at TRIUMF (at 477 and 347 MeV) and at IUCF (at 183 MeV). Weak decay experiments (the transverse polarization of the muon in K⁺ → π⁰ μ⁺ ν_μ and the transverse polarization of the positrons in polarized muon decay) have the potential to provide comparable or possibly better constraints.
Energy Technology Data Exchange (ETDEWEB)
Guo, Rui-Yun [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Zhang, Xin [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Peking University, Center for High Energy Physics, Beijing (China)
2017-12-15
We revisit the constraints on inflation models by using the current cosmological observations involving the latest local measurement of the Hubble constant (H₀ = 73.00 ± 1.75 km s⁻¹ Mpc⁻¹). We constrain the primordial power spectra of both scalar and tensor perturbations with the observational data including the Planck 2015 CMB full data, the BICEP2 and Keck Array CMB B-mode data, the BAO data, and the direct measurement of H₀. In order to relieve the tension between the local determination of the Hubble constant and the other astrophysical observations, we consider the additional parameter N_eff in the cosmological model. We find that, for the ΛCDM+r+N_eff model, the scale invariance is only excluded at the 3.3σ level, and ΔN_eff > 0 is favored at the 1.6σ level. Comparing the obtained 1σ and 2σ contours of (n_s, r) with the theoretical predictions of selected inflation models, we find that both the convex and the concave potentials are favored at the 2σ level, the natural inflation model is excluded at more than the 2σ level, the Starobinsky R² inflation model is only favored at around the 2σ level, and the spontaneously broken SUSY inflation model is now the most favored model. (orig.)
Overcoming Learning Time And Space Constraints Through Technological Tool
Directory of Open Access Journals (Sweden)
Nafiseh Zarei
2015-08-01
Today the use of technological tools has driven an evolution in language learning and language acquisition. Many instructors and lecturers believe that integrating Web-based learning tools into language courses allows pupils to become active learners during the learning process. This study investigates how the Learning Management Blog (LMB) overcomes the learning time and space constraints that contribute to students' language learning and language acquisition processes. The participants were 30 ESL students at the National University of Malaysia. A qualitative approach comprising an open-ended questionnaire and a semi-structured interview was used to collect data. The results of the study revealed that the students' language learning and acquisition processes were enhanced. The students did not face any learning time and space limitations while being engaged in the learning process via the LMB. They learned and acquired knowledge using the language learning materials and forum anytime and anywhere. Keywords: learning time, learning space, learning management blog
Berge, Britt-Marie; Chounlamany, Kongsy; Khounphilaphanh, Bounchanh; Silfver, Ann-Louise
2017-01-01
This article explores possibilities and constraints for the inclusion of female and ethnic minority students in Lao education in order to provide education for all. Females and ethnic minorities have traditionally been disadvantaged in Lao education and reforms for the inclusion of these groups are therefore welcome. The article provides rich…
Trajectory reshaping based guidance with impact time and angle constraints
Directory of Open Access Journals (Sweden)
Zhao Yao
2016-08-01
This study presents a novel impact time and angle constrained guidance law for homing missiles. The guidance law is first developed with the prior assumption of a stationary target, which is followed by the practical extension to a maneuvering target scenario. To derive the closed-form guidance law, the trajectory reshaping technique is utilized and it results in defining a specific polynomial function with two unknown coefficients. These coefficients are determined to satisfy the impact time and angle constraints as well as the zero miss distance. Furthermore, the proposed guidance law has three additional guidance gains as design parameters which make it possible to adjust the guided trajectory according to the operational conditions and the missile's capability. Numerical simulations are presented to validate the effectiveness of the proposed guidance law.
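The trajectory-reshaping mechanism reduces to pinning two polynomial coefficients with the terminal conditions, a 2x2 linear solve. A schematic sketch with a generic lateral profile y(t) = c1·t^p + c2·t^q (the exponents and the profile itself are illustrative assumptions, not the paper's law):

```python
def reshape_coeffs(tf, y_f, slope_f, p=3, q=4):
    """Pick c1, c2 in y(t) = c1*t**p + c2*t**q so that at the fixed impact
    time tf the trajectory meets position y_f and slope slope_f (the slope
    encodes the impact-angle constraint):
        c1*tf**p       + c2*tf**q       = y_f
        p*c1*tf**(p-1) + q*c2*tf**(q-1) = slope_f
    """
    a, b = tf**p, tf**q
    c, d = p * tf**(p - 1), q * tf**(q - 1)
    det = a * d - b * c
    c1 = (y_f * d - b * slope_f) / det
    c2 = (a * slope_f - y_f * c) / det
    return c1, c2

c1, c2 = reshape_coeffs(tf=10.0, y_f=0.0, slope_f=-0.5)
# Verify both terminal constraints are met at t = tf:
y = c1 * 10.0**3 + c2 * 10.0**4
dy = 3 * c1 * 10.0**2 + 4 * c2 * 10.0**3
print(round(y, 9), round(dy, 9))  # → 0.0 -0.5
```

Extra guidance gains, as in the abstract, would enter as additional shape parameters (here, the free exponents p and q) that bend the trajectory without disturbing the terminal conditions.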
Timing of Family Income, Borrowing Constraints and Child Achievement
DEFF Research Database (Denmark)
Humlum, Maria Knoth
In this paper, I investigate the effects of the timing of family income on child achievement production. Detailed administrative data augmented with PISA test scores at age 15 are used to analyze the effects of the timing of family income on child achievement. Contrary to many earlier studies, tests for early borrowing constraints suggest that parents are not constrained in early investments in their children's achievement, and thus that the timing of income does not matter for long-term child outcomes. This is a reasonable result given the setting in a Scandinavian welfare state with generous child and education subsidies. Actually, later family income (age 12-15) is a more important determinant of child achievement than earlier income.
DEFF Research Database (Denmark)
Codas, Andrés; Hanssen, Kristian G.; Foss, Bjarne
2017-01-01
The production life of oil reservoirs starts under significant uncertainty regarding the actual economical return of the recovery process due to the lack of oil field data. Consequently, investors and operators make management decisions based on a limited and uncertain description of the reservoir. In this work, we propose a new formulation for robust optimization of reservoir well controls. It is inspired by the multiple shooting (MS) method, which permits a broad range of parallelization opportunities and output constraint handling. This formulation exploits coherent risk measures, a concept...
Verhoeven, Ronald; Dalmau Codina, Ramon; Prats Menéndez, Xavier; de Gelder, Nico
2014-01-01
In this paper an initial implementation of a real-time aircraft trajectory optimization algorithm is presented. The aircraft trajectory for descent and approach is computed for minimum use of thrust and speed brake in support of a "green" continuous descent and approach flight operation, while complying with ATC time constraints for maintaining runway throughput and co...
Clock gene evolution: seasonal timing, phylogenetic signal, or functional constraint?
Krabbenhoft, Trevor J; Turner, Thomas F
2014-01-01
Genetic determinants of seasonal reproduction are not fully understood but may be important predictors of organism responses to climate change. We used a comparative approach to study the evolution of seasonal timing within a fish community in a natural common garden setting. We tested the hypothesis that allelic length variation in the PolyQ domain of a circadian rhythm gene, Clock1a, corresponded to interspecific differences in seasonal reproductive timing across 5 native and 1 introduced cyprinid fishes (n = 425 individuals) that co-occur in the Rio Grande, NM, USA. Most common allele lengths were longer in native species that initiated reproduction earlier (Spearman's r = -0.70, P = 0.23). Clock1a allele length exhibited strong phylogenetic signal and earlier spawners were evolutionarily derived. Aside from length variation in Clock1a, all other amino acids were identical across native species, suggesting functional constraint over evolutionary time. Interestingly, the endangered Rio Grande silvery minnow (Hybognathus amarus) exhibited less allelic variation in Clock1a and observed heterozygosity was 2- to 6-fold lower than the 5 other (nonimperiled) species. Reduced genetic variation in this functionally important gene may impede this species' capacity to respond to ongoing environmental change.
Time for Each Other: Work and Family Constraints Among Couples.
Flood, Sarah M; Genadek, Katie R
2016-02-01
Little is known about couples' shared time and how actual time spent together is associated with well-being. In this study, the authors investigated how work and family demands are related to couples' shared time (total and exclusive) and individual well-being (happiness, meaningfulness, and stress) when with one's spouse. They used individual-level data from the 2003-2010 American Time Use Survey (N = 46,883), including the 2010 Well-Being Module. The results indicated that individuals in full-time working dual-earner couples spend similar amounts of time together as individuals in traditional breadwinner-homemaker arrangements on weekdays after accounting for daily work demands. The findings also show that parents share significantly less total and exclusive spousal time together than nonparents, though there is considerable variation among parents by age of the youngest child. Of significance is that individuals experience greater happiness and meaning and less stress during time spent with a spouse as opposed to time spent apart.
Infinite Runs in Weighted Timed Automata with Energy Constraints
DEFF Research Database (Denmark)
Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand
2008-01-01
...and locations, corresponding to the production and consumption of some resource (e.g. energy). We ask the question whether there exists an infinite path for which the accumulated weight for any finite prefix satisfies certain constraints (e.g. remains between 0 and some given upper bound). We also consider...
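For intuition, the finite-graph (clock-free) version of the interval question is decidable by a product construction: pair each location with the accumulated weight, cap it at the upper bound, and look for a reachable cycle. A sketch under those simplifying assumptions (integer weights, "weak" upper bound that caps rather than forbids overshoot; not the paper's timed-automata algorithms):

```python
def has_infinite_run(edges, start, init_energy, upper):
    """edges: dict node -> list of (weight, successor) pairs. An infinite
    run keeping the accumulated weight in [0, upper] exists iff the finite
    product graph over (node, energy) states -- energy capped at `upper`,
    runs dying below 0 -- contains a reachable cycle."""
    seen, on_path = set(), set()

    def dfs(state):                      # DFS for a cycle in the product graph
        if state in on_path:
            return True                  # product state repeats: pump forever
        if state in seen:
            return False
        seen.add(state)
        on_path.add(state)
        node, energy = state
        for weight, succ in edges.get(node, []):
            e = energy + weight
            if e >= 0 and dfs((succ, min(e, upper))):
                return True
        on_path.discard(state)
        return False

    return dfs((start, min(init_energy, upper)))

# Toy example: a loop producing +2 then consuming -3 eventually dies;
# producing +3 then consuming -2 survives forever within [0, 10].
edges_bad = {"a": [(2, "b")], "b": [(-3, "a")]}
edges_ok = {"a": [(3, "b")], "b": [(-2, "a")]}
print(has_infinite_run(edges_bad, "a", 1, 10))  # → False
print(has_infinite_run(edges_ok, "a", 1, 10))   # → True
```

With clocks added, the state space is no longer finite and this brute-force product no longer suffices, which is precisely what makes the weighted-timed-automata problem interesting.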
Power generation scheduling. A free market based procedure with reserve constraints included
International Nuclear Information System (INIS)
Huse, Einar Staale
1998-01-01
This thesis deals with the short-term scheduling of electric power generation in a competitive market. This involves determination of start-ups and shut-downs, and production levels of all units in all hours of the optimization period, considering unit characteristics and system restrictions. The unit characteristics and restrictions handled are minimum and maximum production levels, fuel cost function, start-up costs, minimum up time and minimum down time. The system restrictions handled are power balance (supply equals demand in all hours) and spinning reserve requirement. The thesis has two main contributions: (1) A new organization of an hourly electric power market that simultaneously sets the price of both energy and reserve power is proposed. A power exchange is used as a trading place for electricity. Its responsibility is to balance supply and demand bids and to secure enough spinning reserve. Routines for bidding and market clearing are developed. (2) A computer programme that simulates the proposed electricity market has been implemented. The program can also be used as a new method for solving the single owner generation scheduling problem. Simulations show that the performance of the program is excellent. Simulations also show that it is possible to obtain efficient schedules through the proposed electricity market. 37 refs., 20 figs., 15 tabs
Power generation scheduling. A free market based procedure with reserve constraints included
Energy Technology Data Exchange (ETDEWEB)
Huse, Einar Staale
1999-12-31
This thesis deals with the short-term scheduling of electric power generation in a competitive market. This involves determination of start-ups and shut-downs, and production levels of all units in all hours of the optimization period, considering unit characteristics and system restrictions. The unit characteristics and restrictions handled are minimum and maximum production levels, fuel cost function, start-up costs, minimum up time and minimum down time. The system restrictions handled are power balance (supply equals demand in all hours) and spinning reserve requirement. The thesis has two main contributions: (1) A new organization of an hourly electric power market that simultaneously sets the price of both energy and reserve power is proposed. A power exchange is used as a trading place for electricity. Its responsibility is to balance supply and demand bids and to secure enough spinning reserve. Routines for bidding and market clearing are developed. (2) A computer programme that simulates the proposed electricity market has been implemented. The program can also be used as a new method for solving the single owner generation scheduling problem. Simulations show that the performance of the program is excellent. Simulations also show that it is possible to obtain efficient schedules through the proposed electricity market. 37 refs., 20 figs., 15 tabs.
Effects of Social Constraints on Career Maturity: The Mediating Effect of the Time Perspective
Kim, Kyung-Nyun; Oh, Se-Hee
2013-01-01
Previous studies have provided mixed results for the effects of social constraints on career maturity. However, there has been growing interest in these effects from the time perspective. Few studies have examined the effects of social constraints on the time perspective which in turn influences career maturity. This study examines the mediating…
Time constraints in the treatment of nuclear transients - MONSTREAV code
International Nuclear Information System (INIS)
Amorim, E.S. do; Sudano, J.P.; Moura Neto, C. de; Ferreira, W.J.
1980-08-01
An improved approach to the spatial dynamics problem is described. This approach allows the factorization of the flux into amplitude and shape functions. Boundary conditions are treated as time-dependent functions, and the coupling between the functions is handled by the improved quasistatic approximation (1,2). A burnup feedback has been included, allowing extreme excursions in fast reactors to be described very accurately, even for nonlinear problems. A benchmark analysis shows that the improved method is fully sufficient for fast reactor dynamics calculations. Computation modules embodying the improved model of neutronic behaviour will be integrated with the other parts of the fast reactor dynamics analysis system now under development at EAV-IAE. (Author) [pt
Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.
2016-01-01
The objective of this report is to develop and implement a physics based method for analysis and simulation of multi-body dynamics including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling constraint forces and moments acting at joints when the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS® and Autolev, two different industry standard benchmark codes for multi-body dynamic analysis and simulations. However, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation for its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, POST2/CFE software can be used for performing conceptual level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.
ALGORITHMIC CONSTRUCTION SCHEDULES IN CONDITIONS OF TIMING CONSTRAINTS
Directory of Open Access Journals (Sweden)
Alexey S. Dobrynin
2014-01-01
Full Text Available Tasks of time-schedule construction (JSSP) in various fields of human activity have important theoretical and practical significance. The main feature of these tasks is a timing requirement describing the allowed planning time periods and periods of downtime. This article describes implementation variants of the work scheduling algorithm under timing requirements for the construction of industrial time-schedules and for service activities.
Finite-time stabilisation of a class of switched nonlinear systems with state constraints
Huang, Shipei; Xiang, Zhengrong
2018-06-01
This paper investigates the finite-time stabilisation for a class of switched nonlinear systems with state constraints. Some power orders of the system are allowed to be ratios of positive even integers over odd integers. A Barrier Lyapunov function is introduced to guarantee that the state constraint is not violated at any time. Using the convex combination method and a recursive design approach, a state-dependent switching law and state feedback controllers of individual subsystems are constructed such that the closed-loop system is finite-time stable without violation of the state constraint. Two examples are provided to show the effectiveness of the proposed method.
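The barrier Lyapunov function idea can be seen in a one-state sketch (an illustrative system assumed here, not one from the paper): for x' = x^2 + u with constraint |x| < kb, the function V = 0.5*ln(kb^2/(kb^2 - x^2)) grows without bound as |x| approaches kb, and the feedback below makes V' = -c*x^2 <= 0, so the constraint is never violated.

```python
# Sketch: barrier-Lyapunov-based state feedback for x' = x**2 + u.
# With u = -x**2 - c*(kb**2 - x**2)*x, the barrier function
# V = 0.5*ln(kb**2 / (kb**2 - x**2)) satisfies V' = -c*x**2 <= 0,
# keeping the state inside (-kb, kb) for all time.

def simulate(x0, kb=1.0, c=2.0, dt=1e-3, steps=5000):
    x = x0
    for _ in range(steps):
        u = -x**2 - c * (kb**2 - x**2) * x
        x += dt * (x**2 + u)          # forward Euler step
        assert abs(x) < kb            # constraint holds along the way
    return x

x_final = simulate(0.9)               # starts close to the boundary
```

Starting at x = 0.9 with kb = 1, the state decays toward the origin without ever touching the constraint boundary.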
Franco-Watkins, Ana M; Davis, Matthew E; Johnson, Joseph G
2016-11-01
Many decisions are made under suboptimal circumstances, such as time constraints. We examined how different experiences of time constraints affected decision strategies on a probabilistic inference task and whether individual differences in working memory accounted for complex strategy use across different levels of time. To examine information search and attentional processing, we used an interactive eye-tracking paradigm where task information was occluded and only revealed by an eye fixation to a given cell. Our results indicate that although participants change search strategies during the most restricted times, the occurrence of the shift in strategies depends both on how the constraints are applied as well as individual differences in working memory. This suggests that, in situations that require making decisions under time constraints, one can influence performance by being sensitive to working memory and, potentially, by acclimating people to the task time gradually.
RealWorld evaluation: working under budget, time, data, and political constraints
National Research Council Canada - National Science Library
Bamberger, Michael; Rugh, Jim; Mabry, Linda
2012-01-01
This book addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing...
Experimental Constraints of the Exotic Shearing of Space-Time
Energy Technology Data Exchange (ETDEWEB)
Richardson, Jonathan William [Univ. of Chicago, IL (United States)
2016-08-01
The Holometer program is a search for first experimental evidence that space-time has quantum structure. The detector consists of a pair of co-located 40-m power-recycled interferometers whose outputs are read out synchronously at 50 MHz, achieving sensitivity to spatially correlated fluctuations in differential position on time scales shorter than the light-crossing time of the instruments. Unlike gravitational wave interferometers, which time-resolve transient geometrical disturbances in the spatial background, the Holometer is searching for a universal, stationary quantization noise of the background itself. This dissertation presents the final results of the Holometer Phase I search, an experiment configured for sensitivity to exotic coherent shearing fluctuations of space-time. Measurements of high-frequency cross-spectra of the interferometer signals obtain sensitivity to spatially correlated effects far exceeding any previous measurement, in a broad frequency band extending to 7.6 MHz, twice the inverse light-crossing time of the apparatus. This measurement is the statistical aggregation of 2.1 petabytes of 2-byte differential position measurements obtained over a month-long exposure time. At 3σ significance, it places an upper limit on the coherence scale of spatial shear two orders of magnitude below the Planck length. The result demonstrates the viability of this novel spatially-correlated interferometric detection technique to reach unprecedented sensitivity to coherent deviations of space-time from classicality, opening the door for direct experimental tests of theories of relational quantum gravity.
Timing of Family Income, Borrowing Constraints, and Child Achievement
DEFF Research Database (Denmark)
Humlum, Maria Knoth
2011-01-01
I investigate the effects of the timing of family income on child achievement production. Detailed administrative data augmented with Programme for International Student Assessment test scores at age 15 are used to analyze the effects of the timing of family income on child achievement. Contrary to many earlier studies, the results suggest that the timing of income does not matter for long-term child outcomes. This is a reasonable result given the setting in a Scandinavian welfare state with generous child and education subsidies. Actually, later family income (age 12–15) is a more important determinant of child achievement than earlier income.
Directory of Open Access Journals (Sweden)
Shu-Min Lu
2017-01-01
Full Text Available An adaptive neural network control problem is addressed for a class of nonlinear hydraulic servo-systems with time-varying state constraints. In view of the low-precision problem of traditional hydraulic servo-systems, caused by tracking errors exceeding an appropriate bound, previous works have shown that constraining the system is a good way to solve the low-precision problem. Meanwhile, compared with constant constraints, time-varying state constraints are more general in actual systems. Therefore, when the states of the system are forced to obey bounded time-varying constraint conditions, high-precision tracking performance can be easily realized. To achieve this goal, a time-varying barrier Lyapunov function (TVBLF) is used to prevent the states from violating the time-varying constraints. The adaptive controller is obtained by backstepping design. A radial basis function neural network (RBFNN) is used to estimate the uncertainties. By analyzing the stability of the hydraulic servo-system, we show that the error signals are bounded in compact sets, the time-varying state constraints are never violated, and all signals of the hydraulic servo-system are bounded. Simulation and experimental results show that the tracking accuracy of the system is improved and that the controller has fast tracking ability and strong robustness.
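As a standalone illustration of the RBFNN approximator mentioned above: the paper adapts the weights online inside the control loop, whereas the sketch below simply fits them offline by least squares. The target function, centers, and widths are assumptions made for the example.

```python
import numpy as np

# Fit a radial basis function network to a sample "uncertainty" f(x) =
# sin(x). Each basis function is a Gaussian bump centered at c_i; the
# output is a weighted sum of bumps, with weights found by least squares.

def rbf_features(x, centers, width=0.5):
    # phi_i(x) = exp(-(x - c_i)^2 / (2 * width^2))
    return np.exp(-(x[:, None] - centers[None, :])**2 / (2 * width**2))

centers = np.linspace(-3, 3, 15)
x_train = np.linspace(-3, 3, 200)
y_train = np.sin(x_train)                 # the unknown to approximate

Phi = rbf_features(x_train, centers)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

y_hat = Phi @ w
max_err = np.max(np.abs(y_hat - y_train))  # approximation error
```

With 15 Gaussian centers across the interval, the network reproduces the smooth target to small error, which is the approximation property the adaptive controller relies on.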
Algorithmic power management - Energy minimisation under real-time constraints
Gerards, Marco Egbertus Theodorus
2014-01-01
Energy consumption is a major concern for designers of embedded devices. Especially for battery operated systems (like many embedded systems), the energy consumption limits the time for which a device can be active, and the amount of processing that can take place. In this thesis we study how the
Observational constraints to boxy/peanut bulge formation time
Pérez, I.; Martínez-Valpuesta, I.; Ruiz-Lara, T.; de Lorenzo-Caceres, A.; Falcón-Barroso, J.; Florido, E.; González Delgado, R. M.; Lyubenova, M.; Marino, R. A.; Sánchez, S. F.; Sánchez-Blázquez, P.; van de Ven, G.; Zurita, A.
2017-09-01
Boxy/peanut bulges are considered to be part of the same stellar structure as bars and both could be linked through the buckling instability. The Milky Way is our closest example. The goal of this Letter is to determine if the mass assembly of the different components leaves an imprint in their stellar populations, allowing the estimation of the time of bar formation and its evolution. To this aim, we use integral field spectroscopy to derive the stellar age distributions, SADs, along the bar and disc of NGC 6032. The analysis clearly shows different SADs for the different bar areas. There is an underlying old (≥12 Gyr) stellar population for the whole galaxy. The bulge shows star formation happening at all times. The inner bar structure shows stars of ages older than 6 Gyr with a deficit of younger populations. The outer bar region presents an SAD similar to that of the disc. To interpret our results, we use a generic numerical simulation of a barred galaxy. Thus, we constrain, for the first time, the epoch of bar formation, the buckling instability period and the posterior growth from disc material. We establish that the bar of NGC 6032 is old, formed around 10 Gyr ago, while the buckling phase possibly happened around 8 Gyr ago. All these results point towards bars being long-lasting even in the presence of gas.
Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints
Directory of Open Access Journals (Sweden)
Qiang Zhang
2014-01-01
Full Text Available An off-line optimization approach for high-precision minimum-time feedrate planning in CNC machining is proposed. Besides the commonly considered velocity, acceleration, and jerk constraints, a dynamic performance constraint for each servo drive is also considered in this optimization problem to improve tracking precision along the optimized feedrate trajectory. Tracking error is used to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking-error-constrained minimum-time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the constraints along the optimal trajectory is proved in this paper; then a novel constraint handling method is proposed to enable a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. Then the practicability and robustness of the trajectory generated by the proposed approach are demonstrated by a butterfly contour machining example.
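The simplest instance behind minimum-time feedrate planning is a one-axis rest-to-rest move under velocity and acceleration limits, whose optimum is the familiar bang-bang (trapezoidal or triangular) profile. The sketch below covers that case only; the paper's jerk and tracking-error constraints are deliberately left out.

```python
import math

# Minimum-time rest-to-rest move of a given length with |v| <= v_max
# and |a| <= a_max. If the move is long enough to reach v_max, the
# profile is trapezoidal (accelerate, cruise, decelerate); otherwise it
# is triangular with a lower peak velocity.

def min_time_profile(length, v_max, a_max):
    """Return (total_time, peak_velocity)."""
    d_ramp = v_max**2 / a_max            # distance to speed up and brake
    if length >= d_ramp:                 # trapezoidal profile
        return 2 * v_max / a_max + (length - d_ramp) / v_max, v_max
    v_peak = math.sqrt(length * a_max)   # triangular profile
    return 2 * v_peak / a_max, v_peak

t_trap, v_trap = min_time_profile(10.0, v_max=2.0, a_max=1.0)
t_tri, v_tri = min_time_profile(1.0, v_max=2.0, a_max=1.0)
```

For the 10-unit move the profile cruises at v_max = 2 and takes 7 time units; the 1-unit move never reaches v_max and peaks at 1.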
Inferring time derivatives including cell growth rates using Gaussian processes
Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta
2016-12-01
Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
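Because differentiation is a linear operator, a Gaussian process posterior over f also yields a posterior over f': the posterior mean of the derivative at t* is k'(t*, T)(K + sn^2 I)^(-1) y, where k' differentiates the kernel in its first argument. A minimal sketch with an RBF kernel and fixed (assumed, not fitted) hyperparameters:

```python
import numpy as np

# Infer f'(t_star) from noisy samples of f using GP regression with an
# RBF kernel k(t, t') = sf^2 * exp(-(t - t')^2 / (2 * ell^2)).

def gp_derivative(t_star, t, y, ell=1.0, sf=1.0, sn=0.1):
    K = sf**2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)
    alpha = np.linalg.solve(K + sn**2 * np.eye(len(t)), y)
    k = sf**2 * np.exp(-0.5 * (t_star - t)**2 / ell**2)
    dk = -(t_star - t) / ell**2 * k   # d/dt_star of the RBF kernel
    return dk @ alpha                 # posterior mean of f'(t_star)

t = np.linspace(-3, 3, 60)
y = np.sin(t)                          # data; true derivative is cos(t)
slope = gp_derivative(0.0, t, y)       # should be close to cos(0) = 1
```

The full method in the paper also returns error estimates from the posterior variance; the sketch keeps only the posterior mean for brevity.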
Statistical quality analysis of schedulers under soft-real-time constraints
Baarsma, H.E.; Hurink, Johann L.; Jansen, P.G.
2007-01-01
This paper describes an algorithm to determine the performance of real-time systems with tasks using stochastic processing times. Such an algorithm can be used for guaranteeing Quality of Service of periodic tasks with soft real-time constraints. We use a discrete distribution model of processing
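The core computation with discrete processing-time distributions can be sketched as a convolution followed by a tail sum; the task distributions and deadline below are made-up examples, not the paper's model.

```python
# The completion time of two independent tasks run back to back is
# distributed as the convolution of their processing-time distributions;
# the deadline-miss probability is the tail mass beyond the deadline.

def convolve(p, q):
    """p[t], q[t]: probability a task takes exactly t time units."""
    r = {}
    for t1, pr1 in p.items():
        for t2, pr2 in q.items():
            r[t1 + t2] = r.get(t1 + t2, 0.0) + pr1 * pr2
    return r

def miss_probability(dist, deadline):
    return sum(pr for t, pr in dist.items() if t > deadline)

task_a = {1: 0.5, 2: 0.5}
task_b = {1: 0.5, 3: 0.5}
total = convolve(task_a, task_b)          # completion-time distribution
p_miss = miss_probability(total, deadline=4)
```

Here the combined completion time is 2, 3, 4, or 5 units with probability 0.25 each, so a soft deadline of 4 is missed a quarter of the time.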
Rasouli, S.; Timmermans, H.J.P.
2014-01-01
The aim of this paper is to assess the impact of uncertain travel times as reflected in travel time variability on the outcomes of individuals’ activity–travel scheduling decisions, assuming they are faced with fixed space–time constraints and apply the set of decision rules that they have developed
Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints
Directory of Open Access Journals (Sweden)
Raphaël Beamonte
2016-01-01
Full Text Available Multicore systems are complex in that multiple processes run concurrently and can interfere with each other. Real-time systems add time constraints on top of that, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning, and is therefore not very accessible. Using modeling to generate source code or to represent an application's workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints on top of userspace and kernel traces. We introduce the constraint representation and show how traces can be used to follow the application's workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.
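A toy version of checking a timing constraint against a trace (the event format and the constraint itself are assumptions made for illustration, not the paper's model language):

```python
# Constraint: every 'req' event must be followed by a matching 'done'
# event for the same task within `deadline` time units. Scanning the
# trace once collects all violating task ids.

def check_deadline(trace, deadline):
    """trace: list of (timestamp, kind, task_id), sorted by timestamp."""
    pending, violations = {}, []
    for ts, kind, task in trace:
        if kind == "req":
            pending[task] = ts
        elif kind == "done" and task in pending:
            if ts - pending.pop(task) > deadline:
                violations.append(task)
    return violations

trace = [(0.0, "req", "t1"), (0.5, "done", "t1"),
         (1.0, "req", "t2"), (3.0, "done", "t2")]
late = check_deadline(trace, deadline=1.0)
```

Task t1 completes within its deadline; task t2 takes 2.0 units against a 1.0 deadline and is flagged.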
A Pilot Study Examining the Effects of Time Constraints on Student Performance in Accounting Classes
Morris, David E., Sr.; Scott, John
2017-01-01
The purpose of this study was to examine the effects, if any, of time constraints on the success of accounting students completing exams. This study examined how time allowed to take exams affected the grades on examinations in three different accounting classes. Two were sophomore classes and one was a senior accounting class. This limited pilot…
A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints
Directory of Open Access Journals (Sweden)
L. Kantha
2016-01-01
Full Text Available The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed red shift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^-52 m^-2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest red shift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^-2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future and not directly on G.
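For orientation, the governing relation referred to above can be sketched as the flat Friedmann equation with the abstract's proposed time dependences appended (a schematic rendering based on standard FRW cosmology, not an equation quoted from the paper):

```latex
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G(\tau)}{3}\,\rho + \frac{\Lambda(\tau)\,c^{2}}{3},
\qquad \Lambda(\tau) \sim \tau^{-2}, \qquad G(\tau) \sim \tau^{\,n}.
```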
Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.
2017-12-01
Recent receiver function studies of the North American craton suggest the presence of significant layering within the cratonic lithosphere, with significant lateral variations in the depth of the velocity discontinuities. These structural boundaries have been confirmed recently using a transdimensional Markov Chain Monte Carlo approach (TMCMC), inverting surface wave dispersion data and converted phases simultaneously (Calò et al. 2016; Roy and Romanowicz 2017). The lateral resolution of upper mantle structure can be improved with a high density of broadband seismic stations, or with a sparse network using full waveform inversion based on numerical wavefield computation methods such as the Spectral Element Method (SEM). However, inverting for discontinuities with strong topography, such as MLDs or the LAB, presents challenges in an inversion framework, both computationally, due to the short periods required, and from the point of view of stability of the inversion. To overcome these limitations, and to improve resolution of layering in the upper mantle, we are developing a methodology that combines full waveform inversion tomography and information provided by short period seismic observables. We have extended the 30 1D radially anisotropic shear velocity profiles of Calò et al. 2016 to several other stations, for which we used a recent shear velocity model (Clouzet et al. 2017) as a constraint in the modeling. These 1D profiles, including both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth), are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built after 1) homogenization of the layered 1D models and 2) interpolation between the 1D smooth profiles and the model of Clouzet et al. 2017, resulting in a smooth 3D starting model. Waveforms used in the inversion are filtered at periods longer than 30s. We use the SEM code "RegSEM" for forward computations and a quasi-Newton inversion
Directory of Open Access Journals (Sweden)
Milla Salin
2014-11-01
Full Text Available The aim of this study was to analyze mothers' working time patterns across 22 European countries. The focus was on three questions: how much mothers prefer to work, how much they actually work, and to what degree their preferred and actual working times are (in)consistent with each other. The focus was on cross-national differences in mothers' working time patterns, comparison of mothers' working times to those of childless women and fathers, as well as on the individual- and country-level factors that explain the variation between them. In the theoretical background, the point of departure was an integrative theoretical approach, assuming that there are various kinds of explanations for the differences in mothers' working time patterns (structural, cultural and institutional) and that these factors operate at two levels: the individual and the country level. Data were extracted from the European Social Survey (ESS) 2010/2011. The results showed that mothers' working time patterns, both preferred and actual working times, varied across European countries. Four clusters were formed to illustrate the differences. In the full-time pattern, full-time work was the most important form of work, leaving all other working time forms marginal. The full-time pattern was observed in terms of preferred working times in Bulgaria and Portugal. In polarised-pattern countries, full-time work was also important, but it was accompanied by a large share of mothers not working at all. In the case of preferred working times, many Eastern and Southern European countries followed it, whereas in terms of actual working times it included all Eastern and Southern European countries as well as Finland. The combination pattern was characterised by the importance of long part-time hours and full-time work. It was the preferred working time pattern in the Nordic countries, France, Slovenia, and Spain, but Belgium, Denmark, France, Norway, and Sweden
Yurtkuran, Alkın
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particle's boundary constraints associated with the corresponding time windows of customers, is introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834
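The penalty-based handling of time-window violations can be sketched as follows. The instance data and the penalty weight are invented for illustration, and the attraction-repulsion search itself is omitted; only the route evaluation with penalized infeasibility is shown.

```python
# Evaluate a TSPTW route: accumulate travel cost, wait if a customer's
# window has not opened yet, and add a large penalty proportional to
# any lateness, so infeasible routes score worse than feasible ones.

def route_cost(route, travel, windows, penalty=1000.0):
    """route: node sequence; travel[(i, j)]: travel time;
    windows[j]: (earliest, latest) service window at node j."""
    t, cost, violation = 0.0, 0.0, 0.0
    for i, j in zip(route, route[1:]):
        d = travel[(i, j)]
        t += d
        cost += d
        early, late = windows[j]
        if t < early:
            t = early                  # wait for the window to open
        elif t > late:
            violation += t - late      # lateness, penalized below
    return cost + penalty * violation

travel = {(0, 1): 2.0, (1, 2): 2.0}
windows = {1: (0.0, 5.0), 2: (0.0, 3.0)}
cost = route_cost([0, 1, 2], travel, windows)
```

In this tiny instance the route reaches customer 2 at time 4 against a latest time of 3, so one unit of lateness adds the full penalty to the 4-unit travel cost.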
Energy Technology Data Exchange (ETDEWEB)
Plotkin, S.; Stephens, T.; McManus, W.
2013-03-01
Scenarios of new vehicle technology deployment serve various purposes; some will seek to establish plausibility. This report proposes two reality checks for scenarios: (1) implications of manufacturing constraints on timing of vehicle deployment and (2) investment decisions required to bring new vehicle technologies to market. An estimated timeline of 12 to more than 22 years from initial market introduction to saturation is supported by historical examples and based on the product development process. Researchers also consider the series of investment decisions to develop and build the vehicles and their associated fueling infrastructure. A proposed decision tree analysis structure could be used to systematically examine investors' decisions and the potential outcomes, including consideration of cash flow and return on investment. This method requires data or assumptions about capital cost, variable cost, revenue, timing, and probability of success/failure, and would result in a detailed consideration of the value proposition of large investments and long lead times. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency effort to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.
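A minimal roll-back of such a decision tree can illustrate the proposed analysis structure; all probabilities, costs, and values below are hypothetical placeholders, not figures from the report.

```python
# Roll back a decision tree: a decision node pays its cost now and
# receives the probability-weighted value of its branches; leaves carry
# terminal values (e.g. discounted cash flows). The result is an
# expected net present value for the investment.

def expected_value(node):
    """node: ('leaf', value) or ('decision', cost, [(prob, child), ...])."""
    if node[0] == "leaf":
        return node[1]
    _, cost, branches = node
    return -cost + sum(p * expected_value(child) for p, child in branches)

# Hypothetical: invest 100 now; 60% chance the technology succeeds
# (worth 300 discounted), 40% chance it fails (salvage value 20).
tree = ("decision", 100.0, [(0.6, ("leaf", 300.0)),
                            (0.4, ("leaf", 20.0))])
env = expected_value(tree)
```

Nesting decision nodes inside branches extends this to multi-stage investments with long lead times, which is the situation the report describes.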
Resolving relative time expressions in Dutch text with Constraint Handling Rules
DEFF Research Database (Denmark)
van de Camp, Matje; Christiansen, Henning
2012-01-01
It is demonstrated how Constraint Handling Rules can be applied for resolution of indirect and relative time expressions in text as part of a shallow analysis, following a specialized tagging phase. A method is currently under development, optimized for a particular corpus of historical biographies...
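The paper's resolution is written in Constraint Handling Rules on top of Prolog; purely to illustrate the underlying idea in a different language, here is the same resolution step sketched in Python, where a relative expression is interpreted against a running reference date established by the narrative.

```python
from datetime import date

# Resolve a mix of absolute dates ("in 1860") and relative expressions
# ("two years later") by threading a reference date through the text:
# an absolute date resets the reference, a relative one offsets it.

def resolve(events, start):
    """events: list of ('abs', date) or ('rel', year_offset)."""
    ref, resolved = start, []
    for kind, value in events:
        ref = value if kind == "abs" else ref.replace(year=ref.year + value)
        resolved.append(ref)
    return resolved

timeline = resolve([("abs", date(1860, 5, 1)),   # "in May 1860"
                    ("rel", 2),                  # "two years later"
                    ("rel", -1)],                # "a year earlier"
                   start=date(1850, 1, 1))
```

The CHR formulation adds what this sketch lacks: constraints from several expressions can propagate and resolve each other in either direction.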
Yang, He; Hutchinson, Susan; Zinn, Harry; Watson, Alan
2011-01-01
How people make choices about activity engagement during discretionary time is a topic of increasing interest to those studying quality of life issues. Assuming choices are made to maximize individual welfare, several factors are believed to influence these choices. Constraints theory from the leisure research literature suggests these choices are…
Selecting local constraint for alignment of batch process data with dynamic time warping
DEFF Research Database (Denmark)
Spooner, Max Peter; Kold, David; Kulahci, Murat
2017-01-01
... may be interpreted as a progress signature of the batch which may be appended to the aligned data for further analysis. For the warping function to be a realistic reflection of the progress of a batch, it is necessary to impose some constraints on the dynamic time warping algorithm, to avoid...
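A minimal illustration of a local constraint on dynamic time warping is the Sakoe-Chiba band, which forbids the warping path from straying more than `band` steps off the diagonal. This sketch assumes scalar sequences; the paper's constraint-selection procedure for batch process data is more elaborate:

```python
# Banded DTW: the warping path is confined to |i - j| <= band, which is one
# common way to keep the alignment a realistic reflection of batch progress.

def dtw_banded(x, y, band):
    """DTW distance between sequences x and y with a Sakoe-Chiba band."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        # Only cells within the band are reachable.
        for j in range(max(1, i - band), min(m, i + band) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A narrow band both speeds up the computation and rules out degenerate warpings that map long stretches of one batch onto a single point of another.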
Schmidt, Georg
Today, healthcare facilities are highly dependent on the private sector to keep their medical equipment functioning. Moreover, private sector involvement becomes particularly important for the supply of spare parts and consumables. However, in times of armed conflict, the capacity of the corporate world appears to be seriously hindered. Accordingly, this study investigates the influence of armed conflict on the private medical equipment sector. This study follows a qualitative approach by conducting 19 interviews with representatives of the corporate world in an active conflict zone. A semistructured interview guide, consisting of 10 questions, was used to examine the constraints of this sector. The results reveal that the lack of skilled personnel, complicated importation procedures, and a decrease in financial capacity are the major constraints faced by private companies dealing in medical equipment in conflict zones. Even where no official sanctions or embargoes on medical items exist, constraints on trading medical equipment are clearly recognizable. Countries at war would benefit from a centralized structure that deals with the importation procedures for medical items, to assist local companies in their purchasing procedures. A high degree of adaptation is needed to continue operating, despite the emerging constraints of armed conflict. Future studies might examine the constraints faced by manufacturers outside the conflict zone in exporting medical items to the country at war.
Walking to the Beat of Their Own Drum: How Children and Adults Meet Timing Constraints
Gill, Simone V.
2015-01-01
Walking requires adapting to meet task constraints. Between 5- and 7-years old, children’s walking approximates adult walking without constraints. To examine how children and adults adapt to meet timing constraints, 57 5- to 7-year olds and 20 adults walked to slow and fast audio metronome paces. Both children and adults modified their walking. However, at the slow pace, children had more trouble matching the metronome compared to adults. The youngest children’s walking patterns deviated most from the slow metronome pace, and practice improved their performance. Five-year olds were the only group that did not display carryover effects to the metronome paces. Findings are discussed in relation to what contributes to the development of adaptation in children. PMID:26011538
Timing Constraints Based High Performance Des Design And Implementation On 28nm FPGA
DEFF Research Database (Denmark)
Thind, Vandana; Pandey, Sujeet; Hussain, Dil muhammed Akbar
2018-01-01
In this work, we implement the DES algorithm on a 28nm Artix-7 FPGA. To achieve the high-performance design goal, we use minimum period, maximum frequency, minimum low pulse and minimum high pulse for different cases of worst-case slack, maximum delay, setup time, hold time and data skew path....... The cases analyzed, namely worst-case slack, best case achievable, timing error and timing score, help in differentiating the amount of timing constraint at two different frequencies. Our timing analysis shows a maximum of 19.56% variation in worst-case slack, 0...
International Nuclear Information System (INIS)
Guardiola, Carlos; Climent, Héctor; Pla, Benjamín; Reig, Alberto
2017-01-01
Highlights: • Optimal Control is applied to heat release shaping in internal combustion engines. • Optimal Control allows assessing engine performance against a realistic reference. • The proposed method gives a target heat release law for defining control strategies. - Abstract: The present paper studies the optimal heat release law in a Diesel engine to maximise the indicated efficiency subject to different constraints, namely: maximum cylinder pressure, maximum cylinder pressure derivative, and NO_x emission restrictions. With this objective, a simple but representative model of the combustion process has been implemented. The model consists of a 0D energy balance aimed at providing the pressure and temperature evolutions in the high-pressure loop of the engine thermodynamic cycle from the gas conditions at intake valve closing and the heat release law. The gas pressure and temperature evolutions allow computing the engine efficiency and NO_x emissions. The comparison between model and experimental results shows that, despite its simplicity, the model is able to reproduce the engine efficiency and NO_x emissions. After model identification and validation, the optimal control problem is posed and solved by means of Dynamic Programming (DP). Moreover, if only pressure constraints are considered, the paper proposes a solution that reduces the computational cost of the DP strategy by two orders of magnitude for the case analysed. The solution provides a target heat release law to define injection strategies, but also a more realistic maximum efficiency boundary than the ideal thermodynamic cycles usually employed to estimate maximum engine efficiency.
Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints
Cassandras, Christos G.; Zhuang, Shixin
2005-11-01
Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
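The core energy argument behind dynamic voltage scaling can be shown with the standard convex power model (power grows roughly as frequency cubed, so energy per workload grows as frequency squared): the slowest frequency that still meets the deadline minimizes energy. The cubic model and the numbers below are textbook assumptions for illustration, not the paper's intra-task controller:

```python
# Why voltage/frequency scaling saves energy under a deadline: with a
# convex power model, running slower-but-longer beats racing then idling.

def energy(cycles, freq):
    """Energy = power * time, with power ~ freq**3 and time = cycles / freq."""
    return (freq ** 3) * (cycles / freq)  # proportional to cycles * freq**2

def min_energy_frequency(cycles, deadline, f_max):
    """Slowest feasible frequency: finish exactly at the deadline if possible."""
    f_needed = cycles / deadline
    if f_needed > f_max:
        raise ValueError("task cannot meet its deadline even at f_max")
    return f_needed

# A 1e9-cycle task with a 2 s deadline needs only half of a 1 GHz clock.
f = min_energy_frequency(cycles=1e9, deadline=2.0, f_max=1e9)
```

The hard part, which the paper addresses, is choosing such speeds online when task arrival times are uncertain and the constraints are hard or soft.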
Directory of Open Access Journals (Sweden)
He Cheng
2014-02-01
It is known that the single-machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases with the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties that enable us to reduce the range of the enumeration algorithm.
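The classical preemptive building block behind such algorithms is shortest remaining processing time (SRPT), which minimizes total completion time with release dates when deadlines are ignored; the paper's O(n²) algorithm adds the deadline handling for the agreeable case. A unit-time-step sketch, assuming integer release and processing times:

```python
# SRPT on one machine: at each time step, run the released, unfinished job
# with the least remaining work. Deadlines are deliberately omitted here.

def srpt_total_completion(jobs):
    """jobs: list of (release, processing). Returns total completion time."""
    remaining = [p for _, p in jobs]
    completion = [0] * len(jobs)
    t = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (rel, _) in enumerate(jobs)
                 if rel <= t and remaining[i] > 0]
        if not ready:
            t += 1          # idle until the next release
            continue
        i = min(ready, key=lambda k: remaining[k])  # shortest remaining first
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            completion[i] = t
    return sum(completion)
```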
New constraints on time-dependent variations of fundamental constants using Planck data
Hart, Luke; Chluba, Jens
2018-02-01
Observations of the cosmic microwave background (CMB) today allow us to answer detailed questions about the properties of our Universe, targeting both standard and non-standard physics. In this paper, we study the effects of varying fundamental constants (i.e. the fine-structure constant, αEM, and electron rest mass, me) around last scattering using the recombination codes COSMOREC and RECFAST++. We approach the problem in a pedagogical manner, illustrating the importance of various effects on the free electron fraction, Thomson visibility function and CMB power spectra, highlighting various degeneracies. We demonstrate that the simpler RECFAST++ treatment (based on a three-level atom approach) can be used to accurately represent the full computation of COSMOREC. We also include explicit time-dependent variations using a phenomenological power-law description. We reproduce previous Planck 2013 results in our analysis. Assuming constant variations relative to the standard values, we find the improved constraints αEM/αEM, 0 = 0.9993 ± 0.0025 (CMB only) and me/me, 0 = 1.0039 ± 0.0074 (including BAO) using Planck 2015 data. For a redshift-dependent variation, αEM(z) = αEM(z0) [(1 + z)/1100]p with αEM(z0) ≡ αEM, 0 at z0 = 1100, we obtain p = 0.0008 ± 0.0025. Allowing simultaneous variations of αEM(z0) and p yields αEM(z0)/αEM, 0 = 0.9998 ± 0.0036 and p = 0.0006 ± 0.0036. We also discuss combined limits on αEM and me. Our analysis shows that existing data are not only sensitive to the value of the fundamental constants around recombination but also its first time derivative. This suggests that a wider class of varying fundamental constant models can be probed using the CMB.
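The phenomenological power law quoted above is straightforward to evaluate; this helper simply restates αEM(z) = αEM(z0)·[(1 + z)/1100]^p with the pivot at z0 = 1100, as in the abstract:

```python
# Redshift-dependent fine-structure constant under the power-law model
# used in the paper's analysis. alpha_z0 is the value at the pivot epoch.

def alpha_em(z, alpha_z0, p, z_pivot=1100.0):
    """Fine-structure constant at redshift z under the power-law model."""
    return alpha_z0 * ((1.0 + z) / z_pivot) ** p
```

For p = 0 the model reduces to a constant shift, which is the case the quoted bounds αEM/αEM,0 = 0.9993 ± 0.0025 constrain.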
Lam, Tram Kim; Schully, Sheri D; Rogers, Scott D; Benkeser, Rachel; Reid, Britt; Khoury, Muin J
2013-04-01
In a time of scientific and technological developments and budgetary constraints, the National Cancer Institute's (NCI) Provocative Questions Project offers a novel funding mechanism for cancer epidemiologists. We reviewed the purposes underlying the Provocative Questions Project, present information on the contributions of epidemiologic research to the current Provocative Questions portfolio, and outline opportunities that the cancer epidemiology community might capitalize on to advance a research agenda that spans a translational continuum from scientific discoveries to population health impact.
Scheduling of Fault-Tolerant Embedded Systems with Soft and Hard Timing Constraints
DEFF Research Database (Denmark)
Izosimov, Viacheslav; Pop, Paul; Eles, Petru
2008-01-01
In this paper we present an approach to the synthesis of fault-tolerant schedules for embedded applications with soft and hard real-time constraints. We are interested to guarantee the deadlines for the hard processes even in the case of faults, while maximizing the overall utility. We use time....../utility functions to capture the utility of soft processes. Process re-execution is employed to recover from multiple faults. A single static schedule computed off-line is not fault tolerant and is pessimistic in terms of utility, while a purely online approach, which computes a new schedule every time a process...
Quantum states and the Hadamard form. III. Constraints in cosmological space-times
International Nuclear Information System (INIS)
Najmi, A.; Ottewill, A.C.
1985-01-01
We examine the constraints on the construction of Fock spaces for scalar fields in spatially flat Robertson-Walker space-times imposed by requiring that the vacuum state of the theory have a two-point function possessing the Hadamard singularity structure required by standard renormalization theory. It is shown that any such vacuum state must be a second-order adiabatic vacuum. We discuss the global requirements on the two-point function for it to possess the Hadamard form at all times if it possesses it at one time
Automatic pickup of arrival time of channel wave based on multi-channel constraints
Wang, Bao-Li
2018-03-01
Accurately detecting the arrival time of a channel wave in a coal seam is very important for in-seam seismic data processing. The arrival time greatly affects the accuracy of the channel wave inversion and the computed tomography (CT) result. However, because the signal-to-noise ratio of in-seam seismic data is reduced by the long wavelength and strong frequency dispersion, accurately timing the arrival of channel waves is extremely difficult. For this purpose, we propose a method that automatically picks up the arrival time of channel waves based on multi-channel constraints. We first estimate the Jaccard similarity coefficient of two ray paths, then apply it as a weight coefficient for stacking the multichannel dispersion spectra. The reasonableness and effectiveness of the proposed method is verified in an actual data application. Most importantly, the method increases the degree of automation and the pickup precision of the channel-wave arrival time.
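The weighting idea can be sketched directly: compute the Jaccard similarity coefficient of two ray paths and use it to weight each channel's dispersion spectrum before stacking. Representing ray paths as sets of traversed cells is an assumption for illustration; the paper works with actual path geometries:

```python
# Jaccard-weighted stacking of multichannel dispersion spectra: channels
# whose ray paths overlap the target path more contribute more.

def jaccard(path_a, path_b):
    """Jaccard similarity coefficient of two ray paths (as cell sets)."""
    inter = len(path_a & path_b)
    union = len(path_a | path_b)
    return inter / union if union else 0.0

def weighted_stack(target_path, others):
    """others: list of (path, spectrum). Returns the weighted-sum spectrum."""
    length = len(others[0][1])
    stacked = [0.0] * length
    for path, spectrum in others:
        w = jaccard(target_path, path)
        for k in range(length):
            stacked[k] += w * spectrum[k]
    return stacked
```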
Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints
Directory of Open Access Journals (Sweden)
Cuixia Miao
2015-01-01
We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound for any optimal schedule. For the total completion time minimization problem, we analyze the strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.
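A sketch of list scheduling under simple linear deterioration, where a job with rate b started at time s takes b·s to process and hence completes at s·(1 + b). The start time t0 > 0 and the absence of unavailability intervals are simplifying assumptions; handling those intervals is the paper's contribution:

```python
# List scheduling with linearly deteriorating jobs: assign each job, in
# list order, to the machine that becomes free earliest.

def list_schedule_makespan(rates, m, t0=1.0):
    """rates: deteriorating rates b_j in list order; m machines; makespan."""
    free = [t0] * m                        # next free time of each machine
    for b in rates:
        i = free.index(min(free))          # earliest available machine
        free[i] = free[i] * (1.0 + b)      # completion = start * (1 + b)
    return max(free)
```

Note the multiplicative structure: on one machine the makespan is t0 times the product of all (1 + b_j), independent of order, which is why machine availability (not sequencing) drives the hardness here.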
Directory of Open Access Journals (Sweden)
Seetaiah KILARU
2015-12-01
Popular network architectures follow packet-based designs instead of conventional time-division multiplexing. Existing Ethernet is asynchronous in nature and was not designed with timing-transfer constraints in mind. To meet the next-generation network challenges of efficient bandwidth use and faster data rates, we have to deploy networks with lower latency. This can be achieved with Synchronous Ethernet (SyncE). In SyncE, a Phase-Locked Loop (PLL) is used to filter the jitter from the clock recovery circuit before the recovered clock feeds the transmission device. The network must be designed so that the functions of Ethernet run normally even though a timing path is introduced at the physical layer. This paper gives a detailed outlook on how SyncE is achieved from the asynchronous format. A reference model of 100Base-TX/FX is analyzed with respect to timing and interference constraints. Finally, the data rate improvement achieved with the proposed method is analyzed.
Reasoning about real-time systems with temporal interval logic constraints on multi-state automata
Gabrielian, Armen
1991-01-01
Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to model formally a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.
The Optimization of Transportation Costs in Logistics Enterprises with Time-Window Constraints
Directory of Open Access Journals (Sweden)
Qingyou Yan
2015-01-01
This paper presents a model for solving a multiobjective vehicle routing problem with soft time-window constraints that specify the earliest and latest arrival times of customers. If a customer is serviced before the earliest specified arrival time, extra inventory costs are incurred. If the customer is serviced after the latest arrival time, penalty costs must be paid. Both the total transportation cost and the required fleet size are minimized in this model, which also accounts for the given capacity limitations of each vehicle. The total transportation cost consists of direct transportation costs, extra inventory costs, and penalty costs. This multiobjective optimization is solved by using a modified genetic algorithm approach. The output of the algorithm is a set of optimal solutions that represent the trade-off between total transportation cost and the fleet size required to service customers. The influential impact of these two factors is analyzed through the use of a case study.
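The soft-time-window cost structure described above (inventory cost for early arrival, penalty cost for late arrival, nothing inside the window) reduces to a small piecewise function. Linear cost rates are an illustrative assumption:

```python
# Extra service cost at one customer under a soft time window [earliest,
# latest]: early arrival incurs holding/inventory cost, late arrival a penalty.

def time_window_cost(arrival, earliest, latest,
                     inventory_rate=1.0, penalty_rate=2.0):
    """Extra cost of arriving at `arrival` given window [earliest, latest]."""
    if arrival < earliest:
        return inventory_rate * (earliest - arrival)   # wait/holding cost
    if arrival > latest:
        return penalty_rate * (arrival - latest)       # lateness penalty
    return 0.0
```

In the full model this term is summed over all customers of a route and added to the direct transportation cost before the genetic algorithm evaluates a solution.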
New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints
Directory of Open Access Journals (Sweden)
Qing Lu
2014-01-01
This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds, and the uncertainty is of polytopic type. A new model is proposed to approximate the delay. For the state-feedback MPC design objective, we formulate an optimization problem. Under model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.
Location-Dependent Query Processing Under Soft Real-Time Constraints
Directory of Open Access Journals (Sweden)
Zoubir Mammeri
2009-01-01
In recent years, mobile devices and applications have developed rapidly. In the database field, this development required methods to handle new query types like location-dependent queries (i.e., queries whose results depend on the issuer's location). Although several studies have addressed problems related to location-dependent query processing, few have considered the timing requirements that may be associated with queries (i.e., query results must be delivered to mobile clients on time). The main objective of this paper is to propose a solution for location-dependent query processing under soft real-time constraints. Hence, we propose methods that take client location-dependency into account and maximize the percentage of queries respecting their deadlines. We validate our proposal by implementing a prototype based on the Oracle DBMS. Performance evaluation results show that the proposed solution optimizes the percentage of queries meeting their deadlines and the communication cost.
Impact Angle and Time Control Guidance Under Field-of-View Constraints and Maneuver Limits
Shim, Sang-Wook; Hong, Seong-Min; Moon, Gun-Hee; Tahk, Min-Jea
2018-04-01
This paper proposes a guidance law which considers the constraints of seeker field-of-view (FOV) as well as the requirements on impact angle and time. The proposed guidance law is designed for a constant speed missile against a stationary target. The guidance law consists of two terms of acceleration commands. The first one is to achieve zero-miss distance and the desired impact angle, while the second is to meet the desired impact time. To consider the limits of FOV and lateral maneuver capability, a varying-gain approach is applied on the second term. Reduction of realizable impact times due to these limits is then analyzed by finding the longest course among the feasible ones. The performance of the proposed guidance law is demonstrated by numerical simulation for various engagement conditions.
Stack Memory Implementation and Analysis of Timing Constraint, Power and Memory using FPGA
DEFF Research Database (Denmark)
Thind, Vandana; Pandey, Nisha; Pandey, Bishwajeet
2017-01-01
Abstract — In this work, a stack memory algorithm is implemented on a number of FPGA platforms (Virtex-4, Virtex-5, Virtex-6, Virtex-6 low power and Virtex-7 low voltage) and detailed observations are made of the timing constraints, memory utilization and power dissipation, so that the resources used to realize the design are not wasted and an energy-efficient design is obtained...... Stack memory is an approach in which information is entered into and deleted from the stack memory segment in a last-in-first-out pattern. There are several ways of implementing the stack memory algorithm, but Virtex-4 and Virtex-7 low voltage were found to be the most efficient platforms for its operation. The developed system is energy efficient, as the algorithm ensures less memory utilization, less power consumption and a short signal travel time.
Terrestrial carbon turnover time constraints on future carbon cycle-climate feedback
Fan, N.; Carvalhais, N.; Reichstein, M.
2017-12-01
Understanding the terrestrial carbon cycle-climate feedback is essential to reduce the uncertainties resulting from the between-model spread in prognostic simulations (Friedlingstein et al., 2006). One perspective is to investigate which factors control the variability of the mean residence times of carbon in the land surface, and how these may change in the future, consequently affecting the response of terrestrial ecosystems to changes in climate as well as other environmental conditions. The carbon turnover time of the whole ecosystem is a dynamic parameter that represents how fast the carbon cycle circulates. Turnover time τ is an essential property for understanding the carbon exchange between the land and the atmosphere. Although current Earth System Models (ESMs), supported by GVMs for the description of the land surface, show a strong convergence in GPP estimates, they tend to show a wide range of simulated turnover times (Carvalhais, 2014). Thus, there is an emergent need for constraints on the projected response of the balance between terrestrial carbon fluxes and carbon stocks, which would give us more certainty about the response of the carbon cycle to climate change. However, the difficulty of obtaining such a constraint is partly due to the lack of observational data on the temporal change of terrestrial carbon stocks. Since new datasets of carbon stocks, such as SoilGrid (Hengl et al., 2017), and fluxes, such as GPP (Jung et al., 2017), are available, improvement in estimating turnover time can be achieved. In addition, previous studies ignored certain aspects such as the relationship between τ and nutrients, fires, etc. We would like to investigate τ and its role in the carbon cycle by combining observationally derived datasets and state-of-the-art model simulations.
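The whole-ecosystem diagnostic at the center of such studies is simply turnover time τ = carbon stock / carbon flux, in years when the stock is in PgC and the flux in PgC/yr. The global-scale magnitudes below are order-of-magnitude assumptions for illustration only, not values from the study:

```python
# Mean whole-ecosystem carbon turnover time: stock divided by throughput
# flux (e.g. GPP or NPP). Units: PgC and PgC/yr give tau in years.

def turnover_time(carbon_stock_pgc, carbon_flux_pgc_per_yr):
    """Mean turnover time of the terrestrial carbon pool, in years."""
    return carbon_stock_pgc / carbon_flux_pgc_per_yr

# Roughly global-scale magnitudes (illustrative assumptions only):
tau = turnover_time(carbon_stock_pgc=2300.0, carbon_flux_pgc_per_yr=120.0)
```

The model spread the abstract mentions arises largely because simulated stocks diverge even when simulated fluxes (the denominator) agree.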
Smalle, Eleonore H M; Muylle, Merel; Szmalec, Arnaud; Duyck, Wouter
2017-11-01
Speech errors typically respect the speaker's implicit knowledge of language-wide phonotactics (e.g., /t/ cannot be a syllable onset in the English language). Previous work demonstrated that adults can learn novel experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on another phoneme within the sequence (e.g., /t/ can only be an onset if the medial vowel is /i/), but not earlier than the second day of training. Thus far, no work has been done with children. In the current 4-day experiment, a group of Dutch-speaking adults and 9-year-old children were asked to rapidly recite sequences of novel word forms (e.g., kieng nief siet hiem ) that were consistent with phonotactics of the spoken Dutch language. Within the procedure of the experiment, some consonants (i.e., /t/ and /k/) were restricted to the onset or coda position depending on the medial vowel (i.e., /i/ or "ie" vs. /øː/ or "eu"). Speech errors in adults revealed a learning effect for the novel constraints on the second day of learning, consistent with earlier findings. A post hoc analysis at the trial level showed that learning was statistically reliable after an exposure of 120 sequence trials (including a consolidation period). However, children started learning the constraints already on the first day. More precisely, the effect appeared significantly after an exposure of 24 sequences. These findings indicate that children are rapid implicit learners of novel phonotactics, which bears important implications for theorizing about developmental sensitivities in language learning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The impact of weight classification on safety: timing steps to adapt to external constraints
Gill, S.V.
2015-01-01
Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults' ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between-group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Comparisons to the metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658
Finite Time Merton Strategy under Drawdown Constraint: A Viscosity Solution Approach
International Nuclear Information System (INIS)
Elie, R.
2008-01-01
We consider the optimal consumption-investment problem under the drawdown constraint, i.e. the wealth process never falls below a fixed fraction of its running maximum. We assume that the risky asset is driven by the constant-coefficient Black and Scholes model and we consider a general class of utility functions. On an infinite time horizon, Elie and Touzi (Preprint, [2006]) provided the value function as well as the optimal consumption and investment strategy in explicit form. In a more realistic setting, we consider here an agent optimizing its consumption-investment strategy on a finite time horizon. The value function is interpreted as the unique discontinuous viscosity solution of its corresponding Hamilton-Jacobi-Bellman equation. This leads to a numerical approximation of the value function and allows for a comparison with the explicit solution in the infinite horizon case.
The ESS and replicator equation in matrix games under time constraints.
Garay, József; Cressman, Ross; Móri, Tamás F; Varga, Tamás
2018-06-01
Recently, we introduced the class of matrix games under time constraints and characterized the concept of (monomorphic) evolutionarily stable strategy (ESS) in them. We are now interested in how the ESS is related to the existence and stability of equilibria for polymorphic populations. We point out that, although the ESS may no longer be a polymorphic equilibrium, there is a connection between them. Specifically, the polymorphic state at which the average strategy of the active individuals in the population is equal to the ESS is an equilibrium of the polymorphic model. Moreover, in the case when there are only two pure strategies, a polymorphic equilibrium is locally asymptotically stable under the replicator equation for the pure-strategy polymorphic model if and only if it corresponds to an ESS. Finally, we prove that a strict Nash equilibrium is a pure-strategy ESS that is a locally asymptotically stable equilibrium of the replicator equation in n-strategy time-constrained matrix games.
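For the two-pure-strategy case discussed above, the replicator dynamics can be sketched with Euler steps. The time-constraint mechanism (only the active fraction of individuals plays) is abstracted away in this toy version, and the payoff matrix below is a made-up Hawk-Dove-like example with an interior equilibrium at x = 0.5:

```python
# Two-strategy replicator dynamics: dx/dt = x(1-x)(f1 - f2), where f1, f2
# are the expected payoffs of the two pure strategies against the mix.

def replicator_step(x, payoff, dt=0.01):
    """One Euler step for the frequency x of strategy 1 under a 2x2 game."""
    f1 = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f2 = payoff[1][0] * x + payoff[1][1] * (1 - x)
    return x + dt * x * (1 - x) * (f1 - f2)

def evolve(x0, payoff, steps=20000):
    """Iterate the dynamics from initial frequency x0."""
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

# Interior equilibrium at x = 0.5 attracts the polymorphic population,
# mirroring the ESS/stability correspondence the paper proves.
x_final = evolve(0.9, [[0.0, 3.0], [1.0, 2.0]])
```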
First passage time for a diffusive process under a geometric constraint
International Nuclear Information System (INIS)
Tateishi, A A; Michels, F S; Dos Santos, M A F; Lenzi, E K; Ribeiro, H V
2013-01-01
We investigate the solutions, survival probability, and first passage time for a two-dimensional diffusive process subjected to the geometric constraints of a backbone structure. We consider this process governed by a fractional Fokker–Planck equation by taking into account the boundary conditions ρ(0,y;t) = ρ(∞,y;t) = 0, ρ(x, ± ∞;t) = 0, and an arbitrary initial condition. Our results show an anomalous spreading and, consequently, an unusual behavior for the survival probability and for the first passage time distribution that may be characterized by different regimes. In addition, depending on the choice of the parameters present in the fractional Fokker–Planck equation, the survival probability indicates that part of the system may be trapped in the branches of the backbone structure.
Memory State Feedback RMPC for Multiple Time-Delayed Uncertain Linear Systems with Input Constraints
Directory of Open Access Journals (Sweden)
Wei-Wei Qin
2014-01-01
This paper focuses on the problem of asymptotic stabilization for a class of discrete-time multiple time-delayed uncertain linear systems with input constraints. Based on the predictive control principle of receding horizon optimization, a delayed-state-dependent quadratic function is incorporated into the MPC problem formulation. By developing a memory state feedback controller, the information in the delayed plant states can be taken into full consideration. The MPC problem is formulated to minimize the upper bound of the infinite horizon cost subject to sufficient conditions. Then, based on a Lyapunov-Krasovskii function, a delay-dependent sufficient condition in terms of a linear matrix inequality (LMI) is derived to design a robust MPC algorithm. Finally, digital simulation results demonstrate the validity of the proposed method.
Two-agent cooperative search using game models with endurance-time constraints
Sujit, P. B.; Ghose, Debasish
2010-07-01
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths are formed using these cells which the game theoretical strategies use to select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte-Carlo simulations are carried out which show the superiority of the game theoretical strategies over greedy strategy for different look ahead step length paths. Within the game theoretical strategies, non-cooperative Nash and cooperative strategy perform similarly in an ideal case, but Nash strategy performs better than the cooperative strategy when the perceived information is different. We also propose a heuristic based on partitioning of the search space into sectors to reduce computational overhead without performance degradation.
Variance-Constrained Robust Estimation for Discrete-Time Systems with Communication Constraints
Directory of Open Access Journals (Sweden)
Baofeng Wang
2014-01-01
Full Text Available This paper is concerned with a new filtering problem in networked control systems (NCSs) subject to limited communication capacity, which includes measurement quantization, random transmission delay, and packet loss. The measurements are first quantized via a logarithmic quantizer and then transmitted through a digital communication network with random delay and packet loss. These three communication-constraint phenomena, which can be seen as a class of uncertainties, are formulated as a stochastic parameter uncertainty system. The purpose of the paper is to design a linear filter such that, for all the communication constraints, the error state of the filtering process is mean-square bounded and the steady-state variance of the estimation error for each state is no more than the individually prescribed upper bound. It is shown that the desired filter can effectively be obtained if there are positive definite solutions to a couple of algebraic Riccati-like inequalities or linear matrix inequalities. Finally, an illustrative numerical example is presented to demonstrate the effectiveness and flexibility of the proposed design approach.
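A minimal sketch of the logarithmic quantizer step, assuming the usual level set {±ρ^i·u0} and rounding to the nearest level on a log scale (the parameter names are assumptions, not the paper's notation):

```python
# Hypothetical sketch of a logarithmic quantizer as commonly used in
# quantized-feedback filtering: levels u_i = u0 * rho**i for integer i,
# plus q(0) = 0. Rounding here is to the nearest level in log scale,
# which is one standard sector convention.
import math

def log_quantize(v, rho=0.5, u0=1.0):
    """Map v onto the logarithmic level set {±u0 * rho**i}; q(0) = 0."""
    if v == 0:
        return 0.0
    # index of the level nearest to |v| on a logarithmic scale
    i = round(math.log(abs(v) / u0, rho))
    return math.copysign(u0 * rho**i, v)
```

The relative quantization error is bounded (|q(v) - v| ≤ δ|v| for a δ fixed by ρ), which is what lets the quantization effect be folded into a norm-bounded parameter uncertainty in designs of this kind.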
Vyverberg, K.; Dechnik, B.; Dutton, A.; Webster, J.; Zwartz, D.; Edwards, R. L.
2016-12-01
Projecting the rate of future sea-level rise remains a primary challenge associated with continued climate change. However, uncertainties remain in our understanding of the rate of polar ice sheet retreat in warmer-than-present climates. To address this issue, we present a new sea-level reconstruction from the tectonically stable granitic Seychelles based on Last Interglacial coral ages and elevations within their sedimentary and stratigraphic context, including estimates of paleo-water depth based on newly defined coralgal assemblages. The reef facies analyzed here has a narrow and shallow paleo-water depth range ([…] dwelling barnacles). These disturbance layers may have been generated through internal reef processes and/or external agents, including coral disease, bleaching, predation, hurricanes, or sub-aerial exposure. In total, these new observations provide improved constraints on the timing, magnitude, and rates of sea-level rise during the Last Interglacial.
Pelletier, Jennifer E; Laska, Melissa N
2012-01-01
To characterize associations between perceived time constraints for healthy eating and work, school, and family responsibilities among young adults. Cross-sectional survey. A large, Midwestern metropolitan region. A diverse sample of community college (n = 598) and public university (n = 603) students. Time constraints in general, as well as those specific to meal preparation/structure, and perceptions of a healthy life balance. Chi-square tests and multivariate logistic regression (α = .005). Women, 4-year students, and students with lower socioeconomic status perceived more time constraints (P […] balance (P ≤ .003). Having a heavy course load and working longer hours were important predictors of time constraints among men (P […] life balance despite multiple time demands. Interventions focused on improved time management strategies and nutrition-related messaging to achieve healthy diets on a low time budget may be more successful if tailored to the factors that contribute to time constraints separately among men and women. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Optimum filters with time width constraints for liquid argon total-absorption detectors
International Nuclear Information System (INIS)
Gatti, E.; Radeka, V.
1977-10-01
Optimum filter responses are found for triangular current input pulses occurring in liquid argon ionization chambers used as total-absorption detectors. The filters considered are subject to the following constraints: a finite width of the output pulse having a prescribed ratio to the width of the triangular input current pulse, and zero area of a bipolar antisymmetric pulse or of a three-lobe pulse, as required for high event rates. The feasibility of pulse shaping giving an output equal to, or shorter than, the input is demonstrated. It is shown that the signal-to-noise ratio remains constant for a chamber interelectrode gap which gives an input pulse width (i.e., electron drift time) greater than one third of the required output pulse width.
Directory of Open Access Journals (Sweden)
Botond Molnár
Full Text Available There has been a long history of using neural networks for combinatorial optimization and constraint satisfaction problems. Symmetric Hopfield networks and similar approaches use steepest-descent dynamics, and they always converge to the closest local minimum of the energy landscape. For finding global minima, additional parameter-sensitive techniques are used, such as classical simulated annealing or the so-called chaotic simulated annealing, which induces chaotic dynamics by adding extra terms to the energy landscape. Here we show that asymmetric continuous-time neural networks can solve constraint satisfaction problems without getting trapped in non-solution attractors. We concentrate on a model solving Boolean satisfiability (k-SAT), which is a quintessential NP-complete problem. There is a one-to-one correspondence between the stable fixed points of the neural network and the k-SAT solutions, and we present numerical evidence that limit cycles may also be avoided by appropriately choosing the parameters of the model. This optimal parameter region is fairly independent of the size and hardness of instances, so parameters can be chosen independently of the properties of problems and no tuning is required during the dynamical process. The model is similar to cellular neural networks already used in CNN computers. On an analog device, solving a SAT problem would take a single operation: the connection weights are determined by the k-SAT instance, and starting from any initial condition the system searches until it finds a solution. In this new approach, transient chaotic behavior appears as a natural consequence of optimization hardness, not as an externally induced effect.
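A hedged sketch of this class of analog SAT dynamics, following the closely related continuous-time deterministic model of Ercsey-Ravasz and Toroczkai as an assumed stand-in for the paper's network formulation: clause terms K_m vanish exactly when clause m is satisfied, and auxiliary weights a_m grow on unsatisfied clauses to escape local minima. The instance, step size, and stopping threshold are invented, and simple Euler integration stands in for an analog device.

```python
# Euler-integrated sketch of deterministic analog SAT dynamics
# (assumed stand-in for the model described above).

def solve_sat(clauses, n, dt=0.05, steps=4000):
    """clauses: lists of signed literals, e.g. [1, -2] means (x1 or not x2).
    Returns a satisfying boolean assignment, or None if none was reached."""
    s = [0.1 * ((i % 2) * 2 - 1) for i in range(n)]   # fixed initial state
    a = [1.0] * len(clauses)                          # auxiliary weights
    for _ in range(steps):
        # K_m = 2**-k * prod_i (1 - c_mi * s_i); zero iff clause m satisfied
        K = []
        for cl in clauses:
            prod = 1.0
            for lit in cl:
                c = 1.0 if lit > 0 else -1.0
                prod *= 1.0 - c * s[abs(lit) - 1]
            K.append(prod / 2 ** len(cl))
        if all(k_m < 1e-4 for k_m in K):
            break
        # gradient descent on V = sum_m a_m * K_m**2 with respect to s
        ds = [0.0] * n
        for m, cl in enumerate(clauses):
            for lit in cl:
                i, c = abs(lit) - 1, (1.0 if lit > 0 else -1.0)
                f = 1.0 - c * s[i]
                k_mi = 2.0 * K[m] / f if f != 0.0 else 0.0  # K_m without i
                ds[i] += 2.0 * a[m] * c * k_mi * K[m]
        for i in range(n):
            s[i] = max(-1.0, min(1.0, s[i] + dt * ds[i]))
        for m in range(len(clauses)):
            a[m] += dt * a[m] * K[m]
    assign = [x > 0.0 for x in s]
    ok = all(any((l > 0) == assign[abs(l) - 1] for l in cl) for cl in clauses)
    return assign if ok else None

assignment = solve_sat([[1, 2], [-1, 3], [-2, -3]], 3)  # tiny 2-SAT instance
```

Stable fixed points of these dynamics sit at hypercube corners where every K_m is zero, i.e. exactly at satisfying assignments, mirroring the one-to-one correspondence described in the abstract.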
Zhang, Nannnan; Wang, Rongbao; Zhang, Feng
2018-04-01
Serious land desertification and sandification threaten urban ecological security and sustainable economic and social development. In recent years, large numbers of mobile sand dunes in the Horqin sandy land have flowed into the northwest of Liaoning Province under the monsoon, causing serious harm to local agriculture. According to the characteristics of desertified land in northwestern Liaoning, and based on the First National Geographical Survey data, the Second National Land Survey data, the 1984-2014 Landsat long time-series data, and other multi-source data, we constructed a remote sensing monitoring index system for desertified land in northwestern Liaoning. Through the analysis of the space-time-spectral characteristics of desertified land, a method for multi-spectral remote sensing image recognition of desertified land under space-time constraints is proposed. This method was used to identify and extract the distribution and classification of desertified land in Chaoyang City (a typical desertification-affected city in northwestern Liaoning) in 2008 and 2014, and to monitor the changes and transfers of desertified land from 2008 to 2014. Sandification information was added to the analysis of traditional landscape changes, improving the landscape-index analysis model for desertified land, and the characteristics and laws of landscape dynamics and landscape pattern change of desertified land from 2008 to 2014 were analyzed and revealed.
Time domain localization technique with sparsity constraint for imaging acoustic sources
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risk. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
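As a stand-in for the orthogonal matching pursuit step, a plain (non-orthogonal) matching pursuit over a synthetic dictionary illustrates the greedy sparse decomposition; the atoms and amplitudes below are invented:

```python
# Hedged sketch: greedy matching pursuit for a sparse source map,
# standing in for the orthogonal matching pursuit used in the paper.
# Dictionary columns ("point spread functions") here are synthetic.

def matching_pursuit(y, atoms, n_iter=10, tol=1e-8):
    """Greedy sparse decomposition of y over unit-norm atoms.
    Returns {atom_index: amplitude}."""
    dot = lambda a, b: sum(x * z for x, z in zip(a, b))
    residual = list(y)
    coeffs = {}
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        scores = [dot(residual, a) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        if abs(scores[k]) < tol:
            break
        coeffs[k] = coeffs.get(k, 0.0) + scores[k]
        residual = [r - scores[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs

# two orthonormal "point spread functions" and a two-source scene
atoms = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
y = [3.0, 0.0, -1.5, 0.0]            # source amplitudes 3.0 and -1.5
print(matching_pursuit(y, atoms))    # → {0: 3.0, 1: -1.5}
```

OMP additionally re-solves a least-squares problem over all selected atoms at each iteration; for the orthogonal dictionary above the two algorithms coincide.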
Constraints of a parity-conserving/time-reversal-non-conserving interaction
International Nuclear Information System (INIS)
Oers, Willem T.H. van
2002-01-01
Time-reversal-invariance non-conservation has for the first time been unequivocally demonstrated in a direct measurement at CPLEAR. One can then ask: what about tests of time-reversal invariance in systems other than the kaon system? Tests of time-reversal invariance can be distinguished as belonging to two classes: the first deals with time-reversal-invariance-non-conserving (T-odd)/parity-violating (P-odd) interactions, while the second deals with T-odd/P-even interactions (assuming CPT conservation, this implies C-conjugation non-conservation). Limits on a T-odd/P-odd interaction follow from measurements of the electric dipole moment of the neutron (< […] × 10^-26 e.cm [95% C.L.]). This provides a limit on a T-odd/P-odd pion-nucleon coupling constant which is less than 10^-4 times the weak interaction strength. Experimental limits on a T-odd/P-even interaction are much less stringent. Following the standard approach of describing the nucleon-nucleon interaction in terms of meson exchanges, it can be shown that only charged ρ-meson exchange and A1-meson exchange can lead to a T-odd/P-even interaction. The better constraints stem from measurements of the electric dipole moment of the neutron and from measurements of charge-symmetry breaking in neutron-proton elastic scattering. The latter experiments were executed at TRIUMF (497 and 347 MeV) and at IUCF (183 MeV). All other experiments, like detailed-balance experiments, polarization/analyzing-power difference determinations, and five-fold correlation experiments with polarized incident nucleons and aligned nuclear targets, have been shown to be at least an order of magnitude less sensitive. Is there room for further experimentation?
Reis, Felipe; Machín, Leandro; Rosenthal, Amauri; Deliza, Rosires; Ares, Gastón
2016-12-01
People do not usually process all the available information on packages when making their food choices and rely on heuristics for making their decisions, particularly when time is limited. However, most consumer studies encourage participants to invest substantial time in making their choices. Therefore, imposing a time constraint in consumer studies may increase their ecological validity. In this context, the aim of the present work was to evaluate the influence of a time constraint on consumer evaluation of pomegranate/orange juice bottles using a rating-based conjoint task. A consumer study with 100 participants was carried out, in which they had to evaluate 16 pomegranate/orange fruit juice bottles, differing in bottle design, front-of-pack nutritional information, nutrition claim and processing claim, and to rate their intention to purchase. Half of the participants evaluated the bottle images without a time constraint and the other half had a time constraint of 3 s for evaluating each image. Eye movements were recorded during the evaluation. Results showed that a time constraint when evaluating intention to purchase did not largely modify the way in which consumers visually processed the bottle images. Regardless of the experimental condition (with or without time constraint), they tended to evaluate the same product characteristics and to give them the same relative importance. However, a trend towards a more superficial evaluation of the bottles, skipping complex information, was observed. Regarding the influence of product characteristics on consumer intention to purchase, bottle design was the variable with the largest relative importance in both conditions, overriding the influence of nutritional or processing characteristics, which stresses the importance of graphic design in shaping consumer perception. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gürer, Derya; van Hinsbergen, Douwe J. J.; Özkaptan, Murat; Creton, Iverna; Koymans, Mathijs R.; Cascella, Antonio; Langereis, Cornelis G.
2018-03-01
To quantitatively reconstruct the kinematic evolution of Central and Eastern Anatolia within the framework of Neotethyan subduction accommodating Africa-Eurasia convergence, we paleomagnetically assess the timing and amount of vertical axis rotations across the Ulukışla and Sivas regions. We show paleomagnetic results from ˜ 30 localities identifying a coherent rotation of a SE Anatolian rotating block comprised of the southern Kırşehir Block, the Ulukışla Basin, the Central and Eastern Taurides, and the southern part of the Sivas Basin. Using our new and published results, we compute an apparent polar wander path (APWP) for this block since the Late Cretaceous, showing that it experienced a ˜ 30-35° counterclockwise vertical axis rotation since the Oligocene time relative to Eurasia. Sediments in the northern Sivas region show clockwise rotations. We use the rotation patterns together with known fault zones to argue that the counterclockwise-rotating domain of south-central Anatolia was bounded by the Savcılı Thrust Zone and Deliler-Tecer Fault Zone in the north and by the African-Arabian trench in the south, the western boundary of which is poorly constrained and requires future study. Our new paleomagnetic constraints provide a key ingredient for future kinematic restorations of the Anatolian tectonic collage.
Directory of Open Access Journals (Sweden)
D. Gürer
2018-03-01
Full Text Available To quantitatively reconstruct the kinematic evolution of Central and Eastern Anatolia within the framework of Neotethyan subduction accommodating Africa–Eurasia convergence, we paleomagnetically assess the timing and amount of vertical axis rotations across the Ulukışla and Sivas regions. We show paleomagnetic results from ∼ 30 localities identifying a coherent rotation of a SE Anatolian rotating block comprised of the southern Kırşehir Block, the Ulukışla Basin, the Central and Eastern Taurides, and the southern part of the Sivas Basin. Using our new and published results, we compute an apparent polar wander path (APWP) for this block since the Late Cretaceous, showing that it experienced a ∼ 30–35° counterclockwise vertical axis rotation since the Oligocene time relative to Eurasia. Sediments in the northern Sivas region show clockwise rotations. We use the rotation patterns together with known fault zones to argue that the counterclockwise-rotating domain of south-central Anatolia was bounded by the Savcılı Thrust Zone and Deliler–Tecer Fault Zone in the north and by the African–Arabian trench in the south, the western boundary of which is poorly constrained and requires future study. Our new paleomagnetic constraints provide a key ingredient for future kinematic restorations of the Anatolian tectonic collage.
Ee, Mong Shan; Yeoh, William; Boo, Yee Ling; Boulter, Terry
2018-01-01
Time control plays a critical role within the online mastery learning (OML) approach. This paper examines the two commonly implemented mastery learning strategies--personalised system of instructions and learning for mastery (LFM)--by focusing on what occurs when there is an instructional time constraint. Using a large data set from a postgraduate…
Towards optimized suppression of dephasing in systems subject to pulse timing constraints
International Nuclear Information System (INIS)
Hodgson, Thomas E.; D'Amico, Irene; Viola, Lorenza
2010-01-01
We investigate the effectiveness of different dynamical decoupling protocols for storage of a single qubit in the presence of a purely dephasing bosonic bath, with emphasis on comparing quantum coherence preservation under uniform versus nonuniform delay times between pulses. In the limit of instantaneous bit-flip pulses, this is accomplished by establishing a different representation of the controlled qubit evolution, where the decoherence behavior after an arbitrary number of pulses is directly expressed in terms of the uncontrolled decoherence function. In particular, analytical expressions are obtained for approximation of the long- and short-term coherence behavior for both Ohmic and supra-Ohmic environments. By focusing on the realistic case of pure dephasing in an excitonic qubit, we quantitatively assess the impact of physical constraints on achievable pulse separations, and show that little advantage of high-level decoupling schemes based on concatenated or optimal design may be expected if pulses cannot be applied sufficiently fast. In such constrained scenarios, we demonstrate how simple modifications of repeated periodic-echo protocols can offer significantly improved coherence preservation in realistic parameter regimes. We expect similar conclusions to be relevant to other constrained qubit devices exposed to quantum or classical phase noise.
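The uniform versus nonuniform pulse spacings compared above can be sketched as follows; T and n are free parameters, and the minimum pulse separation makes the timing constraint explicit (Uhrig's sequence crowds pulses toward the ends of the interval, so its minimum separation is smaller than the periodic sequence's):

```python
# Sketch of the two pulse-timing families at issue when pulse separations
# are constrained: uniform spacing (periodic echo, CPMG-style) versus
# Uhrig's nonuniform spacing (UDD). T is the storage time, n the pulse count.
import math

def periodic_times(T, n):
    """n equally spaced pulses over [0, T] (CPMG-style)."""
    return [T * (j - 0.5) / n for j in range(1, n + 1)]

def uhrig_times(T, n):
    """UDD pulse times t_j = T * sin^2(j*pi / (2n + 2))."""
    return [T * math.sin(j * math.pi / (2 * n + 2)) ** 2
            for j in range(1, n + 1)]

def min_separation(times, T):
    """Smallest gap between consecutive pulses (and the interval ends)."""
    pts = [0.0] + times + [T]
    return min(b - a for a, b in zip(pts, pts[1:]))
```

For a fixed hardware-imposed minimum pulse separation, this gap is exactly the quantity that limits how many UDD pulses fit into a given storage time, which is why constrained scenarios can favor the simpler periodic protocol.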
Adjustment to subtle time constraints and power law learning in rapid serial visual presentation
Directory of Open Access Journals (Sweden)
Jacqueline Chakyung Shin
2015-11-01
Full Text Available We investigated whether attention could be modulated through the implicit learning of temporal information in a rapid serial visual presentation (RSVP) task. Participants identified two target letters among numeral distractors. The stimulus-onset asynchrony immediately following the first target (SOA1) varied at three levels (70, 98, and 126 ms), either randomly between trials or fixed within blocks of trials. Practice over three consecutive days resulted in a continuous improvement in the identification rate for both targets and attenuation of the attentional blink (AB), a decrement in target (T2) identification when presented 200-400 ms after another target (T1). Blocked SOA1s led to a faster rate of improvement in RSVP performance and more target order reversals relative to random SOA1s, suggesting that the implicit learning of SOA1 positively affected performance. The results also reveal power-law learning curves for individual target identification as well as for the reduction in the AB decrement. These learning curves reflect the spontaneous emergence of skill through subtle attentional modulations rather than general attentional distribution. Together, the results indicate that implicit temporal learning could improve high-level and rapid cognitive processing and highlight the sensitivity and adaptability of the attentional system to subtle constraints in stimulus timing.
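A power law of practice y = a·N^b can be recovered from data by linear regression in log-log coordinates; the sketch below uses invented, perfectly power-law data rather than the study's identification rates:

```python
# Minimal sketch of fitting a power-law learning curve y = a * x**b
# via ordinary least squares in log-log space. Data are invented.
import math

def fit_power_law(trials, scores):
    """Fit y = a * x**b by linear regression of log y on log x."""
    xs = [math.log(x) for x in trials]
    ys = [math.log(y) for y in scores]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# perfectly power-law synthetic data: y = 2 * x**-0.5
trials = [1, 2, 4, 8, 16]
scores = [2 * x ** -0.5 for x in trials]
a, b = fit_power_law(trials, scores)
```

A straight line in log-log coordinates is the usual diagnostic for power-law learning; curvature there would instead suggest, e.g., exponential learning.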
International Nuclear Information System (INIS)
Joly, L.; Andriot, C.
1995-01-01
In a tele-operation system, assistance can be given to the operator by constraining the tele-robot position to remain within a restricted subspace of its workspace. A new approach to motion constraint is presented in this paper. The control law is established by simulating a virtual ideal mechanism acting as a jig, connected to the master and slave arms via springs and dampers. Using this approach, it is possible to impose any (sufficiently smooth) motion constraint on the system, including nonlinear constraints (complex surfaces) involving coupling between translations and rotations, and the physical equivalence ensures that the controller is passive. Experimental results obtained with a 6-DOF tele-operation system are given. Other applications of the virtual mechanism concept include hybrid position-force control and haptic interfaces. (authors). 11 refs., 7 figs
Energy Technology Data Exchange (ETDEWEB)
Joly, L.; Andriot, C.
1995-12-31
In a tele-operation system, assistance can be given to the operator by constraining the tele-robot position to remain within a restricted subspace of its workspace. A new approach to motion constraint is presented in this paper. The control law is established by simulating a virtual ideal mechanism acting as a jig, connected to the master and slave arms via springs and dampers. Using this approach, it is possible to impose any (sufficiently smooth) motion constraint on the system, including nonlinear constraints (complex surfaces) involving coupling between translations and rotations, and the physical equivalence ensures that the controller is passive. Experimental results obtained with a 6-DOF tele-operation system are given. Other applications of the virtual mechanism concept include hybrid position-force control and haptic interfaces. (authors). 11 refs., 7 figs.
Beyond the sticker price: including and excluding time in comparing food prices.
Yang, Yanliang; Davis, George C; Muth, Mary K
2015-07-01
An ongoing debate in the literature is how to measure the price of food. Most analyses have not considered the value of time in measuring the price of food. Whether or not the value of time is included in measuring the price of a food may have important implications for classifying foods based on their relative cost. The purpose of this article is to compare prices that exclude time (time-exclusive price) with prices that include time (time-inclusive price) for 2 types of home foods: home foods using basic ingredients (home recipes) vs. home foods using more processed ingredients (processed recipes). The time-inclusive and time-exclusive prices are compared to determine whether the time-exclusive prices in isolation may mislead in drawing inferences regarding the relative prices of foods. We calculated the time-exclusive price and time-inclusive price of 100 home recipes and 143 processed recipes and then categorized them into 5 standard food groups: grains, proteins, vegetables, fruit, and dairy. We then examined the relation between the time-exclusive prices and the time-inclusive prices and dietary recommendations. For any food group, the processed food time-inclusive price was always less than the home recipe time-inclusive price, even if the processed food's time-exclusive price was more expensive. Time-inclusive prices for home recipes were especially higher for the more time-intensive food groups, such as grains, vegetables, and fruit, which are generally underconsumed relative to the guidelines. Focusing only on the sticker price of a food and ignoring the time cost may lead to different conclusions about relative prices and policy recommendations than when the time cost is included. © 2015 American Society for Nutrition.
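The distinction can be made concrete with a small helper; the wage rate and recipe numbers below are invented, not the article's data:

```python
# Sketch of the time-exclusive vs. time-inclusive price distinction:
# the time-inclusive price adds the value of preparation time
# (minutes times an assumed wage rate) to the sticker price.

def time_inclusive_price(ingredient_cost, prep_minutes, wage_per_hour=15.0):
    """Time-exclusive (sticker) price plus the opportunity cost of time."""
    return ingredient_cost + prep_minutes / 60.0 * wage_per_hour

# a home recipe: cheap ingredients, long prep; a processed version:
# dearer ingredients, short prep (all values hypothetical)
home = time_inclusive_price(3.00, 40)        # 3.00 + 10.00 = 13.00
processed = time_inclusive_price(5.00, 10)   # 5.00 + 2.50  = 7.50
```

The toy numbers reproduce the article's qualitative finding: the home recipe is cheaper on sticker price alone (3.00 vs. 5.00) but dearer once preparation time is valued.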
Gash, V.
2008-01-01
This article investigates whether women work part-time through preference or constraint and argues that different countries provide different opportunities for preference attainment. It argues that women with family responsibilities are unlikely to have their working preferences met without national policies supportive of maternal employment. Using event history analysis, the article tracks part-time workers' transitions to both full-time employment and labour market drop-out. The article co...
Directory of Open Access Journals (Sweden)
FRANCISCO ABOITIZ
2003-01-01
Full Text Available Analysis of corpus callosum fiber composition reveals that inter-hemispheric transmission time may put constraints on the development of inter-hemispheric synchronic ensembles, especially in species with large brains like humans. In order to overcome this limitation, a subset of large-diameter callosal fibers is specialized for fast inter-hemispheric transmission, particularly in large-brained species. Nevertheless, the constraints on fast inter-hemispheric communication in large-brained species can somehow contribute to the development of ipsilateral, intra-hemispheric networks, which might promote the development of brain lateralization.
Ton, Riccardo; Martin, Thomas E.
2017-01-01
The relative importance of intrinsic constraints imposed by evolved physiological trade-offs versus the proximate effects of temperature for interspecific variation in embryonic development time remains unclear. Understanding this distinction is important because slow development due to evolved trade-offs can yield phenotypic benefits, whereas slow development from low temperature can yield costs. We experimentally increased embryonic temperature in free-living tropical and north temperate songbird species to test these alternatives. Warmer temperatures consistently shortened development time without costs to embryo mass or metabolism. However, proximate effects of temperature played an increasingly stronger role than intrinsic constraints for development time among species with colder natural incubation temperatures. Long development times of tropical birds have been thought to primarily reflect evolved physiological trade-offs that facilitate their greater longevity. In contrast, our results indicate a much stronger role of temperature in embryonic development time than currently thought.
Directory of Open Access Journals (Sweden)
Francis Jillian J
2009-11-01
Full Text Available Abstract Background Behavioural approaches to knowledge translation inform interventions to improve healthcare. However, such approaches often focus on a single behaviour without considering that health professionals perform multiple behaviours in pursuit of multiple goals in a given clinical context. In resource-limited consultations, performing these other goal-directed behaviours may influence optimal performance of a particular evidence-based behaviour. This study aimed to investigate whether a multiple goal-directed behaviour perspective might inform implementation research beyond single-behaviour approaches. Methods We conducted theory-based semi-structured interviews with 12 general medical practitioners (GPs) in Scotland on their views regarding two focal clinical behaviours--providing physical activity (PA) advice and prescribing to reduce blood pressure (BP) […] Results Most GPs reported strong intention to prescribe to reduce BP but expressed reasons why they would not. Intention to provide PA advice was variable. Most GPs reported that time constraints and patient preference detrimentally affected their control over providing PA advice and prescribing to reduce BP, respectively. Most GPs perceived many of their other goal-directed behaviours as interfering with providing PA advice, while fewer GPs reported goal-directed behaviours that interfere with prescribing to reduce BP. Providing PA advice and prescribing to reduce BP were perceived to be facilitated by similar diabetes-related behaviours (e.g., discussing cholesterol). While providing PA advice was perceived to be mainly facilitated by providing other lifestyle-related clinical advice (e.g., talking about weight), BP prescribing was reported as facilitated by pursuing ongoing standard consultation-related goals (e.g., clearly structuring the consultation). Conclusion GPs readily relate their other goal-directed behaviours with having a facilitating and interfering influence on their…
Rigid Body Time Integration by Convected Base Vectors with Implicit Constraints
DEFF Research Database (Denmark)
Krenk, Steen; Nielsen, Martin Bjerre
2013-01-01
The form of the kinetic energy used in the present formulation is deliberately chosen to correspond to a rigid body rotation, and the orthonormality constraints are introduced via the equivalent Green strain components of the base vectors. The particular form of the extended inertia tensor used here implies a set...
The Time-Course of Morphological Constraints: Evidence from Eye-Movements during Reading
Cunnings, Ian; Clahsen, Harald
2007-01-01
Lexical compounds in English are constrained in that the non-head noun can be an irregular but not a regular plural (e.g. mice eater vs. *rats eater), a contrast that has been argued to derive from a morphological constraint on modifiers inside compounds. In addition, bare nouns are preferred over plural forms inside compounds (e.g. mouse eater…
Real-Time Optimization of a maturing North Sea gas asset with production constraints
Linden, R.J.P. van der; Busking, T.E.
2013-01-01
As gas and oil fields mature, their operation becomes increasingly complex due to complex process dynamics, like slugging, gas coning, water breakthrough, and salt or hydrate deposition. Moreover, these phenomena also lead to production constraints in the upstream facilities. This complexity asks…
Mitton, Craig; Levy, Adrian; Gorsky, Diane; MacNeil, Christina; Dionne, Francois; Marrie, Tom
2013-07-01
Facing a projected $1.4M deficit on a $35M operating budget for fiscal year 2011/2012, members of the Dalhousie University Faculty of Medicine developed and implemented an explicit, transparent, criteria-based priority setting process for resource reallocation. A task group that included representatives from across the Faculty of Medicine used a program budgeting and marginal analysis (PBMA) framework, which provided an alternative to the typical public-sector approaches to addressing a budget deficit of across-the-board spending cuts and political negotiation. Key steps to the PBMA process included training staff members and department heads on priority setting and resource reallocation, establishing process guidelines to meet immediate and longer-term fiscal needs, developing a reporting structure and forming key working groups, creating assessment criteria to guide resource reallocation decisions, assessing disinvestment proposals from all departments, and providing proposal implementation recommendations to the dean. All departments were required to submit proposals for consideration. The task group approved 27 service reduction proposals and 28 efficiency gains proposals, totaling approximately $2.7M in savings across two years. During this process, the task group faced a number of challenges, including a tight timeline for development and implementation (January to April 2011), a culture that historically supported decentralized planning, at times competing interests (e.g., research versus teaching objectives), and reductions in overall health care and postsecondary education government funding. Overall, faculty and staff preferred the PBMA approach to previous practices. Other institutions should use this example to set priorities in times of fiscal constraints.
State control of discrete-time linear systems to be bound in state variables by equality constraints
International Nuclear Information System (INIS)
Filasová, Anna; Krokavec, Dušan; Serbák, Vladimír
2014-01-01
The paper is concerned with the problem of designing a discrete-time equivalent PI controller to control discrete-time linear systems in such a way that the closed-loop state variables satisfy the prescribed equality constraints. Since the problem is generally singular, using a standard form of the Lyapunov function and a symmetric positive definite slack matrix, the design conditions are proposed in the form of an enhanced Lyapunov inequality. The results, offering conditions for the existence of the control and for optimal performance with respect to the prescribed equality constraints for square discrete-time linear systems, are illustrated with a numerical example to demonstrate the effectiveness and applicability of the considered approach.
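For context, the discrete-time PI controller class under discussion can be sketched in its generic velocity (incremental) form; the paper's LMI-based design conditions are not reproduced here, and the plant and gains below are invented:

```python
# Generic velocity-form discrete-time PI update, as an illustration of
# the controller class discussed (not the paper's LMI-designed gains).

def pi_step(u_prev, e, e_prev, kp=0.4, ki=0.3):
    """Incremental (velocity-form) discrete PI update."""
    return u_prev + kp * (e - e_prev) + ki * e

# drive a first-order plant x+ = 0.8 x + 0.2 u to the setpoint r = 1
x, u, e_prev, r = 0.0, 0.0, 0.0, 1.0
for _ in range(200):
    e = r - x
    u = pi_step(u, e, e_prev)
    e_prev = e
    x = 0.8 * x + 0.2 * u
```

The integral action drives the steady-state tracking error to zero, which is the mechanism by which a PI structure can enforce an equality constraint of the form x = r on a closed-loop state variable.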
International Nuclear Information System (INIS)
Akaoka, Katsuaki; Maruyama, Youichiro; Oba, Masaki; Miyabe, Masabumi; Otobe, Haruyoshi; Wakaida, Ikuo
2010-05-01
For the remote analysis of low-DF TRU (decontamination factor transuranic) fuel, laser-induced breakdown spectroscopy (LIBS) was applied to uranium oxide containing a small amount of calcium oxide. Characteristics such as spectrum intensity and plasma excitation temperature were measured using time-resolved spectroscopy. It was found that, in order to obtain a stable intensity of the calcium spectrum relative to the uranium spectrum, the optimum observation delay time is 4 microseconds or more after laser irradiation. (author)
Linguistic embodiment and verbal constraints: human cognition and the scales of time
DEFF Research Database (Denmark)
Cowley, Stephen
2014-01-01
Using radical embodied cognitive science, the paper offers the hypothesis that language is symbiotic: its agent-environment dynamics arise as linguistic embodiment is managed under verbal constraints. As a result, co-action grants human agents the ability to use a unique form of phenomenal experience. Far from using brains to "process" verbal content, linguistic symbiosis grants access to diachronic resources. On this distributed-ecological view, language can thus be redefined as: "activity in which wordings play a part."
Providing reliable energy in a time of constraints : a North American concern
International Nuclear Information System (INIS)
Egan, T.; Turk, E.
2008-04-01
The reliability of the North American electricity grid was discussed. Government initiatives designed to control carbon dioxide (CO2) and other emissions in some regions of Canada may lead to electricity supply constraints in other regions. A lack of investment in transmission infrastructure has resulted in constraints within the North American transmission grid, and the growth of smaller projects is now raising concerns about transmission capacity. Labour supply shortages in the electricity industry are also creating concerns about the long-term security of the electricity market. Measures to address constraints must be considered in the current context of the North American electricity system. The extensive transmission interconnections and integration between the United States and Canada will provide a framework for greater trade and market opportunities between the two countries. Coordinated actions and increased integration will enable Canada and the United States to increase the reliability of electricity supply. However, both countries must work cooperatively to increase generation supply using both mature and emerging technologies. The cross-border transmission grid must be enhanced by increasing transmission capacity as well as by implementing new reliability rules, building new infrastructure, and ensuring infrastructure protection. Barriers to cross-border electricity trade must be identified and avoided. Demand-side and energy efficiency measures must also be implemented. It was concluded that both countries must focus on developing strategies for addressing the environmental concerns related to electricity production.
Directory of Open Access Journals (Sweden)
Wei Huang
2017-03-01
Geospatial big data analysis (GBDA) is extremely significant for time-constrained applications such as disaster response. However, time-constrained analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and nearest-neighbor query algorithms are not scalable without MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are prone to skewed processing, a known issue in geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed CoreOS clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to Docker containers where Spark master instances reside. Finally, the horizontal pod autoscaler (HPA) scales Docker containers out and in to supply on-demand computing resources. Combined with an auto-scaling group of virtual instances, HPA helps to find the five nearest neighbors for each of 46,139,532 query objects among 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constrained GBDA in clouds.
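The record above describes scaling spatial query processing by partitioning work across containers. As a minimal, self-contained illustration of the data-partitioning idea behind such parallel spatial queries (not the authors' Spark/Kubernetes implementation), the sketch below answers nearest-neighbor queries against a uniform grid index; the cell size and the ring-search stopping bound are illustrative choices:

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    """Hash each 2-D point into a square grid cell (simple spatial partitioning)."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def nearest_neighbor(q, grid, cell):
    """Search outward ring by ring from the query's cell until no closer
    point can exist in any unexamined ring. Assumes grid is non-empty."""
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    best, best_d = None, float("inf")
    r = 0
    while best is None or (r - 1) * cell <= best_d:
        for i in range(cx - r, cx + r + 1):
            for j in range(cy - r, cy + r + 1):
                if max(abs(i - cx), abs(j - cy)) != r:
                    continue  # visit only the ring at Chebyshev radius r
                for p in grid.get((i, j), []):
                    d = math.dist(q, p)
                    if d < best_d:
                        best, best_d = p, d
        r += 1
    return best, best_d
```

In a parallel setting, each grid cell (or block of cells) becomes an independent work unit, which is what makes range and k-nearest-neighbor queries amenable to Spark-style partitioned execution.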
International Nuclear Information System (INIS)
Seljak, Uros; Makarov, Alexey; McDonald, Patrick; Anderson, Scott F.; Bahcall, Neta A.; Cen, Renyue; Gunn, James E.; Lupton, Robert H.; Schlegel, David J.; Brinkmann, J.; Burles, Scott; Doi, Mamoru; Ivezic, Zeljko; Kent, Stephen; Loveday, Jon; Munn, Jeffrey A.; Nichol, Robert C.; Ostriker, Jeremiah P.; Schneider, Donald P.; Berk, Daniel E. Vanden
2005-01-01
We combine the constraints from the recent Lyα forest analysis of the Sloan Digital Sky Survey (SDSS) and the SDSS galaxy bias analysis with previous constraints from SDSS galaxy clustering, the latest supernovae, and first-year WMAP cosmic microwave background anisotropies. We find significant improvements on all of the cosmological parameters compared to previous constraints, which highlights the importance of combining Lyα forest constraints with other probes. Combining WMAP and the Lyα forest we find for the primordial slope n_s = 0.98 ± 0.02. We see no evidence of running, dn/dln k = -0.003 ± 0.010, a factor of 3 improvement over previous constraints. We also find no evidence of tensors: the V ∝ φ^2 model is within the 2σ contour, while V ∝ φ^4 is outside the 3σ contour. For the amplitude we find σ_8 = 0.90 ± 0.03 from the Lyα forest and WMAP alone. We find no evidence of neutrino mass for the case of 3 massive neutrino families with an inflationary prior. For dark energy we find Ω_Λ = 0.72 ± 0.02 and w(z=0.3) = -0.98 (+0.10, -0.12), the latter changing to w(z=0.3) = -0.92 (+0.09, -0.10) if tensors are allowed. We find no evidence for variation of the equation of state with redshift, w(z=1) = -1.03 (+0.21, -0.28). These results rely on the current understanding of the Lyα forest and other probes, which needs to be explored further both observationally and theoretically, but extensive tests reveal no evidence of inconsistency among the different data sets used here.
Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.
2018-02-01
This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with the dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying barrier Lyapunov function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of parameters updated online for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.
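For context, a commonly used log-type barrier Lyapunov function is sketched below for a tracking error bounded symmetrically by a time-varying limit; the paper's asymmetric BLF generalizes this form with distinct lower and upper bounds. This is an illustrative textbook form, not the paper's exact construction:

```latex
% Illustrative log-type barrier Lyapunov function for a tracking error e(t)
% subject to the time-varying bound |e(t)| < k_b(t):
V_b = \frac{1}{2}\,\ln\frac{k_b^2(t)}{k_b^2(t) - e^2(t)}, \qquad |e(t)| < k_b(t).
```

Since V_b ≥ 0 on the feasible set and V_b → ∞ as |e(t)| → k_b(t), keeping V_b bounded along closed-loop trajectories guarantees the output constraint is never violated.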
Sako, Shunji; Sugiura, Hiromichi; Tanoue, Hironori; Kojima, Makoto; Kono, Mitsunobu; Inaba, Ryoichi
2014-08-01
This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects placed their forearms on the desk, accompanied by abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects placed only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale, VAS). Oxygen consumption (VO2), the ratio of inspiration time to total respiration time (Ti/Ttotal), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/Ti) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rates (VCO2/VE), and oxygen extraction fractions (VO2/VE) were significantly higher for the DP than for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than for the PP. Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.
Loo, Sok Hiang Candy
2014-01-01
Approved for public release; distribution is unlimited. Organizations in Singapore operate in a highly competitive and fast-paced work environment that presents decision-making challenges at the individual, group, and organization levels. A key problem is achieving good decision fitness within time and cost constraints. While many theories address the fundamental decision-making process, there is limited research on improving the group decision-making framework...
Scriber, J Mark; Elliot, Ben; Maher, Emily; McGuire, Molly; Niblack, Marjie
2014-01-21
Adaptations to "thermal time" (degree-day) constraints on developmental rates and voltinism for North American tiger swallowtail butterflies involve most life stages, and at higher latitudes include: smaller pupae/adults; larger eggs; oviposition on the most nutritious larval host plants; earlier spring adult emergences; and faster larval growth and shorter molting durations at lower temperatures. Here we report on forewing sizes through 30 years for both the northern univoltine P. canadensis (with obligate diapause), from the Great Lakes historical hybrid zone northward to central Alaska (65° N latitude), and the multivoltine P. glaucus, from this hybrid zone southward to central Florida (27° N latitude). Despite recent climate warming, no increases in mean forewing lengths of P. glaucus were observed at any major collection location (FL to MI) from the 1980s to 2013 across this long latitudinal transect (which reflects the "converse of Bergmann's size rule", with smaller females at higher latitudes). Unlike lower latitudes, the Alaska, Ontonagon, and Chippewa/Mackinac locations (for P. canadensis) showed no significant increases in degree-day accumulations, which could explain the lack of size change in these northernmost locations. As a result of 3-4 decades of empirical data from major collection sites across these latitudinal clines of North America, a general "voltinism/size/degree-day" model is presented, which predicts female size more closely from degree-day accumulations than from latitude. However, local "climatic cold pockets" in northern Michigan and Wisconsin historically appeared to exert especially strong size constraints on female forewing lengths, but forewing lengths quickly increased with local summer warming during the recent decade, especially near the warming edges of the cold pockets. Results of fine-scale analyses of these "cold pockets" are in contrast to the non-significant changes seen for other Papilio populations across the latitudinal transect for P. glaucus.
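Since the analysis above hinges on degree-day ("thermal time") accumulations, here is a minimal sketch of the standard averaging method for growing degree-days; the base temperature of 10 °C is a placeholder, not the developmental threshold used in the study:

```python
def degree_days(daily_min_max, base=10.0):
    """Accumulate growing degree-days with the simple averaging method:
    DD = max(0, (Tmin + Tmax)/2 - base), summed over days.
    daily_min_max is a sequence of (Tmin, Tmax) pairs in deg C."""
    total = 0.0
    for tmin, tmax in daily_min_max:
        total += max(0.0, (tmin + tmax) / 2.0 - base)
    return total
```

In voltinism models, a generation is possible only where the seasonal degree-day total exceeds the requirement for complete development, which is why accumulated degree-days can predict size and brood number more directly than latitude.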
Taylor, Stephen R; Simon, Joseph; Sampson, Laura
2017-05-05
We introduce a technique for gravitational-wave analysis in which Gaussian process regression is used to emulate the strain spectrum of a stochastic background by training on population-synthesis simulations. This leads to direct Bayesian inference on astrophysical parameters. For pulsar timing arrays specifically, we interpolate over the parameter space of supermassive black-hole binary environments, including three-body stellar scattering and evolving orbital eccentricity. We illustrate our approach on mock data, and assess the prospects for inference with data similar to the NANOGrav 9-yr data release.
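As a toy illustration of the emulation idea (training a Gaussian process on simulator outputs, then predicting at new parameter values), the sketch below implements 1-D GP regression with an RBF kernel in pure Python; the kernel, hyperparameters, and training data are placeholders, not the authors' pulsar-timing setup:

```python
import math

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return var * math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xs, ys, xq, noise=1e-8):
    """GP posterior mean at xq: k(xq, X) @ (K + noise*I)^-1 @ y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(xq, x) * a for x, a in zip(xs, alpha))
```

The emulator replaces an expensive simulator call with a cheap interpolation, so a Bayesian sampler can evaluate the likelihood at arbitrary parameter values without rerunning the population synthesis.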
Time constraints on the tectonic evolution of the eastern Sierras Pampeanas (Central Argentina)
DEFF Research Database (Denmark)
Siegesmund, Siegfried; Steenken, A; Martino, R D
2010-01-01
The application of the SHRIMP U/Pb dating technique to zircon and monazite of different rock types of the Sierras de Córdoba provides an important insight into the metamorphic history of the basement domains. Additional constraints on the Pampean metamorphic episode were gained by Pb/Pb stepwise leaching (PbSL) experiments on two titanite and garnet separates. Results indicate that the metamorphic history recorded by Crd-free gneisses (M2) started in the latest Neoproterozoic/earliest Cambrian (553 and 543 Ma) followed by the M4 metamorphism at ~530 Ma that is documented in the diatexites. Zircon......
Cost and benefit including value of life, health and environmental damage measured in time units
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager; Friis-Hansen, Peter
2009-01-01
Key elements of the authors' work on money equivalent time allocation to costs and benefits in risk analysis are put together as an entity. This includes the data supported dimensionless analysis of an equilibrium relation between total population work time and gross domestic product leading...... of this societal value over the actual costs, used by the owner for economically optimizing an activity, motivates a simple risk accept criterion suited to be imposed on the owner by the public. An illustration is given concerning allocation of economical means for mitigation of loss of life and health on a ferry...
Time-dependent shock acceleration of energetic electrons including synchrotron losses
International Nuclear Information System (INIS)
Fritz, K.; Webb, G.M.
1990-01-01
The present investigation of the time-dependent particle-acceleration problem in strong shocks, including synchrotron radiation losses, solves the transport equation analytically by means of Laplace transforms. The particle distribution thus obtained is then transformed numerically into real space for the cases of continuous and impulsive injection of particles at the shock. While in the continuous case the steady-state spectrum undergoes evolution, impulsive injection yields such unexpected features as a pile-up of high-energy particles or a steep power law with time-dependent spectral index. The time-dependent calculations reveal varying spectral shapes and more complex features at higher energies, which may be useful in the interpretation of outburst spectra.
Directory of Open Access Journals (Sweden)
Hamidreza Haddad
2012-04-01
This paper tackles the single-machine scheduling problem with dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic to solve the model. The proposed approach uses a genetic algorithm to solve the problem in a reasonable amount of time. Because of the high sensitivity of the GA to the initial values of its parameters, a Taguchi approach is presented to calibrate them. Computational experiments validate the effectiveness and capability of the proposed method.
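A minimal sketch of the approach described above: a genetic algorithm searching precedence-feasible job sequences to minimize total weighted tardiness. Setup times and the Taguchi calibration are omitted for brevity, and the tournament selection, swap mutation, and repair-by-fallback are illustrative choices, not the paper's exact operators:

```python
import random

def tardiness(order, proc, due, weight):
    """Total weighted tardiness of a job sequence (setup times omitted)."""
    t, total = 0, 0
    for j in order:
        t += proc[j]
        total += weight[j] * max(0, t - due[j])
    return total

def feasible(order, prec):
    """prec: list of (a, b) pairs meaning job a must precede job b."""
    pos = {j: i for i, j in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in prec)

def ga_schedule(proc, due, weight, prec, pop=30, gens=200, seed=1):
    rng = random.Random(seed)
    jobs = list(proc)
    cost = lambda o: tardiness(o, proc, due, weight)
    def rand_perm():
        while True:
            p = jobs[:]
            rng.shuffle(p)
            if feasible(p, prec):
                return p
    population = [rand_perm() for _ in range(pop)]
    best = min(population, key=cost)
    for _ in range(gens):
        children = []
        for _ in range(pop):
            parent = min(rng.sample(population, 3), key=cost)  # tournament
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]            # swap mutation
            if not feasible(child, prec):
                child = parent[:]                              # repair: keep parent
            children.append(child)
        population = children
        best = min(population + [best], key=cost)
    return best
```

Parameters such as population size, tournament size, and mutation rate are exactly the kind of settings the paper calibrates with the Taguchi design-of-experiments approach.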
Constraints on the timing of the Moon-forming giant impact from MORB Xe isotopes
Parai, R.; Mukhopadhyay, S.
2014-12-01
As Earth accreted, volatiles were delivered by accreting material and lost by degassing and impact-driven ejection to space. The Moon-forming giant impact initiated the final catastrophic outgassing and bulk volatile ejection event on the early Earth. I-Pu-U-Xe systematics provide a powerful tool to probe degassing of the early Earth. Radiogenic 129Xe was produced by β-decay of the extinct nuclide 129I (t1/2 = 15.7 Myr) in the first ~90 Myr of Earth history. Fissiogenic 131Xe, 132Xe, 134Xe, and 136Xe were produced in distinct, characteristic proportions by the fission of extinct short-lived 244Pu (t1/2 = 80.0 Myr) and extant long-lived 238U (t1/2 = 4.468 Gyr). Here we present radiogenic and fission Xe data in basalts from the Southwest Indian Ridge, and discuss them together with other mantle-derived samples to shed light on early Earth volatile accretion and loss. Based on the ratio of radiogenic 129Xe to plutogenic 136Xe determined for the MORB source, we calculate an I-Pu-Xe closure age for the upper mantle of ~44-70 Myr after the start of the Solar System. The closure age should correspond to the end of catastrophic mantle outgassing during accretion, and thus constrains the age of the last giant impact (LGI). Our closure age is significantly older than previous Xe closure age determinations of ~100 Myr, and is also older than some direct radiometric ages of lunar crustal samples. In order to explore the effects of accretion timescales, partial early retention of Xe, and degassing associated with long-term mantle processing on Xe closure age, we develop a new model of I-Pu-U-Xe systematics. We find that for LGIs between ~35 and 70 Myr after the start of the Solar System, we are able to satisfy constraints on I-Pu-U-Xe systematics simultaneously without invoking partial retention of Xe prior to the last giant impact. For LGIs after ~80 Myr, partial retention of Xe prior to the LGI is required. Non-zero early retention of Xe is necessary to explain the budgets of primordial
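The closure-age argument rests on the different decay rates of 129I and 244Pu: the later the mantle closes, the less radiogenic 129Xe accumulates relative to plutogenic 136Xe. A schematic sketch of that monotonic relationship, with placeholder initial inventories and fission yield (only the half-lives come from the abstract):

```python
import math

# Half-lives in Myr, as given in the abstract: 129I = 15.7, 244Pu = 80.0
LAM_I129 = math.log(2) / 15.7
LAM_PU244 = math.log(2) / 80.0

def xe_ratio(t_closure, n_i129_0=1.0, n_pu244_0=1.0, yield_136=7e-5):
    """Illustrative ratio of radiogenic 129Xe to plutogenic 136Xe accumulated
    after mantle closure at t_closure (Myr after Solar System formation).
    All parent nuclides remaining at closure eventually decay in place;
    initial inventories and the 136Xe fission yield are placeholder values."""
    xe129 = n_i129_0 * math.exp(-LAM_I129 * t_closure)
    xe136 = yield_136 * n_pu244_0 * math.exp(-LAM_PU244 * t_closure)
    return xe129 / xe136
```

Because 129I decays faster than 244Pu, the ratio falls steeply with closure time, which is what lets a measured MORB-source 129Xe*/136Xe(Pu) be inverted into a ~44-70 Myr closure-age window.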
International Nuclear Information System (INIS)
Dossett, Jason N.; Moldenhauer, Jacob; Ishak, Mustapha
2011-01-01
We use cosmological constraints from current data sets and a figure of merit approach in order to probe any deviations from general relativity at cosmological scales. The figure of merit approach is used to study and compare the constraining power of various combinations of data sets on the modified gravity (MG) parameters. We use the recently refined HST-COSMOS weak-lensing tomography data, the ISW-galaxy cross correlations from 2MASS and SDSS luminous red galaxy surveys, the matter power spectrum from SDSS-DR7 (MPK), the WMAP7 temperature and polarization spectra, the baryon acoustic oscillations from Two-Degree Field and SDSS-DR7, and the Union2 compilation of type Ia supernovae, in addition to other bounds from Hubble parameter measurements and big bang nucleosynthesis. We use three parametrizations of MG parameters that enter the perturbed field equations. In order to allow for variations of the parameters with redshift and scale, the first two parametrizations use recently suggested functional forms while the third is based on binning methods. Using the first parametrization, we find that the CMB+ISW+WL combination provides the strongest constraints on the MG parameters followed by CMB+WL or CMB+MPK+ISW. Using the second parametrization or the binning methods, we find that the combination CMB+MPK+ISW consistently provides some of the strongest constraints. This shows that the constraints are parametrization dependent. We find that adding current data sets together does not consistently improve the uncertainties on MG parameters, due to tensions between the best-fit MG parameters preferred by different data sets. Furthermore, some functional forms imposed by the parametrizations can lead to an exacerbation of these tensions. Next, unlike some studies that used the CFHTLS lensing data, we do not find any deviation from general relativity using the refined HST-COSMOS data, confirming previous claims in those studies that their result may have been due to some
International Nuclear Information System (INIS)
Goswami, Gurupada; Mukherjee, Biswajit; Bag, Bidhan Chandra
2005-01-01
We have studied the relaxation of non-Markovian and thermodynamically closed system both in the absence and presence of non-equilibrium constraint in terms of the information entropy flux and entropy production based on the Fokker-Planck and the entropy balance equations. Our calculation shows how the relaxation time depends on noise correlation time. It also considers how the non-equilibrium constraint is affected by system parameters such as noise correlation time, strength of dissipation and frequency of dynamical system. The interplay of non-equilibrium constraint, frictional memory kernel, noise correlation time and frequency of dynamical system reveals the extremum nature of the entropy production.
McMurtry, Gary M.; Herrero-Bervera, Emilio; Cremer, Maximilian D.; Smith, John R.; Resig, Johanna; Sherman, Clark; Torresan, Michael E.
1999-01-01
Previous work has found evidence for giant tsunami waves that impacted the coasts of Lanai, Molokai and other southern Hawaiian Islands, tentatively dated at 100+ and 200+ ka by U-series methods on uplifted coral clasts. Seafloor imaging and related work off Hawaii Island has suggested the Alika phase 2 debris avalanche as the source of the ~ 100 ka "giant wave deposits", although its precise age has been elusive. More recently, a basaltic sand bed in ODP site 842 (~ 300 km west of Hawaii) estimated at 100 ± 20 ka has been suggested to correlate with this or another large Hawaiian landslide. Our approach to the timing and linkage of giant submarine landslides and paleo-tsunami deposits is a detailed stratigraphic survey of pelagic deposits proximal to the landslide feature, beginning with a suite of seven piston, gravity and box cores collected in the vicinity of the Alika 2 slide. We used U-series dating techniques, including excess 230Th and 210Pb profiling, high-resolution paleomagnetic stratigraphy, including continuous, U-channel analysis, δ18O stratigraphy, visual and X-ray sediment lithology, and the petrology and geochemistry of the included turbidites and ash layers. Minimum ages for the Alika phase 2a slide from detailed investigation of two of the cores are 112 ± 15 ka and 125 ± 24 ka (2σ) based on excess 230Th dating. A less precise age for the Alika phase 1 and/or South Kona slide is 242 ± 80 ka (2σ), consistent with previous geological estimates. Oxygen isotope analyses of entrained planktonic foraminifera better constrain the Alika phase 2a maximum age at 127 ± 5 ka, which corresponds to the beginning of the stage 5e interglacial period. It is proposed that triggering of these giant landslides may be related to climate change when wetter periods increase the possibility of groundwater intrusion and consequent phreatomagmatic eruptions of shallow magma chambers. Our study indicates the contemporaneity of the Alika giant submarine landslides
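Excess 230Th dating as used above reduces to inverting the exponential decay of the initial excess activity. A minimal sketch; the half-life value (~75.7 kyr) is a commonly cited one and should be checked against the decay constants adopted in the study:

```python
import math

HALF_LIFE_TH230 = 75_690.0          # years; commonly used 230Th half-life (~75.7 kyr)
LAM = math.log(2) / HALF_LIFE_TH230  # decay constant (1/yr)

def excess_th230_age(a0, a):
    """Age (years) implied by decay of initial excess 230Th activity a0
    down to the measured activity a: a = a0 * exp(-LAM * t)."""
    return math.log(a0 / a) / LAM
```

The uncertainty in the estimated initial activity a0 is what dominates the ± ranges quoted for the slide ages.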
Linguistic embodiment and verbal constraints: human cognition and the scales of time
Cowley, Stephen J.
2014-01-01
Using radical embodied cognitive science, the paper offers the hypothesis that language is symbiotic: its agent-environment dynamics arise as linguistic embodiment is managed under verbal constraints. As a result, co-action grants human agents the ability to use a unique form of phenomenal experience. In defense of the hypothesis, I stress how linguistic embodiment enacts thinking: accordingly, I present auditory and acoustic evidence from 750 ms of mother-daughter talk, first, in fine detail and, then, in narrative mode. As the parties attune, they use a dynamic field to co-embody speech with experience of wordings. The latter arise in making and tracking phonetic gestures that, crucially, mesh use of artifice, cultural products and impersonal experience. As observers, living human beings gain dispositions to display and use social subjectivity. Far from using brains to “process” verbal content, linguistic symbiosis grants access to diachronic resources. On this distributed-ecological view, language can thus be redefined as: “activity in which wordings play a part.” PMID:25324799
Real-Time Attitude Control Algorithm for Fast Tumbling Objects under Torque Constraint
Tsuda, Yuichi; Nakasuka, Shinichi
This paper describes a new control algorithm for achieving arbitrary attitude and angular velocity states of a rigid body, even fast and complicated tumbling rotations, under practical constraints. The technique is expected to be applied to attitude motion synchronization for capturing a non-cooperative, tumbling object in missions such as removal of debris from orbit, servicing broken-down satellites for repair or inspection, and rescue of manned vehicles. For this objective, we introduced a novel control algorithm called the Free Motion Path Method (FMPM) in a previous paper, formulated as an open-loop controller. The next step of this consecutive work is to derive a closed-loop FMPM controller; as a preliminary step toward that objective, this paper derives a conservative state-variable representation of rigid-body dynamics. Six-dimensional conservative state variables are introduced in place of the general angular velocity and attitude angle representation, and the conversions between the two representations are shown.
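Any such attitude-synchronization controller ultimately propagates rigid-body attitude kinematics. The sketch below is not the FMPM or the paper's 6-dimensional conservative representation; it is the standard quaternion propagation under a constant body rate, included to make the attitude-state bookkeeping concrete:

```python
import math

def quat_mul(q, p):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def propagate(q, omega, dt):
    """Advance attitude quaternion q by body rate omega (rad/s) over dt,
    using the exact rotation for a constant rate over the step."""
    wx, wy, wz = omega
    n = math.sqrt(wx*wx + wy*wy + wz*wz)
    if n * dt < 1e-12:
        return q
    half = n * dt / 2.0
    s = math.sin(half) / n
    dq = (math.cos(half), wx * s, wy * s, wz * s)
    return quat_mul(q, dq)
```

Unlike Euler-angle integration, this step keeps the quaternion on the unit sphere up to rounding, which matters when tracking fast tumbling motion over many steps.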
Ansari, Sardar; Molaei, Somayeh; Oldham, Kenn; Heung, Michael; Ward, Kevin R; Najarian, Kayvan
2017-07-01
Intradialytic hypotension (IDH) is the most common complication of hemodialysis, affecting 15-50% of all dialysis sessions. Previously, we presented a non-invasive polyvinylidene fluoride (PVDF)-based sensor in the form of a ring to measure vascular tone, and we showed that the morphology of the signal can be utilized to predict IDH. This paper presents an approach for analyzing the PVDF signal using an extended Kalman filter (EKF) and a synthetic model that has previously been used to model the ECG signal with Gaussian functions. Moreover, a novel approach for incorporating state inequality constraints into the EKF process using a gradient projection method is introduced. The taut string algorithm was first used to estimate the outline of the signal and remove it to highlight the reflection waves. Then, the EKF was used to characterize the morphology of the signal using Gaussian functions. The amplitudes of the Gaussian functions were used as features to train a classifier. The results indicated that the PPV and NPV for the prediction were 83.33% and 100%, respectively.
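To make the constrained-estimation idea concrete, here is a deliberately simplified scalar Kalman measurement update followed by projection of the state onto an interval constraint. The paper's method uses a full EKF with Gaussian basis functions and a gradient-projection step; this 1-D clip is only a stand-in for that projection:

```python
def kf_update_constrained(x, P, z, H, R, lo, hi):
    """Scalar Kalman measurement update, then projection of the state
    onto the interval constraint lo <= x <= hi (a 1-D stand-in for
    gradient projection onto a feasible set of state inequalities)."""
    K = P * H / (H * P * H + R)   # Kalman gain
    x = x + K * (z - H * x)       # unconstrained update
    P = (1.0 - K * H) * P         # covariance update
    x = min(max(x, lo), hi)       # project onto the feasible interval
    return x, P
```

In higher dimensions the clip generalizes to projecting the updated state onto the constraint set along the gradient of the violated constraints, which is the role of the gradient projection method mentioned above.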
Planning virtual infrastructures for time critical applications with multiple deadline constraints
Wang, J.; Taal, A.; Martin, P.; Hu, Y.; Zhou, H.; Pang, J.; de Laat, C.; Zhao, Z.
2017-01-01
Executing time critical applications within cloud environments while satisfying execution deadlines and response time requirements is challenging due to the difficulty of securing guaranteed performance from the underlying virtual infrastructure. Cost-effective solutions for hosting such
Time constraints for post-LGM landscape response to deglaciation in Val Viola, Central Italian Alps
Scotti, Riccardo; Brardinoni, Francesco; Crosta, Giovanni Battista; Cola, Giuseppe; Mair, Volkmar
2017-12-01
Across the northern European Alps, a long tradition of Quaternary studies has constrained post-LGM (Last Glacial Maximum) landscape history. The same picture remains largely unknown for the southern portion of the orogen. In this work, starting from existing 10Be exposure dating of three boulders in Val Viola, Central Italian Alps, we present the first detailed, post-LGM reconstruction of landscape (i.e., glacial, periglacial and paraglacial) response south of the Alpine divide. We pursue this task through Schmidt-hammer exposure-age dating (SHD) at 34 sites including moraines, rock glaciers, protalus ramparts, rock avalanche deposits and talus cones. In addition, based on the mapping of preserved moraines and on the numerical SHD ages, we reconstruct the glacier extent of four different stadials, including Egesen I (13.1 ± 1.1 ka), Egesen II (12.3 ± 0.6 ka), Kartell (11.0 ± 1.4 ka) and Kromer (9.7 ± 1.4 ka), whose chronologies agree with available counterparts from north of the Alpine divide. Results show that Equilibrium Line Altitude depressions (ΔELAs) associated with Younger Dryas and Early Holocene stadials are smaller than documented at most available sites in the northern Alps. These findings not only support the hypothesis of a dominant north-westerly atmospheric circulation during the Younger Dryas, but also suggest that this pattern could have lasted until the Early Holocene. SHD ages on rock glaciers and protalus ramparts indicate that favourable conditions for periglacial landform development occurred during the Younger Dryas (12.7 ± 1.1 ka), on the valley slopes above the glacier, as well as in newly de-glaciated areas during the Early Holocene (10.7 ± 1.3 and 8.8 ± 1.8 ka). The currently active rock glacier started to develop before 3.7 ± 0.8 ka and can be associated with the Löbben oscillation. Four of the five rock avalanches dated in Val Viola cluster within the Early Holocene, coinciding with an atmospheric warming phase. By contrast
Pan, Feng; Tao, Guohua
2013-03-07
Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
Energy Technology Data Exchange (ETDEWEB)
Tavassoli, A.A.
1986-10-01
Dislocation substructures formed in austenitic stainless steels 304L and 316L, fatigued at 673 K, 823 K and 873 K under total imposed strain ranges of 0.7 to 2.25%, and their correlation with mechanical properties have been investigated. In addition, substructures formed at lower strain ranges have been examined using foils prepared from parts of the specimens with larger cross-sections. The investigation has also been extended to include the effect of intermittent hold-times up to 1.8 × 10⁴ s and of sequential creep-fatigue and fatigue-creep. The experimental results obtained are analysed and their implications for current dislocation concepts and mechanical properties are discussed.
Directory of Open Access Journals (Sweden)
Ruoyu Luo
Due to the complexity of biological systems, simulation of biological networks is necessary but sometimes complicated. The classic stochastic simulation algorithm (SSA) by Gillespie and its modified versions are widely used to simulate the stochastic dynamics of biochemical reaction systems. However, it has remained a challenge to implement accurate and efficient simulation algorithms for general reaction schemes in growing cells. Here, we present a modeling and simulation tool, called 'GeneCircuits', which is specifically developed to simulate gene regulation in exponentially growing bacterial cells (such as E. coli) with overlapping cell cycles. Our tool integrates three specific features of these cells that are not generally included in SSA tools: (1) the time delay between the regulation and synthesis of proteins that is due to transcription and translation processes; (2) cell cycle-dependent periodic changes of gene dosage; and (3) variations in the propensities of chemical reactions that have time-dependent reaction rates as a consequence of volume expansion and cell division. We give three biologically relevant examples to illustrate the use of our simulation tool in quantitative studies of systems biology and synthetic biology.
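As an illustration of the classic Gillespie direct method that GeneCircuits extends, the sketch below simulates a minimal birth-death gene expression model at constant volume. The model and rate constants are hypothetical, and the growing-cell features described in the abstract (transcription/translation delays, gene dosage, volume expansion) are deliberately omitted:

```python
import math
import random

def gillespie_birth_death(k_syn, k_deg, x0, t_end, seed=1):
    """Gillespie direct method for a birth-death process:
    0 --k_syn--> X (synthesis), X --k_deg--> 0 (degradation)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    history = [(t, x)]
    while t < t_end:
        a1 = k_syn          # propensity of synthesis
        a2 = k_deg * x      # propensity of degradation
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if rng.random() * a0 < a1:               # pick reaction by propensity
            x += 1
        else:
            x -= 1
        history.append((t, x))
    return history

# The stationary mean of this process is k_syn / k_deg (here 100)
traj = gillespie_birth_death(k_syn=10.0, k_deg=0.1, x0=0, t_end=500.0)
```

For a growing cell, the degradation and synthesis propensities would additionally depend on time through the cell volume, which is what makes exact SSA variants for such systems nontrivial.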
Optimal Design and Real Time Implementation of Autonomous Microgrid Including Active Load
Directory of Open Access Journals (Sweden)
Mohamed A. Hassan
2018-05-01
Controller gains and power-sharing parameters are the main parameters affecting the dynamic performance of a microgrid. When an active load is added to an autonomous microgrid, the stability problem becomes more involved. In this paper, the effect of an active load on microgrid dynamic stability is explored. An autonomous microgrid including three inverter-based distributed generations (DGs) with an active load is modeled and the associated controllers are designed. Controller gains of the inverters and active load, as well as Phase Locked Loop (PLL) parameters, are optimally tuned to guarantee overall system stability. A weighted objective function is proposed to minimize the error in both measured active power and DC voltage based on time-domain simulations. Different AC and DC disturbances are applied to verify and assess the effectiveness of the proposed control strategy. The results demonstrate the potential of the proposed controller to enhance microgrid stability and to provide efficient damping characteristics. Additionally, the proposed controller is compared with the literature to demonstrate its superiority. Finally, the considered microgrid has been established and implemented on a real-time digital simulator (RTDS). The experimental results validate the simulation results and confirm the effectiveness of the proposed controllers in enhancing the stability of the considered microgrid.
Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C
2015-01-16
The optimisation of resolution in high-performance liquid chromatography is traditionally performed using only time information. However, even under optimal conditions, some peak pairs may remain unresolved. Such remaining overlap can still be resolved by deconvolution, which can be carried out with greater guarantee of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the peak purity (analyte peak fraction free of overlap) and the multivariate selectivity (a figure of merit derived from the net analyte signal) concepts. These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences. Therefore, they are useful to find experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation, using diode-array detection, of a mixture of 25 phenolic compounds that remained unresolved in the chromatographic domain using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of the orthogonal projection approach and alternating least squares. Copyright © 2014 Elsevier B.V. All rights reserved.
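A minimal sketch of the multivariate selectivity concept (based on Lorber's net analyte signal) that such COFs build on: the selectivity of an analyte is the fraction of its spectral norm that is orthogonal to the spectra of the coeluting interferents. The Gaussian spectra below are synthetic, and the function name is an illustrative choice, not the paper's:

```python
import numpy as np

def multivariate_selectivity(S, k):
    """Selectivity of analyte k from a matrix S whose columns are the
    pure-component spectra of all coeluting species.  1.0 means the
    analyte spectrum is fully distinguishable from the interferents;
    0.0 means it lies entirely in their span."""
    s_k = S[:, k]
    S_other = np.delete(S, k, axis=1)
    # Project s_k onto the orthogonal complement of the interferent space
    P = np.eye(S.shape[0]) - S_other @ np.linalg.pinv(S_other)
    nas = P @ s_k                      # net analyte signal
    return np.linalg.norm(nas) / np.linalg.norm(s_k)

# Two Gaussian-shaped spectra with partial overlap (synthetic example)
wl = np.linspace(0.0, 1.0, 200)
S = np.column_stack([np.exp(-(wl - 0.4) ** 2 / 0.01),
                     np.exp(-(wl - 0.6) ** 2 / 0.01)])
sel = multivariate_selectivity(S, 0)   # close to 1: spectrally resolvable
```

A COF of this kind rewards experimental conditions where coeluting peaks retain spectral differences, so deconvolution of the spectrochromatogram stays feasible.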
DEFF Research Database (Denmark)
Hu, Weihao; Chen, Zhe; Bak-Jensen, Birgitte
2010-01-01
Since the hourly spot market price is available one day ahead in Denmark, the price could be transferred to the consumers and they may shift their loads from high price periods to the low price periods in order to save their energy costs. The optimal load response to a real-time electricity price...... and may represent the future of electricity markets in some ways, is chosen as the studied power system in this paper. A distribution system where wind power capacity is 126% of maximum loads is chosen as the study case. This paper presents a nonlinear load optimization method to real-time power price...... for demand side management in order to save the energy costs as much as possible. Simulation results show that the optimal load response to a real-time electricity price has some good impacts on power system constraints in a distribution system with high wind power penetrations....
Robust scaling laws for energy confinement time, including radiated fraction, in Tokamaks
Murari, A.; Peluso, E.; Gaudio, P.; Gelfusa, M.
2017-12-01
In recent years, the limitations of scalings in power-law form that are obtained from traditional log regression have become increasingly evident in many fields of research. Given the wide gap in operational space between present-day and next-generation devices, robustness of the obtained models in guaranteeing reasonable extrapolability is a major issue. In this paper, a new technique, called symbolic regression, is reviewed, refined, and applied to the ITPA database for extracting scaling laws of the energy-confinement time at different radiated fraction levels. The main advantage of this new methodology is its ability to determine the most appropriate mathematical form of the scaling laws to model the available databases without the restriction of their having to be power laws. In a completely new development, this technique is combined with the concept of geodesic distance on Gaussian manifolds so as to take into account the error bars in the measurements and provide more reliable models. Robust scaling laws, including radiated fractions as regressor, have been found; they are not in power-law form, and are significantly better than the traditional scalings. These scaling laws, including radiated fractions, extrapolate quite differently to ITER, and therefore they require serious consideration. On the other hand, given the limitations of the existing databases, dedicated experimental investigations will have to be carried out to fully understand the impact of radiated fractions on the confinement in metallic machines and in the next generation of devices.
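For contrast with the symbolic regression advocated above, the traditional log-regression baseline can be sketched as follows: a power law becomes linear in log space and is fitted by ordinary least squares. The regressors, exponents, and data here are synthetic illustrations, not the ITPA database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "database": engineering parameters and a known power law
n_obs = 200
I_p = rng.uniform(0.5, 15.0, n_obs)    # plasma current (illustrative units)
B_t = rng.uniform(1.0, 8.0, n_obs)     # toroidal field
n_e = rng.uniform(1.0, 10.0, n_obs)    # density
true_exponents = np.array([0.9, 0.2, 0.4])
tau = (0.05 * I_p ** 0.9 * B_t ** 0.2 * n_e ** 0.4
       * rng.lognormal(0.0, 0.05, n_obs))   # multiplicative scatter

# tau = C * I^a * B^b * n^c  becomes  log tau = log C + a log I + ...
X = np.column_stack([np.ones(n_obs), np.log(I_p), np.log(B_t), np.log(n_e)])
coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
C_fit, exponents = np.exp(coef[0]), coef[1:]
```

The restriction to this functional form is exactly what symbolic regression removes: it searches over mathematical structures rather than assuming log-linearity up front.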
International Nuclear Information System (INIS)
Zhang Yunong; Li Zhan
2009-01-01
In this Letter, by following Zhang et al.'s method, a recurrent neural network (termed as Zhang neural network, ZNN) is developed and analyzed for solving online the time-varying convex quadratic-programming problem subject to time-varying linear-equality constraints. Different from conventional gradient-based neural networks (GNN), such a ZNN model makes full use of the time-derivative information of time-varying coefficient. The resultant ZNN model is theoretically proved to have global exponential convergence to the time-varying theoretical optimal solution of the investigated time-varying convex quadratic program. Computer-simulation results further substantiate the effectiveness, efficiency and novelty of such ZNN model and method.
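A minimal sketch of the ZNN design formula applied to a small time-varying equality-constrained QP: the KKT system W y = u(t) is driven by W y' = u'(t) - γ(W y - u(t)), so the residual decays exponentially. The specific problem, gain, and forward-Euler integration are illustrative choices, not the Letter's:

```python
import numpy as np

def znn_tv_qp(Q, p, dp, A, b, db, gamma=100.0, dt=1e-3, t_end=5.0):
    """Zhang neural network (ZNN) for the time-varying QP
        min 0.5 x'Qx + p(t)'x   s.t.   A x = b(t),
    via its KKT system W y = u(t), where y stacks x and the Lagrange
    multipliers.  Here Q and A are constant, so W' = 0, and the ZNN
    dynamics W y' = u'(t) - gamma*(W y - u(t)) use the time-derivative
    information u'(t) that gradient-based networks ignore."""
    n, m = Q.shape[0], A.shape[0]
    W = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    y = np.zeros(n + m)
    t = 0.0
    while t < t_end:
        u = np.concatenate([-p(t), b(t)])
        du = np.concatenate([-dp(t), db(t)])
        y += dt * np.linalg.solve(W, du - gamma * (W @ y - u))
        t += dt
    return y[:n], t

# Hypothetical example: Q = 2I, p(t) = [sin t, cos t], x1 + x2 = sin t
Q = 2.0 * np.eye(2)
A = np.array([[1.0, 1.0]])
p  = lambda t: np.array([np.sin(t), np.cos(t)])
dp = lambda t: np.array([np.cos(t), -np.sin(t)])
b  = lambda t: np.array([np.sin(t)])
db = lambda t: np.array([np.cos(t)])
x, t_final = znn_tv_qp(Q, p, dp, A, b, db)
```

At the final time the computed x tracks the time-varying optimum closely, whereas a pure gradient network would lag behind the moving solution.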
Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico
2017-06-01
Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 10⁶, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to address such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to benefit from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits, and the consequent space constraints on qubit operations, have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency
Beeble, Marisa L.; Bybee, Deborah; Sullivan, Cris M.
2010-01-01
This study examined the impact of resource constraints on the psychological well-being of survivors of intimate partner violence (IPV), testing whether resource constraints is one mechanism that partially mediates the relationship between IPV and women's well-being. Although within-woman changes in resource constraints did not mediate the…
DEFF Research Database (Denmark)
Hansen, Anders Dohn; Kolind, Esben; Clausen, Jens
2009-01-01
In this paper, we consider the Manpower Allocation Problem with Time Windows, Job-Teaming Constraints and a limited number of teams (m-MAPTWTC). Given a set of teams and a set of tasks, the problem is to assign to each team a sequential order of tasks to maximize the total number of assigned tasks....... Both teams and tasks may be restricted by time windows outside which operation is not possible. Some tasks require cooperation between teams, and all teams cooperating must initiate execution simultaneously. We present an IP-model for the problem, which is decomposed using Dantzig-Wolfe decomposition....... The problem is solved by column generation in a Branch-and-Price framework. Simultaneous execution of tasks is enforced by the branching scheme. To test the efficiency of the proposed algorithm, 12 realistic test instances are introduced. The algorithm is able to find the optimal solution in 11 of the test...
Directory of Open Access Journals (Sweden)
E. Khoury
2013-01-01
This paper deals with a gradually deteriorating system operating under an uncertain environment whose state is only known on a finite rolling horizon. As such, the system is subject to constraints. Maintenance actions can only be planned at imposed times called maintenance opportunities that are available on a limited visibility horizon. This system can, for example, be a commercial vehicle with a monitored critical component that can be maintained only in some specific workshops. Based on the considered system, we aim to use the monitoring data and the time-limited information for maintenance decision support in order to reduce its costs. We propose two predictive maintenance policies based, respectively, on cost and reliability criteria. Classical age-based and condition-based policies are considered as benchmarks. The performance assessment shows the value of the different types of information and the best way to use them in maintenance decision making.
Directory of Open Access Journals (Sweden)
David Bol
2011-01-01
Ultra-low-voltage operation improves the energy efficiency of logic circuits by a factor of 10, at the expense of speed, which is acceptable for applications with low-to-medium performance requirements such as RFID, biomedical devices and wireless sensors. However, in 65/45 nm CMOS, variability and short-channel effects significantly harm robustness and timing closure of ultra-low-voltage circuits by reducing noise margins and jeopardizing gate delays. The consequent guardband on the supply voltage to meet a reasonable manufacturing yield potentially ruins energy efficiency. Moreover, high leakage currents in these technologies degrade energy efficiency in case of long stand-by periods. In this paper, we review recently published techniques to design robust and energy-efficient ultra-low-voltage circuits in 65/45 nm CMOS under relaxed yet strict timing constraints.
International Nuclear Information System (INIS)
Le Gal La Salle, C.; Aquilina, L.; Fourre, E.; Jean-Baptiste, P.; Michelot, J.-L.; Roux, C.; Bugai, D.; Labasque, T.; Simonucci, C.; Van Meir, N.; Noret, A.; Bassot, S.; Dapoigny, A.; Baumier, D.
2012-01-01
Following the explosion of reactor 4 at the Chernobyl power plant in northern Ukraine in 1986, contaminated soil and vegetation were buried in shallow trenches dug directly on-site in an aeolian sand deposit. These trenches are sources of radionuclide (RN) pollution. The objective of the present study is to provide constraints for the Chernobyl flow and RN transport models by characterising groundwater residence time. A ³H/³He radiochronometer method (t₁/₂ = 12.3 a) and anthropogenic tracers including CFCs and SF₆ are investigated along with the water-mass natural tracers Na, Cl, ¹⁸O and ²H. The groundwater is stratified, as evidenced by Na and Cl concentrations and stable isotopes (¹⁸O, ²H). In the upper aeolian layer, the Na–Cl relationship corresponds to evapotranspiration of precipitation, while in the underlying alluvial layer, an increase in Na and Cl with depth suggests both water–rock interactions and mixing processes. The ³H/³He and CFC apparent groundwater ages increase with depth, ranging from 'recent' (1–3 a) at 2 m depth below the groundwater table to much higher apparent ages of 50–60 a at 27 m depth below the groundwater table. Discrepancies in ³H/³He and CFC apparent ages (20–25 a and 3–10 a, respectively) were observed during the 2008 campaign at an intermediate depth immediately below the aeolian/alluvial sand limit, which were attributed to complex water transfer processes. Extremely high SF₆ concentrations, well above equilibrium with the atmosphere and up to 1112 pptv, are attributed to significant contamination of the soils following the nuclear reactor explosion in 1986. The SF₆ concentration vs. the apparent groundwater ages agrees with this interpretation, as the high SF₆ concentrations are all more recent than 1985. The persistence of the SF₆ concentration suggests that SF₆ was introduced in the soil atmosphere and slowly integrated in the groundwater moving along the hydraulic gradient. The
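The ³H/³He apparent age used in studies like this one follows from the standard closed-system decay relation: the time needed for the measured tritiogenic ³He to accumulate from tritium decay. A minimal sketch, assuming both concentrations are expressed in the same units (e.g. TU):

```python
import math

T_HALF_TRITIUM = 12.3  # years, the half-life quoted in the abstract

def tritium_helium_age(tritium, tritiogenic_he3):
    """Apparent 3H/3He groundwater age in years:
        t = (t_half / ln 2) * ln(1 + [3He_trit] / [3H]),
    assuming a closed system (no 3He loss, no mixing)."""
    if tritium <= 0:
        raise ValueError("tritium concentration must be positive")
    decay_const = math.log(2) / T_HALF_TRITIUM
    return math.log(1.0 + tritiogenic_he3 / tritium) / decay_const

# Equal 3He_trit and 3H means exactly one half-life has elapsed
age = tritium_helium_age(10.0, 10.0)   # -> 12.3 years
```

The mixing and complex water-transfer processes invoked in the abstract are precisely the situations where this closed-system apparent age diverges from the true residence time, as seen in the ³H/³He vs. CFC discrepancies.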
Eisenberg, Michael B.
1994-01-01
Provides a summary of the papers in this issue that deal with electronic publishing. Highlights include the impact on publishers, authors, users, and librarians and information technologists; theoretical frameworks; practical applications and implications; and future possibilities. (Contains 15 references.) (LRW)
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to identify the uncertainties accurately, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to the constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
A consistent causality-based view on a timed process algebra including urgent interactions
Katoen, Joost P.; Latella, Diego; Langerak, Romanus; Brinksma, Hendrik; Bolognesi, Tommaso
1998-01-01
This paper discusses a timed variant of a process algebra akin to LOTOS, baptized UPA, in a causality-based setting. Two timed features are incorporated—a delay function which constrains the occurrence time of atomic actions and an urgency operator that forces (local or synchronized) actions to
Constraint Programming based Local Search for the Vehicle Routing Problem with Time Windows
Sala Reixach, Joan
2012-01-01
The project focuses on the Vehicle Routing Problem with Time Windows. It explores and tests a method based on a formulation of the problem in terms of constraint programming, and implements a local search method capable of performing large moves, known as Large Neighbourhood Search.
DEFF Research Database (Denmark)
Nielsen, Martin Bjerre; Krenk, Steen
2012-01-01
A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternions components and the four conjugate momentum variables via Hamilton’s equations. The introduction of an extended mass matrix leads to a symmetric set of eight...
Energy-efficient data collection in wireless sensor networks with time constraints
Mitici, M.A.; Goseling, Jasper; de Graaf, Maurits; Boucherie, Richardus J.
We consider the problem of retrieving a reliable estimate of an attribute from a wireless sensor network within a fixed time window and with minimum energy consumption for the sensors. The sensors are located in the plane according to some random spatial process. They perform energy harvesting and
3D time-dependent flow computations using a molecular stress function model with constraint release
DEFF Research Database (Denmark)
Rasmussen, Henrik Koblitz
2002-01-01
The numerical simulation of time-dependent viscoelastic flow (in three dimensions) is of interest in connection with a variety of polymer processing operations. The application of the numerical simulation techniques is in the analysis and design of polymer processing problems. These are operations......, such as thermoforming, blow moulding, compression moulding, gas-assisted injection moulding, simultaneous multi-component injection moulding....
Gürer, Derya; Van Hinsbergen, Douwe J.J.; Özkaptan, Murat; Creton, Iverna; Koymans, Mathijs R.; Cascella, Antonio; Langereis, Cornelis G.
2018-01-01
To quantitatively reconstruct the kinematic evolution of Central and Eastern Anatolia within the framework of Neotethyan subduction accommodating Africa-Eurasia convergence, we paleomagnetically assess the timing and amount of vertical axis rotations across the Ulukışla and Sivas regions. We show
Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking
International Nuclear Information System (INIS)
Moore, Douglas; Sawant, Amit; Ruan, Dan
2016-01-01
Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real-time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real-time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al., Ruan, and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al
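The coverage-vs-avoidance trade-off described in the abstract can be illustrated with a deliberately naive, brute-force fit of a single leaf pair: given a desired 1D aperture on a discretized row, choose leaf positions minimizing a weighted sum of underdosed target area and overdosed healthy-tissue area. Real trackers use closed-form or greedy fits to reach sub-millisecond speeds; the weights and grid here are hypothetical:

```python
import numpy as np

def fit_leaf_pair(target, n, w_under=1.0, w_over=0.5):
    """Brute-force fit of one MLC leaf pair to a desired 1D aperture.
    `target` is a boolean mask over n sample points marking where the
    transformed aperture is open.  Returns the (left, right) half-open
    index interval exposed by the leaves, minimizing
        w_under * (target area blocked) + w_over * (non-target exposed)."""
    best, best_cost = (0, 0), np.inf
    for left in range(n + 1):
        for right in range(left, n + 1):
            open_mask = np.zeros(n, dtype=bool)
            open_mask[left:right] = True
            under = np.sum(target & ~open_mask)   # target blocked by leaves
            over = np.sum(~target & open_mask)    # healthy tissue exposed
            cost = w_under * under + w_over * over
            if cost < best_cost:
                best_cost, best = cost, (left, right)
    return best, best_cost

# Desired aperture open on roughly [0.3, 0.7) of a unit-length row
xs = np.linspace(0.0, 1.0, 101)
target = (xs >= 0.3) & (xs < 0.7)
(left, right), cost = fit_leaf_pair(target, len(xs))
```

The optimization framework in the paper plays the same weighted-trade-off game, but with formulations fast enough to fit 80 leaf pairs in a fraction of a millisecond.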
Patterns and timing of loess-paleosol transitions in Eurasia: Constraints for paleoclimate studies
Zeeden, Christian; Hambach, Ulrich; Obreht, Igor; Hao, Qingzhen; Abels, Hemmo A.; Veres, Daniel; Lehmkuhl, Frank; Gavrilov, Milivoj B.; Marković, Slobodan B.
2018-03-01
Loess-paleosol sequences are the most extensive terrestrial paleoclimate records in Europe and Asia, documenting atmospheric circulation patterns, vegetation, and sedimentary dynamics in response to glacial-interglacial cyclicity. Between the two sides of the Eurasian continent, differences may exist in response and response times to glacial changes, and finding these is essential to understand the climate systems of the northern hemisphere. Therefore, assessment of common patterns and regional differences in loess-paleosol sequences (LPS) is vital but remains uncertain. Another key to interpreting these records is to constrain the mechanisms responsible for the formation and preservation of paleosols and loess layers in these paleoclimate archives. This study therefore compares LPS magnetic susceptibility records as proxies for paleosol formation intensity for selected sites from the central Chinese Loess Plateau and the Carpathian Basin in Europe over the last 440 kyr. Inconsistencies and crucial issues concerning the timing, correlation and paleoclimate potential of selected Eurasian LPS are outlined. Our comparison of Eurasian LPS shows generally similar patterns of paleosol formation, while highlighting several crucial differences. Especially for paleosols developed around 200 and 300 ka, the reported timing of soil formation differs by up to 30 ka. In addition, a drying and cooling trend over the last 300 ka has been documented in Europe, with no such evidence in the Asian records. The comparison shows that there is still uncertainty in defining the chronostratigraphic framework for these records on glacial-interglacial time scales, in the order of 5-30 kyr for the last 440 ka. We argue that the baseline of the magnetic susceptibility proxy in loess from the Carpathian Basin is the most striking difference between European LPS and the Chinese Loess Plateau. In our opinion, many of the current timing/age differences may be overcome once a comparable
Operational definition of (brane-induced) space-time and constraints on the fundamental parameters
International Nuclear Information System (INIS)
Maziashvili, Michael
2008-01-01
First we contemplate the operational definition of space-time in four dimensions in light of basic principles of quantum mechanics and general relativity and consider some of its phenomenological consequences. The quantum gravitational fluctuations of the background metric that comes through the operational definition of space-time are controlled by the Planck scale and are therefore strongly suppressed. Then we extend our analysis to the braneworld setup with low fundamental scale of gravity. It is observed that in this case the quantum gravitational fluctuations on the brane may become unacceptably large. The magnification of fluctuations is not linked directly to the low quantum gravity scale but rather to the higher-dimensional modification of Newton's inverse square law at relatively large distances. For models with compact extra dimensions the shape modulus of extra space can be used as a most natural and safe stabilization mechanism against these fluctuations
Proportional reasoning as a heuristic-based process: time constraint and dual task considerations.
Gillard, Ellen; Van Dooren, Wim; Schaeken, Walter; Verschaffel, Lieven
2009-01-01
The present study interprets the overuse of proportional solution methods from a dual process framework. Dual process theories claim that analytic operations involve time-consuming executive processing, whereas heuristic operations are fast and automatic. In two experiments to test whether proportional reasoning is heuristic-based, the participants solved "proportional" problems, for which proportional solution methods provide correct answers, and "nonproportional" problems known to elicit incorrect answers based on the assumption of proportionality. In Experiment 1, the available solution time was restricted. In Experiment 2, the executive resources were burdened with a secondary task. Both manipulations induced an increase in proportional answers and a decrease in correct answers to nonproportional problems. These results support the hypothesis that the choice for proportional methods is heuristic-based.
KARAKAYA, Murat; SEVİNÇ, Ender
2017-01-01
Recently, using Unmanned Aerial Vehicles (UAVs) for either military or civilian purposes has been gaining popularity. However, UAVs have their own limitations, which require adapted approaches to satisfy the Quality of Service (QoS) promised by the applications that depend on effective use of UAVs. One of the important limitations UAVs encounter is flight range. Most of the time, UAVs have very scarce energy resources and, thus, they have relatively short flight ranges. Besides, for the appl...
MASTER-ICATE constraints on the outburst time of OGLE-2012-NOVA-002
Levato, H.; Saffe, C.; Mallamaci, C.; Lopez, C.; Denisenko, F. Podest D.; Gorbovskoy, E.; Lipunov, V.; Balanutsa, P.; Tiurina, N.; Kornilov, V.; Belinski, A.; Shatskiy, N.; Chazov, V.; Kuznetsov, A.; Zimnukhov, D.; Krushinsky, V.; Zalozhnih, I.; Popov, A.; Bourdanov, A.; Punanova, A.; Ivanov, K.; Yazev, S.; Budnev, N.; Konstantinov, E.; Chuvalaev, O.; Poleshchuk, V.; Gress, O.; Parkhomenko, A.; Tlatov, A.; Dormidontov, D.; Senik, V.; Yurkov, V.; Sergienko, Y.; Varda, D.; Sinyakov, E.; Shumkov, V.; Shurpakov, S.; Podvorotny, P.
2012-10-01
MASTER-ICATE very wide field camera (72-mm f/1.2 lens + 11 Mpx CCD) located at Observatorio Astronomico Felix Aguilar (OAFA) near San Juan, Argentina, has observed the position of possible Nova OGLE-2012-NOVA-002 reported by L. Wyrzykowski et al. (ATel #4483) several times before 2012 May 20 and then again after 2012 July 03. MASTER-WFC is continuously imaging the areas of sky (24x16 sq. deg. field of view) with 5-sec unfiltered exposures.
Arentze, Theo; Ettema, D.F.; Timmermans, Harry
Existing theories and models in economics and transportation treat households’ decisions regarding allocation of time and income to activities as a resource-allocation optimization problem. This stands in contrast with the dynamic nature of day-by-day activity-travel choices. Therefore, in the
Rasouli, S.; Timmermans, H.J.P.
2014-01-01
This paper presents the results of a study, which simulates the effects of travel time delay on adaptations of planned activity-travel schedules. The activity generation and scheduling engine of the Albatross model system is applied to a fraction of the synthetic population of the Rotterdam region,
Sesana, Alberto; Haiman, Zoltán; Kocsis, Bence; Kelley, Luke Zoltan
2018-03-01
The advent of time domain astronomy is revolutionizing our understanding of the universe. Programs such as the Catalina Real-time Transient Survey (CRTS) or the Palomar Transient Factory (PTF) surveyed millions of objects for several years, allowing variability studies on large statistical samples. The inspection of ≈250 k quasars in CRTS resulted in a catalog of 111 potentially periodic sources, put forward as supermassive black hole binary (SMBHB) candidates. A similar investigation on PTF data yielded 33 candidates from a sample of ≈35 k quasars. Working under the SMBHB hypothesis, we compute the implied SMBHB merger rate and we use it to construct the expected gravitational wave background (GWB) at nano-Hz frequencies, probed by pulsar timing arrays (PTAs). After correcting for incompleteness and assuming virial mass estimates, we find that the GWB implied by the CRTS sample exceeds the current most stringent PTA upper limits by almost an order of magnitude. After further correcting for the implicit bias in virial mass measurements, the implied GWB drops significantly but is still in tension with the most stringent PTA upper limits. Similar results hold for the PTF sample. Bayesian model selection shows that the null hypothesis (whereby the candidates are false positives) is preferred over the binary hypothesis at about 2.3σ and 3.6σ for the CRTS and PTF samples respectively. Although not decisive, our analysis highlights the potential of PTAs as astrophysical probes of individual SMBHB candidates and indicates that the CRTS and PTF samples are likely contaminated by several false positives.
FPGA based image processing for optical surface inspection with real time constraints
Hasani, Ylber; Bodenstorfer, Ernst; Brodersen, Jörg; Mayer, Konrad J.
2015-02-01
Today, high-quality printing products like banknotes, stamps, or vouchers are automatically checked by optical surface inspection systems. In a typical optical surface inspection system, several digital cameras acquire the printing products with fine resolution from different viewing angles and at multiple wavelengths of the visible and near-infrared spectrum of light. The cameras deliver data streams with a huge amount of image data that have to be processed by an image processing system in real time. Due to the printing industry's demand for higher throughput, together with the necessity to check finer details of the print and its security features, the data rates to be processed tend to explode. In this contribution, a solution is proposed where the image processing load is distributed between FPGAs and digital signal processors (DSPs) in such a way that the strengths of both technologies can be exploited. The focus lies upon the implementation of image processing algorithms in an FPGA and its advantages. In the presented application, FPGA-based image preprocessing enables real-time implementation of an optical color surface inspection system with a spatial resolution of 100 μm and for object speeds over 10 m/s. For the implementation of image processing algorithms in the FPGA, pipeline parallelism with clock frequencies up to 150 MHz, together with spatial parallelism based on multiple instantiations of modules for parallel processing of multiple data streams, is exploited for the processing of image data from two cameras and three color channels. Due to their flexibility and fast response times, it is shown that FPGAs are ideally suited for realizing a configurable all-digital PLL for the processing of camera line-trigger signals with frequencies of about 100 kHz, using pure synchronous digital circuit design.
Chen, Hui-Ya; Wing, Alan M; Pratt, David
2006-04-01
Stepping in time with a metronome has been reported to improve pathological gait. Although there have been many studies of finger tapping synchronisation tasks with a metronome, the specific details of the influences of metronome timing on walking remain unknown. As a preliminary to studying pathological control of gait timing, we designed an experiment with four synchronisation tasks, unilateral heel tapping in sitting, bilateral heel tapping in sitting, bilateral heel tapping in standing, and stepping on the spot, in order to examine the influence of biomechanical constraints on metronome timing. These four conditions allow study of the effects of bilateral co-ordination and maintenance of balance on timing. Eight neurologically normal participants made heel tapping and stepping responses in synchrony with a metronome producing 500 ms interpulse intervals. In each trial comprising 40 intervals, one interval, selected at random between intervals 15 and 30, was lengthened or shortened, which resulted in a shift in phase of all subsequent metronome pulses. Performance measures were the speed of compensation for the phase shift, in terms of the temporal difference between the response and the metronome pulse, i.e. asynchrony, and the standard deviation of the asynchronies and interresponse intervals of steady state synchronisation. The speed of compensation decreased with increase in the demands of maintaining balance. The standard deviation varied across conditions but was not related to the compensation speed. The implications of these findings for metronome assisted gait are discussed in terms of a first-order linear correction account of synchronisation.
International Nuclear Information System (INIS)
Bai, D.S.; Chun, Y.R.; Kim, J.G.
1995-01-01
This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log-linear function of a (possibly transformed) stress. Two levels of stress higher than the use-condition stress, high and low, are used. Sampling plans with equal expected test times at the high and low test stresses, which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability, are obtained. The properties of the proposed life-test sampling plans are investigated.
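The assumed life-stress relation lends itself to a quick illustration. The sketch below simulates a failure-censored test under a Weibull lifetime whose scale parameter is log-linear in stress; all parameter values (a, b, shape, sample size, censoring number) are hypothetical placeholders, not the plan parameters from the paper.

```python
import numpy as np

# Sketch (not the authors' plan): Weibull lifetimes whose scale parameter
# theta(s) = exp(a + b*s) is a log-linear function of stress s; the test is
# failure-censored, i.e. it stops at the r-th observed failure.
def simulate_censored_test(stress, a=8.0, b=-2.0, shape=1.5, n=20, r=10, rng=None):
    rng = np.random.default_rng(rng)
    scale = np.exp(a + b * stress)          # log-linear life-stress relation
    lifetimes = np.sort(scale * rng.weibull(shape, size=n))
    failures = lifetimes[:r]                # the r earliest failure times
    test_time = failures[-1]                # test ends at the r-th failure
    return failures, test_time

# Same random draws, two stress levels: higher stress shortens the test.
fail_hi, t_hi = simulate_censored_test(stress=1.0, rng=1)
fail_lo, t_lo = simulate_censored_test(stress=0.5, rng=1)
```

A real plan would additionally choose n and r at each stress so that the producer's and consumer's risks are met; the sketch only shows the censoring mechanics.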
An electronic system for simulation of neural networks with a micro-second real time constraint
International Nuclear Information System (INIS)
Chorti, Arsenia; Granado, Bertrand; Denby, Bruce; Garda, Patrick
2001-01-01
Neural networks implemented in hardware can perform pattern recognition very quickly, and as such have been used to advantage in the triggering systems of certain high energy physics experiments. Typically, time constants of the order of a few microseconds are required. In this paper, we present a new system, MAHARADJA, for evaluating MLP and RBF neural network paradigms in real time. The system is tested on a possible ATLAS muon triggering application suggested by the Tel Aviv ATLAS group, consisting of a 4-8-8-4 MLP which must be evaluated in 10 microseconds. The inputs to the net are dx/dz, x(z=0), dy/dz, and y(z=0), whereas the outputs give pt, tan(phi), sin(theta), and q, the charge. With a 10 MHz clock, MAHARADJA calculates the result in 6.8 microseconds; at 20 MHz, which is readily attainable, this would be reduced to only 3.4 microseconds. The system can also handle RBF networks with 3 different distance metrics (Euclidean, Manhattan and Mahalanobis), and can simulate any MLP of 10 hidden layers or less. The electronic implementation uses FPGAs, which can be optimized for a specific neural network because the number of processing elements can be modified.
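For readers unfamiliar with the topology, evaluating a 4-8-8-4 MLP takes only a few lines; the weights below are random placeholders, not the trained ATLAS trigger network, and tanh is an assumed activation.

```python
import numpy as np

# Illustrative sketch of the 4-8-8-4 MLP topology mentioned in the abstract
# (random placeholder weights, not the trained trigger network).
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 4]                        # inputs: dx/dz, x(z=0), dy/dz, y(z=0)
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def forward(x):
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(w @ x + b)              # two hidden layers of 8 units
    return weights[-1] @ x + biases[-1]     # linear outputs: pt, tan(phi), sin(theta), q

out = forward(np.array([0.1, -0.2, 0.05, 0.3]))
```

In the hardware realization described above, each layer's multiply-accumulate operations run in parallel processing elements rather than sequentially as here, which is what makes microsecond-scale evaluation possible.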
Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi
2017-10-09
Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be predicted using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.
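The best-fit idea, choosing at each step the value that most reduces the distance to a target distribution, can be sketched as follows; this greedy toy is a hypothetical illustration in the spirit of SimBA, not the published algorithms.

```python
import numpy as np

# Hypothetical greedy best-fit sketch: at each draw, pick the candidate bin
# that minimizes the L1 distance between the simulated and target histograms.
def greedy_fit(target_hist, candidates, n_draws):
    counts = np.zeros_like(target_hist, dtype=float)
    for _ in range(n_draws):
        errors = []
        for c in candidates:
            trial = counts.copy()
            trial[c] += 1                   # tentatively place one draw in bin c
            errors.append(np.abs(trial / n_draws - target_hist).sum())
        counts[candidates[int(np.argmin(errors))]] += 1
    return counts / n_draws

target = np.array([0.5, 0.3, 0.2])          # toy target frequency distribution
fitted = greedy_fit(target, candidates=[0, 1, 2], n_draws=100)
```

The actual SimBA methods co-fit two distributions simultaneously and run in linear time, whereas this sketch fits one distribution and rescans all candidates per draw.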
Control of the tokamak safety factor profile with time-varying constraints using MPC
International Nuclear Information System (INIS)
Maljaars, E.; Felici, F.; De Baar, M.R.; Geelen, P.J.M.; Steinbuch, M.; Van Dongen, J.; Hogeweij, G.M.D.
2015-01-01
A controller is designed for the tokamak safety factor profile that takes real-time-varying operational and physics limits into account. This so-called model predictive controller (MPC) employs a prediction model in order to compute optimal control inputs that satisfy the given limits. The use of linearized models around a reference trajectory results in a quadratic programming problem that can easily be solved online. The performance of the controller is analysed in a set of ITER L-mode scenarios simulated with the non-linear plasma transport code RAPTOR. It is shown that the controller can reduce the tracking error due to an overestimation or underestimation of the modelled transport, while making a trade-off between residual error and amount of controller action. It is also shown that the controller can account for a sudden decrease in the available actuator power, while providing warnings ahead of time about expected violations of operational and physics limits. This controller can be extended and implemented in existing tokamaks in the near future. (paper)
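The core of such an MPC scheme, a small constrained optimization solved online over a prediction horizon, can be illustrated on a toy scalar model; the dynamics, cost weights, and input limits below are invented for illustration and have nothing to do with the RAPTOR-based design.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal MPC sketch: linear prediction x[k+1] = a*x[k] + b*u[k], quadratic
# tracking cost over a finite horizon, and actuator limits as bound
# constraints, yielding a small constrained program solved each step.
a, b, horizon = 0.9, 0.5, 10
x0, reference = 0.0, 1.0
u_min, u_max = -0.3, 0.3                    # time-varying limits would update these

def cost(u):
    x, j = x0, 0.0
    for uk in u:
        x = a * x + b * uk                  # roll out the prediction model
        j += (x - reference) ** 2 + 0.01 * uk ** 2
    return j

res = minimize(cost, np.zeros(horizon), bounds=[(u_min, u_max)] * horizon)
u_opt = res.x                               # optimal input sequence; apply u_opt[0]
```

With time-varying limits, `u_min`/`u_max` are simply updated before each solve; in a real-time setting the linearized model makes the problem a quadratic program, so a dedicated QP solver would replace the generic minimizer used here.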
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without requiring knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are provided.
A time-resolved model of the mesospheric Na layer: constraints on the meteor input function
Directory of Open Access Journals (Sweden)
J. M. C. Plane
2004-01-01
A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, in which the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s^-1, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide-aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10^5 cm^2 s^-1. This requires that the global meteoric mass input rate is less than about 20 t d^-1, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.
Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei
2018-05-10
The problem of sampled-data-based vibration control for structural systems with a finite-time state constraint and sensor outage is investigated in this paper. The objective of the controller design is to guarantee the stability and anti-disturbance performance of the closed-loop systems when sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes or strong winds occur in a very short time, and that it is often the peak responses that damage structures, the finite-time stability analysis method is introduced to constrain the state responses in a given time interval, and H-infinity stability is adopted in the controller design to make sure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can be easily checked by using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems.
International Nuclear Information System (INIS)
Almeida, M.E.; Ferreira, A.L; Macambira, M.J.B.; Sachett, C.R
2001-01-01
For a long time the Jacareacanga meta-volcanosedimentary sequence has been interpreted as an Archean greenstone belt terrain. However, recent data indicate younger U-Pb ages of about 2.1 Ga. In the Tapajos Province (Amazon Craton), the Cuiu-Cuiu Complex (2.00-2.03 Ga), the Creporizao granitoids (1.99-1.96 Ga) and the Jacareacanga Group are the oldest rocks. The Jacareacanga Group is composed of quartz-mica schists, quartzites, ferruginous quartzite, metachert, and minor talc-tremolite-chlorite schist, actinolite-epidote schist, hornfels, metargilites and metawackes, metamorphosed under low- to medium-grade conditions. The aim of the present paper is to establish the maximum age of Jacareacanga sedimentation and identify probable sources in the Espirito Santo region (Espirito Santo muscovite-biotite schist). In this research, similar and new results are obtained by the zircon evaporation method. This research presents geochronological data on the Espirito Santo muscovite-biotite schist, related to the Jacareacanga Group (Ferreira et al., 2000) in the Tapajos Province (Amazon Craton). The area is located near the Amazonas and Para State boundary (Northern Brazil) and the sample was obtained at the Espirito Santo (garimpo) small-scale gold mine (06°00'48"S, 58°08'17"W).
Bistacchi, A.; Pisterna, R.; Romano, V.; Rust, D.; Tibaldi, A.
2009-04-01
The plumbing system that connects a sub-volcanic magma reservoir to the surface has been the object of field characterization and mechanical modelling efforts since the pioneering work by Anderson (1936), who produced a detailed account of the spectacular Cullin Cone-sheet Complex (Isle of Skye, UK) and a geometrical and mechanical model aimed at defining the depth to the magma chamber. Since this work, the definition of the stress state in the half space comprised between the magma reservoir and the surface (modelled either as a flat surface or a surface comprising a volcanic edifice) has been considered the key point in reconstructing dike propagation paths from the magma chamber. In fact, this process is generally seen as the propagation in an elastic medium of purely tensional joints (mode I or opening-mode propagation), which follow trajectories perpendicular to the least compressive principal stress axis. Later works generally used different continuum mechanics methodologies (analytic, BEM, FEM) to solve the problem of a pressure source (the magma chamber, either a point source or a finite volume) in an elastic (in some cases heterogeneous) half space (bounded by a flat topography or topped by a "volcano"). All these models (with a few limited exceptions) disregard the effect of the regional stress field, which is caused by tectonic boundary forces and gravitational body load, and consider only the pressure source represented by the magma chamber (review in Gudmundsson, 2006). However, this is only a (sometimes subordinate) component of the total stress field. Grosfils (2007) first introduced the gravitational load (but not tectonic stresses) in an elastic model solved with FEM in a 2D axisymmetric half-space, showing that "failure to incorporate gravitational loading correctly" affects the calculated stress pattern and many of the predictions that can be drawn from the models. In this contribution we report on modelling results that include: 2D axisymmetric or true
DEFF Research Database (Denmark)
Mödersheim, Sebastian Alexander; Basin, David; Viganò, Luca
2010-01-01
We introduce constraint differentiation, a powerful technique for reducing search when model-checking security protocols using constraint-based methods. Constraint differentiation works by eliminating certain kinds of redundancies that arise in the search space when using constraints to represent...... results show that constraint differentiation substantially reduces search and considerably improves the performance of OFMC, enabling its application to a wider class of problems....
MO-B-BRB-02: Maintain the Quality of Treatment Planning for Time-Constraint Cases
International Nuclear Information System (INIS)
Chang, J.
2015-01-01
The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi
International Nuclear Information System (INIS)
Smith, Ryan L.; Millar, Jeremy L.; Panettieri, Vanessa; Mason, Natasha; Lancaster, Craig; Franich, Rick D.
2015-01-01
To investigate how the dwell time deviation constraint (DTDC) parameter, applied to inverse planning by simulated annealing (IPSA) optimisation, limits large dwell times from occurring in each catheter, and to characterise the effect on the resulting dosimetry for prostate high dose rate (HDR) brachytherapy treatment plans. An unconstrained IPSA optimised treatment plan, using the Oncentra Brachytherapy treatment planning system (version 4.3, Nucletron, an Elekta company, Elekta AB, Stockholm, Sweden), was generated for 20 consecutive HDR prostate brachytherapy patients, with the DTDC set to zero. Successive constrained optimisation plans were also created for each patient by increasing the DTDC parameter by 0.2, up to a maximum value of 1.0. We defined a "plan modulation index" to characterise the change in dwell time modulation as the DTDC parameter was increased. We calculated the dose volume histogram indices for the PTV (D90%, V100%, V150%, V200%) and urethra (D10%) to characterise the effect on the resulting dosimetry. The average PTV D90% decreases as the DTDC is applied, on average by only 1.5% for a DTDC = 0.4. The measures of high dose regions in the PTV, V150% and V200%, increase on average by less than 5% and 2%, respectively. The net effect of the DTDC on the modulation of dwell times has been characterised by the introduction of the plan modulation index. A DTDC applied during IPSA optimisation of HDR prostate brachytherapy plans reduces the occurrence of large isolated dwell times within individual catheters. The mechanism by which the DTDC works has been described and its effect on the modulation of dwell times has been characterised. The authors recommend using a DTDC parameter no greater than 0.4 to obtain a plan with dwell time modulation comparable to a geometric optimised plan. This yielded on average a 1.5% decrease in PTV coverage and an acceptable increase in V150%, without compromising the urethral dose.
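The abstract introduces a "plan modulation index" without reproducing its formula, so the sketch below uses a hypothetical coefficient-of-variation stand-in that captures the same notion: how strongly dwell times vary within a catheter.

```python
import numpy as np

# Hypothetical stand-in for the paper's plan modulation index: the
# coefficient of variation of the dwell times (std / mean). An isolated
# large dwell time inflates this value; a DTDC-constrained plan lowers it.
def modulation_index(dwell_times):
    t = np.asarray(dwell_times, dtype=float)
    return t.std() / t.mean() if t.mean() > 0 else 0.0

unconstrained = [0.1, 0.1, 9.5, 0.1, 0.2]   # one isolated large dwell time
constrained = [1.2, 1.5, 2.5, 1.8, 2.0]     # smoother modulation after a DTDC
```

The illustrative dwell-time vectors are invented; only the qualitative contrast (high index for the spiky plan, low for the smooth one) mirrors the behaviour described in the abstract.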
Turner, Joanna; Meng, Qinggang; Schaefer, Gerald; Whitbrook, Amanda; Soltoggio, Andrea
2017-09-28
This paper considers the problem of maximizing the number of task allocations in a distributed multirobot system under strict time constraints, where other optimization objectives must also be considered. It builds upon existing distributed task allocation algorithms, extending them with a novel method for maximizing the number of task assignments. The fundamental idea is that a task assignment to a robot has a high cost if its reassignment to another robot creates a feasible time slot for unallocated tasks. Multiple reassignments among networked robots may be required to create a feasible time slot, and an upper limit on this number of reassignments can be adjusted according to performance requirements. A simulated rescue scenario with task deadlines and fuel limits is used to demonstrate the performance of the proposed method compared with existing methods, the consensus-based bundle algorithm and the performance impact (PI) algorithm. Starting from existing (PI-generated) solutions, results show up to a 20% increase in task allocations using the proposed method.
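The reassignment idea can be sketched with a deliberately simplified model, robots with scalar time budgets rather than full schedules; the function and data below are hypothetical and are not the PI algorithm itself.

```python
# Simplified sketch of the single-reassignment check: each robot has a time
# budget; a task assignment is "high cost" if moving it to another robot with
# spare budget frees enough room on its current owner for an unallocated task.
def can_free_slot(robots, task, unallocated):
    """robots: dict name -> (used_time, budget); task: (owner, duration)."""
    owner, duration = task
    for name, (used, budget) in robots.items():
        if name != owner and used + duration <= budget:   # feasible reassignment target
            freed = robots[owner][1] - (robots[owner][0] - duration)
            if any(d <= freed for d in unallocated):      # an unallocated task now fits
                return True
    return False

robots = {"r1": (9.0, 10.0), "r2": (4.0, 10.0)}
# r1 holds a 5-unit task; moving it to r2 frees room on r1 for a 5-unit task.
feasible = can_free_slot(robots, ("r1", 5.0), unallocated=[5.0])
```

The paper's method chains several such reassignments up to an adjustable limit and ranks assignments by this cost; the sketch checks only a single hop.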
Validating the 5Fs mnemonic for cholelithiasis: time to include family history.
LENUS (Irish Health Repository)
Bass, Gary
2013-11-01
The time-honoured mnemonic of '5Fs' is a reminder to students that patients with upper abdominal pain and who conform to a profile of 'fair, fat, female, fertile and forty' are likely to have cholelithiasis. We feel, however, that a most important 'F', that for 'family history', is overlooked and should be introduced to enhance the value of a useful aide memoire.
Optimal Design and Real Time Implementation of Autonomous Microgrid Including Active Load
Mohamed A. Hassan; Muhammed Y. Worku; Mohamed A. Abido
2018-01-01
Controller gains and power-sharing parameters are the main parameters that affect the dynamic performance of the microgrid. When an active load is added to the autonomous microgrid, the stability problem becomes more involved. In this paper, the effect of an active load on microgrid dynamic stability is explored. An autonomous microgrid including three inverter-based distributed generations (DGs) with an active load is modeled and the associated controllers are designed. Controller gains of the inverters ...
Space Weather opportunities from the Swarm mission including near real time applications
DEFF Research Database (Denmark)
Stolle, Claudia; Floberghagen, Rune; Luehr, Hermann
2013-01-01
Sophisticated space weather monitoring aims at nowcasting and predicting solar-terrestrial interactions because their effects on the ionosphere and upper atmosphere may seriously impact advanced technology. Operating alert infrastructures rely heavily on ground-based measurements and satellite...... these products in timely manner will add significant value in monitoring present space weather and helping to predict the evolution of several magnetic and ionospheric events. Swarm will be a demonstrator mission for the valuable application of LEO satellite observations for space weather monitoring tools....
A model for Huanglongbing spread between citrus plants including delay times and human intervention
Vilamiu, Raphael G. d'A.; Ternes, Sonia; Braga, Guilherme A.; Laranjeira, Francisco F.
2012-09-01
The objective of this work was to present a compartmental deterministic mathematical model for representing the dynamics of HLB disease in a citrus orchard, including delay in the disease's incubation phase in the plants, and a delay period on the nymphal stage of Diaphorina citri, the most important HLB insect vector in Brazil. Numerical simulations were performed to assess the possible impacts of human detection efficiency of symptomatic plants, as well as the influence of a long incubation period of HLB in the plant.
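A compartmental model with an incubation delay can be illustrated with a toy discrete-time reduction; the rates, lag, and structure below are hypothetical placeholders, not the authors' delay-differential system.

```python
# Toy discrete-time sketch of an incubation delay in a plant population
# (hypothetical reduction, not the authors' model): susceptible plants become
# exposed at a rate driven by the symptomatic pool, and each exposure turns
# symptomatic only after a fixed incubation lag.
def simulate(steps, beta=0.02, incubation=30, s0=100.0):
    s, symptomatic = s0, 0.0
    history = []                       # new exposures per time step
    for t in range(steps):
        new_exposed = min(beta * s * (1 + symptomatic), s)
        s -= new_exposed
        history.append(new_exposed)
        if t >= incubation:            # exposures surface after the lag
            symptomatic += history[t - incubation]
    return s, symptomatic

plants_left, symptomatic = simulate(steps=120)
```

The delay term is what the abstract emphasises: before `incubation` steps have elapsed, no plant is symptomatic, so human detection of symptomatic plants necessarily lags the true spread of the disease.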
Ozbek, Müge; Bindemann, Markus
2011-10-01
The identification of unfamiliar faces has been studied extensively with matching tasks, in which observers decide if pairs of photographs depict the same person (identity matches) or different people (mismatches). In experimental studies in this field, performance is usually self-paced under the assumption that this will encourage best-possible accuracy. Here, we examined the temporal characteristics of this task by limiting display times and tracking observers' eye movements. Observers were required to make match/mismatch decisions to pairs of faces shown for 200, 500, 1000, or 2000 ms, or for an unlimited duration. Peak accuracy was reached within 2000 ms and two fixations to each face. However, intermixing exposure conditions produced a context effect that generally reduced accuracy on identity mismatch trials, even when unlimited viewing of faces was possible. These findings indicate that less than 2 s are required for face matching when exposure times are variable, but temporal constraints should be avoided altogether if accuracy is truly paramount. The implications of these findings are discussed.
Merkin, V. G.; Wiltberger, M. J.; Zhang, B.; Liu, J.; Wang, W.; Dimant, Y. S.; Oppenheim, M. M.; Lyon, J.
2017-12-01
During geomagnetic storms the magnetosphere-ionosphere-thermosphere system becomes activated in ways that are unique to disturbed conditions. This leads to emergence of physical feedback loops that provide tighter coupling between the system elements, often operating across disparate spatial and temporal scales. One such process that has recently received renewed interest is the generation of microscopic ionospheric turbulence in the electrojet regions (electrojet turbulence, ET) that results from strong convective electric fields imposed by the solar wind-magnetosphere interaction. ET leads to anomalous electron heating and generation of non-linear Pedersen current - both of which result in significant increases in effective ionospheric conductances. This, in turn, provides strong non-linear feedback on the magnetosphere. Recently, our group has published two studies aiming at a comprehensive analysis of the global effects of this microscopic process on the magnetosphere-ionosphere-thermosphere system. In one study, ET physics was incorporated in the TIEGCM model of the ionosphere-thermosphere. In the other study, ad hoc corrections to the ionospheric conductances based on ET theory were incorporated in the conductance module of the Lyon-Fedder-Mobarry (LFM) global magnetosphere model. In this presentation, we make the final step toward the full coupling of the microscopic ET physics within our global coupled model including LFM, the Rice Convection Model (RCM) and TIEGCM. To this end, ET effects are incorporated in the TIEGCM model and propagate throughout the system via thus modified TIEGCM conductances. The March 17, 2013 geomagnetic storm is used as a testbed for these fully coupled simulations, and the results of the model are compared with various ionospheric and magnetospheric observatories, including DMSP, AMPERE, and Van Allen Probes. Via these comparisons, we investigate, in particular, the ET effects on the global magnetosphere indicators such as the
Including foreshocks and aftershocks in time-independent probabilistic seismic hazard analyses
Boyd, Oliver S.
2012-01-01
Time‐independent probabilistic seismic‐hazard analysis treats each source as being temporally and spatially independent; hence foreshocks and aftershocks, which are both spatially and temporally dependent on the mainshock, are removed from earthquake catalogs. Yet, intuitively, these earthquakes should be considered part of the seismic hazard, capable of producing damaging ground motions. In this study, I consider the mainshock and its dependents as a time‐independent cluster, each cluster being temporally and spatially independent from any other. The cluster has the recurrence time of the mainshock and, by considering the earthquakes in the cluster as a union of events, dependent events have an opportunity to contribute to seismic ground motions and hazard. Based on the methods of the U.S. Geological Survey for a high‐hazard site, the inclusion of dependent events causes ground motions that are exceeded at probability levels of engineering interest to increase by about 10% but could be as high as 20% if variations in aftershock productivity can be accounted for reliably.
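The union-of-events treatment described above can be sketched numerically: a cluster exceeds a ground-motion level if at least one of its events does, and the cluster inherits the mainshock's recurrence rate. The probabilities and rates below are made-up illustrative values, not USGS figures.

```python
import math

def cluster_exceedance(p_events):
    """Probability that at least one event in the cluster exceeds the
    ground-motion level (events treated as a union within the cluster)."""
    q = 1.0
    for p in p_events:
        q *= 1.0 - p
    return 1.0 - q

def annual_exceedance(rate, p_cluster):
    """Poissonian annual exceedance probability for clusters arriving at
    `rate` per year, each exceeding with probability `p_cluster`."""
    return 1.0 - math.exp(-rate * p_cluster)

rate = 0.01                                   # clusters (mainshocks) per year
h_main = annual_exceedance(rate, 0.30)        # mainshock only
h_clust = annual_exceedance(rate, cluster_exceedance([0.30, 0.08, 0.05]))
print(f"hazard increase from dependents: {100 * (h_clust / h_main - 1):.1f}%")
```

With these toy numbers the dependent events raise the annual exceedance probability by roughly 29%; the 10-20% quoted in the abstract comes from the full USGS-style calculation.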
ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION
Energy Technology Data Exchange (ETDEWEB)
Chen, Bin; Maddumage, Prasad [Research Computing Center, Department of Scientific Computing, Florida State University, Tallahassee, FL 32306 (United States); Kantowski, Ronald; Dai, Xinyu; Baron, Eddie, E-mail: bchen3@fsu.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)
2015-05-15
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.
ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION
International Nuclear Information System (INIS)
Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie
2015-01-01
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python
Topology Optimization including Inequality Buoyancy Constraints
Picelli, R.; Van Dijk, R.; Vicente, W.M.; Pavanello, R.; Langelaar, M.; Van Keuen, A.
2014-01-01
This paper presents an evolutionary topology optimization method for applications in the design of completely submerged buoyant devices with design-dependent fluid pressure loading. This type of structure aids rig installations and pipeline transportation at all water depths in offshore structural
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Valencia Posso, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...... reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus...
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...
Pleuger, Jan; Mancktelow, Neil
2010-05-01
. This SW-side-up faulting was accommodated by the CF. For the NE segment of the CF, K-Ar data of 19-26 Ma (Zingg & Hunziker 1990) indicate that the fault was active at that time but these data cannot be linked to a specific displacement sense. Dextral faulting was active at c. 20 Ma as constrained by K-Ar ages of fault gouge from Riedel structures of the Insubric Fault (Zwingmann & Mancktelow 2004). One of the aims of our ongoing work in the TopoAlps project "4D kinematics of the Neogene western Alps" is to provide further absolute age constraints for the different kinematic stages of faulting along the CF by radiometric age dating. Concerning the displacement amount, our results suggest that the CF is a stretching fault accumulating dextral displacement toward the E. This interpretation is supported by the observation that while dextral displacement senses at the ICF are more or less balanced by sinistral ones at the ECF, dextral shear senses become more and more dominant toward Val d'Ossola in the NE. References: Carraro F. & Ferrara G. 1968: Schweiz. Mineral. Petrogr. Mitt. 48, 75-80. Lanza R. 1977: Schweiz. Mineral. Petrogr. Mitt. 57, 281-290. Romer R.L., Schärer U. & Steck A. 1996: Contrib. Mineral. Petrol. 123, 138-158. Scheuring B., Ahrendt H., Hunziker J.C. & Zingg A. 1974: Geol. Rundsch. 63, 305-326. Schmid S.M., Zingg, A. & Handy, M. 1987: Tectonophysics 135, 47-66. Zingg A. & Hunziker J.C. 1990: Eclogae geol. Helv. 83, 629-644. Zwingmann H. & Mancktelow N.S. 2004: Earth Planet. Sci. Lett. 223, 415-425.
Directory of Open Access Journals (Sweden)
Niti Ashish Kumar Desai
2015-12-01
Full Text Available Business strategies are formulated based on an understanding of customer needs. This requires development of a strategy to understand customer behaviour and buying patterns, both current and future. This involves understanding, first, how an organization currently understands customer needs and, second, predicting future trends to drive growth. This article focuses on the purchase trend of customers, where the timing of purchase is more important than the association of items to be purchased, and which can be found with Sequential Pattern Mining (SPM) methods. Conventional SPM algorithms worked purely on frequency, identifying patterns that were more frequent but suffering from challenges like generation of a huge number of uninteresting patterns, lack of the user's interested patterns, the rare item problem, etc. The article attempts a solution through development of an SPM algorithm based on various constraints (Gap, Compactness, Item, Recency, Profitability and Length) along with the Frequency constraint. The six additional constraints ensure that all patterns are recently active (Recency), active for a certain time span (Compactness), profitable (Profitability) and indicative of the next timeline for purchase (Length, Item, Gap). The article also attempts to throw light on how the proposed constraint-based PrefixSpan algorithm helps to understand the buying behaviour of customers, which is still at a formative stage.
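A minimal sketch of how such constraints prune support counting (this is an illustrative checker, not the authors' constraint-based PrefixSpan; only the Gap and Recency constraints are shown, and the item names are invented):

```python
def supports(sequence, pattern, max_gap, recency_cutoff):
    """Check whether a timestamped sequence contains `pattern` such that
    consecutive matched items are at most `max_gap` time units apart (Gap)
    and the last matched item occurs at or after `recency_cutoff` (Recency).
    `sequence` is a list of (timestamp, item) pairs sorted by timestamp."""
    def search(pos, idx, last_t):
        if idx == len(pattern):
            return last_t >= recency_cutoff
        for i in range(pos, len(sequence)):
            t, item = sequence[i]
            if item == pattern[idx] and (last_t is None or t - last_t <= max_gap):
                if search(i + 1, idx + 1, t):
                    return True
        return False
    return search(0, 0, None)

def support_count(db, pattern, max_gap, recency_cutoff):
    """Number of sequences in the database satisfying all constraints."""
    return sum(supports(s, pattern, max_gap, recency_cutoff) for s in db)

db = [
    [(1, "milk"), (3, "bread"), (9, "jam")],
    [(1, "milk"), (8, "bread")],
    [(2, "bread"), (4, "milk"), (6, "bread")],
]
print(support_count(db, ["milk", "bread"], max_gap=4, recency_cutoff=0))  # 2
```

A real implementation would push these checks into the PrefixSpan projection step so that unpromising prefixes are never extended.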
DEFF Research Database (Denmark)
Xiao, Zhao xia; Nan, Jiakai; Guerrero, Josep M.
2017-01-01
A multiple time-scale optimization scheduling, including day-ahead and short-term stages, for an islanded microgrid is presented. In this paper, the microgrid under study includes photovoltaics (PV), wind turbine (WT), diesel generator (DG), batteries, and shiftable loads. The study considers the maximum...... efficiency operation area for the diesel engine and the cost of the battery charge/discharge cycle losses. The day-ahead generation scheduling takes into account the minimum operational cost and the maximum load satisfaction as the objective function. Short-term optimal dispatch is based on minimizing...
Design with Nonlinear Constraints
Tang, Chengcheng
2015-01-01
. The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh based and spline based representations, the application
Design with Nonlinear Constraints
Tang, Chengcheng
2015-12-10
Most modern industrial and architectural designs need to satisfy the requirements of their targeted performance and respect the limitations of available fabrication technologies. At the same time, they should reflect the artistic considerations and personal taste of the designers, which cannot be simply formulated as optimization goals with single best solutions. This thesis aims at a general, flexible yet e cient computational framework for interactive creation, exploration and discovery of serviceable, constructible, and stylish designs. By formulating nonlinear engineering considerations as linear or quadratic expressions by introducing auxiliary variables, the constrained space could be e ciently accessed by the proposed algorithm Guided Projection, with the guidance of aesthetic formulations. The approach is introduced through applications in different scenarios, its effectiveness is demonstrated by examples that were difficult or even impossible to be computationally designed before. The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh based and spline based representations, the application is extended to developable surfaces including origami with curved creases. Finally, general approaches to extend hard constraints and soft energies are discussed, followed by a concluding remark outlooking possible future studies.
Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele
2014-04-01
The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR. © 2013 John Wiley & Sons Ltd.
Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints
Yan, Wei
2012-01-01
An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices that follow jump-diffusion processes, in line with the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example illustrating M-V portfolio selection under discontinuous prices is presented.
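The effect of a no-shorting constraint is easiest to see in a static single-period analogue (the paper works in continuous time with jump-diffusions; the parameter values below are illustrative):

```python
import numpy as np

# Static two-asset analogue of the no-shorting constraint: restrict the
# weight on asset 2 to [0, 1] instead of the real line.
mu = np.array([0.05, 0.12])          # expected returns (illustrative)
sigma = np.array([0.10, 0.25])       # volatilities
rho = 0.2                            # correlation
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

def frontier(ws):
    """Portfolio mean and variance for a range of weights on asset 2."""
    w = np.column_stack([1 - ws, ws])
    means = w @ mu
    variances = np.einsum("ij,jk,ik->i", w, cov, w)  # quadratic form per row
    return means, variances

ws_short = np.linspace(-1.0, 2.0, 301)   # shorting allowed
ws_noshort = np.linspace(0.0, 1.0, 101)  # no-shorting constraint

m_s, v_s = frontier(ws_short)
m_n, v_n = frontier(ws_noshort)
print(f"min variance, no-short: {v_n.min():.6f}; with shorting: {v_s.min():.6f}")
```

Restricting weights to [0, 1] shrinks the feasible set, so the constrained frontier can never dominate the unconstrained one; the paper derives the analogous continuous-time frontier from the stochastic HJB equation.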
Shoemaker, Ian M.; Murase, Kohta
2018-04-01
The Laser Interferometer Gravitational-Wave Observatory (LIGO) has recently discovered gravitational waves (GWs) from its first neutron star-neutron star merger at a distance of ∼40 Mpc from the Earth. The associated electromagnetic (EM) detection of the event, including the short gamma-ray burst within Δt ∼ 2 s after the GW arrival, can be used to test various aspects of source physics and GW propagation. Using GW170817 as the first GW-EM example, we show that this event provides a stringent direct test that GWs travel at the speed of light. The gravitational potential of the Milky Way provides a potential source of Shapiro time delay difference between the arrival of photons and GWs, and we demonstrate that the nearly coincident detection of the GW and EM signals can yield strong limits on anomalous gravitational time delay, updating the previous limits by taking into account details of the Milky Way's gravitational potential. Finally, we also obtain an intriguing limit on the size of the prompt emission region of GRB 170817A, and discuss implications for the emission mechanism of short gamma-ray bursts.
International Nuclear Information System (INIS)
Provost, J.
1984-01-01
Accurate tests of the theory of stellar structure and evolution are available from observations of the Sun. The solar constraints are reviewed, with special attention to recent progress in observing global solar oscillations. Each constraint is sensitive to a given region of the Sun. The present solar models (standard, low Z, mixed) are discussed with respect to neutrino flux, low- and high-degree five-minute oscillations and low-degree internal gravity modes. It appears that at present there exists no solar model able to fully account for all the observed quantities. (Auth.)
Andersson, P. B. U.; Kropp, W.
2008-11-01
Rolling resistance, traction, wear, excitation of vibrations, and noise generation are all attributes to consider in optimisation of the interaction between automotive tyres and wearing courses of roads. The key to understand and describe the interaction is to include a wide range of length scales in the description of the contact geometry. This means including scales on the order of micrometres that have been neglected in previous tyre/road interaction models. A time domain contact model for the tyre/road interaction that includes interfacial details is presented. The contact geometry is discretised into multiple elements forming pairs of matching points. The dynamic response of the tyre is calculated by convolving the contact forces with pre-calculated Green's functions. The smaller-length scales are included by using constitutive interfacial relations, i.e. by using nonlinear contact springs, for each pair of contact elements. The method is presented for normal (out-of-plane) contact and a method for assessing the stiffness of the nonlinear springs based on detailed geometry and elastic data of the tread is suggested. The governing equations of the nonlinear contact problem are solved with the Newton-Raphson iterative scheme. Relations between force, indentation, and contact stiffness are calculated for a single tread block in contact with a road surface. The calculated results have the same character as results from measurements found in literature. Comparison to traditional contact formulations shows that the effect of the small-scale roughness is large; the contact stiffness is only up to half of the stiffness that would result if contact is made over the whole element directly to the bulk of the tread. It is concluded that the suggested contact formulation is a suitable model to include more details of the contact interface. Further, the presented result for the tread block in contact with the road is a suitable input for a global tyre/road interaction model.
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics in which multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-difference scheme. The proposed physical model thus overcomes the limitations of one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power-law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to exact frequency power-law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite-difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
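The relaxation-parameter fitting can be sketched as a linear least-squares problem: with relaxation times fixed on a grid, the summed relaxation absorption is linear in the process strengths. This uses the generic relaxation absorption form and invented band and power-law values; the paper's exact parameterization and fitting procedure may differ.

```python
import numpy as np

f = np.logspace(5, 7, 200)              # 0.1 to 10 MHz band
w = 2 * np.pi * f
alpha0, y = 0.5, 1.1                    # illustrative power-law coefficients
target = alpha0 * (f / 1e6) ** y        # target attenuation (arbitrary units)

tau = np.logspace(-8, -5, 6)            # fixed grid of relaxation times (s)
# Column m: absorption of a unit-strength relaxation process with time tau_m.
A = (w[:, None] ** 2 * tau[None, :]) / (1 + (w[:, None] * tau[None, :]) ** 2)
# Weight rows by 1/target so the fit minimizes *relative* error.
a, *_ = np.linalg.lstsq(A / target[:, None], np.ones_like(target), rcond=None)

rel_err = np.max(np.abs(A @ a - target) / target)
print(f"max relative error over the band: {rel_err:.2%}")
```

Non-negative strengths would be enforced in practice; plain `lstsq` keeps the sketch short.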
International Nuclear Information System (INIS)
Lorencak, M.; Seward, D.; Burg, J.-P.
1999-01-01
Full text: Nine zircon and eighteen apatite fission-track analyses have been made in order to determine the low-temperature cooling history of the Shuswap metamorphic core complex, western Canada. The zircons vary in apparent age from 53.9 ± 5.6 to 37.5 ± 5.0 Ma and the apatites from 48.5 ± 3.2 to 27.7 ± 3.4 Ma. In the footwall of the detachment faults defining the core complex, the cooling histories show a similarity until temperatures of ∼250°C were reached at about 45 Ma. From then on, activity on two normal faults, the Columbia River Fault and the Victor Creek Fault, controlled the regional cooling pattern. The ages and the combination of ages fall into four groups and, on the basis of the fission-track data, we suggest that the region can now be divided into four thermotectonic units which are the result of differing tectonic controls during regional extension. Additionally, a complete cooling history of the Shuswap core complex can now be reconstructed, using constraints from U-Pb, Rb-Sr and K-Ar age data from several authors as well as the fission-track results presented here. Copyright (1999) Geological Society of Australia
Klinkusch, Stefan; Saalfrank, Peter; Klamroth, Tillmann
2009-09-21
We report simulations of laser-pulse driven many-electron dynamics by means of a simple, heuristic extension of the time-dependent configuration interaction singles (TD-CIS) approach. The extension allows for the treatment of ionizing states as nonstationary states with a finite, energy-dependent lifetime to account for above-threshold ionization losses in laser-driven many-electron dynamics. The extended TD-CIS method is applied to the following specific examples: (i) state-to-state transitions in the LiCN molecule which correspond to intramolecular charge transfer, (ii) creation of electronic wave packets in LiCN including wave packet analysis by pump-probe spectroscopy, and, finally, (iii) the effect of ionization on the dynamic polarizability of H₂ when calculated nonperturbatively by TD-CIS.
Hauff, F.; Hoernle, K.; Tilton, G.; Graham, D. W.; Kerr, A. C.
2000-01-01
Oceanic flood basalts are poorly understood, short-term expressions of highly increased heat flux and mass flow within the convecting mantle. The uniqueness of the Caribbean Large Igneous Province (CLIP, 92-74 Ma) with respect to other Cretaceous oceanic plateaus is its extensive sub-aerial exposures, providing an excellent basis to investigate the temporal and compositional relationships within a starting plume head. We present major element, trace element and initial Sr-Nd-Pb isotope compositions of 40 extrusive rocks from the Caribbean Plateau, including onland sections in Costa Rica, Colombia and Curaçao as well as DSDP Sites in the Central Caribbean. Even though the lavas were erupted over an area of ∼3×10⁶ km², the majority have strikingly uniform incompatible element patterns (La/Yb=0.96±0.16, n=64 out of 79 samples, 2σ) and initial Nd-Pb isotopic compositions (e.g. ¹⁴³Nd/¹⁴⁴Ndᵢ=0.51291±3, εNdᵢ=7.3±0.6, ²⁰⁶Pb/²⁰⁴Pbᵢ=18.86±0.12, n=54 out of 66, 2σ). Lavas with endmember compositions have only been sampled at the DSDP Sites, Gorgona Island (Colombia) and the 65-60 Ma accreted Quepos and Osa igneous complexes (Costa Rica) of the subsequent hotspot track. Despite the relatively uniform composition of most lavas, linear correlations exist between isotope ratios and between isotope and highly incompatible trace element ratios. The Sr-Nd-Pb isotope and trace element signatures of the chemically enriched lavas are compatible with derivation from recycled oceanic crust, while the depleted lavas are derived from a highly residual source. This source could represent either oceanic lithospheric mantle left after ocean crust formation or gabbros with interlayered ultramafic cumulates of the lower oceanic crust. High ³He/⁴He in olivines of enriched picrites at Quepos are ∼12 times higher than the atmospheric ratio, suggesting that the enriched component may have once resided in the lower mantle. Evaluation of the Sm-Nd and U-Pb isotope systematics on
Liu, Jinxing
2013-04-24
When the brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of T_elem, the characteristic releasing time of the internal forces of the broken elements, and T_lattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with T_load, the characteristic loading period. The load-unload (L-U) method is used for one extreme, T_elem << T_lattice, whereas the force-release (F-R) method is used for the other, T_elem >> T_lattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of T_elem to T_lattice by adjusting their contributions in the trial displacement field. Therefore, the material dependence of the snap-back instabilities is implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).
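The γ-weighted combination of trial fields can be written compactly (a hypothetical interface; the paper's construction operates on full lattice fields inside the iterative solver):

```python
import numpy as np

def combined_trial_field(u_lu, u_fr, gamma):
    """Blend the load-unload (L-U) and force-release (F-R) trial displacement
    fields. gamma = 0 recovers the pure L-U extreme (T_elem << T_lattice),
    gamma = 1 the pure F-R extreme (T_elem >> T_lattice)."""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("gamma must lie in [0, 1]")
    return (1.0 - gamma) * np.asarray(u_lu) + gamma * np.asarray(u_fr)

u_lu = np.array([0.0, 0.4, 1.0])   # trial field from a load-unload step
u_fr = np.array([0.0, 0.6, 1.0])   # trial field from a force-release step
print(combined_trial_field(u_lu, u_fr, 0.25))  # middle node: 0.45
```

Sweeping γ on the same specimen then isolates the effect of the T_elem/T_lattice ratio, as in the same-specimen study described above.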
van der Linden, Willem J.; Scrams, David J.; Schnipke, Deborah L.
2003-01-01
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool that is updated each time a new item has
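A sketch of time-aware greedy item selection under a lognormal response-time model (a common choice in this literature; the item parameters, the per-item budgeting rule, and the 2PL information criterion here are illustrative assumptions, not the paper's algorithm):

```python
import math

def expected_time(beta, tau, alpha=2.0):
    """Expected response time under a lognormal response-time model with
    time intensity beta, examinee speed tau and discrimination alpha."""
    return math.exp((beta - tau) + 1.0 / (2.0 * alpha ** 2))

def fisher_information(item, theta):
    """2PL item information at ability theta."""
    a, b = item["a"], item["b"]
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_item(pool, theta, tau, time_left, items_left):
    """Greedy constrained selection: among items whose expected time fits the
    per-item share of the remaining budget, take the most informative one."""
    budget = time_left / items_left
    feasible = [it for it in pool if expected_time(it["beta"], tau) <= budget]
    candidates = feasible or pool        # relax the constraint if nothing fits
    return max(candidates, key=lambda it: fisher_information(it, theta))

pool = [
    {"a": 1.6, "b": 0.0, "beta": 4.2},   # informative but slow
    {"a": 1.2, "b": 0.1, "beta": 3.4},   # slightly less informative, faster
]
choice = pick_item(pool, theta=0.0, tau=0.0, time_left=600.0, items_left=10)
print(choice["beta"])  # the faster item is chosen despite lower information
```

The slow item's expected time (about 76 s) exceeds the 60 s per-item budget, so the selection falls back to the faster item even though it carries less information at θ = 0.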
International Nuclear Information System (INIS)
Jones, P.M.S.
1987-01-01
There are considerable incentives for the use of nuclear in preference to other sources for base-load electricity generation in most of the developed world. These are economic, strategic, environmental and climatic. However, there are two potential constraints which could hinder the development of nuclear power to its full economic potential: public opinion and financial regulations which distort the nuclear economic advantage. The concerns of the anti-nuclear lobby are over safety (especially following the Chernobyl accident), the management of radioactive waste, the potential effects of large-scale exposure of the population to radiation, and weapons proliferation. These are discussed. The financial constraint is over two factors, the availability of funds and the perception of cost, both of which are discussed. (U.K.)
International Nuclear Information System (INIS)
Sugier, A.
2003-01-01
The selected new constraints should be consistent with the scale of concern, i.e. be expressed roughly as fractions or multiples of the average annual background. They should take into account risk considerations and include the values of the current limits, constraints and other action levels. The recommendation is to select four leading values for the new constraints: 500 mSv (single event or in a decade) as a maximum value, 0.01 mSv/year as a minimum value, and two intermediate values, 20 mSv/year and 0.3 mSv/year. This new set of dose constraints, representing basic minimum standards of protection for individuals and taking into account the specificity of the exposure situations, is thus coherent with the current values which can be found in ICRP Publications. A few warnings need, however, to be noted. There is no longer a multi-source limit set by ICRP. The coherence between the proposed value of the dose constraint (20 mSv/year) and the current occupational dose limit of 20 mSv/year is valid only if the workers are exposed to one single source. When there is more than one source, it will be necessary to apportion. The value of 1000 mSv lifetime used for relocation can be expressed as an annual dose, which gives approximately 10 mSv/year and is coherent with the proposed dose constraint. (N.C.)
2016-06-01
This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...
Directory of Open Access Journals (Sweden)
Wei Tu
2015-10-01
Full Text Available Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emission. Due to non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTW in a short time. This novel approach will contribute to the spatial decision support community by providing an effective vehicle routing optimization method for large transportation applications in both public and private sectors.
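One plausible form of a spatial-temporal distance of this kind combines travel time with time-window compatibility (a sketch; the paper derives its distance from the spatial-temporal Voronoi diagram itself, and the customers below are invented):

```python
import math

def st_distance(ci, cj, speed=1.0, penalty=1.0):
    """A spatial-temporal distance from customer i to customer j: travel
    time plus a penalty for time-window incompatibility (waiting if we
    arrive before j's window opens, infeasible if after it closes)."""
    travel = math.dist(ci["xy"], cj["xy"]) / speed
    arrival = ci["ready"] + ci["service"] + travel
    if arrival > cj["due"]:
        return float("inf")                 # j cannot be served after i
    wait = max(0.0, cj["ready"] - arrival)
    return travel + penalty * wait

depot = {"xy": (0, 0), "ready": 0.0, "service": 0.0}
a = {"xy": (3, 4), "ready": 20.0, "due": 40.0, "service": 2.0}
b = {"xy": (6, 8), "ready": 4.0, "due": 9.0, "service": 2.0}

print(st_distance(depot, a))   # 5.0 travel + 15.0 wait = 20.0
print(st_distance(depot, b))   # arrival 10.0 > due 9.0 -> inf
```

Ranking neighbors by such a distance pushes customers that are spatially close but temporally incompatible out of the candidate list, which is the intuition behind the space-time neighbor search described above.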
Bertrand, G.; Comperat, M.; Lallemant, M.; Watelle, G.
1980-03-01
Copper sulfate pentahydrate dehydration into trihydrate was investigated using monocrystalline platelets with varying crystallographic orientations. The morphological and kinetic features of the trihydrate domains were examined. Different shapes were observed: polygons (parallelograms, hexagons) and ellipses; their conditions of occurrence are reported in the (P, T) diagram. At first (for about 2 min), the ratio of the long to the short axes of elliptical domains changes with time; these subsequently develop homothetically and the rate ratio is then only pressure dependent. Temperature influence is inferred from that of pressure. Polygonal shapes are time dependent and result in ellipses. So far, no model can be put forward. Yet, qualitatively, the polygonal shape of a domain may be explained by the prevalence of the crystal arrangement and the elliptical shape by that of the solid tensorial properties. The influence of those factors might be modulated versus pressure, temperature, interface extent, and, thus, time.
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
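The key mechanism above is that a non-convex energy curve makes a mixture of two speeds cheaper than one steady intermediate speed. A small numerical illustration with a hypothetical two-branch power curve (the numbers are invented, not the paper's measured human data):

```python
def power(v):
    """Hypothetical metabolic power vs speed: a walking branch whose cost
    rises steeply with speed and a running branch with a high offset but
    shallow slope. Taking the cheaper branch yields a non-convex curve."""
    p_walk = 1.0 + v**3
    p_run = 10.0 + v
    return min(p_walk, p_run)

def steady_energy(v_avg, T):
    """Energy to travel for time T at one steady speed v_avg."""
    return power(v_avg) * T

def mixture_energy(v1, v2, v_avg, T):
    """Energy when splitting time T between speeds v1 and v2 so that the
    average speed is v_avg (time fractions a and 1-a)."""
    a = (v2 - v_avg) / (v2 - v1)
    return (a * power(v1) + (1 - a) * power(v2)) * T
```

For an average speed near the walk/run crossover, mixing a slow walk and a faster run undercuts the steady-speed cost, which is the convexification argument the abstract invokes.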
Platteeuw, M.; Van Eerden, M.R.
1995-01-01
Two Cormorant colonies in The Netherlands (Naardermeer and Oostvaardersplassen), exploiting the same water bodies but situated at different distances from them, were compared with respect to daily variations in exact fishing sites and corresponding variations in time budget and fish consumption.
Misconceptions and constraints
International Nuclear Information System (INIS)
Whitten, M.; Mahon, R.
2005-01-01
In theory, the sterile insect technique (SIT) is applicable to a wide variety of invertebrate pests. However, in practice, the approach has been successfully applied to only a few major pests. Chapters in this volume address possible reasons for this discrepancy, e.g. Klassen, Lance and McInnis, and Robinson and Hendrichs. The shortfall between theory and practice is partly due to the persistence of some common misconceptions, but it is mainly due to one constraint, or a combination of constraints, that are biological, financial, social or political in nature. This chapter's goal is to dispel some major misconceptions, and view the constraints as challenges to overcome, seeing them as opportunities to exploit. Some of the common misconceptions include: (1) released insects retain residual radiation, (2) females must be monogamous, (3) released males must be fully sterile, (4) eradication is the only goal, (5) the SIT is too sophisticated for developing countries, and (6) the SIT is not a component of an area-wide integrated pest management (AW-IPM) strategy. The more obvious constraints are the perceived high costs of the SIT, and the low competitiveness of released sterile males. The perceived high up-front costs of the SIT, their visibility, and the lack of private investment (compared with alternative suppression measures) emerge as serious constraints. Failure to appreciate the true nature of genetic approaches, such as the SIT, may pose a significant constraint to the wider adoption of the SIT and other genetically-based tactics, e.g. transgenic genetically modified organisms (GMOs). Lack of support for the necessary underpinning strategic research also appears to be an important constraint. Hence the case for extensive strategic research in ecology, population dynamics, genetics, and insect behaviour and nutrition is a compelling one. Raising the competitiveness of released sterile males remains the major research objective of the SIT. (author)
De Muro, Sandro; Buosi, Carla; Pusceddu, Nicola; Frongia, Paolo; Passarella, Marinella; Ibba, Angelo
2016-04-01
The coastal zones of the Mediterranean have undergone increasing pressure over the last century. The intensifying coastal development and the increasing tourist impact have led to an intense transformation of the coastlines and adjacent marine areas. The beach and the coastal dune play an important role in protecting the coastline. Thus, the study of their geomorphological evolution and anthropic modification is fundamental in order to adopt the best management practices. In this regard, the LIFE Project (LIFE13NAT/IT/001013) SOSS DUNES (Safeguard and management Of South-western Sardinian Dunes) aims to safeguard the dune habitats and the beach system in a site belonging to the Natura 2000 network, an EU-wide network of nature protection areas established under the 1992 Habitats Directive. This project is focused on a microtidal wave-dominated embayment located in south-western Sardinia (Italy, Mediterranean Sea) called Porto Pino beach, comprised in the SCI (Site of Community Importance) "Promontory, dunes and wetland of Porto Pino (ITB040025)". This research aims to investigate the geomorphological processes, the evolution and the main human impacts on Porto Pino beach as a useful tool for both conservation and coastal management. The coastal area of Porto Pino is represented by sandy shorelines extending for a total length of 5 km, characterized by wide primary and secondary dune systems and a backshore wetland lagoon and marsh area arranged parallel to the coastline. This littoral area can be ideally divided into three parts: the first, about 600 m long, in the north-west part, characterized by the highest human pressure due to touristic activity on the foredunes and deposition of beach wrack; the second part in the south-east, about 1100 m long, characterized by a complex dune system (primary and secondary foredunes); and the third southernmost part, included in a military area, about 3300 m long, characterized by a transgressive dune system with low human
Directory of Open Access Journals (Sweden)
Mohamed K. Salah
2013-03-01
The Sinai Peninsula has been recognized as a subplate of the African Plate located at the triple junction of the Gulf of Suez rift, the Dead Sea Transform fault, and the Red Sea rift. The upper and lower crustal structures of this tectonically active, rapidly developing region are still poorly understood because of many limitations. For this reason, a set of P- and S-wave travel times recorded at 14 seismic stations belonging to the Egyptian National Seismographic Network (ENSN) from 111 local and regional events are analyzed to investigate the crustal structures and the locations of the seismogenic zones beneath central and southern Sinai. Because the velocity model used for routine earthquake location by ENSN is one-dimensional, the travel-time residuals will show lateral heterogeneity of the velocity structures and unmodeled vertical structures. Seismic activity is strong along the eastern and southern borders of the study area but low to moderate along the northern boundary and the Gulf of Suez to the west. The crustal Vp/Vs ratio is 1.74 from shallow (depth ≤ 10 km) earthquakes and 1.76 from deeper (depth > 10 km) crustal events. The majority of the regional and local travel-time residuals are positive relative to the Preliminary Reference Earth Model (PREM), implying that the seismic stations are located above widely distributed, tectonically-induced low-velocity zones. These low-velocity zones are mostly related to the local crustal faults affecting the sedimentary section and the basement complex as well as the rifting processes prevailing in the northern Red Sea region and the ascent of hot mantle materials along crustal fractures. The delineation of these low-velocity zones and the locations of big crustal earthquakes enable the identification of areas prone to intense seismotectonic activities, which should be excluded from major future development projects and large constructions in central and southern Sinai.
Directory of Open Access Journals (Sweden)
Chunlai Li
2017-07-01
This paper proposes an energy and reserve joint dispatch model based on a robust optimization approach in real-time electricity markets, considering wind power generation uncertainties as well as zonal reserve constraints under both normal and N-1 contingency conditions. In the proposed model, the operating reserves are classified as regulating reserve and spinning reserve according to the response performance. More specifically, the regulating reserve is usually utilized to reduce the gap due to forecasting errors, while the spinning reserve is commonly adopted to enhance the ability for N-1 contingencies. Since the transmission bottlenecks may inhibit the deliverability of reserve, the zonal placement of spinning reserve is considered in this paper to improve the reserve deliverability under the contingencies. Numerical results on the IEEE 118-bus test system show the effectiveness of the proposed model.
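The core of such a dispatch is co-optimizing energy and reserve subject to capacity and reserve-requirement constraints. A deliberately tiny deterministic LP sketch follows (two generators, one reserve product, invented prices and limits; not the paper's robust, zonal, N-1 formulation):

```python
from scipy.optimize import linprog

# Variables: [p1, p2, r1, r2] = energy and reserve of generators 1 and 2 (MW).
# Hypothetical offer prices in $/MWh for energy and reserve.
c = [20, 30, 2, 3]
A_ub = [
    [1, 0, 1, 0],    # p1 + r1 <= 100  (generator 1 capacity)
    [0, 1, 0, 1],    # p2 + r2 <= 80   (generator 2 capacity)
    [0, 0, -1, -1],  # r1 + r2 >= 30   (system reserve requirement)
]
b_ub = [100, 80, -30]
A_eq = [[1, 1, 0, 0]]  # p1 + p2 == 120 (load balance)
b_eq = [120]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
```

The joint capacity constraint p + r <= pmax is what couples the two products: the cheap unit is loaded to its limit for energy, so the reserve requirement is met by the other unit even though its reserve offer is dearer.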
Constraints on the use of 137Cs as a time-marker to support CRS and SIT chronologies
International Nuclear Information System (INIS)
Abril, J.M.
2004-01-01
CRS and SIT are two 210Pb-based models widely used in the radiometric dating of recent sediments. 210Pb chronologies should be validated using at least one independent tracer, such as 137Cs. This paper demonstrates that simple methods based on the identification of 137Cs fallout peaks cannot provide a definitive support for CRS and SIT chronologies. Two main arguments will support this assertion: Firstly, the 137Cs time-marks cannot support a CRS or SIT chronology if the derived sedimentation rates cannot explain the whole 137Cs activity profile without postulating mixing. Secondly, the support by the 137Cs time-marks for a given CRS or SIT chronology cannot be considered as definitive if other dating models can equally explain the whole set of data, thereby producing a different chronology. Several case studies selected from the literature are used to support the present discussion. - Simple methods based on the identification of 137Cs fallout peaks cannot provide a definite support for CRS and SIT chronologies
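For readers unfamiliar with the CRS model discussed above, its standard age relation is t(m) = (1/λ) ln(A(0)/A(m)), where A(m) is the cumulative unsupported 210Pb inventory below depth m and λ is the 210Pb decay constant. A minimal sketch (function and argument names are illustrative, not from the paper):

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3  # 210Pb decay constant, 1/yr (22.3 yr half-life)

def crs_ages(unsupported_activity, dry_mass_per_area):
    """CRS (constant rate of supply) ages at the base of each core layer.

    unsupported_activity: unsupported 210Pb activity per unit dry mass for
    each layer, top of core first; dry_mass_per_area: dry mass per unit
    area of each layer. Implements t(m) = (1/lambda) * ln(A(0) / A(m))."""
    inventory = unsupported_activity * dry_mass_per_area  # per-layer inventory
    total = inventory.sum()                               # A(0), whole-core inventory
    below = total - np.cumsum(inventory)                  # A(m) below each layer base
    with np.errstate(divide="ignore"):
        # The deepest layer has zero inventory below it -> infinite age,
        # reflecting the truncation problem of real cores.
        return np.log(total / below) / LAMBDA_PB210
```

Note that the model's sensitivity to the total inventory A(0) is one reason an independent time-marker such as 137Cs is sought, which is exactly the validation step the paper scrutinizes.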
Liu, Jinxing; El Sayed, Tamer S.
2013-01-01
When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of T_elem, the characteristic releasing time of the internal forces of the broken elements, and T_lattice
Directory of Open Access Journals (Sweden)
T. D. van Ommen
2012-07-01
Antarctic ice cores provide clear evidence of a close coupling between variations in Antarctic temperature and the atmospheric concentration of CO2 during the glacial/interglacial cycles of at least the past 800-thousand years. Precise information on the relative timing of the temperature and CO2 changes can assist in refining our understanding of the physical processes involved in this coupling. Here, we focus on the last deglaciation, 19 000 to 11 000 yr before present, during which CO2 concentrations increased by ~80 parts per million by volume and Antarctic temperature increased by ~10 °C. Utilising a recently developed proxy for regional Antarctic temperature, derived from five near-coastal ice cores and two ice core CO2 records with high dating precision, we show that the increase in CO2 likely lagged the increase in regional Antarctic temperature by less than 400 yr and that even a short lead of CO2 over temperature cannot be excluded. This result, consistent for both CO2 records, implies a faster coupling between temperature and CO2 than previous estimates, which had permitted up to millennial-scale lags.
Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may raise synergy effects between hardware developers and neuroscientists.
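The central operation in such a study is mapping continuous synaptic weights onto a small set of hardware levels. A minimal sketch of uniform 4-bit quantization is shown below; the rounding scheme and range handling are generic assumptions, not the FACETS hardware's actual mapping:

```python
import numpy as np

def discretize_weights(w, bits=4, w_max=1.0):
    """Map continuous synaptic weights in [0, w_max] onto the nearest of
    2**bits evenly spaced levels. With bits=4 this mimics the 4-bit weight
    resolution discussed; real mixed-signal hardware may map differently."""
    levels = 2**bits - 1                       # 15 nonzero levels for 4 bits
    w_clipped = np.clip(w, 0.0, w_max)         # hardware weights are bounded
    return np.round(w_clipped / w_max * levels) / levels * w_max
```

Running a trained network with `discretize_weights` applied to its weight matrix gives a quick first estimate of how much performance the reduced resolution costs, before committing to a hardware run.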
Cunningham, John A; Thomas, Ceri-Wyn; Bengtson, Stefan; Kearns, Stuart L; Xiao, Shuhai; Marone, Federica; Stampanoni, Marco; Donoghue, Philip C J
2012-06-22
The Ediacaran Doushantuo biota has yielded fossils that include the oldest widely accepted record of the animal evolutionary lineage, as well as specimens with alleged bilaterian affinity. However, these systematic interpretations are contingent on the presence of key biological structures that have been reinterpreted by some workers as artefacts of diagenetic mineralization. On the basis of chemistry and crystallographic fabric, we characterize and discriminate phases of mineralization that reflect: (i) replication of original biological structure, and (ii) void-filling diagenetic mineralization. The results indicate that all fossils from the Doushantuo assemblage preserve a complex mélange of mineral phases, even where subcellular anatomy appears to be preserved. The findings allow these phases to be distinguished in more controversial fossils, facilitating a critical re-evaluation of the Doushantuo fossil assemblage and its implications as an archive of Ediacaran animal diversity. We find that putative subcellular structures exhibit fabrics consistent with preservation of original morphology. Cells in later developmental stages are not in original configuration and are therefore uninformative concerning gastrulation. Key structures used to identify Doushantuo bilaterians can be dismissed as late diagenetic artefacts. Therefore, when diagenetic mineralization is considered, there is no convincing evidence for bilaterians in the Doushantuo assemblage.
International Nuclear Information System (INIS)
Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon
2014-01-01
The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible
Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam
2012-02-01
To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.
Seismological Constraints on Geodynamics
Lomnitz, C.
2004-12-01
Earth is an open thermodynamic system radiating heat energy into space. A transition from geostatic earth models such as PREM to geodynamical models is needed. We discuss possible thermodynamic constraints on the variables that govern the distribution of forces and flows in the deep Earth. In this paper we assume that the temperature distribution is time-invariant, so that all flows vanish at steady state except for the heat flow J_q per unit area (Kuiken, 1994). Superscript 0 will refer to the steady state while x denotes the excited state of the system. We may write σ^0 = (J_q^0 · X_q^0)/T, where X_q is the conjugate force corresponding to J_q, and σ is the rate of entropy production per unit volume. Consider now what happens after the occurrence of an earthquake at time t = 0 and location (0,0,0). The earthquake introduces a stress drop ΔP(x,y,z) at all points of the system. Response flows are directed along the gradients toward the epicentral area, and the entropy production will increase with time as (Prigogine, 1947) σ^x(t) = σ^0 + α_1/(t+β) + α_2/(t+β)^2 + … A seismological constraint on the parameters may be obtained from Omori's empirical relation N(t) = p/(t+q), where N(t) is the number of aftershocks at time t following the main shock. It may be assumed that p/q ~ α_1/β times a constant. Another useful constraint is the Mexican-hat geometry of the seismic transient as obtained e.g. from InSAR radar interferometry. For strike-slip events such as Landers the distribution of ΔP is quadrantal, and an oval-shaped seismicity gap develops about the epicenter. A weak outer triggering maximum is found at a distance of about 17 fault lengths. Such patterns may be extracted from earthquake catalogs by statistical analysis (Lomnitz, 1996). Finally, the energy of the perturbation must be at least equal to the recovery energy. The total energy expended in an aftershock sequence can be found approximately by integrating the local contribution over
Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François
2018-06-01
The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
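The approach described, a causal, positive, smoothness-regularized deconvolution with a single regularization parameter, can be sketched as a non-negative least-squares problem on an augmented system. Names and the specific penalty form below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_wrt(rain, aquifer, n_lags, lam=1e-3):
    """Regularized non-parametric deconvolution sketch for a Water Residence
    Time curve h of length n_lags, such that aquifer ≈ conv(rain, h).
    Causality: the convolution matrix is lower-triangular.
    Positivity: enforced by non-negative least squares.
    Smoothness: second-difference Tikhonov penalty weighted by lam."""
    n = len(aquifer)
    # Causal Toeplitz convolution matrix: (C @ h)[i] = sum_j rain[i-j] * h[j]
    C = np.zeros((n, n_lags))
    for j in range(n_lags):
        C[j:, j] = rain[: n - j]
    # Second-difference operator encoding the smoothness prior on h
    D = np.diff(np.eye(n_lags), n=2, axis=0)
    # Augmented system: min ||C h - y||^2 + lam * ||D h||^2  s.t.  h >= 0
    A = np.vstack([C, np.sqrt(lam) * D])
    b = np.concatenate([aquifer, np.zeros(D.shape[0])])
    h, _ = nnls(A, b)
    return h
```

The single parameter lam plays the role the abstract describes: it trades smoothness of the recovered curve against fidelity to the measured aquifer levels.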
Richard D Bergman
2012-01-01
Greenhouse gases (GHGs) trap infrared radiation emitted from the Earth's surface, generating the "greenhouse effect" that keeps the planet warm. Many natural processes, including rotting vegetation, emit GHGs such as carbon dioxide to produce this natural effect. However, in the last 200 years or so, human activity has increased the atmospheric concentrations of GHGs...
Directory of Open Access Journals (Sweden)
F. Radicioni
2017-05-01
The Tempio della Consolazione in Todi (16th cent.) has always been one of the most significant symbols of the Umbrian landscape. Since the first years after its completion (1606), the structure has exhibited evidence of instability, due to foundation subsidence and/or seismic activity. Structural and geotechnical countermeasures have been undertaken on the Tempio and its surroundings from the 17th century until recent times. Until now, a truly satisfactory analysis of the overall deformation and attitude of the building has not been performed, since the existing surveys record the overhangs of the pillars, the crack pattern or the subsidence over limited time spans. Describing the attitude of the whole church is in fact a complex operation due to the architectural character of the building, consisting of four apses (three polygonal and one semicircular) covered with half domes, which surround the central area with the large dome. The present research aims to fill the gap of knowledge with a global study based on geomatic techniques for an accurate 3D reconstruction of geometry and attitude, integrated with historical research on damage and interventions and a geotechnical analysis. The geomatic survey results from the integration of different techniques: GPS-GNSS for global georeferencing, laser scanning and digital photogrammetry for an accurate 3D reconstruction, high-precision total station and geometric leveling for a direct survey of deformations and cracks, and for the alignment of the laser scans. The above analysis made it possible to assess the dynamics of the cracks that have occurred in the last 25 years by comparison with a previous survey. From the photographic colour associated with the point cloud it was also possible to map the damp patches showing on the domes' intrados, tracking their evolution over the last years.
Directory of Open Access Journals (Sweden)
Masternak Sebastian
2016-06-01
Alcohol dependence and its treatment remain an incompletely resolved problem. Based on the EZOP [Epidemiology of Mental Disorders and Accessibility of Mental Health Care] survey, which included a regular analysis of the incidence of mental disorders in the population of adult Polish citizens, we were able to estimate that the problem of alcohol abuse at some period of life affects up to 10.9% of the population aged 18-64 years, and those addicted represent 2.2% of the country's population. The typical symptoms of alcohol dependence according to ICD-10 include alcohol craving, impaired ability to control alcohol consumption, withdrawal symptoms which appear when a heavy drinker stops drinking, changing alcohol tolerance, growing neglect of other areas of life, and persistent alcohol intake despite clear evidence of its destructive effect on life. At the moment, the primary method of alcoholism treatment is psychotherapy. It aims to change the patient's habits, behaviours, relationships, or way of thinking. It seems that psychotherapy is irreplaceable in the treatment of alcoholism, but for many years now attempts have been made to increase the effectiveness of alcoholism treatment with pharmacological agents. In this article we provide a description of medications which help patients sustain abstinence in alcoholism therapy, with particular emphasis on baclofen.
Directory of Open Access Journals (Sweden)
Saurabh Vashishtha
There is a growing appreciation for the network biology that regulates the coordinated expression of molecular and cellular markers; however, questions persist regarding the identifiability of these networks. Here we explore some of the issues relevant to recovering directed regulatory networks from time course data collected under experimental constraints typical of in vivo studies. NetSim simulations of sparsely connected biological networks were used to evaluate two simple feature selection techniques used in the construction of linear Ordinary Differential Equation (ODE) models, namely truncation of terms versus latent vector projection. Performance was compared with ODE-based Time Series Network Identification (TSNI) integral, and the information-theoretic Time-Delay ARACNE (TD-ARACNE). Projection-based techniques and TSNI integral outperformed truncation-based selection and TD-ARACNE on aggregate networks with edge densities of 10-30%, i.e. transcription factor, protein-protein clique and immune signaling networks. All were more robust to noise than truncation-based feature selection. Performance was comparable on the in silico 10-node DREAM 3 network, a 5-node yeast synthetic network designed for In vivo Reverse-engineering and Modeling Assessment (IRMA), and a 9-node human HeLa cell cycle network of similar size and edge density. Performance was more sensitive to the number of time courses than to sampling frequency and extrapolated better to larger networks by grouping experiments. In all cases, the limited recovery and high false positive rates obtained bring into question our ability to generate informative time course data rather than the design of any particular reverse engineering algorithm.
Le Goff, Alain; Cathala, Thierry; Latger, Jean
2015-10-01
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is currently integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries that allows preparing and visualizing a 3D scene for the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. The recent evolutions mainly concern the realistic, physics-based rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for the management and rendering of very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of static plume signatures and, lastly, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.
Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.
2016-01-01
Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires’ disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine if sputum is an acceptable alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given its ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical from patients being tested for Legionella. PMID:27630979
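The four reported metrics follow from a standard 2×2 confusion table. A minimal sketch; the counts below are hypothetical, chosen only so the derived values land near the reported UAT-versus-culture percentages:

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 table of test results
# versus a reference standard. Counts are illustrative, not the study data.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV)."""
    sensitivity = tp / (tp + fn)   # P(test+ | disease+)
    specificity = tn / (tn + fp)   # P(test- | disease-)
    ppv = tp / (tp + fp)           # P(disease+ | test+)
    npv = tn / (tn + fn)           # P(disease- | test-)
    return sensitivity, specificity, ppv, npv

# 87 true positives and 13 false negatives give sensitivity 0.87 exactly;
# the other counts approximate the reported 94.7% / 63.8% / 98.5%.
sens, spec, ppv, npv = diagnostic_metrics(tp=87, fp=49, fn=13, tn=880)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common Legionella positives are in the tested sample.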
Directory of Open Access Journals (Sweden)
Adriana Peci
2016-08-01
Full Text Available Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires’ disease, a more severe illness. We aimed to compare the performance of urine antigen, culture and PCR test methods and to determine if sputum is an alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at PHOL from January 1, 2010 to April 30, 2014, as part of routine clinical testing. We found sensitivity of UAT compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8% and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7% and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given its ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical for patients being tested for Legionella.
Reduction Of Constraints For Coupled Operations
International Nuclear Information System (INIS)
Raszewski, F.; Edwards, T.
2009-01-01
The homogeneity constraint was implemented in the Defense Waste Processing Facility (DWPF) Product Composition Control System (PCCS) to help ensure that the current durability models would be applicable to the glass compositions being processed during DWPF operations. While the homogeneity constraint is typically an issue at lower waste loadings (WLs), it may impact the operating windows for DWPF operations where the glass forming systems are limited to lower waste loadings based on fissile or heat load limits. In the sludge batch 1b (SB1b) variability study, application of the homogeneity constraint at the measurement acceptability region (MAR) limit eliminated much of the potential operating window for DWPF. As a result, Edwards and Brown developed criteria that allowed DWPF to relax the homogeneity constraint from the MAR to the property acceptance region (PAR) criterion, which opened up the operating window for DWPF operations. These criteria are defined as: (1) use the alumina constraint as currently implemented in PCCS (Al2O3 ≥ 3 wt%) and add a sum of alkali constraint with an upper limit of 19.3 wt% (ΣM2O ≤ 19.3 wt%); or (2) increase the lower limit of the Al2O3 constraint to 4 wt% (Al2O3 ≥ 4 wt%). Herman et al. previously demonstrated that these criteria could be used to replace the homogeneity constraint for future sludge-only batches. The compositional region encompassing coupled operations flowsheets could not be bounded, as these flowsheets were unknown at the time. With the initiation of coupled operations at DWPF in 2008, the need to revisit the homogeneity constraint was realized. This constraint was specifically addressed through the variability study for SB5, where it was shown that the homogeneity constraint could be ignored if the alumina and alkali constraints were imposed. Additional benefit could be gained if the homogeneity constraint could be replaced by the Al2O3 and sum of alkali constraints for future coupled operations processing based on projections from Revision 14 of
Distance Constraint Satisfaction Problems
Bodirsky, Manuel; Dalmau, Victor; Martin, Barnaby; Pinsker, Michael
We study the complexity of constraint satisfaction problems for templates Γ that are first-order definable in (ℤ; suc), the integers with the successor relation. Assuming a widely believed conjecture from finite domain constraint satisfaction (we require the tractability conjecture by Bulatov, Jeavons and Krokhin in the special case of transitive finite templates), we provide a full classification for the case that Γ is locally finite (i.e., the Gaifman graph of Γ has finite degree). We show that one of the following is true: the structure Γ is homomorphically equivalent to a structure with a certain majority polymorphism (which we call modular median) and CSP(Γ) can be solved in polynomial time, or Γ is homomorphically equivalent to a finite transitive structure, or CSP(Γ) is NP-complete.
Transmission and capacity pricing and constraints
International Nuclear Information System (INIS)
Fusco, M.
1999-01-01
A series of overhead viewgraphs accompanied this presentation which discussed the following issues regarding the North American electric power industry: (1) capacity pricing transmission constraints, (2) nature of transmission constraints, (3) consequences of transmission constraints, and (4) prices as market evidence. Some solutions suggested for pricing constraints included the development of contingent contracts, back-up power in supply regions, and new line capacity construction. 8 tabs., 20 figs
Ant colony optimization and constraint programming
Solnon, Christine
2013-01-01
Ant colony optimization is a metaheuristic which has been successfully applied to a wide range of combinatorial optimization problems. The author describes this metaheuristic and studies its efficiency for solving some hard combinatorial problems, with a specific focus on constraint programming. The text is organized into three parts. The first part introduces constraint programming, which provides high level features to declaratively model problems by means of constraints. It describes the main existing approaches for solving constraint satisfaction problems, including complete tree search
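A minimal Ant System sketch for a small symmetric TSP illustrates the metaheuristic's core loop: probabilistic tour construction biased by pheromone and heuristic desirability, followed by evaporation and deposit. Parameter values are illustrative, and real ACO variants (and the book's constraint-programming hybrids) add elitism, pheromone bounds, and local search:

```python
import math, random

def aco_tsp(coords, n_ants=20, n_iters=50, alpha=1.0, beta=3.0, rho=0.5, seed=1):
    """Ant System for a symmetric TSP over 2D points (illustrative sketch)."""
    random.seed(seed)
    n = len(coords)
    dist = [[math.dist(coords[i], coords[j]) or 1e-12 for j in range(n)]
            for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Transition weights: pheromone^alpha * (1/distance)^beta.
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # Evaporate, then deposit pheromone proportional to tour quality.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for length, tour in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

# 6 points on a unit circle: the optimal tour is the circle order, length 6.0.
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
       for k in range(6)]
tour, length = aco_tsp(pts)
```

In a CP hybrid, the same pheromone mechanism guides value ordering inside a complete tree search instead of building tours from scratch.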
Audebert, M; Oxarango, L; Duquennoi, C; Touze-Foltz, N; Forquet, N; Clément, R
2016-09-01
Leachate recirculation is a key process in the operation of municipal solid waste landfills as bioreactors. To ensure optimal water content distribution, bioreactor operators need tools to design leachate injection systems. Prediction of leachate flow by subsurface flow modelling could provide useful information for the design of such systems. However, hydrodynamic models require additional data to constrain them and to assess hydrodynamic parameters. Electrical resistivity tomography (ERT) is a suitable method to study leachate infiltration at the landfill scale. It can provide spatially distributed information which is useful for constraining hydrodynamic models. However, this geophysical method does not allow ERT users to directly measure water content in waste. The MICS (multiple inversions and clustering strategy) methodology was proposed to delineate the infiltration area precisely during time-lapse ERT surveys in order to avoid the use of empirical petrophysical relationships, which are not adapted to a heterogeneous medium such as waste. The infiltration shapes and hydrodynamic information extracted with MICS were used to constrain hydrodynamic models in assessing parameters. The constraint methodology developed in this paper was tested on two hydrodynamic models: an equilibrium model, where flow within the waste medium is estimated using a single-continuum approach, and a non-equilibrium model, where flow is estimated using a dual-continuum approach. The latter represents leachate flow into fractures. Finally, this methodology provides insight to identify the advantages and limitations of hydrodynamic models. Furthermore, we suggest an explanation for the large volume detected by MICS when a small volume of leachate is injected. Copyright © 2016 Elsevier Ltd. All rights reserved.
Prevot, Thomas; Homola, Jeffrey R.; Martin, Lynne H.; Mercer, Joey S.; Cabrall, Christopher C.
2011-01-01
In this paper we discuss results from a recent high-fidelity simulation of air traffic control operations with automated separation assurance in the presence of weather and time constraints. We report findings from a human-in-the-loop study conducted in the Airspace Operations Laboratory (AOL) at the NASA Ames Research Center. During four afternoons in early 2010, fifteen active and recently retired air traffic controllers and supervisors controlled high levels of traffic in a highly automated environment during three-hour-long scenarios. For each scenario, twelve air traffic controllers operated eight sector positions in two air traffic control areas and were supervised by three front-line managers. Controllers worked one-hour shifts, were relieved by other controllers, took a 30-minute break, and worked another one-hour shift. On average, twice today's traffic density was simulated, with more than 2200 aircraft per traffic scenario. The scenarios were designed to create peaks and valleys in traffic density and growing and decaying convective weather areas, and to expose controllers to heavy and light metering conditions. This design enabled an initial look at a broad spectrum of workload, challenge, boredom, and fatigue in an otherwise uncharted territory of future operations. In this paper we report human/system integration aspects, safety and efficiency results as well as airspace throughput, workload, and operational acceptability. We conclude that, with further refinements, air traffic control operations with ground-based automated separation assurance can be an effective and acceptable means to routinely provide very high traffic throughput in the en route airspace.
Mackeen, Mukram; Almond, Andrew; Cumpstey, Ian; Enis, Seth C; Kupce, Eriks; Butters, Terry D; Fairbanks, Antony J; Dwek, Raymond A; Wormald, Mark R
2006-06-07
The experimental determination of oligosaccharide conformations has traditionally used cross-linkage 1H-1H NOE/ROEs. As relatively few NOEs are observed, to provide sufficient conformational constraints this method relies on: accurate quantification of NOE intensities (positive constraints); analysis of absent NOEs (negative constraints); and hence calculation of inter-proton distances using the two-spin approximation. We have compared the results obtained by using 1H 2D NOESY, ROESY and T-ROESY experiments at 500 and 700 MHz to determine the conformation of the terminal Glcα1-2Glcα linkage in a dodecasaccharide and a related tetrasaccharide. For the tetrasaccharide, the NOESY and ROESY spectra produced the same qualitative pattern of linkage cross-peaks but the quantitative pattern, the relative peak intensities, was different. For the dodecasaccharide, the NOESY and ROESY spectra at 500 MHz produced a different qualitative pattern of linkage cross-peaks, with fewer peaks in the NOESY spectrum. At 700 MHz, the NOESY and ROESY spectra of the dodecasaccharide produced the same qualitative pattern of peaks, but again the relative peak intensities were different. These differences are due to very significant differences in the local correlation times for different proton pairs across this glycosidic linkage. The local correlation time for each proton pair was measured using the ratio of the NOESY and T-ROESY cross-relaxation rates, leaving the NOESY and ROESY as independent data sets for calculating the inter-proton distances. The inter-proton distances calculated including the effects of differences in local correlation times give much more consistent results.
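The NOESY/T-ROESY rate ratio is distance-independent for an isolated proton pair, which is what makes the local correlation time measurable. A sketch under the standard rigid, isotropic two-spin model; the spectral-density prefactors cancel in the ratio, and the 2 ns value below is an assumption, not a result from the paper:

```python
import math

# J(w) = tau_c / (1 + (w*tau_c)**2): Lorentzian spectral density (shape only,
# constants cancel in the NOE/ROE ratio for a rigid isotropic rotor).

def noe_roe_ratio(tau_c, omega0):
    J = lambda w: tau_c / (1.0 + (w * tau_c) ** 2)
    sigma_noe = 6 * J(2 * omega0) - J(0)     # laboratory-frame cross-relaxation
    sigma_roe = 3 * J(omega0) + 2 * J(0)     # rotating-frame cross-relaxation
    return sigma_noe / sigma_roe

def tau_c_from_ratio(ratio, omega0, lo=1e-12, hi=1e-7):
    # The ratio decreases monotonically with tau_c (from +1 toward -0.5),
    # so a bisection on a log scale recovers tau_c from a measured ratio.
    for _ in range(200):
        mid = math.sqrt(lo * hi)             # geometric mean spans the decades
        if noe_roe_ratio(mid, omega0) > ratio:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

omega0 = 2 * math.pi * 700e6                 # 1H Larmor frequency at 700 MHz
tau_true = 2e-9                              # assumed 2 ns local correlation time
ratio = noe_roe_ratio(tau_true, omega0)
tau_est = tau_c_from_ratio(ratio, omega0)
```

Because the distance-dependent prefactor divides out, the recovered correlation time can then be used to turn NOESY and ROESY intensities into independent distance estimates.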
Moor, C C; Wapenaar, M; Miedema, J R; Geelhoed, J J M; Chandoesing, P P; Wijsenbeek, M S
2018-05-29
In idiopathic pulmonary fibrosis (IPF), experience with home monitoring is limited, not yet available in real time, nor implemented in daily care. We evaluated the feasibility and potential barriers of a new home monitoring program with real-time wireless home spirometry in IPF. Ten patients with IPF were asked to test this home monitoring program, including daily home spirometry, for four weeks. Measurements of home and hospital spirometry showed good agreement. All patients considered real-time wireless spirometry useful and highly feasible. Both patients and researchers suggested relatively easy solutions for the identified potential barriers regarding real-time home monitoring in IPF.
Production Team Maintenance: Systemic Constraints Impacting Implementation
National Research Council Canada - National Science Library
Moore, Terry
1997-01-01
.... Identified constraints included: integrating the PTM positioning strategy into the AMC corporate strategic planning process, manpower modeling simulator limitations, labor force authorizations and decentralization...
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Valencia, Frank Dan
Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies...... temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronous timed reactive systems. The calculus is built upon a few basic ideas but it captures several aspects of timed systems. As tcc, ntcc...... structures, robotic devices, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully abstract for a substantial fragment
International Nuclear Information System (INIS)
Perez Perez, L.A.
2008-12-01
A time-dependent amplitude analysis of B0 → K0S π+ π− decays is performed to extract the CP violation parameters of f0(980)K0S and ρ0(770)K0S, and the direct CP asymmetries of K*(892)± π∓. The results are obtained from a data sample of (383 ± 3) × 10^6 BB-bar decays, collected with the BaBar detector at the PEP-II asymmetric-energy B factory at SLAC. Two solutions are found, with equivalent goodness-of-fit merits. Including systematic and Dalitz plot model uncertainties, the combined confidence interval for values of βeff in B0 decays to f0(980)K0S has a lower bound of 18 degrees; the absence of CP violation in B0 decays to f0(980)K0S is excluded at 3.5 σ, including systematics. For B0 decays to ρ0(770)K0S, the combined confidence interval has a lower bound of −9 degrees. For K*(892)± π∓ the measured direct CP asymmetry parameter is ACP = −0.20 ± 0.10 ± 0.01 ± 0.02. The measured phase difference between the decay amplitudes of B0 → K*(892)+ π− and B-bar0 → K*(892)− π+ excludes the [−132 degrees, +25 degrees] interval (at 95% C.L.). Branching fractions and CP asymmetries are measured for all significant intermediate resonant modes. The measurements of ρ0(770)K0S and K*(892)± π∓ are used as inputs to a phenomenological analysis of B → K*π and B → ρK decays based solely on SU(2) isospin symmetry. Adding external information on the CKM matrix, constraints on the hadronic parameter space are set. For B → K*π, the preferred intervals for color-allowed electroweak penguins are marginally compatible with theoretical expectations. The constraints on CKM parameters are dominated by theoretical uncertainties. A prospective study, based on the expected increase in precision from measurements at LHCb and at future programs such as Super-B or Belle upgrade, illustrates the physics potential of this approach. (author)
DEFF Research Database (Denmark)
Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter
2014-01-01
Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...
2010-07-01
... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false Does the 2-year time period in § 302-2.8 include time that I cannot travel and/or transport my household effects due to... time that I cannot travel and/or transport my household effects due to shipping restrictions to or from...
Bennett, Vickie C.; Brandon, alan D.; Hiess, Joe; Nutman, Allen P.
2007-01-01
Increasingly precise data from a range of isotopic decay schemes, including now-extinct parent isotopes, from samples of the Earth, Mars, the Moon and meteorites are rapidly revising our views of early planetary differentiation. Recognising Nd-142 isotopic variations in terrestrial rocks (which can only arise from events occurring during the lifetime of the now-extinct Sm-146 [t1/2 = 103 Myr]) has been an ongoing quest starting with Harper and Jacobsen. The significance of Nd-142 variations is that they unequivocally reflect early silicate differentiation processes operating in the first 500 Myr of Earth history, the key time period between accretion and the beginning of the rock record. The recent establishment of the existence of Nd-142 variations in ancient Earth materials has opened a new range of questions, including: how widespread is the evidence of early differentiation; how do Nd-142 compositions vary with time, rock type and geographic setting; and, combined with other types of isotopic and geochemical data, what can Nd-142 isotopic variations reveal about the timing and mechanisms of early terrestrial differentiation? To explore these questions we are determining high-precision Nd-142, Nd-143 and Hf-176 isotopic compositions from the oldest well-preserved (3.63-3.87 Ga) rock suites from the extensive early Archean terranes of southwest Greenland and western Australia.
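The 103 Myr half-life quoted above fixes the window during which 146Sm decay can generate 142Nd anomalies; a quick decay-law check (standard exponential decay arithmetic, nothing paper-specific):

```python
import math

# Fraction of an initial 146Sm inventory surviving after t megayears,
# using the half-life quoted in the abstract (103 Myr).

def fraction_remaining(t_myr, half_life_myr=103.0):
    return math.exp(-math.log(2) * t_myr / half_life_myr)

# After the ~500 Myr window cited above, under 4% of the 146Sm remains,
# which is why 142Nd anomalies date only the earliest differentiation.
f500 = fraction_remaining(500.0)
```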
Directory of Open Access Journals (Sweden)
J Rasmus Nielsen
Full Text Available Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes zero observations and over-dispersion. The model utilises the fact that the correlation between numbers of fish caught increases when the distance in space and time between the fish decreases, and that the correlation between size groups in a haul increases when the difference in size decreases. Here the model is extended in two ways. Instead of assuming a natural-scale size correlation, the model is further developed to allow for a transformed length scale. Furthermore, in the present application, the spatial- and size-dependent correlation between species was included. For cod (Gadus morhua) and whiting (Merlangius merlangus), a common structured size correlation was fitted, and a separable structure between the time and space-size correlation was found for each species, whereas more complex structures were required to describe the correlation between species (and space-size). The within-species time correlation is strong, whereas the correlations between the species are weaker over time but strong within the year.
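A separable space-time-size correlation of the kind described can be sketched as a product of per-dimension kernels; the exponential forms and decay lengths below are illustrative stand-ins, not the fitted LGCP structure:

```python
import numpy as np

# Separable correlation: one factor per dimension, multiplied together.
# Decay lengths (50 km, 30 days, 5 cm) are invented for illustration.

def correlation(d_space_km, d_time_days, d_size_cm,
                ls=50.0, lt=30.0, lsize=5.0):
    """Separable exponential correlation in space, time and fish size."""
    return (np.exp(-d_space_km / ls)
            * np.exp(-d_time_days / lt)
            * np.exp(-d_size_cm / lsize))

# Correlation rises as observations get closer in every dimension.
near = correlation(5.0, 2.0, 1.0)
far = correlation(100.0, 60.0, 10.0)
```

Separability keeps the covariance cheap to evaluate; the non-separable between-species structures reported above give up exactly this factorization.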
Orthology and paralogy constraints: satisfiability and consistency.
Lafond, Manuel; El-Mabrouk, Nadia
2014-01-01
A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided into two parts: (1) Is C satisfiable, i.e. can we find an event-labeled gene tree inducing C? (2) Is there such a tree which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm in the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.
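For the full-constraint special case mentioned above, a classical characterization from the symbolic-ultrametric literature says that a complete orthology/paralogy relation is realizable by some event-labeled gene tree exactly when the orthology graph is a cograph, i.e. contains no induced four-vertex path (P4). A brute-force O(n^4) sketch of that test (illustrative only; practical tools use linear-time cograph recognition):

```python
from itertools import combinations, permutations

def has_induced_p4(vertices, edges):
    """True if the graph contains an induced path on 4 vertices (a P4)."""
    E = {frozenset(e) for e in edges}
    adj = lambda a, b: frozenset((a, b)) in E
    for quad in combinations(vertices, 4):
        for p in permutations(quad):
            if p[0] > p[3]:
                continue                      # count each path once
            path_edges = all(adj(p[i], p[i + 1]) for i in range(3))
            chords = adj(p[0], p[2]) or adj(p[1], p[3]) or adj(p[0], p[3])
            if path_edges and not chords:
                return True
    return False

def full_orthology_satisfiable(genes, ortholog_pairs):
    """Satisfiability of a FULL relation: orthology graph must be a cograph."""
    return not has_induced_p4(genes, ortholog_pairs)

# An induced P4 a-b-c-d cannot come from any event-labeled gene tree:
bad = full_orthology_satisfiable("abcd", [("a", "b"), ("b", "c"), ("c", "d")])
# A clique (all pairs orthologous) is realized by a single speciation node:
good = full_orthology_satisfiable("abc", [("a", "b"), ("b", "c"), ("a", "c")])
```

The partial-constraint setting studied in the paper is harder precisely because missing pairs must be completed into such a P4-free graph, which is where the graph sandwich results come in.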
El-Amin, Mohamed
2017-11-23
In this article, we consider a two-phase immiscible incompressible flow including nanoparticle transport in fractured heterogeneous porous media. The system of governing equations consists of water saturation, Darcy’s law, nanoparticle concentration in water, deposited nanoparticle concentration on the pore walls, and entrapped nanoparticle concentration in the pore throats, as well as porosity and permeability variation due to nanoparticle deposition/entrapment on/in the pores. The discrete-fracture model (DFM) is used to describe the flow and transport in fractured porous media. Moreover, a multiscale time-splitting strategy has been employed to manage different time-step sizes for different physics, such as saturation, concentration, etc. Numerical examples are provided to demonstrate the efficiency of the proposed multiscale time-splitting approach.
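The multiscale time-splitting idea (one large step for the slow saturation/pressure physics, several sub-steps for the fast transport physics) can be sketched with a toy coupled system; the update rules and rates below are invented for illustration and are not the DFM equations:

```python
# Toy subcycling sketch: slow "saturation" advanced with the outer step,
# fast "concentration" (with a deposition sink) advanced with n_sub
# sub-steps of size dt_outer / n_sub. All rates are illustrative.

def advance(sat, conc, dt_outer, n_sub):
    # Outer step: slow saturation update (stands in for the pressure/
    # saturation solve).
    sat = sat + dt_outer * 0.1 * (1.0 - sat)
    # Inner sub-steps: fast nanoparticle-concentration update.
    dt_inner = dt_outer / n_sub
    for _ in range(n_sub):
        deposition = 2.0 * conc * sat          # fast sink term
        conc = conc - dt_inner * deposition
    return sat, conc

sat, conc = 0.2, 1.0
for _ in range(10):                            # 10 outer steps of size 0.1
    sat, conc = advance(sat, conc, dt_outer=0.1, n_sub=20)
```

The payoff is that the expensive slow solve is not repeated at the small step size the stiff transport terms require.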
El-Amin, Mohamed; Kou, Jisheng; Sun, Shuyu
2017-01-01
In this article, we consider a two-phase immiscible incompressible flow including nanoparticle transport in fractured heterogeneous porous media. The system of governing equations consists of water saturation, Darcy’s law, nanoparticle concentration in water, deposited nanoparticle concentration on the pore walls, and entrapped nanoparticle concentration in the pore throats, as well as porosity and permeability variation due to nanoparticle deposition/entrapment on/in the pores. The discrete-fracture model (DFM) is used to describe the flow and transport in fractured porous media. Moreover, a multiscale time-splitting strategy has been employed to manage different time-step sizes for different physics, such as saturation, concentration, etc. Numerical examples are provided to demonstrate the efficiency of the proposed multiscale time-splitting approach.
Stability Constraints for Robust Model Predictive Control
Directory of Open Access Journals (Sweden)
Amanda G. S. Ottoni
2015-01-01
Full Text Available This paper proposes an approach for the robust stabilization of systems controlled by MPC strategies. Uncertain SISO linear systems with box-bounded parametric uncertainties are considered. The proposed approach delivers some constraints on the control inputs which impose sufficient conditions for the convergence of the system output. These stability constraints can be included in the set of constraints dealt with by existing MPC design strategies, in this way leading to the “robustification” of the MPC.
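The flavor of such stability constraints can be sketched on a one-step receding-horizon controller for an uncertain SISO system x+ = a·x + b·u with box-bounded parameters: only inputs guaranteeing the contraction |x+| ≤ ρ|x| at every vertex of the parameter box are admissible (checking vertices suffices because x+ is affine in (a, b)). All numbers are illustrative, and the grid enumeration below stands in for the QP formulations used by actual MPC design strategies:

```python
import random

def mpc_step(x, a_box=(0.9, 1.1), b_box=(0.8, 1.2), rho=0.9):
    """Cheapest admissible input subject to a robust contraction constraint."""
    vertices = [(a, b) for a in a_box for b in b_box]
    a_nom, b_nom = sum(a_box) / 2, sum(b_box) / 2
    best_u, best_cost = None, float("inf")
    for k in range(201):                       # candidate gains in [0, 2]
        u = -(2.0 * k / 200.0) * x
        # Stability constraint: contraction must hold at every vertex.
        if any(abs(a * x + b * u) > rho * abs(x) for a, b in vertices):
            continue
        cost = (a_nom * x + b_nom * u) ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_u

random.seed(0)
x = 1.0
for _ in range(30):
    u = mpc_step(x)
    a = random.uniform(0.9, 1.1)               # true plant, inside the box
    b = random.uniform(0.8, 1.2)
    x = a * x + b * u                          # |x| shrinks by >= rho per step
```

Because the constraint is enforced at the box vertices, the output contracts for every plant inside the uncertainty box, which is exactly the convergence guarantee the added constraints are meant to deliver.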
International Nuclear Information System (INIS)
Magae, J.; Furukawa, C.; Kawakami, Y.; Hoshi, Y.; Ogata, H.
2003-01-01
Full text: Because biological responses to radiation are complex processes dependent on irradiation time as well as total dose, it is necessary to consider dose, dose-rate and irradiation time simultaneously to predict the risk of low dose-rate irradiation. In this study, we analyzed the quantitative relationship among dose, irradiation time and dose-rate, using chromosomal breakage and proliferation inhibition of human cells. For evaluation of chromosome breakage we assessed micronuclei induced by radiation. U2OS cells, a human osteosarcoma cell line, were exposed to gamma rays in an irradiation room bearing a 50,000 Ci 60Co source. After the irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, cytoplasm and nucleus were stained with DAPI and propidium iodide, and the number of binuclear cells bearing micronuclei was determined by fluorescence microscopy. For proliferation inhibition, cells were cultured for 48 h after the irradiation and [3H]thymidine was pulsed for 4 h before harvesting. Dose-rate in the irradiation room was measured with a photoluminescence dosimeter. While irradiation times of less than 24 h did not affect the dose-response curves for either biological response, the curves were remarkably attenuated as exposure time increased to more than 7 days. These biological responses were dependent on dose-rate rather than dose when cells were irradiated for 30 days. Moreover, the percentage of micronucleus-forming cells cultured continuously for more than 60 days at a constant dose-rate gradually decreased in spite of the total dose accumulation. These results suggest that biological responses at low dose-rate are remarkably affected by exposure time, that they are dependent on dose-rate rather than total dose in the case of long-term irradiation, and that cells become resistant to radiation after continuous irradiation for 2 months. It is necessary to include the effects of irradiation time and dose-rate sufficiently to evaluate risk
Hatcher, Gerry; Okuda, Craig
2016-01-01
The effects of climate change on the near shore coastal environment including ocean acidification, accelerated erosion, destruction of coral reefs, and damage to marine habitat have highlighted the need for improved equipment to study, monitor, and evaluate these changes [1]. This is especially true where areas of study are remote, large, or beyond depths easily accessible to divers. To this end, we have developed three examples of low cost and easily deployable real-time ocean observation platforms. We followed a scalable design approach adding complexity and capability as familiarity and experience were gained with system components saving both time and money by reducing design mistakes. The purpose of this paper is to provide information for the researcher, technician, or engineer who finds themselves in need of creating or acquiring similar platforms.
Directory of Open Access Journals (Sweden)
F. Saez de Adana
2009-01-01
Full Text Available This paper presents an efficient application of the Time-Domain Uniform Theory of Diffraction (TD-UTD) for the analysis of Ultra-Wideband (UWB) mobile communications in indoor environments. The classical TD-UTD formulation is modified to include the contribution of lossy materials and multiple-ray interactions with the environment. The electromagnetic analysis is combined with a ray-tracing acceleration technique to treat realistic and complex environments. The validity of this method is tested with measurements performed inside the Polytechnic building of the University of Alcala and shows good performance of the model for the analysis of UWB propagation.
Financing Constraints and Entrepreneurship
William R. Kerr; Ramana Nanda
2009-01-01
Financing constraints are one of the biggest concerns impacting potential entrepreneurs around the world. Given the important role that entrepreneurship is believed to play in the process of economic growth, alleviating financing constraints for would-be entrepreneurs is also an important goal for policymakers worldwide. We review two major streams of research examining the relevance of financing constraints for entrepreneurship. We then introduce a framework that provides a unified perspecti...
A Temporal Concurrent Constraint Programming Calculus
DEFF Research Database (Denmark)
Palamidessi, Catuscia; Valencia Posso, Frank Darwin
2001-01-01
The tcc model is a formalism for reactive concurrent constraint programming. In this paper we propose a model of temporal concurrent constraint programming which adds to tcc the capability of modeling asynchronous and non-deterministic timed behavior. We call this tcc extension the ntcc calculus...
Finding the optimal Bayesian network given a constraint graph
Directory of Open Access Journals (Sweden)
Jacob M. Schreiber
2017-07-01
Full Text Available Despite recent algorithmic improvements, learning the optimal structure of a Bayesian network from data is typically infeasible past a few dozen variables. Fortunately, domain knowledge can frequently be exploited to achieve dramatic computational savings, and in many cases domain knowledge can even make structure learning tractable. Several methods have previously been described for representing this type of structural prior knowledge, including global orderings, super-structures, and constraint rules. While super-structures and constraint rules are flexible in terms of what prior knowledge they can encode, they achieve savings in memory and computational time simply by avoiding considering invalid graphs. We introduce the concept of a “constraint graph” as an intuitive method for incorporating rich prior knowledge into the structure learning task. We describe how this graph can be used to reduce the memory cost and computational time required to find the optimal graph subject to the encoded constraints, beyond merely eliminating invalid graphs. In particular, we show that a constraint graph can break the structure learning task into independent subproblems even in the presence of cyclic prior knowledge. These subproblems are well suited to being solved in parallel on a single machine or distributed across many machines without excessive communication cost.
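The decoupling argument can be sketched in the acyclic special case: if variables are assigned to layers and parents may come only from layers with an edge into the child's layer, each variable's best parent set can be selected independently and the union is automatically a DAG. (The paper's constraint graphs also handle cyclic prior knowledge; the data, layers and BIC-style score below are illustrative.)

```python
import itertools, math
import numpy as np

rng = np.random.default_rng(0)
n = 500
X0 = rng.integers(0, 2, n)
X1 = rng.integers(0, 2, n)
X2 = (X0 ^ X1) ^ (rng.random(n) < 0.05).astype(int)   # noisy XOR of X0, X1
data = np.column_stack([X0, X1, X2])

layers = {0: [0, 1], 1: [2]}          # constraint graph: layer 0 -> layer 1
layer_edges = {0: [], 1: [0]}         # allowed parent layers per layer

def bic_score(child, parents):
    """Log-likelihood of binary child given parent configs, BIC-penalized."""
    counts = {}
    for row in data:
        c = counts.setdefault(tuple(row[p] for p in parents), [0, 0])
        c[row[child]] += 1
    ll = sum(c[v] * math.log(c[v] / (c[0] + c[1]))
             for c in counts.values() for v in (0, 1) if c[v] > 0)
    n_params = len(counts)            # one Bernoulli per parent configuration
    return ll - 0.5 * n_params * math.log(n)

# Independent subproblems: one exact parent-set search per variable, with
# candidates restricted by the constraint graph.
best_parents = {}
for layer, members in layers.items():
    allowed = [v for l in layer_edges[layer] for v in layers[l]]
    for child in members:
        options = itertools.chain.from_iterable(
            itertools.combinations(allowed, k)
            for k in range(len(allowed) + 1))
        best_parents[child] = max(options, key=lambda ps: bic_score(child, ps))
```

Restricting candidates this way shrinks the search from all orderings of all variables to a per-variable subset enumeration, which is the memory and runtime saving the abstract describes.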
Aleinikoff, John N.; Slack, John F.; Lund, Karen; Evans, Karl V.; Fanning, C. Mark; Mazdab, Frank K.; Wooden, Joseph L.; Pillers, Renee M.
2012-01-01
The Blackbird district, east-central Idaho, contains the largest known Co reserves in the United States. The origin of strata-hosted Co-Cu ± Au mineralization at Blackbird has been a matter of controversy for decades. In order to differentiate among possible genetic models for the deposits, including various combinations of volcanic, sedimentary, magmatic, and metamorphic processes, we used U-Pb geochronology of xenotime, monazite, and zircon to establish time constraints for ore formation. New age data reported here were obtained using sensitive high resolution ion microprobe (SHRIMP) microanalysis of (1) detrital zircons from a sample of Mesoproterozoic siliciclastic metasedimentary country rock in the Blackbird district, (2) igneous zircons from Mesoproterozoic intrusions, and (3) xenotime and monazite from the Merle and Sunshine prospects at Blackbird. Detrital zircon from metasandstone of the biotite phyllite-schist unit has ages mostly in the range of 1900 to 1600 Ma, plus a few Neoarchean and Paleoproterozoic grains. Age data for the six youngest grains form a coherent group at 1409 ± 10 Ma, regarded as the maximum age of deposition of metasedimentary country rocks of the central structural domain. Igneous zircons from nine samples of megacrystic granite, granite augen gneiss, and granodiorite augen gneiss that crop out north and east of the Blackbird district yield ages between 1383 ± 4 and 1359 ± 7 Ma. Emplacement of the Big Deer Creek megacrystic granite (1377 ± 4 Ma), structurally juxtaposed with host rocks in the Late Cretaceous ca. 5 km north of Blackbird, may have been involved in initial deposition of rare earth elements (REE) minerals and, possibly, sulfides. In situ SHRIMP ages of xenotime and monazite in Co-rich samples from the Merle and Sunshine prospects, plus backscattered electron imagery and SHRIMP analyses of trace elements, indicate a complex sequence of Mesoproterozoic and Cretaceous events. On the basis of textural relationships
Combining experimental and cosmological constraints on heavy neutrinos
Directory of Open Access Journals (Sweden)
Marco Drewes
2017-08-01
Full Text Available We study experimental and cosmological constraints on the extension of the Standard Model by three right handed neutrinos with masses between those of the pion and W boson. We combine for the first time direct, indirect and cosmological constraints in this mass range. This includes experimental constraints from neutrino oscillation data, neutrinoless double β decay, electroweak precision data, lepton universality, searches for rare lepton decays, tests of CKM unitarity and past direct searches at colliders or fixed target experiments. On the cosmological side, big bang nucleosynthesis has the most pronounced impact. Our results can be used to evaluate the discovery potential of searches for heavy neutrinos at LHCb, BELLE II, SHiP, ATLAS, CMS or a future lepton collider.
Discamps, Emmanuel; Jaubert, Jacques; Bachellerie, François
2011-09-01
The evolution in the selection of prey made by past humans, especially the Neandertals and the first anatomically modern humans, has been widely debated. Between Marine Isotope Stages (MIS) 5 and 3, the accuracy of absolute dating is still insufficient to precisely correlate paleoclimatic and archaeological data. It is often difficult, therefore, to estimate to what extent changes in species procurement are correlated with either climate fluctuations or deliberate cultural choices in terms of subsistence behavior. Here, the full development of archeostratigraphy and Bayesian statistical analysis of absolute dates allows the archeological and paleoclimatic chronologies to be compared. The variability in hunted fauna is investigated using multivariate statistical analysis of quantitative faunal lists of 148 assemblages from 39 archeological sequences from MIS 5 through MIS 3. Despite significant intra-technocomplex variability, it is possible to identify major shifts in the human diet during these stages. The integration of archeological data, paleoclimatic proxies and the ecological characteristics of the different species of prey shows that the shifts in large game hunting can be explained by an adaptation of the human groups to climatic fluctuations. However, even if Middle and Early Upper Paleolithic men adapted to changes in their environment and to contrasting landscapes, they ultimately belonged to the ecosystems of the past and were limited by environmental constraints.
DEFF Research Database (Denmark)
Michelsen, Aage U.
2004-01-01
The reasoning behind the Theory of Constraints, together with the Drum-Buffer-Rope planning principle; also a sketch of The Thinking Process.
Dechartres, Agnes; Trinquart, Ludovic; Atal, Ignacio; Moher, David; Dickersin, Kay; Boutron, Isabelle; Perrodeau, Elodie; Altman, Douglas G; Ravaud, Philippe
2017-06-08
Objective To examine how poor reporting and inadequate methods for key methodological features in randomised controlled trials (RCTs) have changed over the past three decades. Design Mapping of trials included in Cochrane reviews. Data sources Data from RCTs included in all Cochrane reviews published between March 2011 and September 2014 reporting an evaluation of the Cochrane risk of bias items: sequence generation, allocation concealment, blinding, and incomplete outcome data. Data extraction For each RCT, we extracted consensus on risk of bias made by the review authors and identified the primary reference to extract publication year and journal. We matched journal names with Journal Citation Reports to get 2014 impact factors. Main outcome measures We considered the proportions of trials rated by review authors at unclear and high risk of bias as surrogates for poor reporting and inadequate methods, respectively. Results We analysed 20 920 RCTs (from 2001 reviews) published in 3136 journals. The proportion of trials with unclear risk of bias was 48.7% for sequence generation and 57.5% for allocation concealment; the proportion of those with high risk of bias was 4.0% and 7.2%, respectively. For blinding and incomplete outcome data, 30.6% and 24.7% of trials were at unclear risk and 33.1% and 17.1% were at high risk, respectively. Higher journal impact factor was associated with a lower proportion of trials at unclear or high risk of bias. The proportion of trials at unclear risk of bias decreased over time, especially for sequence generation, which fell from 69.1% in 1986-1990 to 31.2% in 2011-14, and for allocation concealment (70.1% to 44.6%). After excluding trials at unclear risk of bias, use of inadequate methods also decreased over time: from 14.8% to 4.6% for sequence generation and from 32.7% to 11.6% for allocation concealment. Conclusions Poor reporting and inadequate methods have decreased over time, especially for sequence generation
Sample, James C.; Torres, Marta E.; Fisher, Andrew; Hong, Wei-Li; Destrigneville, Christine; Defliese, William F.; Tripati, Aradhna E.
2017-02-01
Information about diagenetic processes and temperatures during burial of sediments entering the subduction zone is important for understanding changes in physical properties and seismic behavior during deformation. The geochemistry of authigenic carbonates from accretionary prisms can serve as proxies for conditions during carbonate cementation and resultant lithification. We report results from the Nankai accretionary prism recovered from Integrated Ocean Drilling Program (IODP) sites C0011 and C0012, and we document continued cementation of deep sediment sections prior to subduction. Elemental and isotope data provide evidence for complex mixing of different isotopic reservoirs in pore waters contributing to carbonate chemical signatures. Carbon stable isotope values exhibit a broad range (δ13CV-PDB = +0.1‰ to -22.5‰) that corresponds to different stages of cement formation during burial. Carbonate formation temperatures from carbonate-clumped isotope geochemistry range from 16 °C to 63 °C at Site C0011 and 8.7 °C to 68 °C at Site C0012. The correspondence between the clumped-isotope temperatures and extrapolations of measured in situ temperatures indicates that the carbonate is continuing to form at present. Calculated water isotopic compositions are in some cases enriched in 18O relative to measured interstitial waters, suggesting a component of inherited seawater or input from clay-bound water. Low oxygen isotope values and the observed Ba/Ca ratios are also consistent with carbonate cementation at depth. Strontium isotopes of interstitial waters (87Sr/86Sr of 0.7059-0.7069) and carbonates (87Sr/86Sr of 0.70715-0.70891) support formation of carbonates from a mixture of strontium reservoirs including current interstitial waters and relic seawater contemporaneous with deposition. Collectively our data reflect mixed sources of dissolved inorganic carbon and cations that include authigenic phases driven by organic carbon and volcanic alteration reactions
International Nuclear Information System (INIS)
Napier, R.W.; Guise, P.G.; Rex, D.C.
1998-01-01
The Southern Cross Greenstone Belt in Western Australia contains structurally controlled, hydrothermal gold deposits which are thought to have formed at or near the peak of amphibolite facies regional metamorphism during the Late Archaean. Although the geological features of deposits in the area are well documented, conflicting genetic models and ore-fluid sources have been used to explain the observed geological data. This paper presents new 40Ar/39Ar data which suggest that the thermal history of the Southern Cross area after the peak of regional metamorphism was more complex than has previously been suggested. After the main gold mineralisation event prior to ca 2620 Ma, the 40Ar/39Ar ages from amphiboles and biotites sampled from the alteration selvages of gold-bearing veins indicate that temperatures remained elevated in the region of 500 deg C for between 20 and 70 million years. These amphiboles and biotites from individual deposits yield ages that are in good agreement with one another to a high precision, implying increased cooling rates after the long period of elevated temperatures. Along the Southern Cross Greenstone Belt, however, amphibole-biotite pairs from the alteration selvages of gold-bearing quartz veins, while remaining in good agreement with one another, vary between deposits from ca 2560 Ma to ca 2440 Ma. Amphiboles from metabasalts that are associated with regional metamorphism and not hydrothermal alteration contain numerous exsolution lamellae that reduce the effective closure temperature of the amphiboles and yield geologically meaningless ages. These age relationships show that the thermal history of the area did not follow a simple cooling path and that the area may have been tectonically active for a long period after the main gold mineralisation event before ca 2620 Ma. Such data may provide important constraints on subsequent genetic modelling of gold mineralisation and metamorphism. Copyright (1998) Blackwell Science Asia
Stocker, Benjamin David; Yu, Zicheng; Massa, Charly; Joos, Fortunat
2017-02-14
CO2 emissions from preindustrial land-use change (LUC) are subject to large uncertainties. Although atmospheric CO2 records suggest only a small land carbon (C) source since 5,000 y before present (5 kyBP), the concurrent C sink by peat buildup could mask large early LUC emissions. Here, we combine updated continuous peat C reconstructions with the land C balance inferred from double deconvolution analyses of atmospheric CO2 and δ13C at different temporal scales to investigate the terrestrial C budget of the Holocene and the last millennium and constrain LUC emissions. LUC emissions are estimated with transient model simulations for diverging published scenarios of LU area change and shifting cultivation. Our results reveal a large terrestrial nonpeatland C source after the Mid-Holocene (66 ± 25 PgC at 7-5 kyBP and 115 ± 27 PgC at 5-3 kyBP). Despite high simulated per-capita CO2 emissions from LUC in early phases of agricultural development, humans emerge as a driver with dominant global C cycle impacts only in the most recent three millennia. Sole anthropogenic causes for particular variations in the CO2 record (~20 ppm rise after 7 kyBP and ~10 ppm fall between 1500 CE and 1600 CE) are not supported. This analysis puts a strong constraint on preindustrial vs. industrial-era LUC emissions and suggests that upper-end scenarios for the extent of agricultural expansion before 1850 CE are not compatible with the C budget thereafter.
Directory of Open Access Journals (Sweden)
Arnaud Gotlieb
2013-02-01
Full Text Available Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, and array and memory manipulations with the fundamental notion of constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in constraint programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
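The flavor of interpreting a guard as a filtering constraint can be shown with a toy interval domain (an illustration of the general idea, not the paper's solver): the constraint x + y == 10 tightens the interval domains of x and y, and a subsequent guard that empties a domain proves the guarded location unreachable.

```python
def filter_sum(x, y, c):
    # Arc-consistency filtering for the constraint x + y == c over interval
    # domains: tighten each variable's bounds using the other's.
    (xl, xu), (yl, yu) = x, y
    return ((max(xl, c - yu), min(xu, c - yl)),
            (max(yl, c - xu), min(yu, c - xl)))

def filter_geq(x, k):
    # Filtering for the guard x >= k; an inverted interval means "empty".
    xl, xu = x
    return (max(xl, k), xu)

x, y = filter_sum((0, 8), (5, 9), 10)   # tightens x to (1, 5), y to (5, 9)
lo, hi = filter_geq(x, 6)               # yields (6, 5): an empty domain,
unreachable = lo > hi                   # so the guarded location is dead
```

Abstract interpretation over intervals does the same bound computations, which is the correspondence between filtering consistencies and abstract domain operations that the paper develops.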
Technology for planning and scheduling under complex constraints
Alguire, Karen M.; Pedro Gomes, Carla O.
1997-02-01
Within the context of law enforcement, several problems fall into the category of planning and scheduling under constraints. Examples include resource and personnel scheduling, and court scheduling. In the case of court scheduling, a schedule must be generated considering available resources, e.g., court rooms and personnel. Additionally, there are constraints on individual court cases, e.g., temporal and spatial, and between different cases, e.g., precedence. Finally, there are overall objectives that the schedule should satisfy such as timely processing of cases and optimal use of court facilities. Manually generating a schedule that satisfies all of the constraints is a very time consuming task. As the number of court cases and constraints increases, this becomes increasingly harder to handle without the assistance of automatic scheduling techniques. This paper describes artificial intelligence (AI) technology that has been used to develop several high performance scheduling applications including a military transportation scheduler, a military in-theater airlift scheduler, and a nuclear power plant outage scheduler. We discuss possible law enforcement applications where we feel the same technology could provide long-term benefits to law enforcement agencies and their operations personnel.
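A court-scheduling problem of the kind described reduces naturally to a constraint satisfaction search; the sketch below (all case names and constraints hypothetical, and far simpler than the AI schedulers cited) assigns each case a room and time slot under no-double-booking and precedence constraints via backtracking.

```python
def schedule(cases, rooms, slots, precedences):
    """Backtracking search for a toy court schedule: no room is double-booked
    and every precedence (a, b) -- 'case a is heard before case b' -- holds."""
    assignment = {}

    def consistent(case, room, slot):
        for other, (r, s) in assignment.items():
            if (r, s) == (room, slot):                      # room/slot clash
                return False
            if (other, case) in precedences and s >= slot:  # 'other' first
                return False
            if (case, other) in precedences and slot >= s:  # 'case' first
                return False
        return True

    def backtrack(i):
        if i == len(cases):
            return dict(assignment)
        for slot in slots:
            for room in rooms:
                if consistent(cases[i], room, slot):
                    assignment[cases[i]] = (room, slot)
                    found = backtrack(i + 1)
                    if found:
                        return found
                    del assignment[cases[i]]
        return None

    return backtrack(0)

plan = schedule(['arraignment', 'trial'], ['room1'], [1, 2],
                {('arraignment', 'trial')})
```

Real deployments add optimization objectives (timely processing, facility utilization) on top of this feasibility search, typically via branch-and-bound or constraint-based heuristics.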
Tomlinson, Jennifer
2006-01-01
This paper examines the apparently paradoxical notion that women "choose" part-time work when it is consistently documented as being less preferential in employment terms, conditions and prospects when compared to full-time work. Forming a dialogue with Hakim's (2000) preference theory, it is proposed here that four dimensions--care…
Resources, constraints and capabilities
Dhondt, S.; Oeij, P.R.A.; Schröder, A.
2018-01-01
Human and financial resources as well as organisational capabilities are needed to overcome the manifold constraints social innovators are facing. To unlock the potential of social innovation for the whole society, new (social) innovation-friendly environments and new governance structures
Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.
2016-07-01
The main unified energetic properties of low-dissipation heat engines and refrigerators allow for both endoreversible and irreversible configurations. This is accomplished by means of the constraints imposed on the characteristic global operation time, or on the contact times between the working system and the external heat baths, modulated by the dissipation symmetries. A suitable unified figure of merit (which becomes the power output for heat engines) is analyzed and the influence of the symmetries on the optimum performance is discussed. The obtained results, independent of any heat transfer law, are compared with those obtained from Carnot-like heat models, where specific heat transfer laws are needed. Thus, it is shown that only the inverse phenomenological law, often used in linear irreversible thermodynamics, correctly reproduces all optimized values for both the efficiency and the coefficient of performance.
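For context, the low-dissipation parametrization and the efficiency bounds it yields can be summarized as follows (a textbook-style sketch in the usual notation, not a transcription of this paper's derivation):

```latex
% Heat exchanged during the hot and cold contacts of duration t_h, t_c:
Q_h = T_h\left(\Delta S - \frac{\Sigma_h}{t_h}\right), \qquad
Q_c = T_c\left(\Delta S + \frac{\Sigma_c}{t_c}\right),
% where the Sigma's encode the dissipation and their ratio sets the
% "dissipation symmetry". Maximizing power over t_h, t_c bounds the
% efficiency at maximum power by
\frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
% the symmetric case recovering the Curzon--Ahlborn-like value
\eta^{*}_{\mathrm{sym}} = 1 - \sqrt{1-\eta_C}.
```

Analogous bounds for the refrigerator coefficient of performance follow from optimizing the unified figure of merit χ, which is what allows engines and refrigerators to be treated on the same footing.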
Dynamics and causality constraints
International Nuclear Information System (INIS)
Sousa, Manoelito M. de
2001-04-01
The physical meaning and the geometrical interpretation of causality implementation in classical field theories are discussed. Causality in field theory amounts to kinematical constraints dynamically implemented via solutions of the field equation, but in the limit of zero distance from the field sources part of these constraints carries a dynamical content that explains away old problems of classical electrodynamics, with deep implications for the nature of physical interactions. (author)
Vidal-Acuña, M Reyes; Ruiz-Pérez de Pipaón, Maite; Torres-Sánchez, María José; Aznar, Javier
2017-12-08
An expanded library of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) spectra has been constructed using the spectra generated from 42 clinical isolates and 11 reference strains, including 23 different species from 8 sections (16 cryptic plus 7 noncryptic species). Out of a total of 379 strains of Aspergillus isolated from clinical samples, 179 strains were selected to be identified by sequencing of beta-tubulin or calmodulin genes. Protein spectra of 53 strains, cultured in liquid medium, were used to construct an in-house reference database in the MALDI-TOF MS. One hundred ninety strains (179 clinical isolates previously identified by sequencing and the 11 reference strains), cultured on solid medium, were blindly analyzed by the MALDI-TOF MS technology to validate the generated in-house reference database. A 100% correlation was obtained between the two identification methods, gene sequencing and MALDI-TOF MS, with no discordant identifications. The HUVR database provided species-level (score of ≥2.0) identification in 165 isolates (86.84%), and for the remaining 25 (13.16%) a genus-level identification (score between 1.7 and 2.0) was obtained. The routine MALDI-TOF MS analysis with the new database was then challenged with 200 Aspergillus clinical isolates grown on solid medium in a prospective evaluation. A species identification was obtained in 191 strains (95.5%), and only nine strains (4.5%) could not be identified at the species level. Among the 200 strains, A. tubingensis was the only cryptic species identified. We demonstrated the feasibility and usefulness of the new HUVR database in MALDI-TOF MS by the use of a standardized procedure for the identification of Aspergillus clinical isolates, including cryptic species, grown either on solid or liquid media. © The Author 2017. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved.
Momentum constraint relaxation
International Nuclear Information System (INIS)
Marronetti, Pedro
2006-01-01
Full relativistic simulations in three dimensions invariably develop runaway modes that grow exponentially and are accompanied by violations of the Hamiltonian and momentum constraints. Recently, we introduced a numerical method (Hamiltonian relaxation) that greatly reduces the Hamiltonian constraint violation and helps improve the quality of the numerical model. We present here a method that controls the violation of the momentum constraint. The method is based on the addition of a longitudinal component to the traceless extrinsic curvature Ã_ij, generated by a vector potential w^i, as outlined by York. The components of w^i are relaxed to solve approximately the momentum constraint equations, slowly pushing the evolution towards the space of solutions of the constraint equations. We test this method with simulations of binary neutron stars in circular orbits and show that it effectively controls the growth of the aforementioned violations. We also show that a full numerical enforcement of the constraints, as opposed to the gentle correction of the momentum relaxation scheme, results in the development of instabilities that stop the runs shortly
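In standard 3+1 notation, the quantities involved can be written schematically as follows (the usual York-type decomposition with conventional factors; a summary of the standard formalism, not a transcription of this paper's equations):

```latex
% Momentum constraint (vacuum, schematic):
\mathcal{M}^i \;\equiv\; \tilde{D}_j \tilde{A}^{ij}
  - \tfrac{2}{3}\,\tilde{\gamma}^{ij}\partial_j K \;\approx\; 0.
% Longitudinal correction generated by the vector potential w^i:
\tilde{A}^{ij} \;\rightarrow\; \tilde{A}^{ij} + (\tilde{L}w)^{ij}, \qquad
(\tilde{L}w)^{ij} = \tilde{D}^i w^j + \tilde{D}^j w^i
  - \tfrac{2}{3}\,\tilde{\gamma}^{ij}\tilde{D}_k w^k.
% The components of w^i are relaxed so that M^i -> 0, gently steering the
% evolution toward the constraint-satisfying subspace rather than
% projecting onto it exactly at each step.
```

The distinction in the last comment is the paper's main point: exact re-enforcement of the constraints at every step destabilizes the runs, whereas the slow relaxation of w^i does not.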
International Nuclear Information System (INIS)
Ott, S.H.
1992-01-01
This dissertation uses the real options framework to study the valuation and optimal investment policies for R&D projects. The models developed integrate and extend the literature by taking into account the unique characteristics of such projects, including uncertain investment in R&D, time-to-build, and multiple investment opportunities. The models were developed to examine the optimal R&D investment policy for the Lunar Helium-3 fusion project but have general applicability. Models are developed which treat R&D investment as an information-gathering process in which the remaining required investment changes as funds are expended. The value of the project increases as the variance of required investment increases. An extension of this model combines a stochastic benefit with stochastic investment. Both the value of the R&D project and the region prescribing continued investment increased. The policy implications are significant: when uncertainty of R&D investment is ignored, the value of the project is underestimated and a tendency toward underinvestment in R&D will result; the existence of uncertainty in R&D investment will cause R&D projects to experience larger declines in value before discontinuation of investment. The model combining stochastic investment with the stochastic benefit is applied to the Lunar Helium-3 fusion project. Investment in fusion should continue at the maximum level of $1 billion annually given current levels of costs of alternative fuels and the perceived uncertainty of R&D investment in the project. A model is developed to examine the valuation and optimal split of funding between R&D projects when there are two competing new technologies. Without interaction between research expenditures and benefits across technologies, the optimal investment strategy is to invest in one or the other technology or neither. The multiple technology model is applied to analyze competing R&D projects, namely
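Why cost uncertainty raises project value can be illustrated with a deliberately stripped-down two-stage sketch (all numbers hypothetical; the dissertation's models are continuous-time and far richer): paying a first-stage R&D cost reveals the remaining cost, and the option to abandon afterwards makes the expected value increase with the spread of that cost.

```python
import random

def rd_option_value(v, c1, c2_mean, c2_spread, n=100_000, seed=1):
    """Two-stage R&D sketch: pay c1 to learn the remaining cost c2
    (uniform around c2_mean), then invest only if the payoff v exceeds c2.
    The max(...) kink -- the abandonment option -- makes the value a convex
    function of c2, so it rises with the spread of c2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        c2 = rng.uniform(c2_mean - c2_spread, c2_mean + c2_spread)
        total += max(v - c2, 0.0)
    return total / n - c1

certain = rd_option_value(100.0, 5.0, 100.0, 0.0)    # no cost uncertainty
uncertain = rd_option_value(100.0, 5.0, 100.0, 50.0)  # wide cost spread
```

With no uncertainty the project is worth -5 (the sunk first-stage cost), while the same expected cost with a ±50 spread yields a positive value: downside cost draws are abandoned, upside draws are exercised.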
International Nuclear Information System (INIS)
Pop, L.A.M.; Broek, J.F.C.M. van den; Visser, A.G.; Kogel, A.J. van der
1996-01-01
Using theoretical models based on radiobiological principles for the design of new treatment schedules for HDR and PDR brachytherapy, it is important to realise the impact of assumptions regarding the kinetics of repair. Extrapolations based on longer repair half times in a continuous LDR reference scheme may lead to the calculation of dangerously high doses for alternative HDR and PDR treatment schedules. We used the clinical experience obtained with conventional ERT and LDR brachytherapy in head and neck cancer as a clinical guideline to check the impact of the radiobiological parameters used. Biologically equivalent dose (BED) values for the dose of 65-70 Gy recommended in the clinical practice of LDR brachytherapy (prescribed at a dose rate between 30-50 cGy/h) are calculated as a function of the repair half time. These BED values are compared with the biological effect of a clinical reference dose of conventional ERT with 2 Gy/day and complete repair between the fractions. From this comparison of LDR and ERT treatment schedules, a range of values for the repair half times of acute or late responding tissues is demarcated with a reasonable fit to the clinical data. For the acute effects (or tumor control) the best fits are obtained for repair half times of about 0.5 h, while for late effects the repair half times are at least 1 h and can be as high as 3 h. Within these ranges of repair half times for acute and late effects, the outcome of 'alternative' HDR or PDR treatment schedules is discussed. It is predominantly the late reacting normal tissue with the longer repair half time for which problems will be encountered and for which no or only marginal gain is to be expected from decreasing the dose rate per pulse in PDR brachytherapy.
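The comparison described above rests on standard BED formulas; a sketch of the calculation (using the common Dale-type expression for continuous irradiation; the parameter values below are illustrative, not the paper's fits):

```python
import math

def bed_fractionated(n, d, ab):
    # BED for n fractions of d Gy with complete repair between fractions.
    return n * d * (1 + d / ab)

def bed_ldr(rate, total_dose, ab, t_half):
    # Dale-type BED for continuous LDR irradiation at `rate` Gy/h;
    # mu = ln2 / repair half-time (h), t = treatment duration (h).
    mu = math.log(2.0) / t_half
    t = total_dose / rate
    g = 1.0 - (1.0 - math.exp(-mu * t)) / (mu * t)
    return total_dose * (1.0 + (2.0 * rate / (mu * ab)) * g)

# 70 Gy at 0.5 Gy/h, late-responding tissue (alpha/beta = 3 Gy):
late_fast = bed_ldr(0.5, 70.0, 3.0, 0.5)   # repair half-time 0.5 h
late_slow = bed_ldr(0.5, 70.0, 3.0, 3.0)   # repair half-time 3 h
```

A longer assumed repair half-time sharply raises the computed late-effects BED of the LDR reference, which is precisely why extrapolating with long half-times can sanction dangerously high HDR/PDR doses.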
Extended Set Constraints and Tree Grammar Abstraction of Programs
DEFF Research Database (Denmark)
Rosendahl, Mads; Gallagher, John Patrick
2011-01-01
Set constraints are relations between sets of ground terms or trees. This paper presents two main contributions: firstly we consider an extension of the systems of set constraints to include a tuple constructor, and secondly we construct a simplified solution procedure for set constraints. We...
A compendium of chameleon constraints
International Nuclear Information System (INIS)
Burrage, Clare; Sakstein, Jeremy
2016-01-01
The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth-forces. The chameleon is a popular dark energy candidate and also arises in f ( R ) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth-forces it is not unobservable and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.
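For reference, the density-dependent potential behind the screening mechanism (the standard single-field chameleon form; the paper's combined constraints are phrased in terms of its parameters Λ, n and the matter coupling β):

```latex
% Self-interaction plus matter coupling give a density-dependent
% effective potential:
V(\phi) = \Lambda^4\left(1 + \frac{\Lambda^n}{\phi^n}\right), \qquad
V_{\mathrm{eff}}(\phi) = V(\phi) + \rho\, e^{\beta\phi/M_{\mathrm{Pl}}}.
% The minimum phi_min(rho) shifts with the ambient density rho, so the
% effective mass m_eff^2 = V_eff''(phi_min) grows in dense environments:
% the fifth force becomes short-ranged ("screened") inside the solar
% system while remaining cosmologically relevant.
```

Laboratory and astrophysical searches probe complementary corners of the (Λ, β, n) parameter space, which is why combining them is what closes the remaining windows.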
International Nuclear Information System (INIS)
Coleman, R.A.; Korte, H.
1984-01-01
According to the principle of the universality of free fall, the motions of all neutral monopole particles are governed by one common path structure. This principle does not, however, require the path structure to be geodesic; that is, the path structure need not be a projective structure. It is shown that any equation of motion structure (either a curve or a path structure) that has sufficient microisotropy to be compatible with the conformal causal structure of space-time must be geodesic and must be unique. Hence, the empirically well-supported principles of conformal causality and of the universality of free fall together require the existence of a unique Weyl structure on space-time
Coulson, Ian M.; Villeneuve, Mike E.; Dipple, Gregory M.; Duncan, Robert A.; Russell, James K.; Mortensen, James K.
2002-05-01
Knowledge of the time-scales of emplacement and thermal history during assembly of composite felsic plutons in the shallow crust is critical to deciphering the processes of crustal growth and magma chamber development. Detailed petrological and chemical study of the mid-Cretaceous, composite Emerald Lake pluton, from the northern Canadian Cordillera, Yukon Territory, coupled with U-Pb and 40Ar/39Ar geochronology, indicates that this pluton was intruded as a series of magmatic pulses. Intrusion of these pulses produced a strong petrological zonation from augite syenite, hornblende quartz syenite and monzonite, to biotite granite. Our data further indicate that multiple phases were emplaced and cooled to below the mineral closure temperatures over a time-scale on the order of the resolution of the 40Ar/39Ar technique (~1 Myr), and that emplacement occurred at 94.3 Ma. Simple thermal modelling and heat conduction calculations were used to further constrain the temporal relationships within the intrusion. These calculations are consistent with the geochronology and show that emplacement and cooling were complete in less than 100 kyr, and probably in 70±5 kyr. These results demonstrate that production, transport and emplacement of the different phases of the Emerald Lake pluton occurred essentially simultaneously, and that these processes must also have been closely related in time and space. By analogy, these results provide insights into the assembly and petrogenesis of other complex intrusions and ultimately lead to an understanding of the processes involved in crustal development.
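The order of magnitude of such conduction-limited cooling can be checked with the scaling t ~ L²/κ (an illustration only; the half-width and diffusivity below are assumed generic values, not figures from the study):

```python
def conductive_cooling_time_kyr(half_width_m, kappa=1e-6):
    # Diffusive timescale t ~ L^2 / kappa for an intrusion of half-width L,
    # with kappa ~ 1e-6 m^2/s, a typical thermal diffusivity for rock;
    # result converted from seconds to thousands of years.
    seconds = half_width_m ** 2 / kappa
    return seconds / (3.156e7 * 1e3)

# A hypothetical intrusion of ~1.5 km half-width cools on a ~70 kyr
# timescale, the same order as the duration quoted for the pluton above.
t_kyr = conductive_cooling_time_kyr(1500.0)
```

Such a scaling estimate only brackets the answer; the study's thermal models track successive pulses and latent heat, which is why geochronology is still needed to pin the duration down.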
Energy Technology Data Exchange (ETDEWEB)
Danisik, Martin; Frisch, Wolfgang [Tuebingen Univ. (Germany). Inst. of Geosciences; Panek, Tomas [Ostrava Univ. (Czech Republic). Dept. of Physical Geography and Geoecology; Matysek, Dalibor [Technical Univ. of Ostrava (Czech Republic). Dept. of Geological Engineering; Dunkl, Istvan [Geoscience Center Goettingen (Germany). Sedimentology and Environmental Geology
2008-09-15
The age of planation surfaces in the Podbeskydska pahorkatina Upland in the Outer Western Carpathians (OWC, Czech Republic) is constrained by low-temperature thermochronological dating methods for the first time. Our apatite fission track and apatite (U-Th)/He data measured on teschenite intrusions show that planation surfaces in the study area formed in post-Pannonian time (<7.1 Ma) and are therefore younger than traditionally believed. This contradicts the classical concepts, which stipulate that a large regional planation surface of Pannonian age (the so-called "midmountain level") developed in the whole Western Carpathians. Geodynamic implications of our data are the following: (i) the investigated Tesin and Godula nappes of the OWC were buried and thermally overprinted in the accretionary wedge in different ways, and consequently experienced different cooling histories. This indicates a dynamic basin setting with an active accretionary process in a subduction zone; (ii) accretionary processes in the OWC were active already during Late Eocene times. (orig.)
Blackburn, T. J.; Olsen, P. E.; Bowring, S. A.; McLean, N. M.; Kent, D. V.; Puffer, J. H.; McHone, G.; Rasbury, T.
2012-12-01
Mass extinction events that punctuate Earth's history have had a large influence on the evolution, diversity and composition of our planet's biosphere. The approximate temporal coincidence between the five major extinction events of the last 542 million years and the eruption of Large Igneous Provinces (LIPs) has led to speculation that the climate and environmental perturbations generated by the emplacement of a large volume of magma in a short period of time triggered each global biologic crisis. Establishing a causal link between extinction and the onset and tempo of LIP eruption has proved difficult because of the geographic separation between LIP volcanic deposits and the stratigraphic sequences preserving evidence of the extinction. In most cases, the uncertainties on the available radioisotopic dates used to correlate between geographically separated study areas exceed the durations of both the extinction interval and LIP volcanism by an order of magnitude. The end-Triassic extinction (ETE) is one of the "big five" and is characterized by the disappearance of several terrestrial and marine species and the dominance of dinosaurs for the next 134 million years. Speculation on the cause has centered on the massive climate perturbations thought to accompany the eruption of flood basalts of the Central Atlantic Magmatic Province (CAMP), the most areally extensive and volumetrically one of the largest LIPs on Earth. Despite an approximate temporal coincidence between extinction and volcanism, evidence placing the eruption of CAMP prior to or at the initiation of the extinction has been lacking. Estimates of the timing and/or duration of CAMP volcanism provided by astrochronology and Ar-Ar geochronology differ by an order of magnitude, precluding high-precision tests of the relationship between LIP volcanism and the mass extinction, the causes of which depend on the rate of magma eruption. Here we present high-precision zircon U-Pb ID-TIMS geochronologic data
International Nuclear Information System (INIS)
Lepetit-Coiffe, Matthieu; Quesson, Bruno; Moonen, Chrit T.W.; Laumonier, Herve; Trillaud, Herve; Seror, Olivier; Sesay, Musa-Bahazid; Grenier, Nicolas
2010-01-01
To assess the practical feasibility and effectiveness of real-time magnetic resonance (MR) temperature monitoring for the radiofrequency (RF) ablation of liver tumours in a clinical setting, nine patients (aged 49-87 years; five men, four women), each with one malignant tumour (14-50 mm; eight hepatocellular carcinomas, one colorectal metastasis), were treated by 12-min RF ablation using a 1.5-T closed magnet for real-time temperature monitoring. The clinical monopolar RF device was filtered at 64 MHz to avoid electromagnetic interference. Real-time computation of thermal-dose (TD) maps, based on Sapareto and Dewey's equation, was studied to determine its ability to provide a clear end-point for the RF procedure. The absence of local recurrence on follow-up MR images obtained 45 days after RF ablation was used to assess the prediction of apoptosis and necrosis obtained from the real-time TD maps. Seven of the nine tumours were completely ablated according to the real-time TD maps. Compared with the 45-day follow-up MR images, TD maps accurately predicted two primary treatment failures, but were not relevant in one case of later secondary local tumour progression. The real-time TD concept is a feasible and promising monitoring method for the RF ablation of liver tumours. (orig.)
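The Sapareto-Dewey thermal dose is conventionally expressed as cumulative equivalent minutes at 43 °C (CEM43); the per-voxel loop below is a minimal illustration, assuming the commonly used breakpoint values R = 0.5 above 43 °C and R = 0.25 below it, not the clinical implementation used in the study:

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C (Sapareto & Dewey form).

    temps_c: temperature samples for one voxel (deg C);
    dt_min: sampling interval (minutes).
    R = 0.5 above the 43 C breakpoint, 0.25 below it (standard assumed values).
    """
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        # r**(43 - T): each minute spent above 43 C counts as more than one
        # equivalent minute at 43 C
        dose += dt_min * r ** (43.0 - t)
    return dose

# One minute at 53 C equals 2^10 = 1024 equivalent minutes at 43 C, far above
# the ~240 CEM43 threshold often cited for thermal necrosis, whereas ten
# minutes at 41 C accumulate less than one equivalent minute.
lethal = cem43([53.0], 1.0)
sublethal = cem43([41.0] * 10, 1.0)
```

Because each degree above 43 °C doubles the dose rate, a clear ablation end-point can be read off a TD map in real time.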
Energy Technology Data Exchange (ETDEWEB)
Lepetit-Coiffe, Matthieu; Quesson, Bruno; Moonen, Chrit T.W. [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Laumonier, Herve; Trillaud, Herve [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Service de Radiologie, Hopital Saint-Andre, CHU Bordeaux, Bordeaux (France); Seror, Olivier [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Service de Radiologie, Hopital Jean Verdier, Bondy (France); Sesay, Musa-Bahazid [Service d' Anesthesie Reanimation III, Hopital Pellegrin, CHU Bordeaux, Bordeaux (France); Grenier, Nicolas [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Service d' Imagerie Diagnostique et Therapeutique de l' Adulte, Hopital Pellegrin, CHU Bordeaux, Bordeaux (France)
2010-01-15
International Nuclear Information System (INIS)
Heilbron Filho, Paulo Fernando Lavalle; Xavier, Ana Maria
2005-01-01
The revision process of the international radiological protection regulations has resulted in the adoption of new concepts, such as practice, intervention, avoidable dose and dose restriction (dose constraint). The latter deserves special mention, since it may involve an a priori reduction of the dose limits established both for the public and for occupationally exposed individuals, values that can be further reduced depending on the application of the principle of optimization. Starting from the criteria adopted to define dose constraint values for the public, this article aims to present clearly a methodology for establishing dose constraint values for occupationally exposed individuals, as well as an example of the application of this methodology to the practice of industrial radiography.
Psychological constraints on egalitarianism
DEFF Research Database (Denmark)
Kasperbauer, Tyler Joshua
2015-01-01
Debates over egalitarianism for the most part are not concerned with constraints on achieving an egalitarian society, beyond discussions of the deficiencies of egalitarian theory itself. This paper looks beyond objections to egalitarianism as such and investigates the relevant psychological processes motivating people to resist various aspects of egalitarianism. I argue for two theses, one normative and one descriptive. The normative thesis holds that egalitarians must take psychological constraints into account when constructing egalitarian ideals. I draw from non-ideal theories in political philosophy, which aim to construct moral goals with current social and political constraints in mind, to argue that human psychology must be part of a non-ideal theory of egalitarianism. The descriptive thesis holds that the most fundamental psychological challenge to egalitarian ideals comes from what...
Liu, Bo; Han, Bao-Fu; Chen, Jia-Fu; Ren, Rong; Zheng, Bo; Wang, Zeng-Zhen; Feng, Li-Xia
2017-12-01
The Junggar-Balkhash Ocean was a major branch of the southern Paleo-Asian Ocean, and the timing of its closure is important for understanding the history of the Central Asian Orogenic Belt. New sedimentological and geochronological data from the Late Paleozoic volcano-sedimentary sequences in the Barleik Mountains of West Junggar, NW China, help to constrain the closure time of the Junggar-Balkhash Ocean. The Tielieketi Formation (Fm) is dominated by littoral sediments, but its upper glauconite-bearing sandstone is interpreted to have been deposited rapidly in a shallow-water shelf setting. By contrast, the Heishantou Fm consists chiefly of volcanic rocks, conformably overlying or in fault contact with the Tielieketi Fm. The Molaoba Fm is composed of parallel-stratified fine sandstone and sandy conglomerate with graded bedding, typical of nonmarine, fluvial deposition. This formation unconformably overlies the Tielieketi and Heishantou formations and is conformably covered by the Kalagang Fm, characterized by a continental bimodal volcanic association. The youngest U-Pb ages of detrital zircons from sandstones and zircon U-Pb ages from volcanic rocks suggest that the Tielieketi, Heishantou, Molaoba, and Kalagang formations were deposited during the Famennian-Tournaisian, Tournaisian-early Bashkirian, Gzhelian, and Asselian-Sakmarian, respectively. The absence of upper Bashkirian to Kasimovian strata was likely caused by tectonic uplift of the West Junggar terrane. This is compatible with the occurrence of coeval stitching plutons in the West Junggar and adjacent areas. The Junggar-Balkhash Ocean must therefore have finally closed before the Gzhelian, slightly later than or concurrently with the closure of other ocean domains of the southern Paleo-Asian Ocean.
Electrometry - constraints and benefits
International Nuclear Information System (INIS)
Sabol, J.
1980-01-01
The main parameters of an electrometer are defined and described, including input resistance, input quiescent current, current noise equivalent, voltage and current stability, minimum input capacity, response time, time constant, range, accuracy, linearity, a-c component suppression, and zero drift. The limiting factors in measurement mainly include temperature noise, insulator quality, radioactivity background, electrostatic and electromagnetic interference, contact potential difference, and resistor stability. Electrometers are classified into three basic groups: electrostatic electrometers; d-c amplifier-based electrometers (electron tube electrometers and FET electrometers); and electrometers with modulation of the measured signal (electrometers using vibrating capacitors, electrometers with varactors). Diagrams and specifications are presented for selected electrometers. (J.B.)
DEFF Research Database (Denmark)
Bourdakis, Eleftherios; Olesen, Bjarne W.; Grossule, Fabio
Night sky radiative cooling technology using PhotoVoltaic/Thermal panels (PVT) and night time ventilation have been studied both by means of simulations and experiments to evaluate their potential and to validate the created simulation model used to describe it. An experimental setup has been...... depending on the sky clearness. This cooling power was enough to remove the stored heat and regenerate the ceiling panels. The validation simulation model results related to PCM were close to the corresponding results extracted from the experiment, while the results related to the production of cold water...... through the night sky radiative cooling differed significantly. The possibility of night time ventilation was studied through simulations for three different latitudes. It was concluded that for Danish climatic conditions night time ventilation would also be able to regenerate the panels while its...
2007-11-01
[Figure and reference residue from a time-transfer report: a plot of Sagnac delay, NICT to PTB (ns), vs. days from MJD, for MJD 54270.0 to 54277.0 (June 2007) and MJD 53767.0 to 53773.0 (Feb 2006); reference fragments cite "...standards in Europe and the US at the 10-15 uncertainty level," Metrologia, 43, 109-120, and D. Piester, A. Bauch, L. Breakiron, D. Matsakis, B...Blanzano, and O. Koudelka, 2008, "Time transfer with nanosecond accuracy for the realization of International Atomic Time," submitted to Metrologia.]
2015-12-01
whereas VFA suffers inaccuracies due to an assumption about FA. In this article, we propose an efficient method to tackle the quantification of T1 and...from a reduced number of VFA SPGR measurements and a gain in T1 precision from simultaneous least-squares fitting. As we confirmed in this article... [Reference residue: ...Locker DR. Time saving in measurement of NMR and EPR relaxation times. Rev Sci Instrum 1970;41:250-251. 3. Shah NJ, Zaitsev M, Steinhoff S, Zilles K. A new...]
Spruce, Joseph; Hargrove, William W.; Gasser, Gerald; Norman, Steve
2013-01-01
U.S. forests occupy approx. 1/3 of the total land area (approx. 304 million ha). Since 2000, a growing number of regionally evident forest disturbances have occurred due to abiotic and biotic agents. Regional forest disturbances can threaten human life and property, biodiversity and water supplies. Timely regional forest disturbance monitoring products are needed to aid forest health management work. Near Real Time (NRT) twice-daily MODIS NDVI data provide a means to monitor U.S. regional forest disturbances every 8 days. Since 2010, these NRT forest change products have been produced and posted on the U.S. Forest Service ForWarn Early Warning System for Forest Threats.
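A change-detection product of this kind rests on the NDVI, computed per pixel from near-infrared and red reflectance; the baseline comparison and the -20% threshold below are illustrative assumptions, not the ForWarn algorithm:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against divide-by-zero

# Disturbance products typically compare current NDVI against a historical
# baseline; a large drop flags possible canopy loss (threshold is illustrative).
baseline = np.array([0.80, 0.75, 0.82])
current = ndvi(np.array([0.40, 0.42, 0.18]), np.array([0.05, 0.06, 0.12]))
percent_change = 100.0 * (current - baseline) / baseline
disturbed = percent_change < -20.0           # assumed example threshold
```

Only the third pixel, whose NDVI fell from 0.82 to 0.20, would be flagged under this assumed threshold.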
Isocurvature constraints on portal couplings
Energy Technology Data Exchange (ETDEWEB)
Kainulainen, Kimmo; Nurmi, Sami; Vaskonen, Ville [Department of Physics, University of Jyväskylä, P.O.Box 35 (YFL), FI-40014 University of Jyväskylä (Finland); Tenkanen, Tommi; Tuominen, Kimmo, E-mail: kimmo.kainulainen@jyu.fi, E-mail: sami.t.nurmi@jyu.fi, E-mail: tommi.tenkanen@helsinki.fi, E-mail: kimmo.i.tuominen@helsinki.fi, E-mail: ville.vaskonen@jyu.fi [Department of Physics, University of Helsinki P.O. Box 64, FI-00014, Helsinki (Finland)
2016-06-01
We consider portal models which are ultraweakly coupled with the Standard Model, and confront them with observational constraints on dark matter abundance and isocurvature perturbations. We assume the hidden sector to contain a real singlet scalar s and a sterile neutrino ψ coupled to s via a pseudoscalar Yukawa term. During inflation, a primordial condensate consisting of the singlet scalar s is generated, and its contribution to the isocurvature perturbations is imprinted onto the dark matter abundance. We compute the total dark matter abundance including the contributions from condensate decay and nonthermal production from the Standard Model sector. We then use the Planck limit on isocurvature perturbations to derive a novel constraint connecting dark matter mass and the singlet self-coupling with the scale of inflation: m_DM/GeV ≲ 0.2 λ_s^{3/8} (H_*/10^{11} GeV)^{-3/2}. This constraint is relevant in most portal models ultraweakly coupled with the Standard Model and containing light singlet scalar fields.
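The quoted bound is straightforward to evaluate numerically; the sketch below simply codes the scaling relation, with the parameter values chosen for illustration only:

```python
def dm_mass_bound_gev(lambda_s: float, h_star_gev: float) -> float:
    """Upper bound on the dark matter mass from the quoted isocurvature
    constraint: m_DM/GeV < 0.2 * lambda_s^(3/8) * (H_*/1e11 GeV)^(-3/2)."""
    return 0.2 * lambda_s ** (3.0 / 8.0) * (h_star_gev / 1.0e11) ** (-1.5)

# The bound loosens for larger self-coupling lambda_s and tightens steeply
# for higher inflationary scales H_* (values below are illustrative).
bound_low = dm_mass_bound_gev(lambda_s=0.1, h_star_gev=1.0e11)   # ~0.084 GeV
bound_high = dm_mass_bound_gev(lambda_s=0.1, h_star_gev=1.0e12)  # ~30x tighter
```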
University Course Timetabling using Constraint Programming
Directory of Open Access Journals (Sweden)
Hadi Shahmoradi
2017-03-01
Full Text Available The university course timetabling problem is a challenging and time-consuming task that shapes the overall structure of the timetable in every academic environment. The problem deals with many factors, such as the number of lessons, classes, teachers, students and working times, which are governed by hard and soft constraints. The aim of solving this problem is to assign courses and classes to teachers and students so that the constraints are satisfied. In this paper, a constraint programming method is proposed to satisfy the maximum number of constraints and expectations, in order to address the university timetabling problem. For minimizing the penalty of soft constraints, a cost function is introduced, and the AHP method is used to calculate its coefficients. The proposed model is tested on the Department of Management, University of Isfahan dataset using OPL on the IBM ILOG CPLEX Optimization Studio platform. A statistical analysis shows that the proposed approach satisfies all hard constraints and that the degree of satisfaction of the soft constraints is at the maximum desirable level. The running time of the model is less than 20 minutes, which is significantly better than the non-automated alternatives.
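The hard-constraint core of such a model can be illustrated with a minimal backtracking search; the courses, teachers, rooms and constraints below are hypothetical, and the actual paper uses an OPL/CPLEX constraint programming model rather than this sketch:

```python
from itertools import product

# Hypothetical data: assign each course a (room, slot) pair so that no room is
# double-booked and no teacher appears in two courses at the same time.
courses = {"C1": "T1", "C2": "T1", "C3": "T2"}   # course -> teacher
rooms, slots = ["R1", "R2"], [1, 2]

def consistent(assignment, course, room, slot):
    for other, (r, s) in assignment.items():
        if s == slot and r == room:
            return False                          # room clash
        if s == slot and courses[other] == courses[course]:
            return False                          # teacher clash
    return True

def solve(assignment, remaining):
    """Depth-first search with backtracking over (room, slot) choices."""
    if not remaining:
        return assignment
    course, rest = remaining[0], remaining[1:]
    for room, slot in product(rooms, slots):
        if consistent(assignment, course, room, slot):
            result = solve({**assignment, course: (room, slot)}, rest)
            if result is not None:
                return result
    return None                                   # dead end: backtrack

timetable = solve({}, list(courses))
```

Soft constraints would be layered on top of this by scoring complete assignments with a cost function, as the paper does with AHP-derived coefficients.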
Energy Technology Data Exchange (ETDEWEB)
Aubert, B.
2005-04-19
We present a measurement of the time-dependent CP-violating asymmetries in decays of neutral B mesons to the final states D*∓π±, using approximately 232 million BB̄ events recorded by the BABAR experiment at the PEP-II e+e− storage ring. Events containing these decays are selected with a partial reconstruction technique, in which only the high-momentum π± from the B decay and the low-momentum π∓ from the D*∓ decay are used. We measure the parameters related to 2β + γ to be a_{D*π} = −0.034 ± 0.014 ± 0.009 and c^ℓ_{D*π} = −0.019 ± 0.022 ± 0.013. With some theoretical assumptions, we interpret our results in terms of the lower limits |sin(2β + γ)| > 0.62 (0.35) at 68% (90%) confidence level.
Cipolloni, Marco; Kaleta, Jiří; Mašát, Milan; Dron, Paul I; Shen, Yongqiang; Zhao, Ke; Rogers, Charles T; Shoemaker, Richard K; Michl, Josef
2015-04-23
We examine the fluorescence anisotropy of rod-shaped guests held inside the channels of tris(o-phenylenedioxy)cyclotriphosphazene (TPP) host nanocrystals, characterized by powder X-ray diffraction and solid-state NMR spectroscopy. We address two issues: (i) are light polarization measurements on an aqueous colloidal solution of TPP nanocrystals meaningful, or is depolarization by scattering excessive? (ii) Can measurements of the rotational mobility of the included guests be performed at loading levels low enough to suppress depolarization by intercrystallite energy transfer? We find that meaningful measurements are possible and demonstrate that the long axis of molecular rods included in TPP channels performs negligible vibrational motion.
Institute of Scientific and Technical Information of China (English)
沈碧波; 佘维; 叶阳东; 贾利民
2016-01-01
Aiming at fault diagnosis of the door-opening control system of urban rail trains, an abductive fault diagnosis method is proposed by defining a novel type of Cyber Petri Net with Time Constraints (TC-CPN). Diagnostic analysis is carried out based on the captured fault characterization and the status and time information collected by the monitoring system. The Petri net adopts a judgment function of temporal consistency and defines the corresponding transition firing rules. By calculation and abductive reasoning over the captured time information, the fault origins or potential problems that may remain in the control system are analyzed. A sample application shows that the TC-CPN effectively models the door-opening control system of urban rail trains, while the time constraints provide strong evidence, in the temporal dimension, for abductive fault diagnosis. The proposed method is generally applicable to fault diagnosis problems of such control systems.
Constraint-based scheduling applying constraint programming to scheduling problems
Baptiste, Philippe; Nuijten, Wim
2001-01-01
Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...
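The propagation of resource constraints covered in the middle chapters can be illustrated, in a much-simplified form, by the disjunctive rule for two tasks sharing a unary resource; the task data and the single rule below are hypothetical simplifications of the techniques the book formalizes:

```python
# Each task has an earliest start (est), a latest completion (lct) and a
# duration. On a unary resource, either a precedes b or b precedes a.

def propagate_pair(a, b):
    """If scheduling a before b cannot fit within b's time window, deduce that
    b must precede a and tighten a's earliest start accordingly.
    Returns True if a bound changed."""
    if a["est"] + a["dur"] + b["dur"] > b["lct"]:
        new_est = max(a["est"], b["est"] + b["dur"])
        if new_est > a["est"]:
            a["est"] = new_est
            return True
    return False

A = {"est": 0, "dur": 4, "lct": 10}
B = {"est": 1, "dur": 3, "lct": 6}
# "A before B" would need A to finish by 6 - 3 = 3, impossible since A takes 4;
# so B must precede A, and A cannot start before 1 + 3 = 4.
changed = propagate_pair(A, B)
```

A real solver iterates such deductions (and stronger ones, such as edge-finding) to a fixpoint before making any search decision.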
Vitale, E. J.; Gifford, J.; Platt, B. F.
2017-12-01
The Upper Cretaceous Ripley Formation is present throughout the Mississippi (MS) Embayment and contains local bentonite lenses related to regional volcanism. The Pontotoc bentonite is one such lens located near the town of Pontotoc, MS, that was strip-mined and has not been accessible since reclamation of the land. Recent investigations in Pontotoc County south of the Pontotoc bentonite site resulted in the discovery of a previously unknown bentonite bed. Litho- and biostratigraphy indicate that this bentonite is younger than known volcanism from MS. The purposes of the present investigation are 1) to test whether the new bentonite bed is correlative with the Pontotoc bentonite and 2) to recover volcanogenic zircons for U-Pb dating to better constrain the timing of volcanism and the chronostratigraphy of the Ripley Fm. Outcrops in an active sand pit in the field area expose 2.5 m of fine sand with an upper gradational contact with an overlying 2.5 m of sandy clay containing the bentonite bed. Two trenches were excavated through the outcrop, and in each trench a stratigraphic section was measured and bulk samples were collected for zircons. Sampling began in the lower bounding sand and continued upsection at 1 m intervals, corresponding to the gradational contact with the bentonite and two locations within the bentonite. The Ripley Fm. consists of 73 m of fossiliferous clay, sand, and calcareous sand beds. Recent stratigraphic revisions of the lateral facies in MS recognize a lower transitional clay facies; a limestone, marl, and calcareous sand facies; a sandy upper Ripley facies; and the formally named Chiwapa Sandstone Member. Ammonite biostratigraphy places the contact between the Chiwapa and the overlying Owl Creek/Prairie Bluff at 68.5 Ma. Unlike the mined area north of Pontotoc where the bentonite is within the Chiwapa, the bed here is directly above the Chiwapa section and its upper contact represents the Ripley Fm. / Owl Creek Fm. contact. Where the bentonite is present, it
Energy Technology Data Exchange (ETDEWEB)
Lynch, Ryan S.; Kaspi, Victoria M.; Archibald, Anne M.; Karako-Argaman, Chen [Department of Physics, McGill University, 3600 University Street, Montreal, QC H3A 2T8 (Canada); Boyles, Jason; Lorimer, Duncan R.; McLaughlin, Maura A.; Cardoso, Rogerio F. [Department of Physics, West Virginia University, 111 White Hall, Morgantown, WV 26506 (United States); Ransom, Scott M. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Stairs, Ingrid H.; Berndsen, Aaron; Cherry, Angus; McPhee, Christie A. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Hessels, Jason W. T.; Kondratiev, Vladislav I.; Van Leeuwen, Joeri [ASTRON, The Netherlands Institute for Radio Astronomy, Postbus 2, 7990-AA Dwingeloo (Netherlands); Epstein, Courtney R. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Pennucci, Tim [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States); Roberts, Mallory S. E. [Eureka Scientific Inc., 2452 Delmer Street, Suite 100, Oakland, CA 94602 (United States); Stovall, Kevin, E-mail: rlynch@physics.mcgill.ca [Center for Advanced Radio Astronomy and Department of Physics and Astronomy, University of Texas at Brownsville, Brownsville, TX 78520 (United States)
2013-02-15
We have completed a 350 MHz Drift-scan Survey using the Robert C. Byrd Green Bank Telescope with the goal of finding new radio pulsars, especially millisecond pulsars that can be timed to high precision. This survey covered ~10,300 deg² and all of the data have now been fully processed. We have discovered a total of 31 new pulsars, 7 of which are recycled pulsars. A companion paper by Boyles et al. describes the survey strategy, sky coverage, and instrumental setup, and presents timing solutions for the first 13 pulsars. Here we describe the data analysis pipeline, survey sensitivity, and follow-up observations of new pulsars, and present timing solutions for 10 other pulsars. We highlight several sources: two interesting nulling pulsars, an isolated millisecond pulsar with a measurement of proper motion, and a partially recycled pulsar, PSR J0348+0432, which has a white dwarf companion in a relativistic orbit. PSR J0348+0432 will enable unprecedented tests of theories of gravity.
Directory of Open Access Journals (Sweden)
Gökçen Uysal
2018-03-01
Full Text Available Optimal control of reservoirs is a challenging task due to conflicting objectives, complex system structure, and uncertainties in the system. Real-time control decisions suffer from streamflow forecast uncertainty. This study aims to use Probabilistic Streamflow Forecasts (PSFs) with lead times of up to 48 h as input for the recurrent reservoir operation problem. A related technique for decision making is multi-stage stochastic optimization using scenario trees, referred to as Tree-based Model Predictive Control (TB-MPC). Deterministic Streamflow Forecasts (DSFs) are provided by applying random perturbations to perfect data. PSFs are synthetically generated from DSFs by a new approach which explicitly represents dynamic uncertainty evolution. We assessed different variables in the generation of stochasticity and compared the results using different scenarios. The developed real-time hourly flood control was applied to a test case with limited reservoir storage and a restricted downstream condition. According to the results of hindcasting closed-loop experiments, TB-MPC outperforms its deterministic counterpart in terms of decreased downstream flood risk under different independent forecast scenarios. TB-MPC was also tested with different numbers of tree branches, forecast horizons, and inflow conditions. We conclude that using synthetic PSFs in TB-MPC can provide solutions that are more robust against forecast uncertainty through the resolution of uncertainty in trees.
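Generating synthetic PSFs from a DSF can be sketched as perturbing the deterministic trace with noise whose spread grows with lead time; the growth rate, member count and inflow values below are assumptions for illustration, not the paper's actual scheme:

```python
import random

def synthetic_ensemble(deterministic, n_members=5, growth=0.05, seed=42):
    """Perturb a deterministic inflow forecast into an ensemble whose spread
    widens with lead time, mimicking dynamic uncertainty evolution."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        member = []
        for lead, q in enumerate(deterministic, start=1):
            sigma = growth * lead * q            # spread grows with lead time
            member.append(max(0.0, q + rng.gauss(0.0, sigma)))
        members.append(member)
    return members

forecast = [100.0, 120.0, 150.0, 180.0]          # hourly inflow, m^3/s (illustrative)
ensemble = synthetic_ensemble(forecast)
```

A scenario tree for TB-MPC would then be built by clustering such members so that branches share a common trunk over early leads and diverge later, reflecting the gradual resolution of uncertainty.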
International Nuclear Information System (INIS)
Alwis, S.P. de
2016-01-01
We discuss constraints on KKLT/KKLMMT and LVS scenarios that use anti-branes to obtain an uplift to a de Sitter vacuum, coming from requiring the validity of an effective field theory description of the physics. We find that these constraints are not always satisfied, or are hard to satisfy.
Ecosystems emerging. 5: Constraints
Czech Academy of Sciences Publication Activity Database
Patten, B. C.; Straškraba, Milan; Jorgensen, S. E.
2011-01-01
Roč. 222, č. 16 (2011), s. 2945-2972 ISSN 0304-3800 Institutional research plan: CEZ:AV0Z50070508 Keywords : constraint * epistemic * ontic Subject RIV: EH - Ecology, Behaviour Impact factor: 2.326, year: 2011 http://www.sciencedirect.com/science/article/pii/S0304380011002274
DEFF Research Database (Denmark)
Dove, Graham; Biskjær, Michael Mose; Lundqvist, Caroline Emilie
2017-01-01
groups of students building three models each. We studied groups building with traditional plastic bricks and also using a digital environment. The building tasks students undertake, and our subsequent analysis, are informed by the role constraints and ambiguity play in creative processes. Based...
Mueller, Andreas G.
2015-02-01
The Kalgoorlie district in the Archean Yilgarn Craton, Western Australia, comprises two world-class gold deposits: Mt Charlotte (144 t Au produced to 2013) in the northwest and the Golden Mile (1,670 t Au) in the southeast. Both occur in a folded greenschist-facies gabbro sill adjacent to the Golden Mile Fault (D2) in propylitic alteration associated with porphyry dikes. At Mt Charlotte, a shear array of fault-fill veins within the Golden Mile Fault indicates sinistral strike-slip during Golden Mile-type pyrite-telluride mineralization. The pipe-shaped Charlotte quartz vein stockwork, mined in bulk more than 1 km down plunge, is separated in time by barren D3 thrusts from Golden Mile mineralization and alteration, and occurs between two dextral strike-slip faults (D4). Movement on these faults generated an organized network of extension and shear fractures opened during the subsequent infiltration of high-pressure H2S-rich fluid at 2,655 ± 13 Ma (U-Pb xenotime). Gold was deposited during wall rock sulphidation in overlapping vein selvages zoned from deep albite-pyrrhotite (3 g/t Au) to upper muscovite-pyrite assemblages (5 g/t Au bulk grade). Chlorite and fluid inclusion thermometry indicate that this kilometre-scale zonation is due to fluid cooling from 410-440 °C at the base to 350-360 °C at the top of the orebody, while the greenstone terrane remained at 250 °C ambient temperature and at 300 MPa lithostatic pressure. The opened fractures filled with barren quartz and scheelite during the retrograde stage (300 °C) of the hydrothermal event. During fracture sealing, fluid flux was periodically restricted at the lower D3 thrust. Cycles of high and low up-flow, represented by juvenile H2O-CO2 and evolved H2O-CO2-CH4 fluid, respectively, are recorded by the REE and Sr isotope compositions of scheelite oscillatory zones. The temperature gradient measured in the vein stockwork points to a hot (>600 °C) fluid source 2-4 km below the mine workings, and several
Ootes, Luke; Goff, Steve; Jackson, Valerie A.; Gleeson, Sarah A.; Creaser, Robert A.; Samson, Iain M.; Evensen, Norman; Corriveau, Louise; Mumin, A. Hamid
2010-08-01
The timing of Cu-Mo-U mineralisation at the Nori/RA prospect in the Paleoproterozoic Great Bear magmatic zone has been investigated using Re-Os molybdenite and 40Ar-39Ar biotite geochronology. The Re-Os molybdenite ages presented are the first robust sulphide mineralisation ages derived from the Great Bear magmatic zone. Cu-Mo-U mineralisation is hosted in early to syn-deformational hydrothermal veins consisting of quartz and K-feldspar or more commonly tourmaline-biotite-quartz-K-feldspar, with associated wall-rock alteration assemblages being predominantly biotite. Sulphide and oxide minerals consist of chalcopyrite, molybdenite and uraninite with lesser pyrite and magnetite. Elevated light rare earth elements and tungsten concentrations associated with the Cu-Mo-U mineralisation have also been reported at the prospect by previous workers. Molybdenite and uraninite occur intimately in dravitic tourmaline growth zones and at grain margins, attesting to their syngenetic nature (with respect to hydrothermal veining). Two molybdenite separates yield Re-Os model ages of 1,874.4 ± 8.7 (2 σ) and 1,872.4 ± 8.8 Ma (2 σ) with a weighted average model age of 1,873.4 ± 6.1 Ma (2 σ). Laser step heating of biotite from the marginal alteration of the wall-rock adjacent to the veins yields a 40Ar-39Ar maximum cooling age of 1,875 ± 8 Ma (MSWD = 3.8; 2 σ), indistinguishable from the Re-Os molybdenite model age and a previously dated ‘syn-tectonic’ aplitic dyke in the region. Dravitic tourmaline hosts abundant primary liquid-vapour-solid-bearing fluid inclusions. Analytical results indicate liquid-vapour homogenisation at >260°C constraining the minimum temperature of mineralisation. The solids, which are possibly trapped, did not homogenise with the liquid-vapour by 400°C. Salinities in the inclusions are variable. Raman spectra identify that at least some of the solids are calcite and anhydrite. Raman spectra also confirm the vapour phases contain some CO2; whereas
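The quoted weighted average of the two Re-Os model ages follows from standard inverse-variance weighting; a quick check of the arithmetic:

```python
import math

# Combine the two Re-Os model ages quoted above
# (1,874.4 +/- 8.7 Ma and 1,872.4 +/- 8.8 Ma, 2-sigma).

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its propagated uncertainty."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))          # uncertainty of the mean
    return mean, err

mean_age, mean_err = weighted_mean([1874.4, 1872.4], [8.7, 8.8])
```

This reproduces the quoted mean of 1,873.4 Ma; the propagated uncertainty comes out at ~6.2, close to the quoted 6.1 (the small difference is presumably rounding in the source).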
Habler, Gerlinde; Thöni, Martin; Grasemann, Bernhard; Sölva, Helmuth; Cotza, Gianluca
2010-05-01
) of dynamically recrystallized plagioclase and quartz. Shear kinematics of Dc1 alternate between top-to-WNW and top-to-ESE. Sc2 foliation planes and the lithological-tectonic OSC/SMU boundary dip at intermediate angles towards N-NNW but also bear a W-plunging stretching lineation (LSc2). Dc2 structures consistently indicate W-directed shear kinematics. Due to the angular relationship of Sc1 and Sc2, the lithological boundaries of the OSC are truncated at the boundary with the SMU. Cretaceous Rb-Sr isochrons were obtained from Bt-granite-gneiss about 400 m structurally above the OSC/SMU boundary. Fine-grained muscovite forming part of the major foliation Sc1 yielded a Rb-Sr Ms-WR age of 86.1 ± 0.9 Ma, interpreted as a crystallization age constraining the timing of Dc1. The evidence of isotopic equilibration was supported by the homogeneous major element Ms composition. Rb-Sr Bt-WR data from the same material yielded 80.8 ± 0.8 Ma, interpreted to reflect cooling below c. 300°C. Several Rb-Sr Bt-WR data obtained from the Ferwalltal area gave age results between 80.0 and 84.7 Ma and thus range among numerous Bt-WR Rb-Sr ages available from the wider study area (Thöni and Hoinkes 1987). Both deformation stages Dc1 and Dc2 predate this cooling period, as the Qtz-mica fabrics demand significantly higher T-conditions. Subsequent deformation comprises strongly partitioned brittle-ductile shear zones dipping at 50-60° to the NW, as well as ultra-cataclasites bearing pseudotachylites, which reactivated Sc1 or Sc2 planes about 50-70 meters above the OSC/SMU boundary. Both brittle-ductile and brittle structures showed W-directed kinematics of normal faulting. The continuation of consistent shear kinematics into the brittle regime, as well as the extensive database of mica ages indicating cooling to below c. 300°C in the OSC adjacent to the SMU between 85-80 Ma (Thöni and Hoinkes 1987, with references), contradicts a model of Oligocene ductile refolding. References: Fügenschuh B, Fl
Constraint theory multidimensional mathematical model management
Friedman, George J
2017-01-01
Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...
Constraint-based Attribute and Interval Planning
Jonsson, Ari; Frank, Jeremy
2013-01-01
In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
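As a toy illustration of the interval-and-constraint view of plans described above (the two-activity example and all names are invented for illustration; this is not the EUROPA/CAIP API), a consistency check over a candidate plan might look like:

```python
def consistent_schedule(intervals, before):
    """Check a candidate plan: each activity is a half-open time interval
    (start, end) with start < end, and each (a, b) pair in `before` is an
    ordering/mutual-exclusion constraint 'a ends no later than b starts'."""
    for start, end in intervals.values():
        if not start < end:
            return False
    return all(intervals[a][1] <= intervals[b][0] for a, b in before)

# Toy plan: 'heat' must precede 'pour'.
plan = {"heat": (0, 5), "pour": (5, 8)}
ok = consistent_schedule(plan, [("heat", "pour")])
```

In the CAIP setting such checks are not run after the fact: the same constraints are posted into a dynamic constraint network and enforced while the plan is being built.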
DEFF Research Database (Denmark)
Tanderup, Kari; Fokdal, Lars Ulrik; Sturdza, Alina
2016-01-01
-center patient series (retroEMBRACE). Materials and methods This study analyzed 488 locally advanced cervical cancer patients treated with external beam radiotherapy ± chemotherapy combined with IGABT. Brachytherapy contouring and reporting was according to ICRU/GEC-ESTRO recommendations. The Cox Proportional...... Hazards model was applied to analyze the effect on local control of dose-volume metrics as well as overall treatment time (OTT), dose rate, chemotherapy, and tumor histology. Results With a median follow up of 46 months, 43 local failures were observed. Dose (D90) to the High Risk Clinical Target Volume...
International Nuclear Information System (INIS)
Phillips, D.; Fergusson, C.L.
1999-01-01
mélange and intensely cleaved slate have been analysed and have 40Ar/39Ar integrated ages of 422 ± 2 Ma (K/Ar: 424 ± 5 Ma) and 415 ± 2 Ma (K/Ar: 400 ± 5 Ma). The general consistency of these results, accompanied by microstructural observations indicating a low abundance of detrital mica, shows that recoil and inheritance problems appear to be less important in these samples. They thus provide a more reliable upper constraint on the timing of the regional deformation on the south coast of New South Wales, i.e. younger than ca. 420 Ma, consistent with previously recognised regional structural constraints. Elsewhere in the Lachlan Fold Belt, 40Ar/39Ar ages on fine-grained slates have been used to provide concise constraints on the timing of deformation. The current results raise serious questions about the interpretation of these ages as representing ongoing deformation, and tectonic models derived from these data should therefore be treated with caution. Copyright (1999) Geological Society of Australia
Krishna, K. S.; Ismaiel, M.; Karlapati, S.; Saha, D.; Mishra, J.
2014-12-01
Analysis of marine magnetic data of the Bay of Bengal (BOB) has led to two different proposed tectonic models for the evolution of the lithosphere between India and East Antarctica. The first model explains the presence of M-series (M11 to M0) magnetic anomalies in BOB, leaving little room for accommodating the crust that evolved during the long Cretaceous Magnetic Quiet Period. The second model instead proposes that most of the crust in BOB evolved during the quiet period, together with the possible presence of the oldest magnetic chron M1/M0 in close vicinity of the ECMI. With this perspective, we have reinvestigated the existing and recently acquired magnetic data together with a regional magnetic model of BOB to identify new tectonic constraints and thereby better understand the evolution of the lithosphere. Analysis of the magnetic data revealed the presence of spreading anomalies C33 and C34 in the vicinity of 8°N, and an internal time marker (Q1) corresponding to the age 92 Ma at 12°N in a corridor between the 85°E and Ninetyeast ridges. The new time marker and its location, indeed, become a point of reference and benchmark in BOB for estimating the age of oceanic crust towards the ECMI. The magnetic model further reveals a network of fracture zones (FZs) with different orientations. Between the 85°E and Ninetyeast ridges, two near N-S FZs, approximately following 87°E and 89.5°E, are found to extend into BOB up to 12°N; from there the FZs reorient in a N60°W direction and reach the continental margin region. Along the ECMI two sets of FZs are identified, with a northern set oriented N60°W and a southern one N40°W. This suggests that the north and south segments of the ECMI evolved in two different tectonic settings. The bend in the FZs marks the timing (92 Ma) of the first major plate reorganisation of the Indian Ocean and becomes a very critical constraint for understanding the plate tectonic process in the early opening of the
Taniguchi, H
1998-01-01
This article describes the US and Japan's "Common Agenda for Cooperation in Global Perspective." This agenda was launched in July 1993. The aim was to use a bilateral partnership to address critical global challenges in 1) Promotion of Health and Human Development; 2) Protection of the Environment; 3) Responses to Challenges to Global Stability; and 4) Advancement of Science and Technology. The bilateral effort has resulted in 18 initiatives worldwide. Six major accomplishments have occurred in coping with natural disasters in Kobe, Japan, and Los Angeles, US; coral reefs; assistance for women in developing countries; AIDS; children's health; and population problems. The bilateral effort has been successful due to the active involvement of the private sector, including businesses and nongovernmental organizations (NGOs). Many initiatives are developed and implemented in cooperation with local NGOs. The government needs the private sector's expertise in technical and managerial fields. Early investment in NGO efforts ensures the development of self-sustaining programs and public support. An Open Forum was held on March 12-13, 1998, to commemorate the 5-year cooperative bilateral effort. Over 300 people attended the Forum. Plenary sessions were devoted to the partnership between public and private sectors under the US-Japan Agenda. Working sessions focused on health and conservation. Participants suggested improved legal systems and social structures for facilitating activities of NGOs, further development by NGOs of their capacities, and support to NGOs from corporations.
Energy Technology Data Exchange (ETDEWEB)
Andersson, Christer [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Johansson, Aasa [SWECO, Stockholm (Sweden)
2002-12-01
Thirteen experimental deposition holes similar to those in the present KBS-3 design have been bored at the Aespoe Hard Rock Laboratory, Oskarshamn, Sweden. The objective of the boring program was to test and demonstrate the current technique for boring of large vertical holes in granitic rock. Conclusions and results from this project are used in the planning process for the deposition holes that will be bored in the real repository for spent nuclear fuel. The boreholes are also important for three major projects: the Prototype Repository, the Canister Retrieval Test and the Demonstration project will all need full-scale deposition holes for their commissioning. The holes are bored in full scale and have a diameter of 1.75 m and a depth of 8.5 m. To bore the holes an existing TBM design was modified to produce a novel Shaft Boring Machine (SBM) suitable for boring 1.75 m diameter holes from a relatively small tunnel. The cutter head was equipped with two types of roller cutters: two-row carbide button cutters and disc cutters. Removal of the cuttings was made with a vacuum suction system. The boring was monitored and boring parameters recorded by a computerised system for the evaluation of the boring performance. During boring of four of the holes, temperature, stress and strain measurements were performed. Acoustic emission measurements were also performed during boring of these four holes. The results of these activities will not be discussed in this report since they are reported separately. Criteria regarding nominal borehole diameter, deviation of start and end centre point, surface roughness and performance of the machine were set up according to the KBS-3 design and were fulfilled with a fair margin. The average total time for boring one deposition hole during this project was 105 hours.
Directory of Open Access Journals (Sweden)
Mehrdad Gholami
2015-07-01
Introduction: In radiography, dose and image quality depend on the radiographic parameters. Problems arise from incorrect use of radiography equipment and from exposing patients to more radiation than required. Therefore, the aim of this study was to implement a quality-control program to detect changes in exposure parameters that may affect diagnosis or patient radiation dose. Materials and Methods: This cross-sectional study was performed on seven stationary X-ray units in six hospitals of Lorestan province. The measurements were performed using a factory-calibrated Barracuda dosimeter (model: SE-43137). Results: The highest output was obtained in A Hospital (M1 device), ranging from 107×10⁻³ to 147×10⁻³ mGy/mAs. The evaluation of tube voltage accuracy showed a deviation from the standard value, which ranged between 0.81% (M1 device) and 17.94% (M2 device) at A Hospital. The deviation ranges at the other hospitals were as follows: 0.30-27.52% in B Hospital (the highest in this study), 8.11-20.34% in C Hospital, 1.68-2.58% in D Hospital, 0.90-2.42% in E Hospital and 0.10-1.63% in F Hospital. The evaluation of exposure time accuracy showed that the E, C, D and A (M2 device) hospitals complied with the requirements (allowing a deviation of ±5%), whereas the A (M1 device), F and B hospitals exceeded the permitted limit. Conclusion: The results of this study showed that old X-ray equipment with poor or no maintenance is probably the main source of reduced radiographic image quality and increased patient radiation dose.
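The accuracy figures quoted above are percent deviations of a measured exposure parameter from its nominal setting. A minimal sketch of that computation (the 80 kVp setting and 94.4 kVp reading are made-up example values):

```python
def percent_deviation(measured, nominal):
    """Percent deviation of a measured exposure parameter (kVp, exposure
    time, ...) from its nominal set value."""
    return abs(measured - nominal) / nominal * 100.0

# Made-up example: a tube set to 80 kVp that actually delivers 94.4 kVp.
dev = percent_deviation(94.4, 80.0)   # ~18% deviation
within_limit = dev <= 5.0             # fails the ±5% tolerance used above
```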
Graphical constraints: a graphical user interface for constraint problems
Vieira, Nelson Manuel Marques
2015-01-01
A constraint satisfaction problem is a classical artificial intelligence paradigm characterized by a set of variables (each variable with an associated domain of possible values), and a set of constraints that specify relations among subsets of these variables. Solutions are assignments of values to all variables that satisfy all the constraints. Many real world problems may be modelled by means of constraints. The range of problems that can use this representation is very diverse and embrace...
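The paradigm described above can be sketched directly in code (illustrative only; the all-different toy instance and all names are invented): variables, per-variable domains, and constraints as predicates over subsets of variables, solved here by plain backtracking:

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Backtracking search for a CSP.  `domains` maps each variable to its
    candidate values; `constraints` is a list of (scope, predicate) pairs,
    each checked as soon as every variable in its scope is bound."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)                      # all variables bound
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        ok = all(pred(*(assignment[v] for v in scope))
                 for scope, pred in constraints
                 if all(v in assignment for v in scope))
        if ok:
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                          # backtrack
    return None

# Toy instance: three variables that must be pairwise different.
sol = solve_csp(
    ["x", "y", "z"],
    {"x": [1, 2], "y": [1, 2, 3], "z": [1, 2, 3]},
    [(("x", "y"), lambda a, b: a != b),
     (("y", "z"), lambda a, b: a != b),
     (("x", "z"), lambda a, b: a != b)],
)
```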
Adaptive laser link reconfiguration using constraint propagation
Crone, M. S.; Julich, P. M.; Cook, L. M.
1993-01-01
This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BP's and to ground entry points. Long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BP's based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-sight data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris-developed COnstraint Propagation Expert System (COPES) Shell, initially developed on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) Algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of Constraint Propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES. This is described using Data Flow Diagrams, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. The generation of simulation data is described along with its interface via COPES to the Harris-developed View Net graphical tool for visual analysis of communications
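Constraint propagation itself (independent of the COPES shell, whose internals the abstract does not give) is commonly illustrated with the AC-3 arc-consistency algorithm, sketched here on a toy binary CSP with a symmetric "not equal" constraint:

```python
from collections import deque

def ac3(domains, neighbors, consistent):
    """AC-3 arc consistency: repeatedly delete values with no support.
    domains: var -> set of values (mutated in place);
    neighbors: var -> list of constrained vars;
    consistent(u, v): a symmetric binary constraint between values."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        unsupported = {vx for vx in domains[x]
                       if not any(consistent(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            if not domains[x]:
                return False               # domain wipe-out: unsatisfiable
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True

# Toy: two variables that must differ; y's domain forces x away from 1.
doms = {"x": {1, 2}, "y": {1}}
nbrs = {"x": ["y"], "y": ["x"]}
ok = ac3(doms, nbrs, lambda u, v: u != v)
```

After propagation the inconsistent value has been pruned (x can only be 2) without any search at all, which is the same pruning idea a propagation shell applies before or during ranking and search.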
Zweben, Monte
1993-01-01
The GERRY scheduling system, developed by NASA Ames with assistance from the Lockheed Space Operations Company and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem, which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
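A sketch of constraint-based iterative repair in the min-conflicts style (a generic reading of the technique, not GERRY's actual implementation; the two-task toy and all names are invented):

```python
import random

def conflicts(assignment, constraints):
    """Number of violated constraints under the current assignment."""
    return sum(not pred(*(assignment[v] for v in scope))
               for scope, pred in constraints)

def repair(assignment, domains, constraints, max_steps=2000, rng=None):
    """Iterative repair: while any constraint is violated, pick a variable
    from a violated constraint and move it to the value that leaves the
    fewest remaining violations (min-conflicts heuristic)."""
    rng = rng or random.Random(0)
    for _ in range(max_steps):
        violated = [scope for scope, pred in constraints
                    if not pred(*(assignment[v] for v in scope))]
        if not violated:
            return assignment                  # all constraints repaired
        var = rng.choice(rng.choice(violated))
        assignment[var] = min(
            domains[var],
            key=lambda val: conflicts({**assignment, var: val}, constraints))
    return None                                # gave up

# Toy schedule: two tasks sharing one crew must not use the same slot.
domains = {"t1": [0, 1, 2], "t2": [0, 1, 2]}
start = {"t1": 1, "t2": 1}                     # deliberately conflicting
fixed = repair(start, domains, [(("t1", "t2"), lambda a, b: a != b)])
```

Unlike backtracking from scratch, repair starts from a complete (flawed) schedule and perturbs it locally, which is what makes the approach attractive when a mostly-good schedule already exists.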
Rico, H.; Hauksson, E.; Thomas, E.; Friberg, P.; Given, D.
2002-12-01
The California Integrated Seismic Network (CISN) Display is part of a Web-enabled earthquake notification system alerting users in near real-time of seismicity, and also valuable geophysical information following a large earthquake. It will replace the Caltech/USGS Broadcast of Earthquakes (CUBE) and Rapid Earthquake Data Integration (REDI) Display as the principal means of delivering graphical earthquake information to users at emergency operations centers, and other organizations. Features distinguishing the CISN Display from other GUI tools are a stateful client/server relationship, a scalable message format supporting automated hyperlink creation, and a configurable platform-independent client with a GIS mapping tool, supporting the decision-making activities of critical users. The CISN Display is the front-end of a client/server architecture known as the QuakeWatch system. It is comprised of the CISN Display (and other potential clients), message queues, server, server "feeder" modules, and messaging middleware, schema and generators. It is written in Java, making it platform-independent, and offering the latest in Internet technologies. QuakeWatch's object-oriented design allows components to be easily upgraded through a well-defined set of application programming interfaces (APIs). Central to the CISN Display's role as a gateway to other earthquake products is its comprehensive XML-schema. The message model starts with the CUBE message format, but extends it by provisioning additional attributes for currently available products, and those yet to be considered. The supporting metadata in the XML-message provides the data necessary for the client to create a hyperlink and associate it with a unique event ID. Earthquake products deliverable to the CISN Display are ShakeMap, Ground Displacement, Focal Mechanisms, Rapid Notifications, OES Reports, and Earthquake Commentaries. Leveraging the power of the XML-format, the CISN Display provides prompt access to
Generalized Pauli constraints in small atoms
Schilling, Christian; Altunbulak, Murat; Knecht, Stefan; Lopes, Alexandre; Whitfield, James D.; Christandl, Matthias; Gross, David; Reiher, Markus
2018-05-01
The natural occupation numbers of fermionic systems are subject to nontrivial constraints, which include and extend the original Pauli principle. A recent mathematical breakthrough has clarified their mathematical structure and has opened up the possibility of a systematic analysis. Early investigations have found evidence that these constraints are exactly saturated in several physically relevant systems, e.g., in a certain electronic state of the beryllium atom. It has been suggested that, in such cases, the constraints, rather than the details of the Hamiltonian, dictate the system's qualitative behavior. Here, we revisit this question with state-of-the-art numerical methods for small atoms. We find that the constraints are, in fact, not exactly saturated, but that they lie much closer to the surface defined by the constraints than the geometry of the problem would suggest. While the results seem incompatible with the statement that the generalized Pauli constraints drive the behavior of these systems, they suggest that the qualitatively correct wave-function expansions can in some systems already be obtained on the basis of a limited number of Slater determinants, which is in line with numerical evidence from quantum chemistry.
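For the smallest nontrivial setting, three fermions in six natural orbitals, the generalized Pauli constraints reduce to the classic Borland-Dennis conditions, which makes the saturation question above easy to state concretely. A sketch of a saturation check (the occupation numbers below are invented for illustration; real values would come from a wave-function calculation):

```python
def borland_dennis_residuals(n):
    """Borland-Dennis conditions for three fermions in six natural
    orbitals, occupations sorted n[0] >= ... >= n[5]:
        n1 + n6 = n2 + n5 = n3 + n4 = 1   and   D = n5 + n6 - n4 >= 0.
    Returns the three equality residuals and the inequality slack D;
    D ~ 0 means the generalized Pauli constraint is (nearly) saturated."""
    eq = (n[0] + n[5] - 1.0, n[1] + n[4] - 1.0, n[2] + n[3] - 1.0)
    slack = n[4] + n[5] - n[3]
    return eq, slack

# Illustrative (made-up) occupations lying exactly on the boundary:
occ = [0.99, 0.98, 0.97, 0.03, 0.02, 0.01]
eq, slack = borland_dennis_residuals(occ)
```

The abstract's finding is precisely that, for real small atoms, the computed slack is small but strictly nonzero, i.e. the constraints are approached, not exactly saturated.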
Future Cosmological Constraints From Fast Radio Bursts
Walters, Anthony; Weltman, Amanda; Gaensler, B. M.; Ma, Yin-Zhe; Witzemann, Amadeus
2018-03-01
We consider the possible observation of fast radio bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogs of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo analysis, with which we forecast and compare with existing constraints in the flat ΛCDM model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a big challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+H₀ data, we find the biggest improvement comes in the Ω_b h² constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter, Ω_k, shows some improvement when combined with current constraints. When FRBs are combined with future baryon acoustic oscillation (BAO) data from 21 cm Intensity Mapping, we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, Ω_b h², which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate-redshift universe, independent of high-redshift CMB data.
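The forecast rests on the mean cosmological dispersion-measure-redshift relation (often called the Macquart relation). A standard form, with f_IGM the fraction of baryons in the IGM and χ_e the free-electron fraction per baryon (conventions vary between authors), is:

```latex
% Observed DM decomposes into Galactic, cosmic and (redshifted) host terms:
%   DM_obs = DM_MW + DM_cosmic + DM_host / (1 + z)
\langle \mathrm{DM}_{\mathrm{cosmic}}(z) \rangle
  = \frac{3 c H_0 \Omega_b}{8 \pi G m_p}
    \int_0^z \frac{f_{\mathrm{IGM}}(z')\, \chi_e(z')\, (1+z')}{E(z')}\, dz',
\qquad
E(z) = \sqrt{\Omega_m (1+z)^3 + \Omega_k (1+z)^2 + \Omega_\Lambda}
```

The explicit Ω_b H₀ prefactor is why the paper finds the Ω_b h² combination to be the parameter FRBs constrain best, while the IGM scatter around this mean relation is the noise source the abstract highlights.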
Physical activity participation and constraints among athletic training students.
Stanek, Justin; Rogers, Katherine; Anderson, Jordan
2015-02-01
Researchers have examined the physical activity (PA) habits of certified athletic trainers; however, none have looked specifically at athletic training students. To assess PA participation and constraints to participation among athletic training students. Cross-sectional study. Entry-level athletic training education programs (undergraduate and graduate) across the United States. Participants were 1125 entry-level athletic training students. Self-reported PA participation, including a calculated PA index based on a typical week. Leisure constraints and demographic data were also collected. Only 22.8% (252/1105) of athletic training students were meeting the American College of Sports Medicine recommendations for PA through moderate-intensity cardiorespiratory exercise. Although 52.3% (580/1105) were meeting the recommendations through vigorous-intensity cardiorespiratory exercise, 60.5% (681/1125) were meeting the recommendations based on the combined total of moderate or vigorous cardiorespiratory exercise. In addition, 57.2% (643/1125) of respondents met the recommendations for resistance exercise. Exercise habits of athletic training students appear to be better than the national average and similar to those of practicing athletic trainers. Students reported structural constraints such as lack of time due to work or studies as the most significant barrier to exercise participation. Athletic training students experienced similar constraints to PA participation as practicing athletic trainers, and these constraints appeared to influence their exercise participation during their entry-level education. Athletic training students may benefit from a greater emphasis on work-life balance during their entry-level education to promote better health and fitness habits.
Efficient Searching with Linear Constraints
DEFF Research Database (Denmark)
Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff
2000-01-01
We show how to preprocess a set S of points in ℝ^d into an external memory data structure that efficiently supports linear-constraint queries. Each query is in the form of a linear constraint x_d ≤ a_0 + ∑_{i=1}^{d−1} a_i x_i; the data structure must report all the points of S that satisfy the constraint. This pr...
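Concretely, a linear-constraint query reports the points on one side of a hyperplane. A brute-force reference implementation makes the semantics clear (the ≤ direction and the 2-D example are illustrative; the paper's contribution is an external-memory index that answers such queries without scanning every point):

```python
def linear_constraint_query(points, a):
    """Report all d-dimensional points p with
        p[d-1] <= a[0] + sum(a[i] * p[i-1] for i = 1..d-1),
    i.e. the points lying on or below a query hyperplane (brute force)."""
    d = len(points[0])
    return [p for p in points
            if p[d - 1] <= a[0] + sum(a[i] * p[i - 1] for i in range(1, d))]

# 2-D example: keep points on or below the line y = 1 + 0.5 * x.
pts = [(0.0, 0.0), (2.0, 3.0), (4.0, 2.5)]
below = linear_constraint_query(pts, [1.0, 0.5])
```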
Deepening Contractions and Collateral Constraints
DEFF Research Database (Denmark)
Jensen, Henrik; Ravn, Søren Hove; Santoro, Emiliano
and occasionally non-binding credit constraints. Easier credit access increases the likelihood that constraints become slack in the face of expansionary shocks, while contractionary shocks are further amplified due to tighter constraints. As a result, busts gradually become deeper than booms. Based...
The perceived constraints, motivation, and physical activity levels of ...
African Journals Online (AJOL)
The purpose of this research was threefold: Are Korean youth sufficiently physically active during leisure time to promote health? What constraints to physical activity do youth experience during leisure time? Are there relationships among constraints, motivation, and physical activity level? Of 1 280 youth randomly selected by a ...
[The Nature and Issues of Drug Addiction Treatment under Constraint].
Quirion, Bastien
This article explores the different forms of constraint that are exerted in the field of drug addiction treatment. The objective is to establish benchmarks and to stimulate reflection about the ethical and clinical implications of those constraints. The article presents a critical review of the different forms of constraint that can be exerted in Canada in regard to the treatment of drug addiction. In the first section, a definition of therapeutic intervention is proposed that includes the dimension of power, which justifies the importance of considering the coercive aspects of treatment. The second section, the core of the paper, is devoted to the presentation of the different levels of constraint that can be distinguished in regard to drug addicts who are under treatment. Three levels of constraint are exposed: judicial constraint, institutional constraint and relational constraint. The coercive aspect of treatment can then be recognized as a combination of all three levels of constraint. Judicial constraint refers to any form of constraint in which the court or the judge imposes or recommends treatment. This level of constraint can take different forms, such as therapeutic remands, conditions of a probation order, conditions of a conditional sentence of imprisonment, and coercive treatment such as that provided through drug courts. Institutional constraint refers to any form of constraint exerted within an institutional setting, such as correctional facilities and programs offered in the community. Since correctional facilities are limited by their own specific mission, this may have a major impact on the way the objectives of treatment are defined. These limitations can then be considered a form of constraint, in which drug users do not have much space to express their personal needs. Finally, relational constraint refers to any form of constraint in
An Extensive Evaluation of Portfolio Approaches for Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
Roberto Amadini
2016-06-01
In the context of Constraint Programming, a portfolio approach exploits the complementary strengths of a portfolio of different constraint solvers. The goal is to predict and run the best solver(s) of the portfolio for solving a new, unseen problem. In this work we reproduce, simulate, and evaluate the performance of different portfolio approaches on extensive benchmarks of Constraint Satisfaction Problems. Empirical results clearly show the benefits of portfolio solvers in terms of both solved instances and solving time.
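The selection step at the heart of such a portfolio can be sketched as per-instance algorithm selection (everything here is invented for illustration: the solver names, the two features, and the linear stand-in for whatever learned model a real portfolio uses):

```python
def choose_solver(features, portfolio):
    """Per-instance algorithm selection: predict a cost for each solver
    from the instance's feature vector and run the cheapest.  The linear
    weights stand in for a learned runtime/solvability predictor."""
    def predicted_cost(weights):
        return sum(w * f for w, f in zip(weights, features))
    return min(portfolio, key=lambda name: predicted_cost(portfolio[name]))

# Hypothetical instance features (e.g. scaled #variables, #constraints):
portfolio = {"solverA": [0.5, 2.0], "solverB": [1.5, 0.2]}
best = choose_solver([0.8, 0.3], portfolio)
```

Real portfolio systems differ mainly in how this predictor is built (regression, classification, schedules of several solvers), which is exactly the design space the paper's evaluation compares.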
Searching for genomic constraints
Energy Technology Data Exchange (ETDEWEB)
Liò, P. [Cambridge Univ. (United Kingdom). Genetics Dept.]; Ruffo, S. [Florence Univ. (Italy). Fac. di Ingegneria, Dipt. di Energetica 'S. Stecco']
1998-01-01
The authors have analyzed general properties of very long DNA sequences belonging to simple and complex organisms, using different correlation methods. They have distinguished those base compositional rules that concern the entire genome, which they call 'genomic constraints', from the rules that depend on the 'external natural selection' acting on single genes, i.e. protein-centered constraints. They show that G + C content, purine/pyrimidine distributions and the biological complexity of the organism are the most important factors determining base compositional rules and genome complexity. Three main findings are reported: bacteria with high G + C content have more restrictions on base composition than those with low G + C content; at constant G + C content, more complex organisms, ranging from prokaryotes to higher eukaryotes (e.g. human), display an increase of repeats 10-20 nucleotides long, which are also partly responsible for long-range correlations; word selection of length 3 to 10 is stronger in human and in bacteria, for two distinct reasons. With respect to previous studies, they have also compared the genomic sequence of the archaeon Methanococcus jannaschii with those of bacteria and eukaryotes: it sometimes shows an intermediate statistical behaviour.
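Two of the compositional signals the abstract relies on, G + C content and the frequencies of short "words" (k-mers), are simple to compute; a minimal sketch (the 16-base sequence is a made-up toy, not genomic data):

```python
from collections import Counter

def gc_content(seq):
    """Fraction of G or C bases, a genome-wide compositional signal."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def word_counts(seq, k):
    """Counts of overlapping k-letter 'words' (k = 3..10 in the abstract)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

dna = "ATGCGCGTATGCGCGA"
gc = gc_content(dna)                    # 0.625
top = word_counts(dna, 3).most_common(2)
```

Over-represented words relative to what base composition alone predicts are the "word selection" signal the authors measure.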
Management practices and production constraints of central ...
African Journals Online (AJOL)
management practices of central highland goats and their major constraints. ... tance to improve the goat production potential and livelihood of the farmers in the study ... ing the productivity and income from keeping goats, there is a study gap in ..... and day time, possibly increasing the chance of getting contagious diseases.
Data Driven Constraints for the SVM
DEFF Research Database (Denmark)
Darkner, Sune; Clemmensen, Line Katrine Harder
2012-01-01
We propose a generalized data driven constraint for support vector machines exemplified by classification of paired observations in general and specifically on the human ear canal. This is particularly interesting in dynamic cases such as tissue movement or pathologies developing over time. Assum...
On the canonical treatment of Lagrangian constraints
International Nuclear Information System (INIS)
Barbashov, B.M.
2001-01-01
The canonical treatment of dynamic systems with manifest Lagrangian constraints proposed by Berezin is applied to concrete examples: a special Lagrangian linear in velocities, relativistic particles in proper time gauge, a relativistic string in orthonormal gauge, and the Maxwell field in the Lorentz gauge
Constraints and Creativity in NPD - Testing the Impact of 'Late Constraints'
DEFF Research Database (Denmark)
Onarheim, Balder; Valgeirsdóttir, Dagný
The aim of the presented work is to investigate how the timing of project constraints can influence the creativity of the output in New Product Development (NPD) projects. When seeking to produce a creative output, is it beneficial to know all constraints when initiating a project? An experiment was conducted, involving 12 teams of industrial designers from three different countries, each team working on two 30-minute design tasks. In one condition all constraints were given at the start, and in the other, one new radical constraint was added after 12 minutes. The output from all 24 tasks was assessed for creativity using the Consensual Assessment Technique (CAT), and a comparative within-subjects analysis found no significant difference between the two conditions. Controlling for task and assessor, a small but non-significant effect was found, in favor of the 'late constraint' condition. Thus...
Diffusion Processes Satisfying a Conservation Law Constraint
Directory of Open Access Journals (Sweden)
J. Bakosi
2014-01-01
Full Text Available We investigate coupled stochastic differential equations governing N nonnegative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires a set of fluctuating variables to be nonnegative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model, to be realizable, must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the nonnegativity and the unit-sum conservation law constraints are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
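As a concrete illustration of the kind of process the abstract discusses, the sketch below simulates a Wright-Fisher-type diffusion for the simplest case N = 2, where the pair (x, 1 - x) is nonnegative and sums to one. The drift form, the noise magnitude `kappa` and all parameter values are illustrative assumptions, not taken from the paper:

```python
import math
import random

def simulate_wright_fisher(x0=0.3, b=2.0, s=0.6, dt=1e-3, steps=2000, seed=1):
    """Euler-Maruyama sketch of a Wright-Fisher-type diffusion for N = 2
    unit-sum variables (x, 1 - x):

        dx = (b/2) * (s - x) dt + sqrt(kappa * x * (1 - x)) dW

    The diffusion coefficient x(1 - x) is nonlinear and vanishes at the
    boundaries x = 0 and x = 1; this is what keeps the pair nonnegative
    and summing to one in the continuous process. The discretized scheme
    can still overshoot, so we clamp to [0, 1] as a numerical guard."""
    kappa = 1.0  # illustrative noise magnitude (assumption)
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        x += 0.5 * b * (s - x) * dt + math.sqrt(kappa * max(x * (1 - x), 0.0)) * dW
        x = min(max(x, 0.0), 1.0)  # guard against discretization overshoot
        path.append(x)
    return path

path = simulate_wright_fisher()
```

The x(1 - x) factor is the coupled, nonlinear diffusion term the abstract refers to: it vanishes at the boundaries, making the constraint realizable in the continuous process; the clamp only guards against Euler-Maruyama overshoot.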
Faddeev-Jackiw quantization and constraints
International Nuclear Information System (INIS)
Barcelos-Neto, J.; Wotzasek, C.
1992-01-01
In a recent Letter, Faddeev and Jackiw have shown that the reduction of constrained systems into their canonical, first-order form can bring some new insight into research in this field. For symplectic manifolds the geometrical structure, called the Dirac or generalized bracket, is obtained directly from the inverse of the nonsingular symplectic two-form matrix. In the case of nonsymplectic manifolds, this two-form is degenerate and cannot be inverted to provide the generalized brackets. This singular behavior of the symplectic matrix is indicative of the presence of constraints that have to be carefully considered to yield consistent results. One has two possible routes to treat this problem: Dirac has taught us how to implement the constraints into the potential part (Hamiltonian) of the canonical Lagrangian, leading to the well-known Dirac brackets, which are consistent with the constraints and can be mapped into quantum commutators (modulo ordering terms). The second route, suggested by Faddeev and Jackiw, and followed in this paper, is to implement the constraints directly into the canonical part of the first-order Lagrangian, using the fact that the consistency condition for the stability of the constrained manifold is linear in the time derivative. This algorithm may lead to an invertible symplectic two-form matrix from which the Dirac brackets are readily obtained. This algorithm is used in this paper to investigate some aspects of the quantization of constrained systems with first- and second-class constraints in the symplectic approach
Latin hypercube sampling with inequality constraints
International Nuclear Information System (INIS)
Iooss, B.; Petelet, M.; Asserin, O.; Loredo, A.
2010-01-01
In some studies requiring predictive and CPU-time-consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has shown its robustness. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists in performing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature. (authors)
Effective constraint algebras with structure functions
International Nuclear Information System (INIS)
Bojowald, Martin; Brahma, Suddhasattwa
2016-01-01
This article presents the result that fluctuations and higher moments of a state, by themselves, do not imply quantum corrections in structure functions of constrained systems. Moment corrections are isolated from other types of quantum effects, such as factor-ordering choices and regularization, by introducing a new condition with two parts: (i) having a direct (or faithful) quantization of the classical structure functions, (ii) free of factor-ordering ambiguities. In particular, it is assumed that the classical constraints can be quantized in an anomaly free way, so that properties of the resulting constraint algebras can be derived. If the two-part condition is not satisfied, effective constraints can still be evaluated, but quantum effects may be stronger. Consequences for canonical quantum gravity, whose structure functions encode space–time structure, are discussed. In particular, deformed algebras found in models of loop quantum gravity provide reliable information even in the Planck regime. (paper)
Simulating non-holonomic constraints within the LCP-based simulation framework
DEFF Research Database (Denmark)
Ellekilde, Lars-Peter; Petersen, Henrik Gordon
2006-01-01
In this paper, we will extend the linear complementarity problem-based rigid-body simulation framework with non-holonomic constraints. We consider three different types of such constraints, namely equality, inequality and contact constraints. We show how non-holonomic equality and inequality constraints can be incorporated directly, and derive a formalism for how the non-holonomic contact constraints can be modelled as a combination of non-holonomic equality constraints and ordinary contact constraints. For each of these three we are able to guarantee solvability when using Lemke's algorithm. A number of examples are included to demonstrate the non-holonomic constraints. Publication date: March...
Confluence Modulo Equivalence in Constraint Handling Rules
DEFF Research Database (Denmark)
Christiansen, Henning; Kirkeby, Maja Hanne
2015-01-01
Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the advantages of confluence, which include various optimization techniques and simplified...
Constraint-induced movement therapy after stroke
Kwakkel, G.; Veerbeek, J.M.; van Wegen, E.E.H.; Wolf, S.L.
2015-01-01
Constraint-induced movement therapy (CIMT) was developed to overcome upper limb impairments after stroke and is the most investigated intervention for the rehabilitation of patients. Original CIMT includes constraining of the non-paretic arm and task-oriented training. Modified versions also apply
Affordability Constraints in Major Defense Acquisitions
2016-11-01
memo, does not provide a detailed recipe for those who must produce quantitative affordability constraints. Enclosure 8 of the January 7, 2015 version...3.0's full title includes "Achieving Dominant Capabilities" [chart residue: cost axis $0-$900, production lots for 2015, 2028 and 2038]
Confluence Modulo Equivalence in Constraint Handling Rules
DEFF Research Database (Denmark)
Christiansen, Henning; Kirkeby, Maja Hanne
2014-01-01
Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the advantages of confluence, which include various optimization techniques and simplified...
Supergravity constraints on monojets
International Nuclear Information System (INIS)
Nandi, S.
1986-01-01
In the standard model, supplemented by N = 1 minimal supergravity, all the supersymmetric particle masses can be expressed in terms of a few unknown parameters. The resulting mass relations, and the laboratory and cosmological bounds on these superpartner masses, are used to put constraints on the supersymmetric origin of the CERN monojets. The latest MAC data at PEP exclude scalar quarks with masses up to 45 GeV as the origin of these monojets. The cosmological bounds, for a stable photino, exclude the mass range necessary for the light gluino-heavy squark production interpretation. These difficulties can be avoided by going beyond the minimal supergravity theory. Irrespective of the monojets, the importance of the stable photino as a source of the cosmological dark matter is emphasized
Lorentz violation. Motivation and new constraints
International Nuclear Information System (INIS)
Liberati, S.; Maccione, L.
2009-09-01
We review the main theoretical motivations and observational constraints on Planck scale suppressed violations of Lorentz invariance. After introducing the problems related to the phenomenological study of quantum gravitational effects, we discuss the main theoretical frameworks within which possible departures from Lorentz invariance can be described. In particular, we focus on the framework of Effective Field Theory, describing several possible ways of including Lorentz violation therein and discussing their theoretical viability. We review the main low energy effects that are expected in this framework. We discuss the current observational constraints on such a framework, focusing on those achievable through high-energy astrophysics observations. In this context we present a summary of the most recent and strongest constraints on QED with Lorentz violating non-renormalizable operators. Finally, we discuss the present status of the field and its future perspectives. (orig.)
Constraints and spandrels of interareal connectomes
Rubinov, Mikail
2016-12-01
Interareal connectomes are whole-brain wiring diagrams of white-matter pathways. Recent studies have identified modules, hubs, module hierarchies and rich clubs as structural hallmarks of these wiring diagrams. An influential current theory postulates that connectome modules are adequately explained by evolutionary pressures for wiring economy, but that the other hallmarks are not explained by such pressures and are therefore less trivial. Here, we use constraint network models to test these postulates in current gold-standard vertebrate and invertebrate interareal-connectome reconstructions. We show that empirical wiring-cost constraints inadequately explain connectome module organization, and that simultaneous module and hub constraints induce the structural byproducts of hierarchies and rich clubs. These byproducts, known as spandrels in evolutionary biology, include the structural substrate of the default-mode network. Our results imply that currently standard connectome characterizations are based on circular analyses or double dipping, and we emphasize an integrative approach to future connectome analyses for avoiding such pitfalls.
Comparison of a constraint directed search to a genetic algorithm in a scheduling application
International Nuclear Information System (INIS)
Abbott, L.
1993-01-01
Scheduling plutonium containers for blending is a time-intensive operation. Several constraints must be taken into account, including the number of containers in a dissolver run, the size of each dissolver run, and the size and target purity of the blended mixture formed from these runs. Two types of algorithms have been used to solve this problem: a constraint directed search and a genetic algorithm. This paper discusses the implementation of these two different approaches to the problem and the strengths and weaknesses of each algorithm
Block Pickard Models for Two-Dimensional Constraints
DEFF Research Database (Denmark)
Forchhammer, Søren; Justesen, Jørn
2009-01-01
In Pickard random fields (PRF), the probabilities of finite configurations and the entropy of the field can be calculated explicitly, but only very simple structures can be incorporated into such a field. Given two Markov chains describing a boundary, an algorithm is presented which determines...... for the domino tiling constraint represented by a quaternary alphabet. PRF models are also presented for higher order constraints, including the no isolated bits (n.i.b.) constraint, and a minimum distance 3 constraint by defining super symbols on blocks of binary symbols....
THE MEASURES OF CONSTRAINT IN THE INTERNATIONAL LAW
Directory of Open Access Journals (Sweden)
Dumitriţa FLOREA
2013-12-01
Full Text Available To be an addressee of international state responsibility, an entity guilty of triggering a conflict, or of committing an act that injures the international public order, must have the quality of a subject of public international law, or be a participant in such a legal relationship, bearing in mind that the relations established between the entities acting in international society are considered international relationships. The relationships established between subjects of international law fall under public international law. Constraint is an element of international law which does not constitute a violation of the law, but a means of achieving it. The basic element of constraint is legality, including from the point of view of its foundation, method and scope. Constraint is determined, first of all, by the purpose and basic principles of international law. Countermeasures are limited to the temporary non-performance of obligations by the injured state towards the guilty state, and are considered legal until their purpose has been achieved. They must be applied in such a way as to permit the re-establishment of the application of the infringed obligations. This rule accords with the 1969 Vienna Convention on the law of treaties, according to which, during the period of suspension, the parties must refrain from any acts tending to obstruct the resumption of the application of the treaty.
Minimal Flavor Constraints for Technicolor
DEFF Research Database (Denmark)
Sakuma, Hidenori; Sannino, Francesco
2010-01-01
We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables, valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mass...
Social Constraints on Animate Vision
National Research Council Canada - National Science Library
Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian
2000-01-01
.... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...
The effects of perceived leisure constraints among Korean university students
Sae-Sook Oh; Sei-Yi Oh; Linda L. Caldwell
2002-01-01
This study is based on Crawford, Jackson, and Godbey's model of leisure constraints (1991), and examines the relationships between the influences of perceived constraints, frequency of participation, and health status in the context of leisure-time outdoor activities. The study was based on a sample of 234 Korean university students. This study provides further...
On squaring the primary constraints in a generalized Hamiltonian dynamics
International Nuclear Information System (INIS)
Nesterenko, V.V.
1993-01-01
Consideration of the model of the relativistic particle with curvature and torsion in three-dimensional space-time shows that squaring the primary constraints leads to a wrong result. The complete set of Hamiltonian constraints arising here corresponds to another model, with an action similar to but not identical with the initial action. 16 refs
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.
Modifier constraint in alkali borophosphate glasses using topological constraint theory
Energy Technology Data Exchange (ETDEWEB)
Li, Xiang [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zeng, Huidan, E-mail: hdzeng@ecust.edu.cn [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Jiang, Qi [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zhao, Donghui [Unifrax Corporation, Niagara Falls, NY 14305 (United States); Chen, Guorong [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Wang, Zhaofeng; Sun, Luyi [Department of Chemical & Biomolecular Engineering and Polymer Program, Institute of Materials Science, University of Connecticut, Storrs, CT 06269 (United States); Chen, Jianding [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)
2016-12-01
In recent years, composition-dependent properties of glasses have been successfully predicted using the topological constraint theory. The constraints of the glass network derive from two main parts: network formers and network modifiers. The constraints of the network formers can be calculated on the basis of the topological structure of the glass. However, the latter cannot be accurately calculated in this way, because of the existence of ionic bonds. In this paper, the constraints of the modifier ions in phosphate glasses were thoroughly investigated using the topological constraint theory. The results show that the constraints of the modifier ions gradually increase with the addition of alkali oxides. Furthermore, an improved topological constraint theory for borophosphate glasses is proposed by taking the composition-dependent constraints of the network modifiers into consideration. The proposed theory is subsequently evaluated by analyzing the composition dependence of the glass transition temperature in alkali borophosphate glasses. This method should be extendable to other similar glass systems containing alkali ions.
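For the purely topological part (the network formers), the classical Phillips-Thorpe constraint count can be sketched as follows; the Ge-Se example composition is illustrative and unrelated to the borophosphate systems of the paper:

```python
def constraints_per_atom(r_mean):
    """Phillips-Thorpe count for a covalent network with mean coordination r:
    r/2 bond-stretching plus (2r - 3) bond-bending constraints per atom.
    (Ionic modifier constraints, the subject of the paper, are not captured
    by this purely topological count.)"""
    return r_mean / 2.0 + (2.0 * r_mean - 3.0)

def mean_coordination(fractions, coordinations):
    """Composition-weighted mean coordination number."""
    return sum(f * z for f, z in zip(fractions, coordinations))

# Ge_x Se_(1-x): Ge is 4-coordinated, Se is 2-coordinated (illustrative system)
r = mean_coordination([0.2, 0.8], [4, 2])   # x = 0.2 gives <r> = 2.4
nc = constraints_per_atom(r)                 # 2.4/2 + (4.8 - 3) = 3.0
```

At n_c = 3 (the dimension of space) the network is isostatic; the paper's contribution is that modifier ions add composition-dependent constraints on top of this count.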
Constraints to sustainable development of rubber industry in Nigeria ...
African Journals Online (AJOL)
The findings revealed that lack of inputs, lack of credit facilities, high production cost are some of the constraints to sustainable development. The study also revealed among others the causes of the constraints, which include high lending interest rate, diversion of loan by farmers, low exchange rate of the Naira and old age ...
A PID autotuner utilizing GPC and constraint optimization
DEFF Research Database (Denmark)
Henningsen, Arne; Christensen, Anders; Ravn, Ole
1990-01-01
A solution to the PID autotuning problem is presented which involves constraining the parameters of a second-order discrete-time controller. The integrator is forced into the regulator by using a CARIMA model. The discrete-time regulator parameters are calculated by optimizing a generalized predictive control criterion, and the PID structure is ensured by constraining the parameters to a feasible set defined by the discrete-time Euler approximation of the ideal continuous-time PID controller. The algorithm is extended by incorporating constraints on the amplitude and slew-rate of the control signal. Simulation studies for a system of coupled tanks have indicated that the method performs well, and that signal limitations can be included in a straightforward manner...
Optimal Stopping with Information Constraint
International Nuclear Information System (INIS)
Lempa, Jukka
2012-01-01
We study the optimal stopping problem proposed by Dupuis and Wang (Adv. Appl. Probab. 34:141–157, 2002). In this maximization problem of the expected present value of the exercise payoff, the underlying dynamics follow a linear diffusion. The decision maker is not allowed to stop at any time she chooses but rather on the jump times of an independent Poisson process. Dupuis and Wang (Adv. Appl. Probab. 34:141–157, 2002), solve this problem in the case where the underlying is a geometric Brownian motion and the payoff function is of American call option type. In the current study, we propose a mild set of conditions (covering the setup of Dupuis and Wang in Adv. Appl. Probab. 34:141–157, 2002) on both the underlying and the payoff and build and use a Markovian apparatus based on the Bellman principle of optimality to solve the problem under these conditions. We also discuss the interpretation of this model as optimal timing of an irreversible investment decision under an exogenous information constraint.
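A Monte Carlo sketch of the Poisson-time stopping constraint is given below. It evaluates a simple (and generally suboptimal) threshold rule, not the optimal policy of the paper; all parameter values, the threshold rule and the finite horizon are assumptions:

```python
import math
import random

def sample_value(threshold, x0=1.0, K=1.0, r=0.05, mu=0.03, sigma=0.2,
                 lam=2.0, horizon=50.0, rng=None):
    """Simulate one path of the Dupuis-Wang setting: the underlying is a
    geometric Brownian motion, but stopping is only allowed at the jump
    times of an independent Poisson process with rate lam. We apply a
    simple threshold rule: exercise the call payoff (x - K)+ at the first
    allowed time with x >= threshold, discounting at rate r."""
    rng = rng or random
    t, x = 0.0, x0
    while t < horizon:
        dt = rng.expovariate(lam)   # wait for the next Poisson arrival
        t += dt
        # exact GBM transition over the inter-arrival interval
        x *= math.exp((mu - 0.5 * sigma**2) * dt
                      + sigma * rng.gauss(0.0, math.sqrt(dt)))
        if x >= threshold:
            return math.exp(-r * t) * max(x - K, 0.0)
    return 0.0  # never allowed to stop profitably before the horizon

rng = random.Random(7)
est = sum(sample_value(1.5, rng=rng) for _ in range(2000)) / 2000
```

The decision maker only sees exercise opportunities at the exponentially spaced arrival times, which is exactly the information constraint of the model; between arrivals the GBM transition is sampled exactly, so no time-stepping error enters.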
Shaping tissues by balancing active forces and geometric constraints
Foolen, Jasper; Yamashita, Tadahiro; Kollmannsberger, Philip
2016-02-01
The self-organization of cells into complex tissues during growth and regeneration is a combination of physical-mechanical events and biochemical signal processing. Cells actively generate forces at all stages in this process, and according to the laws of mechanics, these forces result in stress fields defined by the geometric boundary conditions of the cell and tissue. The unique ability of cells to translate such force patterns into biochemical information and vice versa sets biological tissues apart from any other material. In this topical review, we summarize the current knowledge and open questions of how forces and geometry act together on scales from the single cell to tissues and organisms, and how their interaction determines biological shape and structure. Starting with a planar surface as the simplest type of geometric constraint, we review literature on how forces during cell spreading and adhesion together with geometric constraints impact cell shape, stress patterns, and the resulting biological response. We then move on to include cell-cell interactions and the role of forces in monolayers and in collective cell migration, and introduce curvature at the transition from flat cell sheets to three-dimensional (3D) tissues. Fibrous 3D environments, as cells experience them in the body, introduce new mechanical boundary conditions and change cell behaviour compared to flat surfaces. Starting from early work on force transmission and collagen remodelling, we discuss recent discoveries on the interaction with geometric constraints and the resulting structure formation and network organization in 3D. Recent literature on two physiological scenarios—embryonic development and bone—is reviewed to demonstrate the role of the force-geometry balance in living organisms. Furthermore, the role of mechanics in pathological scenarios such as cancer is discussed. We conclude by highlighting common physical principles guiding cell mechanics, tissue patterning and
Coverage-based constraints for IMRT optimization
Mescher, H.; Ulrich, S.; Bangert, M.
2017-09-01
Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\hat{d}, \hat{v}) of covering a specific target volume fraction \hat{v} with a certain dose \hat{d}. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
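The patient-specific coverage probability q(d_hat, v_hat) that these constraints act on can be estimated by counting explicit error scenarios. A minimal sketch under toy assumptions (the scenario dose lists and thresholds are invented for illustration; the paper's implementation sits inside a full treatment planning system):

```python
def coverage_probability(scenario_doses, d_hat, v_hat):
    """Estimate q(d_hat, v_hat): the fraction of error scenarios in
    which at least a fraction v_hat of target voxels receives a dose
    of at least d_hat."""
    covered = 0
    for doses in scenario_doses:
        frac = sum(1 for d in doses if d >= d_hat) / len(doses)
        if frac >= v_hat:
            covered += 1
    return covered / len(scenario_doses)

# A coverage-based *constraint* then requires, e.g.,
#   coverage_probability(scenarios, 57.0, 0.95) >= 0.9
# instead of trading coverage against competing objectives via weights.
```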
Observational constraints on interstellar chemistry
International Nuclear Information System (INIS)
Winnewisser, G.
1984-01-01
The author points out presently existing observational constraints in the detection of interstellar molecular species and the limits they may cast on our knowledge of interstellar chemistry. The constraints which arise from the molecular side are summarised and some technical difficulties encountered in detecting new species are discussed. Some implications for our understanding of molecular formation processes are considered. (Auth.)
Market segmentation using perceived constraints
Jinhee Jun; Gerard Kyle; Andrew Mowen
2008-01-01
We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...
Fixed Costs and Hours Constraints
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
An Introduction to 'Creativity Constraints'
DEFF Research Database (Denmark)
Onarheim, Balder; Biskjær, Michael Mose
2013-01-01
Constraints play a vital role as both restrainers and enablers in innovation processes by governing what the creative agent/s can and cannot do, and what the output can and cannot be. Notions of constraints are common in creativity research, but current contributions are highly dispersed due to n...
Constraint Programming for Context Comprehension
DEFF Research Database (Denmark)
Christiansen, Henning
2014-01-01
A close similarity is demonstrated between context comprehension, such as discourse analysis, and constraint programming. The constraint store takes the role of a growing knowledge base learned throughout the discourse, and a suitable constraint solver does the job of incorporating new pieces...
Directory of Open Access Journals (Sweden)
K. Unnikrishnan
Full Text Available The main characteristics of night-time enhancements in TEC during magnetic storms are compared with those during quiet nights for different seasons and solar activity conditions at Palehua, a low latitude station, during the period 1980–1989. We find that the mean amplitude has both a seasonal and solar activity dependence: in winter, the values are higher for weak storms as compared to those during quiet nights and increase with an increase in solar activity. In summer, the mean amplitude values during weak storms and quiet nights are almost equal. But during equinox, the mean amplitude values for quiet nights are greater than those during weak storms. The mean half-amplitude duration is higher during weak storms as compared to that during quiet nights in summer. However, during winter and equinox, the durations are almost equal for both quiet and weak storm nights. For the mean half-amplitude duration, the quiet night values for all the seasons and the equinoctial weak storm values increase with an increase in solar activity. The occurrence frequency (in percent) of TEC enhancement during weak storms is greater than during quiet nights for all seasons. The mean amplitude, the mean half-amplitude duration and the occurrence frequency (in percent) of TEC enhancement are higher during major storms as compared to those during quiet nights. The above parameters have their highest values during pre-midnight hours. From the data analysed, this behaviour is true in the case of major storms also.
Key words. Ionosphere (ionospheric disturbances; plasma convection) - Magnetospheric physics (storms and substorms)
Directory of Open Access Journals (Sweden)
Arthur Tórgo Gómez
1998-04-01
Full Text Available In this paper we propose a model for scheduling parts and loading tools in a flexible manufacturing system composed of one machine. We consider the due dates of the parts to be processed, the periods of the production shifts, and the physical capacity constraint of the magazine which stores the tools required for processing the parts. The model addresses the problems of part selection, tool loading and constrained scheduling. An initial part sequence and tool loading is obtained using a clustering algorithm that takes the magazine capacity into account. This initial solution is then improved by tabu search, generating part sequences and tool loadings that reflect optimization policies determined by weights in an objective function. Several tests were performed to validate the model; we present results obtained for minimizing the number of tool changes, the number of stops for tool switching, tardiness, and the idle time of the production shifts.
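The tabu search step can be illustrated with a minimal swap-neighbourhood search over part sequences. As a stand-in for the paper's weighted multi-criteria objective, this sketch minimizes only total tardiness against due dates (an assumption for illustration, not the authors' exact formulation):

```python
import itertools
import random

def total_tardiness(seq, proc, due):
    """Total tardiness of a part sequence on a single machine."""
    t, tard = 0, 0
    for j in seq:
        t += proc[j]
        tard += max(0, t - due[j])
    return tard

def tabu_search(proc, due, iters=200, tenure=5, seed=0):
    """Swap-neighbourhood tabu search: repeatedly move to the best
    non-tabu neighbour, with aspiration for new global bests."""
    rng = random.Random(seed)
    n = len(proc)
    cur = list(range(n))
    rng.shuffle(cur)
    best, best_cost = cur[:], total_tardiness(cur, proc, due)
    tabu = {}  # move -> iteration until which it is forbidden
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            nb = cur[:]
            nb[i], nb[j] = nb[j], nb[i]
            candidates.append((total_tardiness(nb, proc, due), (i, j), nb))
        candidates.sort(key=lambda c: c[0])
        for cost, move, nb in candidates:
            if tabu.get(move, -1) < it or cost < best_cost:  # aspiration
                cur, tabu[move] = nb, it + tenure
                if cost < best_cost:
                    best, best_cost = nb[:], cost
                break
    return best, best_cost
```

The weighted policies of the paper would replace `total_tardiness` with a weighted sum over tool changes, stops, tardiness and idle time.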
Directory of Open Access Journals (Sweden)
J. Fabian Lopez
2010-01-01
Full Text Available We consider a Pickup and Delivery Vehicle Routing Problem (PDP) commonly encountered in real-world logistics operations. The problem involves a set of practical complications that have received little attention in the vehicle routing literature. In this problem, there are multiple vehicle types available to cover a set of pickup and delivery requests, each of which has pickup time windows and delivery time windows. Transportation orders and vehicle types must satisfy a set of compatibility constraints that specify which orders cannot be covered by which vehicle types. In addition we include dock service capacity constraints, as required in common real-world operations. The problem must be solved for large-scale instances (orders ≥ 500, vehicles ≥ 150). As a generalization of the traveling salesman problem, this problem is clearly NP-hard, and exact algorithms are too slow for large-scale instances. The PDP-TWDS is both a packing problem (assign orders to vehicles) and a routing problem (find the best route for each vehicle). We propose to solve the problem in three stages. The first stage constructs initial solutions at an aggregate level, relaxing some constraints of the original problem. The other two stages impose the time window and dock service constraints. Our results are favorable, finding good-quality solutions in relatively short computational times.
Vocabulary Constraint on Texts
Directory of Open Access Journals (Sweden)
C. Sutarsyah
2008-01-01
Full Text Available This case study was carried out in the English Education Department of the State University of Malang. The aim of the study was to identify and describe the vocabulary in the reading texts and to determine whether the texts are useful for reading skill development. A descriptive qualitative design was applied to obtain the data. For this purpose, some available computer programs were used to produce a description of the vocabulary in the texts. It was found that the 20 texts, containing 7,945 words, are dominated by low-frequency words, which account for 16.97% of the words in the texts. The high-frequency words occurring in the texts were dominated by function words. In the case of word levels, it was found that the texts have a very limited number of words from the GSL (General Service List of English Words; West, 1953): the proportion of the first 1,000 words of the GSL only accounts for 44.6%. The data also show that the texts contain too large a proportion of words which are not in the three levels (the first 2,000 words of the GSL and the UWL); these words account for 26.44% of the running words in the texts. It is believed that these constraints are due to the selection of the texts, which are made up of a series of short, unrelated texts. This kind of text is subject to the accumulation of low-frequency words, especially content words, and a limited number of words from the GSL. It could also hinder the development of students' reading skills and vocabulary enrichment.
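The coverage statistics reported here reduce to token counting against a word list. A small sketch, where the word list is illustrative and not the actual GSL:

```python
import re
from collections import Counter

def lexical_coverage(text, wordlist):
    """Fraction of running words (tokens) in `text` drawn from
    `wordlist` -- the kind of coverage statistic reported for the
    first 1,000 GSL words. The word list here is illustrative."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for t in tokens if t in wordlist)
    return hits / len(tokens) if tokens else 0.0

def frequency_profile(text, n=3):
    """The n most frequent tokens; in natural text these are
    typically function words, as the study observes."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(n)
```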
Audit in the therapy professions: some constraints on progress.
Robinson, S
1996-12-01
To ascertain views about constraints on the progress of audit experienced by members of four of the therapy professions: physiotherapy, occupational therapy, speech and language therapy, and clinical psychology. Interviews in six health service sites with a history of audit in these professions. 62 interviews were held with members of the four professions and 60 with other personnel with relevant involvement. Five main themes emerged as the constraints on progress: resources; expertise; relations between groups; organisational structures; and overall planning of audit activities. Concerns about resources focused on lack of time, insufficient finance, and lack of access to appropriate systems of information technology. Insufficient expertise was identified as a major constraint on progress. Guidance on designing instruments for collection of data was the main concern, but help with writing proposals, specifying and keeping to objectives, analysing data, and writing reports was also required. Although sources of guidance were sometimes available, more commonly this was not the case. Several aspects of relations between groups were reported as constraining the progress of audit. These included support and commitment, choice of audit topics, conflicts between staff, willingness to participate and change practice, and concerns about confidentiality. Organisational structures which constrained audit included weak links between heads of professional services and managers of provider units, the inhibiting effect of change, the weakening of professional coherence when therapists were split across directorates, and the ethos of regarding audit findings as business secrets. Lack of an overall plan for audit meant that while some resources were available, others equally necessary for successful completion of projects were not. Members of four of the therapy professions identified a wide range of constraints on the progress of audit. If their commitment to audit is to be
Wave functions constructed from an invariant sum over histories satisfy constraints
International Nuclear Information System (INIS)
Halliwell, J.J.; Hartle, J.B.
1991-01-01
Invariance of classical equations of motion under a group parametrized by functions of time implies constraints between canonical coordinates and momenta. In the Dirac formulation of quantum mechanics, invariance is normally imposed by demanding that physical wave functions are annihilated by the operator versions of these constraints. In the sum-over-histories quantum mechanics, however, wave functions are specified, directly, by appropriate functional integrals. It therefore becomes an interesting question whether the wave functions so specified obey the operator constraints of the Dirac theory. In this paper, we show for a wide class of theories, including gauge theories, general relativity, and first-quantized string theories, that wave functions constructed from a sum over histories are, in fact, annihilated by the constraints provided that the sum over histories is constructed in a manner which respects the invariance generated by the constraints. By this we mean a sum over histories defined with an invariant action, invariant measure, and an invariant class of paths summed over
Constraint processing in our extensible language for cooperative imaging system
Aoki, Minoru; Murao, Yo; Enomoto, Hajime
1996-02-01
The extensible WELL (Window-based elaboration language) has been developed using the concept of a common platform, where client and server can communicate with each other with support from a communication manager. This extensible language is based on an object-oriented design, introducing constraint processing. Any kind of service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied with cooperative processes using constraints. Constraints are treated similarly to data, because the system should have flexibility in the execution of many kinds of services. A similar control process is defined by using intensional logic. There are two kinds of constraints, temporal and modal constraints. By rendering the constraints, the predicate format, as the relation between attribute values, can be a warrant for entities' validity as data. As an imaging example, a processing procedure of interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system, and how the constraints work for generating moving pictures.
Sustainability constraints on UK bioenergy development
International Nuclear Information System (INIS)
Thornley, Patricia; Upham, Paul; Tomei, Julia
2009-01-01
Use of bioenergy as a renewable resource is increasing in many parts of the world and can generate significant environmental, economic and social benefits if managed with due regard to sustainability constraints. This work reviews the environmental, social and economic constraints on key feedstocks for UK heat, power and transport fuel. Key sustainability constraints include greenhouse gas savings achieved for different fuels, land availability, air quality impacts and facility siting. Applying those constraints, we estimate that existing technologies would facilitate a sustainability constrained level of medium-term bioenergy/biofuel supply to the UK of 4.9% of total energy demand, broken down into 4.3% of heat demands, 4.3% of electricity, and 5.8% of transport fuel. This suggests that attempts to increase the supply above these levels could have counterproductive sustainability impacts in the absence of compensating technology developments or identification of additional resources. The barriers that currently prevent this level of supply being achieved have been analysed and classified. This suggests that the biggest policy impacts would be in stimulating the market for heat demand in rural areas, supporting feedstock prices in a manner that incentivised efficient use/maximum greenhouse gas savings and targeting investment capital that improves yield and reduces land-take.
Integrable Hamiltonian systems and interactions through quadratic constraints
International Nuclear Information System (INIS)
Pohlmeyer, K.
1975-08-01
O(n)-invariant classical relativistic field theories in one time and one space dimension with interactions that are entirely due to quadratic constraints are shown to be closely related to integrable Hamiltonian systems. (orig.) [de
Cosmological constraints on interacting light particles
Energy Technology Data Exchange (ETDEWEB)
Brust, Christopher [Perimeter Institute for Theoretical Physics, 31 Caroline Street N, Waterloo, ON, N2L 2Y5 Canada (Canada); Cui, Yanou [Department of Physics and Astronomy, University of California, 900 University Ave, Riverside, CA, 92521 (United States); Sigurdson, Kris, E-mail: cbrust@perimeterinstitute.ca, E-mail: yanou.cui@ucr.edu, E-mail: krs@phas.ubc.ca [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1 Canada (Canada)
2017-08-01
Cosmological observations are becoming increasingly sensitive to the effects of light particles in the form of dark radiation (DR) at the time of recombination. The conventional observable of effective neutrino number, N_eff, is insufficient for probing generic, interacting models of DR. In this work, we perform likelihood analyses which allow both free-streaming effective neutrinos (parametrized by N_eff) and interacting effective neutrinos (parametrized by N_fld). We motivate an alternative parametrization of DR in terms of N_tot (total effective number of neutrinos) and f_fs (the fraction of effective neutrinos which are free-streaming), which is less degenerate than using N_eff and N_fld. Using the Planck 2015 likelihoods in conjunction with measurements of baryon acoustic oscillations (BAO), we find constraints on the total amount of beyond-the-Standard-Model effective neutrinos (both free-streaming and interacting) of ΔN_tot < 0.39 at 2σ. In addition, we consider the possibility that this scenario alleviates the tensions between early-time and late-time cosmological observations, in particular the measurements of σ_8 (the amplitude of matter power fluctuations at 8 h⁻¹ Mpc), finding a mild preference for interactions among light species. We further forecast the sensitivities of a variety of future experiments, including Advanced ACTPol (a representative CMB Stage-III experiment), CMB Stage-IV, and the Euclid satellite. This study is relevant for probing non-standard neutrino physics as well as a wide variety of new particle physics models beyond the Standard Model that involve dark radiation.
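The alternative parametrization amounts to a change of variables. The additive relations below are the natural reading of the abstract (an assumption for illustration, not necessarily the paper's exact definitions):

```python
def to_total_fs(n_eff, n_fld):
    """Map (N_eff, N_fld) to (N_tot, f_fs): total effective neutrino
    number and free-streaming fraction. Assumes N_tot = N_eff + N_fld
    and f_fs = N_eff / N_tot."""
    n_tot = n_eff + n_fld
    return n_tot, (n_eff / n_tot if n_tot > 0 else 0.0)

def from_total_fs(n_tot, f_fs):
    """Inverse map back to (N_eff, N_fld)."""
    return n_tot * f_fs, n_tot * (1.0 - f_fs)
```

The point of the (N_tot, f_fs) pair is that the data constrain the total energy density and the free-streaming fraction more independently than N_eff and N_fld.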
Synthetic tsunami waveform catalogs with kinematic constraints
Baptista, Maria Ana; Miranda, Jorge Miguel; Matias, Luis; Omira, Rachid
2017-07-01
In this study we present a comprehensive methodology to produce a synthetic tsunami waveform catalogue in the northeast Atlantic, east of the Azores islands. The method uses a synthetic earthquake catalogue compatible with plate kinematic constraints of the area. We use it to assess the tsunami hazard from the transcurrent boundary located between Iberia and the Azores, whose western part is known as the Gloria Fault. This study focuses only on earthquake-generated tsunamis. Moreover, we assume that the time and space distribution of the seismic events is known. To do this, we compute a synthetic earthquake catalogue including all fault parameters needed to characterize the seafloor deformation covering the time span of 20 000 years, which we consider long enough to ensure the representability of earthquake generation on this segment of the plate boundary. The computed time and space rupture distributions are made compatible with global kinematic plate models. We use the tsunami empirical Green's functions to efficiently compute the synthetic tsunami waveforms for the dataset of coastal locations, thus providing the basis for tsunami impact characterization. We present the results in the form of offshore wave heights for all coastal points in the dataset. Our results focus on the northeast Atlantic basin, showing that earthquake-induced tsunamis in the transcurrent segment of the Azores-Gibraltar plate boundary pose a minor threat to coastal areas north of Portugal and beyond the Strait of Gibraltar. However, in Morocco, the Azores, and the Madeira islands, we can expect wave heights between 0.6 and 0.8 m, leading to precautionary evacuation of coastal areas. The advantages of the method are its easy application to other regions and the low computation effort needed.
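The empirical Green's function step relies on linear superposition of precomputed unit-source waveforms at each coastal point. A toy sketch with invented waveform data (real usage would scale each unit waveform by the slip of its source fault segment):

```python
def synthesize_waveform(unit_waveforms, source_weights):
    """Superpose precomputed unit-source tsunami waveforms at one
    coastal point, each scaled by its source weight (e.g. slip) --
    the linearity assumption behind tsunami Green's functions."""
    length = len(unit_waveforms[0])
    return [
        sum(w * g[i] for w, g in zip(source_weights, unit_waveforms))
        for i in range(length)
    ]
```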
Onoyama, Takashi; Maekawa, Takuya; Kubota, Sen; Tsuruta, Setuso; Komoda, Norihisa
To build a cooperative logistics network covering multiple enterprises, a planning method that can build a long-distance transportation network is required. Many strict constraints are imposed on this type of problem. To solve these strict-constraint problems, a selfish constraint satisfaction genetic algorithm (GA) is proposed. In this GA, each gene of an individual satisfies only its constraint selfishly, disregarding the constraints of other genes in the same individuals. Moreover, a constraint pre-checking method is also applied to improve the GA convergence speed. The experimental result shows the proposed method can obtain an accurate solution in a practical response time.
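The 'selfish' satisfaction step and the constraint pre-check can be sketched in miniature. The per-gene feasible sets are toy assumptions, not the logistics-network encoding used in the paper:

```python
import random

def precheck(individual, allowed):
    """Constraint pre-check: is every gene already in its own
    feasible set? Cheap test applied before full evaluation."""
    return all(g in a for g, a in zip(individual, allowed))

def selfish_repair(individual, allowed, seed=0):
    """Each gene independently snaps into its own feasible set,
    disregarding the constraints of the other genes in the same
    individual -- the 'selfish' satisfaction step, in toy form."""
    rng = random.Random(seed)
    return [
        g if g in a else rng.choice(sorted(a))
        for g, a in zip(individual, allowed)
    ]
```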
Constraints on backreaction in dust universes
International Nuclear Information System (INIS)
Räsänen, Syksy
2006-01-01
We study backreaction in dust universes using exact equations which do not rely on perturbation theory, concentrating on theoretical and observational constraints. In particular, we discuss the recent suggestion (Kolb et al 2005 Preprint hep-th/0503117) that superhorizon perturbations could explain present-day accelerated expansion as a useful example which can be ruled out. We note that a backreaction explanation of late-time acceleration will have to involve spatial curvature and subhorizon perturbations
Embedded System Synthesis under Memory Constraints
DEFF Research Database (Denmark)
Madsen, Jan; Bjørn-Jørgensen, Peter
1999-01-01
This paper presents a genetic algorithm to solve the system synthesis problem of mapping a time-constrained single-rate system specification onto a given heterogeneous architecture which may contain irregular interconnection structures. The synthesis is performed under memory constraints, that is, the algorithm takes into account the memory size of processors and the size of interface buffers of communication links, and in particular the complicated interplay of these. The presented algorithm is implemented as part of the LYCOS cosynthesis system.
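The memory constraint evaluated for each candidate mapping can be sketched as a simple feasibility check. Task sizes, processor memories, and buffer capacities below are invented for illustration; the paper's GA additionally models the interplay between the two resource types:

```python
def fits_memory(mapping, task_mem, proc_mem, buf_use=None, buf_cap=None):
    """Check a candidate task-to-processor mapping against processor
    memory sizes and, optionally, interface-buffer capacities of
    communication links -- the kind of constraint a synthesis GA
    must evaluate for every individual."""
    used = {}
    for task, proc in mapping.items():
        used[proc] = used.get(proc, 0) + task_mem[task]
    if any(used[p] > proc_mem[p] for p in used):
        return False
    if buf_use and buf_cap:
        return all(buf_use[link] <= buf_cap[link] for link in buf_use)
    return True
```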
Accuracy Constraint Determination in Fixed-Point System Design
Directory of Open Access Journals (Sweden)
Serizel R
2008-01-01
Full Text Available Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution time and accuracy constraints. This accuracy constraint is linked to the application performance, and its determination is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally, the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
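The single-noise-source model pairs naturally with the textbook uniform-quantization noise power q²/12. A sketch of deriving a fractional word length from a performance (SNR) target, with invented numbers rather than the paper's MP3 figures:

```python
def min_frac_bits(signal_power, snr_db):
    """Smallest number of fractional bits b whose quantization-noise
    power q**2 / 12 (quantization step q = 2**-b, rounding) still
    meets the required output SNR -- a toy version of turning an
    application performance target into an accuracy constraint."""
    target_noise = signal_power / (10.0 ** (snr_db / 10.0))
    b = 0
    while (2.0 ** -b) ** 2 / 12.0 > target_noise:
        b += 1
    return b
```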
Natural Constraints to Species Diversification.
Directory of Open Access Journals (Sweden)
Eric Lewitus
2016-08-01
Full Text Available Identifying modes of species diversification is fundamental to our understanding of how biodiversity changes over evolutionary time. Diversification modes are captured in species phylogenies, but characterizing the landscape of diversification has been limited by the analytical tools available for directly comparing phylogenetic trees of groups of organisms. Here, we use a novel, non-parametric approach and 214 family-level phylogenies of vertebrates representing over 500 million years of evolution to identify major diversification modes, to characterize phylogenetic space, and to evaluate the bounds and central tendencies of species diversification. We identify five principal patterns of diversification to which all vertebrate families hold. These patterns, mapped onto multidimensional space, constitute a phylogenetic space with distinct properties. Firstly, phylogenetic space occupies only a portion of all possible tree space, showing family-level phylogenies to be constrained to a limited range of diversification patterns. Secondly, the geometry of phylogenetic space is delimited by quantifiable trade-offs in tree size and the heterogeneity and stem-to-tip distribution of branching events. These trade-offs are indicative of the instability of certain diversification patterns and effectively bound speciation rates (for successful clades) within upper and lower limits. Finally, both the constrained range and geometry of phylogenetic space are established by the differential effects of macroevolutionary processes on patterns of diversification. Given these properties, we show that the average path through phylogenetic space over evolutionary time traverses several diversification stages, each of which is defined by a different principal pattern of diversification and directed by a different macroevolutionary process. The identification of universal patterns and natural constraints to diversification provides a foundation for understanding the deep-time evolution of
Current constraints on the cosmic growth history
International Nuclear Information System (INIS)
Bean, Rachel; Tangmatitham, Matipon
2010-01-01
We present constraints on the cosmic growth history with recent cosmological data, allowing for deviations from ΛCDM as might arise if cosmic acceleration is due to modifications to general relativity or inhomogeneous dark energy. We combine measures of the cosmic expansion history, from Type Ia supernovae, baryon acoustic oscillations, and the cosmic microwave background (CMB), with constraints on the growth of structure from recent galaxy, CMB, and weak lensing surveys along with integrated Sachs-Wolfe galaxy cross-correlations. Deviations from ΛCDM are parameterized by phenomenological modifications to the Poisson equation and the relationship between the two Newtonian potentials. We find modifications that are present at the time the CMB is formed are tightly constrained through their impact on the well-measured CMB acoustic peaks. By contrast, constraints on late-time modifications to the growth history, as might arise if modifications are related to the onset of cosmic acceleration, are far weaker, but remain consistent with ΛCDM at the 95% confidence level. For these late-time modifications we find that differences in the evolution on large and small scales could provide an interesting signature by which to search for modified growth histories with future wide angular coverage, large scale structure surveys.
Rydell, Patrick J.; Mirenda, Pat
1991-01-01
This study of 3 boys (ages 5-6) with autism found that adult high-constraint antecedent utterances elicited more verbal utterances in general, including subjects' echolalia; adult low-constraint utterances elicited more subject high-constraint utterances; and the degree of adult-utterance constraint did not influence the mean lengths of subjects'…
Constraints on fermion mixing with exotics
International Nuclear Information System (INIS)
Nardi, E.; Tommasini, D.
1991-11-01
We analyze the constraints on the mixing angles of the standard fermions with new heavy particles with exotic SU(2) x U(1) quantum number assignments (left-handed singlets or right-handed doublets) that appear in many extensions of the electroweak theory. The updated Charged Current and Neutral Current experimental data, including the recent Z-peak measurements, are considered. The results of the global analysis of all these data are then presented.
Machine tongues. X. Constraint languages
Energy Technology Data Exchange (ETDEWEB)
Levitt, D.
Constraint languages and programming environments will help the designer produce a lucid description of a problem domain, and then of particular situations and problems in it. Early versions of these languages were given descriptions of real world domain constraints, like the operation of electrical and mechanical parts. More recently, the author has automated a vocabulary for describing musical jazz phrases, using constraint language as a jazz improviser. General constraint languages will handle all of these domains. Once the model is in place, the system will connect built-in code fragments and algorithms to answer questions about situations; that is, to help solve problems. Bugs will surface not in code, but in designs themselves. 15 references.
Fluid convection, constraint and causation
Bishop, Robert C.
2012-01-01
Complexity—nonlinear dynamics for my purposes in this essay—is rich with metaphysical and epistemological implications but is receiving sustained philosophical analysis only recently. I will explore some of the subtleties of causation and constraint in Rayleigh–Bénard convection as an example of a complex phenomenon, and extract some lessons for further philosophical reflection on top-down constraint and causation particularly with respect to causal foundationalism. PMID:23386955
On Lifecycle Constraints of Artifact-Centric Workflows
Kucukoguz, Esra; Su, Jianwen
Data plays a fundamental role in modeling and management of business processes and workflows. Among the recent "data-aware" workflow models, artifact-centric models are particularly interesting. (Business) artifacts are the key data entities that are used in workflows and can reflect both the business logic and the execution states of a running workflow. The notion of artifacts succinctly captures the fluidity aspect of data during workflow executions. However, much of the technical dimension concerning artifacts in workflows is not well understood. In this paper, we study a key concept of an artifact "lifecycle". In particular, we allow declarative specifications/constraints of artifact lifecycles in the spirit of DecSerFlow, and formulate the notion of lifecycle as the set of all possible paths an artifact can navigate through. We investigate two technical problems: (Compliance) does a given workflow (schema) contain only lifecycles allowed by a constraint? And (automated construction) from a given lifecycle specification (constraint), is it possible to construct a "compliant" workflow? The study is based on a new formal variant of artifact-centric workflow models called "ArtiNets" and two classes of lifecycle constraints named "regular" and "counting" constraints. We present a range of technical results concerning compliance and automated construction, including: (1) compliance is decidable when the workflow is atomic or the constraints are regular, (2) for each constraint, we can always construct a workflow that satisfies the constraint, and (3) sufficient conditions under which atomic workflows can be constructed.
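The compliance question for "regular" lifecycle constraints can be illustrated in miniature. The sketch below is a hypothetical toy, not the paper's ArtiNets formalism: it assumes a workflow's lifecycles are given as a finite set of paths (strings of activity letters) and the constraint as a regular expression over those letters.

```python
import re

def compliant(lifecycles, constraint_regex):
    """True iff every lifecycle path (a string of activity letters)
    matches the regular lifecycle constraint in full."""
    pat = re.compile(constraint_regex)
    return all(pat.fullmatch(path) is not None for path in lifecycles)
```

For example, with activities c (create), p (pay), s (ship), the constraint "c.*s" (must start with create and end with ship) accepts the path set {"cps", "cs"} but rejects {"cp"}. Real workflows have infinitely many paths, so the actual decision procedure in the paper works on the workflow schema rather than on an enumerated path set.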
Pairwise Constraint-Guided Sparse Learning for Feature Selection.
Liu, Mingxia; Zhang, Daoqiang
2016-01-01
Feature selection aims to identify the most informative features for a compact and accurate data representation. As typical supervised feature selection methods, Lasso and its variants using L1-norm-based regularization terms have received much attention in recent studies, most of which use class labels as supervised information. Besides class labels, there are other types of supervised information, e.g., pairwise constraints that specify whether a pair of data samples belong to the same class (must-link constraint) or different classes (cannot-link constraint). However, most of existing L1-norm-based sparse learning methods do not take advantage of the pairwise constraints that provide us weak and more general supervised information. For addressing that problem, we propose a pairwise constraint-guided sparse (CGS) learning method for feature selection, where the must-link and the cannot-link constraints are used as discriminative regularization terms that directly concentrate on the local discriminative structure of data. Furthermore, we develop two variants of CGS, including: 1) semi-supervised CGS that utilizes labeled data, pairwise constraints, and unlabeled data and 2) ensemble CGS that uses the ensemble of pairwise constraint sets. We conduct a series of experiments on a number of data sets from University of California-Irvine machine learning repository, a gene expression data set, two real-world neuroimaging-based classification tasks, and two large-scale attribute classification tasks. Experimental results demonstrate the efficacy of our proposed methods, compared with several established feature selection methods.
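A minimal sketch of the constraint-guided idea, not the authors' CGS algorithm: score each feature by how strongly it separates cannot-link pairs while keeping must-link pairs close, then keep the top-k features. The function names and the simple difference-of-squares scoring form are illustrative assumptions.

```python
import numpy as np

def constraint_score(X, must_link, cannot_link):
    """Score features by how well they separate cannot-link pairs
    while keeping must-link pairs close (higher = more discriminative)."""
    X = np.asarray(X, dtype=float)
    score = np.zeros(X.shape[1])
    for i, j in cannot_link:      # reward separating different-class pairs
        score += (X[i] - X[j]) ** 2
    for i, j in must_link:        # penalize spread within same-class pairs
        score -= (X[i] - X[j]) ** 2
    return score

def select_features(X, must_link, cannot_link, k):
    """Return the indices of the k highest-scoring features."""
    s = constraint_score(X, must_link, cannot_link)
    return np.argsort(s)[::-1][:k]
```

The CGS method of the paper embeds such pairwise terms as regularizers inside an L1-penalized sparse learning objective rather than using a filter score, but the supervision signal — must-link versus cannot-link pairs instead of class labels — is the same.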
Time management displays for shuttle countdown
Beller, Arthur E.; Hadaller, H. Greg; Ricci, Mark J.
1992-01-01
The Intelligent Launch Decision Support System project is developing a Time Management System (TMS) for the NASA Test Director (NTD) to use for time management during Shuttle terminal countdown. TMS is being developed in three phases: an information phase; a tool phase; and an advisor phase. The information phase is an integrated display (TMID) of firing room clocks, of graphic timelines with Ground Launch Sequencer events, and of constraints. The tool phase is a what-if spreadsheet (TMWI) for devising plans for resuming from unplanned hold situations. It is tied to information in TMID, propagates constraints forward and backward to complete unspecified values, and checks the plan against constraints. The advisor phase is a situation advisor (TMSA), which proactively suggests tactics. A concept prototype for TMSA is under development. The TMID is currently undergoing field testing. Displays for TMID and TMWI are described. Descriptions include organization, rationale for organization, implementation choices and constraints, and use by NTD.
Weisbin, Charles R.; Clark, Pamela; Elfes, Alberto; Smith, Jeffrey H.; Mrozinski, Joseph; Adumitroaie, Virgil; Hua, Hook; Shelton, Kacie; Lincoln, William; Silberg, Robert
2010-01-01
Virtually every NASA space-exploration mission represents a compromise between the interests of two expert, dedicated, but very different communities: scientists, who want to go quickly to the places that interest them most and spend as much time there as possible conducting sophisticated experiments, and the engineers and designers charged with maximizing the probability that a given mission will be successful and cost-effective. Recent work at NASA's Jet Propulsion Laboratory (JPL) seeks to enhance communication between these two groups, and to help them reconcile their interests, by developing advanced modeling capabilities with which they can analyze the achievement of science goals and objectives against engineering design and operational constraints. The analyses conducted prior to this study have been point-design driven. Each analysis has been of one hypothetical case which addresses the question: Given a set of constraints, how much science can be done? But the constraints imposed by the architecture team-e.g., rover speed, time allowed for extravehicular activity (EVA), number of sites at which science experiments are to be conducted- are all in early development and carry a great deal of uncertainty. Variations can be incorporated into the analysis, and indeed that has been done in sensitivity studies designed to see which constraint variations have the greatest impact on results. But if a very large number of variations can be analyzed all at once, producing a table that includes virtually the entire trade space under consideration, then we have a tool that enables scientists and mission architects to ask the inverse question: For a given desired level of science (or any other objective), what is the range of constraints that would be needed? With this tool, mission architects could determine, for example, what combinations of rover speed, EVA duration, and other constraints produce the desired results. Further, this tool would help them identify which
Planck intermediate results. XXIV. Constraints on variation of fundamental constants
Ade, P A R; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Battaner, E.; Benabed, K.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Burigana, C.; Butler, R.C.; Calabrese, E.; Chamballu, A.; Chiang, H.C.; Christensen, P.R.; Clements, D.L.; Colombo, L.P.L.; Couchot, F.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Diego, J.M.; Dole, H.; Dore, O.; Dupac, X.; Ensslin, T.A.; Eriksen, H.K.; Fabre, O.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gregorio, A.; Gruppuso, A.; Hansen, F.K.; Hanson, D.; Harrison, D.L.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Jaffe, A.H.; Jones, W.C.; Keihanen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.M.; Lasenby, A.; Lawrence, C.R.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Mandolesi, N.; Maris, M.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Munshi, D.; Murphy, J.A.; Naselsky, P.; Nati, F.; Natoli, P.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C.A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Pratt, G.W.; Prunet, S.; Rachen, J.P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Ristorcelli, I.; Rocha, 
G.; Roudier, G.; Rusholme, B.; Sandri, M.; Savini, G.; Scott, D.; Spencer, L.D.; Stolyarov, V.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Uzan, J.P.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L.A.; Yvon, D.; Zacchei, A.; Zonca, A.
2015-01-01
Any variation of the fundamental physical constants, and more particularly of the fine structure constant, $\\alpha$, or of the mass of the electron, $m_e$, would affect the recombination history of the Universe and cause an imprint on the cosmic microwave background angular power spectra. We show that the Planck data allow one to improve the constraint on the time variation of the fine structure constant at redshift $z\\sim 10^3$ by about a factor of 5 compared to WMAP data, as well as to break the degeneracy with the Hubble constant, $H_0$. In addition to $\\alpha$, we can set a constraint on the variation of the mass of the electron, $m_{\\rm e}$, and on the simultaneous variation of the two constants. We examine in detail the degeneracies between fundamental constants and the cosmological parameters, in order to compare the limits obtained from Planck and WMAP and to determine the constraining power gained by including other cosmological probes. We conclude that independent time variations of the fine structu...
Thermal Constraints in Diving.
1981-04-01
other, you can see that the time the first one lasts is 5W minutes whereas that of the second is 10 minutes. The point is that the first diver...which Suki Hong and some of his colleagues have found in runners and nonrunners in Hawaii. It may be related to a peripheral vascular response of some
Salloum, Ahmed
Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price-cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help in reducing the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis also provides a better understanding of the correlation between DC market models and AC real-time systems and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The practice of constraint relaxation was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. The investigation of system performance included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles as a result of reactive power deficiency caused by constraint relaxations. Moreover, impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index, and each category was assigned a different penalty price accordingly in order to avoid real-time overloads on high-risk lines. This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models.
Simplification of integrity constraints with aggregates and arithmetic built-ins
DEFF Research Database (Denmark)
Martinenghi, Davide
2004-01-01
Both aggregates and arithmetic built-ins are widely used in current database query languages: Aggregates are second-order constructs such as CNT and SUM of SQL; arithmetic built-ins include relational and other mathematical operators that apply to numbers, such as < and +. These features are also of interest in the context of database integrity constraints: correct and efficient integrity checking is crucial, as, without any guarantee of data consistency, the answers to queries cannot be trusted. In this paper we propose a method of practical relevance that can be used to derive, at database design time, simplified versions of such integrity constraints that can be tested before the execution of any update. In this way, virtually no time is spent for optimization or rollbacks at run time. Both set and bag semantics are considered.
Cosmological constraints with clustering-based redshifts
Kovetz, Ely D.; Raccanelli, Alvise; Rahman, Mubdi
2017-07-01
We demonstrate that observations lacking reliable redshift information, such as photometric and radio continuum surveys, can produce robust measurements of cosmological parameters when empowered by clustering-based redshift estimation. This method infers the redshift distribution based on the spatial clustering of sources, using cross-correlation with a reference data set with known redshifts. Applying this method to the existing Sloan Digital Sky Survey (SDSS) photometric galaxies, and projecting to future radio continuum surveys, we show that sources can be efficiently divided into several redshift bins, increasing their ability to constrain cosmological parameters. We forecast constraints on the dark-energy equation of state and on local non-Gaussianity parameters. We explore several pertinent issues, including the trade-off between including more sources and minimizing the overlap between bins, the shot-noise limitations on binning and the predicted performance of the method at high redshifts, and most importantly pay special attention to possible degeneracies with the galaxy bias. Remarkably, we find that once this technique is implemented, constraints on dynamical dark energy from the SDSS imaging catalogue can be competitive with, or better than, those from the spectroscopic BOSS survey and even future planned experiments. Further, constraints on primordial non-Gaussianity from future large-sky radio-continuum surveys can outperform those from the Planck cosmic microwave background experiment and rival those from future spectroscopic galaxy surveys. The application of this method thus holds tremendous promise for cosmology.
Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints
Directory of Open Access Journals (Sweden)
Jun Zhang
2017-06-01
Designing robust control for hypersonic vehicles in reentry is difficult, due to the features of the vehicles including strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative sum idea in gray theory, which weakens the effects of the random noises and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online according to gray identification. Finally, the mixed H2/H∞ robust predictive control law is proposed based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. By actively handling system constraints within MPC, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically according to Lyapunov theory and illustrated by simulation results.
Neutron star cooling constraints for color superconductivity in hybrid stars
International Nuclear Information System (INIS)
Popov, S.; Grigoryan, Kh.; Blaschke, D.
2005-01-01
We apply the recently developed LogN-LogS test of compact star cooling theories for the first time to hybrid stars with a color superconducting quark matter core. While there is not yet a microscopically founded superconducting quark matter phase which would fulfill constraints from cooling phenomenology, we explore the hypothetical 2SC+X phase and show that the magnitude and density-dependence of the X-gap can be chosen to satisfy a set of tests: temperature-age (T-t), the brightness constraint, LogN-LogS, and the mass spectrum constraint. The latter test appears as a new conjecture from the present investigation
Einstein constraints in the Yang-Mills form
International Nuclear Information System (INIS)
Ashtekar, A.
1987-01-01
It is pointed out that constraints of Einstein's theory play a powerful role in both classical and quantum theory because they generate motions in spacetime, rather than in an internal space. New variables are then introduced on the Einstein phase space in terms of which constraints simplify considerably. In particular, the use of these variables enables one to imbed the constraint surface of Einstein's theory into that of Yang-Mills. The imbedding suggests new lines of attack to a number of problems in classical and quantum gravity and provides new concepts and tools to investigate the microscopic structure of space-time geometry
Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint
Rodrigues, José Francisco; Santos, Lisa
2012-08-01
We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.
DEFF Research Database (Denmark)
Karsten, Christian Vad; Pisinger, David; Røpke, Stefan
2015-01-01
...multi-commodity network flow problem with transit time constraints, which put limits on the duration of the transit of the commodities through the network. It is shown that for the particular application it does not increase the solution time to include the transit time constraints, and that including the transit time is essential to offer customers a competitive product.
Developmental constraints on behavioural flexibility.
Holekamp, Kay E; Swanson, Eli M; Van Meter, Page E
2013-05-19
We suggest that variation in mammalian behavioural flexibility not accounted for by current socioecological models may be explained in part by developmental constraints. From our own work, we provide examples of constraints affecting variation in behavioural flexibility, not only among individuals, but also among species and higher taxonomic units. We first implicate organizational maternal effects of androgens in shaping individual differences in aggressive behaviour emitted by female spotted hyaenas throughout the lifespan. We then compare carnivores and primates with respect to their locomotor and craniofacial adaptations. We inquire whether antagonistic selection pressures on the skull might impose differential functional constraints on evolvability of skulls and brains in these two orders, thus ultimately affecting behavioural flexibility in each group. We suggest that, even when carnivores and primates would theoretically benefit from the same adaptations with respect to behavioural flexibility, carnivores may nevertheless exhibit less behavioural flexibility than primates because of constraints imposed by past adaptations in the morphology of the limbs and skull. Phylogenetic analysis consistent with this idea suggests greater evolutionary lability in relative brain size within families of primates than carnivores. Thus, consideration of developmental constraints may help elucidate variation in mammalian behavioural flexibility.
Data assimilation with inequality constraints
Thacker, W. C.
If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
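The paper's expedient — treating every detected bound violation as error-free data and re-solving — can be sketched for a small dense problem. This is an illustrative active-set-style implementation under stated assumptions (a quadratic variational cost with background covariance B and observation covariance R, lower bounds lb, all violations pinned at once); it is not the paper's exact formulation.

```python
import numpy as np

def constrained_analysis(xb, B, y, H, R, lb):
    """Variational analysis: minimize (x-xb)' B^-1 (x-xb) + (y-Hx)' R^-1 (y-Hx)
    subject to x >= lb, enforcing every detected bound violation as
    error-free data (pinned exactly at the bound) and re-solving."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    P = Binv + H.T @ Rinv @ H            # Hessian of the quadratic cost
    q = Binv @ xb + H.T @ Rinv @ y       # linear term
    x = np.linalg.solve(P, q)            # unconstrained analysis
    active = np.zeros(len(xb), dtype=bool)
    while np.any(x < lb - 1e-12):
        active |= x < lb - 1e-12         # enforce all current violations
        free = ~active
        x = np.array(lb, dtype=float)    # active variables sit on the bound
        # minimize over the free variables with active ones held fixed
        rhs = q[free] - P[np.ix_(free, active)] @ x[active]
        x[free] = np.linalg.solve(P[np.ix_(free, free)], rhs)
    return x
```

Because pinning a violated variable at its bound shifts the correlated free variables (through the off-diagonal blocks of P), the re-solve can expose new violations, which the loop then enforces as well — mirroring the paper's observation that corrections implied by correlated errors may require repeating the process.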
About some types of constraints in problems of routing
Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.
2016-12-01
Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. The latter include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements bound to the distribution of thermal fields during the cutting process are of great importance when developing algorithms for solving this task. These requirements give rise to some specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during metal sheet thermal cutting. A corresponding algorithm and its programmatic implementation are considered. A mathematical model that allows such constraints to be taken into account in other routing problems is also discussed.
Energy Technology Data Exchange (ETDEWEB)
Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp [Department of Chemistry, Graduate School of Science, Nagoya University, Chikusa, Nagoya 464-8602 (Japan); Institute of Transformative Bio-Molecules (WPI-ITbM), Nagoya University, Chikusa, Nagoya 464-8602 (Japan)
2016-09-07
Theoretical design of bright bio-imaging molecules is one of the most rapidly progressing approaches. However, because of the system size and the required computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution and time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states in gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures and the Stokes shift was correctly reproduced.
Mars Rover Sample Return aerocapture configuration design and packaging constraints
Lawson, Shelby J.
1989-01-01
This paper discusses the aerodynamics requirements, volume and mass constraints that lead to a biconic aeroshell vehicle design that protects the Mars Rover Sample Return (MRSR) mission elements from launch to Mars landing. The aerodynamic requirements for Mars aerocapture and entry and packaging constraints for the MRSR elements result in a symmetric biconic aeroshell that develops a L/D of 1.0 at 27.0 deg angle of attack. A significant problem in the study is obtaining a cg that provides adequate aerodynamic stability and performance within the mission imposed constraints. Packaging methods that relieve the cg problems include forward placement of aeroshell propellant tanks and incorporating aeroshell structure as lander structure. The MRSR missions developed during the pre-phase A study are discussed with dimensional and mass data included. Further study is needed for some missions to minimize MRSR element volume so that launch mass constraints can be met.
Constraint programming and decision making
Kreinovich, Vladik
2014-01-01
In many application areas, it is necessary to make effective decisions under constraints. Several area-specific techniques are known for such decision problems; however, because these techniques are area-specific, it is not easy to apply each technique to other applications areas. Cross-fertilization between different application areas is one of the main objectives of the annual International Workshops on Constraint Programming and Decision Making. Those workshops, held in the US (El Paso, Texas), in Europe (Lyon, France), and in Asia (Novosibirsk, Russia), from 2008 to 2012, have attracted researchers and practitioners from all over the world. This volume presents extended versions of selected papers from those workshops. These papers deal with all stages of decision making under constraints: (1) formulating the problem of multi-criteria decision making in precise terms, (2) determining when the corresponding decision problem is algorithmically solvable; (3) finding the corresponding algorithms, and making...
Reasoning about Strategies under Partial Observability and Fairness Constraints
Directory of Open Access Journals (Sweden)
Simon Busard
2013-03-01
A number of extensions exist for Alternating-time Temporal Logic; some of these mix strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic mixing strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model checking algorithm for it.
Tail Risk Constraints and Maximum Entropy
Directory of Open Access Journals (Sweden)
Donald Geman
2015-06-01
Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.
Distributed Unmixing of Hyperspectral Data with Sparsity Constraint
Khoshsokhan, S.; Rajabi, R.; Zayyani, H.
2017-09-01
Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used in the SU problem. One of the constraints added to NMF is the sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel in the hyperspectral image considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, compared with distributed unmixing at SNR = 25 dB.
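The centralized building block of this approach — NMF with an L1/2 sparsity penalty on the abundances — can be sketched as follows. This is a minimal illustrative implementation of the commonly used L1/2-NMF multiplicative update form, not the paper's diffusion-LMS distributed algorithm; the penalty weight lam and iteration count are arbitrary assumptions.

```python
import numpy as np

def sparse_nmf(X, r, n_iter=200, lam=0.01, eps=1e-9, seed=0):
    """Unmix X ≈ A @ S (A: endmember signatures, S: fractional abundances)
    by NMF with an L1/2 sparsity penalty lam * sum(sqrt(S)) on the
    abundances, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = rng.random((m, r)) + eps
    S = rng.random((r, n)) + eps
    errs = []
    for _ in range(n_iter):
        # multiplicative updates keep A and S nonnegative
        A = np.maximum(A * (X @ S.T) / (A @ S @ S.T + eps), eps)
        # the 0.5 * lam * S**-0.5 term is the gradient of the L1/2 penalty
        S = np.maximum(S * (A.T @ X) / (A.T @ A @ S + 0.5 * lam * S**-0.5 + eps), eps)
        errs.append(np.linalg.norm(X - A @ S))
    return A, S, errs
```

In the distributed setting of the paper, each node (pixel) would run a local version of the abundance update and combine estimates with its neighbors under the diffusion LMS strategy, rather than iterating on the full data matrix at once.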
Constraints on grand unified superstring theories
International Nuclear Information System (INIS)
Ellis, J.; Lopez, J.L.; Nanopoulos, D.V.; Houston Advanced Research Center
1990-01-01
We evaluate some constraints on the construction of grand unified superstring theories (GUSTs) using higher level Kac-Moody algebras on the world-sheet. In the most general formulation of the heterotic string in four dimensions, an analysis of the basic GUST model-building constraints, including a realistic hidden gauge group, reveals that there are no E6 models and any SO(10) models can only exist at level 5. Also, any such SU(5) models can exist only for levels 4 ≤ k ≤ 19. These SO(10) and SU(5) models risk having many large, massless, phenomenologically troublesome representations. We also show that with a suitable hidden sector gauge group, it is possible to avoid free light fractionally charged particles, which are endemic to string-derived models. We list all such groups and their representations for the flipped SU(5)xU(1) model. We conclude that a sufficiently binding hidden sector gauge group becomes a basic model-building constraint. (orig.)
Goodenough, Anne E.; Hart, Adam G.; Elliot, Simon L.
2011-01-01
Phenological studies have demonstrated changes in the timing of seasonal events across multiple taxonomic groups as the climate warms. Some northern European migrant bird populations, however, show little or no significant change in breeding phenology, resulting in synchrony with key food sources becoming mismatched. This phenological inertia has often been ascribed to migration constraints (i.e. arrival date at breeding grounds preventing earlier laying). This has been based primarily on research in The Netherlands and Germany where time between arrival and breeding is short (often as few as 9 days). Here, we test the arrival constraint hypothesis over a 15-year period for a U.K. pied flycatcher ( Ficedula hypoleuca) population where laying date is not constrained by arrival as the period between arrival and breeding is substantial and consistent (average 27 ± 4.57 days SD). Despite increasing spring temperatures and quantifiably stronger selection for early laying on the basis of number of offspring to fledge, we found no significant change in breeding phenology, in contrast with co-occurring resident blue tits ( Cyanistes caeruleus). We discuss possible non-migratory constraints on phenological adjustment, including limitations on plasticity, genetic constraints and competition, as well as the possibility of counter-selection pressures relating to adult survival, longevity or future reproductive success. We propose that such factors need to be considered in conjunction with the arrival constraint hypothesis.
Automated Derivation of Complex System Constraints from User Requirements
Foshee, Mark; Murey, Kim; Marsh, Angela
2010-01-01
The Payload Operations Integration Center (POIC) located at the Marshall Space Flight Center has the responsibility of integrating US payload science requirements for the International Space Station (ISS). All payload operations must request ISS system resources so that the resource usage will be included in the ISS on-board execution timelines. The scheduling of resources and building of the timeline is performed using the Consolidated Planning System (CPS). The ISS resources are quite complex due to the large number of components that must be accounted for. The planners at the POIC simplify the process for Payload Developers (PDs) by providing them with the User Requirements Collection (URC) application, which offers the basic functionality PDs need along with a list of simplified resources. The planners maintain a mapping of the URC resources to the CPS resources. Manually converting PDs' science requirements from this simplified representation to the more complex CPS representation is time-consuming and tedious. The goal is to provide a software solution that allows the planners to build a mapping of the complex CPS constraints to the basic URC constraints and automatically convert the PDs' requirements into system requirements during export to CPS.
Butler, J. P.; Jamieson, R. A.; Dunning, G. R.; Pecha, M. E.; Robinson, P.; Steenkamp, H. M.
2018-06-01
We present the results of a combined CA-ID-TIMS and LA-MC-ICP-MS U-Pb geochronology study of zircon and associated rutile and titanite from the Nordøyane ultra-high-pressure (UHP) domain in the Western Gneiss Region (WGR) of Norway. The dated samples include 4 eclogite bodies, 2 host-rock migmatites, and 2 cross-cutting pegmatites and leucosomes, all from the island of Harøya. Zircon from a coesite eclogite yielded an age of ca. 413 Ma, interpreted as the time of UHP metamorphism in this sample. Zircon data from the other eclogite bodies yielded metamorphic ages of ca. 413 Ma, 407 Ma, and 406 Ma; zircon trace-element data associated with 413 Ma and 407 Ma ages are consistent with eclogite-facies crystallization. In all of the eclogites, U-Pb dates from zircon cores, interpreted as the times of protolith crystallization, range from ca. 1680-1586 Ma, consistent with Gothian ages from orthogneisses in Nordøyane and elsewhere in the WGR. A zircon core age of ca. 943 Ma from one sample agrees with Sveconorwegian ages of felsic gneisses and pegmatites in the western part of the area. Migmatites hosting the eclogite bodies yielded zircon core ages of ca. 1657-1591 Ma and rim ages of ca. 395-392 Ma, interpreted as the times of Gothian protolith formation and Scandian partial melt crystallization, respectively. Pegmatite in an eclogite boudin neck yielded a crystallization age of ca. 388 Ma, interpreted as the time of melt crystallization. Rutile and titanite from 3 samples (an eclogite and two migmatites) yielded concordant ID-TIMS ages of 378-376 Ma. The results are similar to existing U-Pb data from other Nordøyane eclogites (415-405 Ma). In combination with previous pressure-temperature data from the coesite eclogite, these ages indicate that peak metamorphic conditions of 3 GPa/760 °C were reached ca. 413 Ma, followed by decompression to 1 GPa/810 °C by ca. 397 Ma and cooling below ca. 600 °C by ca. 375 Ma. The results are compatible with protracted UHP
Constraint elimination in dynamical systems
Singh, R. P.; Likins, P. W.
1989-01-01
Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
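The SVD-based elimination the abstract describes can be illustrated on a toy system. In this sketch (a generic null-space reduction under assumed notation, not the authors' full multibody formulation), the null space of the constraint Jacobian J yields equations of motion of minimum dimension:

```python
import numpy as np

def reduced_dynamics(M, f, J):
    """Eliminate kinematical constraints J @ qdot = 0 via SVD (illustrative).
    Returns the minimum-dimension mass matrix and force vector in null-space
    coordinates, plus the null-space basis N."""
    U, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-10 * s.max()))
    N = Vt[rank:].T                  # orthonormal basis of the null space of J
    M_red = N.T @ M @ N              # reduced mass matrix
    f_red = N.T @ f                  # reduced generalized forces
    return M_red, f_red, N

# Example: planar particle (2 DOF) under gravity, constrained to the line x = y
M = np.eye(2)
f = np.array([0.0, -9.81])           # gravity on the y coordinate
J = np.array([[1.0, -1.0]])          # constraint xdot - ydot = 0
M_red, f_red, N = reduced_dynamics(M, f, J)
acc = np.linalg.solve(M_red, f_red)  # acceleration along the single free direction
```

The two-dimensional constrained system collapses to a single unconstrained coordinate along the line, which is exactly the "minimum dimension" property claimed in the abstract.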
Constraint Programming versus Mathematical Programming
DEFF Research Database (Denmark)
Hansen, Jesper
2003-01-01
Constraint Logic Programming (CLP) is a relatively new technique from the 1980s, with origins in Computer Science and Artificial Intelligence. Lately, much research has focused on ways of using CLP within the paradigm of Operations Research (OR) and vice versa. The purpose of this paper...
Sterile neutrino constraints from cosmology
DEFF Research Database (Denmark)
Hamann, Jan; Hannestad, Steen; Raffelt, Georg G.
2012-01-01
The presence of light particles beyond the standard model's three neutrino species can profoundly impact the physics of decoupling and primordial nucleosynthesis. I review the observational signatures of extra light species, present constraints from recent data, and discuss the implications of possible sterile neutrinos with O(eV) masses for cosmology.
Intertemporal consumption and credit constraints
DEFF Research Database (Denmark)
Leth-Petersen, Søren
2010-01-01
There is continuing controversy over the importance of credit constraints. This paper investigates whether total household expenditure and debt are affected by an exogenous increase in access to credit, provided by a credit market reform that enabled Danish house owners to use housing equity...
Financial Constraints: Explaining Your Position.
Cargill, Jennifer
1988-01-01
Discusses the importance of educating library patrons about the library's finances and the impact of budget constraints and the escalating cost of serials on materials acquisition. Steps that can be taken in educating patrons by interpreting and publicizing financial information are suggested. (MES)
Energy Technology Data Exchange (ETDEWEB)
Alexander, K. D.; Berger, E.; Fong, W.; Williams, P. K. G.; Guidorzi, C.; Margutti, R.; Metzger, B. D.; Annis, J.; Blanchard, P. K.; Brout, D.; Brown, D. A.; Chen, H. -Y.; Chornock, R.; Cowperthwaite, P. S.; Drout, M.; Eftekhari, T.; Frieman, J.; Holz, D. E.; Nicholl, M.; Rest, A.; Sako, M.; Soares-Santos, M.; Villar, V. A.
2017-10-16
We present Very Large Array (VLA) and Atacama Large Millimeter/submillimeter Array (ALMA) radio observations of GW 170817, the first Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo gravitational wave (GW) event from a binary neutron star merger and the first GW event with an electromagnetic (EM) counterpart. Our data include the first observations following the discovery of the optical transient at both the centimeter (13.7 hours post merger) and millimeter (2.41 days post merger) bands. We detect faint emission at 6 GHz at 19.47 and 39.23 days after the merger, but not in an earlier observation at 2.46 days. We do not detect cm/mm emission at the position of the optical counterpart at frequencies of 10-97.5 GHz at times ranging from 0.6 to 30 days post merger, ruling out an on-axis short gamma-ray burst (SGRB) for energies ≳10^48 erg. For fiducial SGRB parameters, our limits require an observer viewing angle of ≳20°. The radio and X-ray data can be jointly explained as the afterglow emission from an SGRB with a jet energy of ~10^49-10^50 erg that exploded in a uniform density environment with n ~ 10^-4-10^-2 cm^-3, viewed at an angle of ~20°-40° from the jet axis. Using the results of our light curve and spectral modeling, in conjunction with the inference of the circumbinary density, we predict the emergence of late-time radio emission from the deceleration of the kilonova (KN) ejecta on a timescale of ~5-10 years that will remain detectable for decades with next-generation radio facilities, making GW 170817 a compelling target for long-term radio monitoring.
New seismograph includes filters
Energy Technology Data Exchange (ETDEWEB)
1979-11-02
The new Nimbus ES-1210 multichannel signal enhancement seismograph from EG&G Geometrics has recently been redesigned to include multimode signal filters on each amplifier. The ES-1210F is a shallow exploration seismograph for near-surface exploration such as depth-to-bedrock determination, geological hazard location, mineral exploration, and landslide investigations.
Simplicity constraints: A 3D toy model for loop quantum gravity
Charles, Christoph
2018-05-01
In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is, however, unsatisfactory, as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, the self-dual Ashtekar variables. This, however, leads to simplicity constraints or reality conditions that are notoriously difficult to implement in the quantum theory. In this paper we explore the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints, by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory, completely equivalent to standard 3D Euclidean quantum gravity, can be defined from it by gauge unfixing. We discuss possible future explorations around this model, as it could serve as a stepping stone toward defining full-fledged covariant loop quantum gravity.
Navigating contextual constraints in discourse: Design explications in institutional talk
Herijgers, MLC (Marloes); Maat, HLW (Henk) Pander
2017-01-01
Although institutional discourse is subject to a vast ensemble of constraints, its design is not fixed beforehand. On the contrary, optimizing the satisfaction of these constraints requires considerable discourse design skills from institutional agents. In this article, we analyze how Dutch banks' mortgage advisors navigate their way through the consultation context. We focus on what we call discourse design explications, that is, stretches of talk in which participants refer to conflicting constraints in the discourse context, at the same time proposing particular discourse designs for dealing with these conflicts. We start by discussing three forms of design explication. We then examine the various resolutions they propose for constraint conflicts and show how advisors seek customer consent or cooperation for the proposed designs. Thus our analysis reveals how institutional agents, while providing services, work to demonstrate how the design of these services is optimized and tailored to customers. PMID:28781580
Progressive IRP Models for Power Resources Including EPP
Directory of Open Access Journals (Sweden)
Yiping Zhu
2017-01-01
With a view to optimizing regional power supply and demand, this paper develops planning and scheduling of supply- and demand-side resources, including energy efficiency power plants (EPP), to meet benefit, cost, and environmental targets. To highlight how different supply and demand resources behave under economic, environmental, and carbon constraints, three planning models with progressively tighter constraints are constructed. Applying the three models to the same example shows that their optimal solutions differ. The planning model including EPP has clear advantages once pollutant and carbon emission constraints are considered, confirming EPP's low cost and low emissions. The construction of progressive IRP models for power resources considering EPP provides a reference for guiding the planning and layout of EPP among other power resources and for achieving cost and environmental objectives.
Caporaso, George J.; Sampayan, Stephen E.; Kirbie, Hugh C.
1998-01-01
A dielectric-wall linear accelerator is improved by a high-voltage, fast rise-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage, sufficient to stress the insulator to the point of voltage breakdown on command, is placed between the electrodes. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric-wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of the fiber optic cables that carry the laser light to the insulator surface.
Constraints on amplitudes of curvature perturbations from primordial black holes
International Nuclear Information System (INIS)
Bugaev, Edgar; Klimai, Peter
2009-01-01
We calculate the primordial black hole (PBH) mass spectrum produced from a collapse of the primordial density fluctuations in the early Universe using, as an input, several theoretical models giving curvature perturbation power spectra P_R(k) with large (~10^-2 to 10^-1) values at some scale of comoving wave numbers k. In the calculation we take into account the explicit dependence of the gravitational (Bardeen) potential on time. Using the PBH mass spectra, we further calculate the neutrino and photon energy spectra in extragalactic space from evaporation of light PBHs, and the energy density fraction contained in PBHs today (for heavier PBHs). We obtain constraints on the model parameters using available experimental data (including data on the neutrino and photon cosmic backgrounds). We briefly discuss the possibility that the observed 511 keV line from the Galactic center is produced by annihilation of positrons evaporated by PBHs.
Vandenberg Air Force Base Upper Level Wind Launch Weather Constraints
Shafer, Jaclyn A.; Wheeler, Mark M.
2012-01-01
The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF tasked the Applied Meteorology Unit (AMU) to analyze VAFB sounding data with the goal of determining the probability of violating (PoV) their upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a tool that will calculate the PoV of each constraint on the day of launch. In order to calculate the probability of exceeding each constraint, the AMU collected and analyzed historical data from VAFB. The historical sounding data were retrieved from the National Oceanic and Atmospheric Administration Earth System Research Laboratory archive for the years 1994-2011 and then stratified into four sub-seasons: January-March, April-June, July-September, and October-December. The maximum wind speed and 1000-ft shear values for each sounding in each sub-season were determined. To accurately calculate the PoV, the AMU determined the theoretical distributions that best fit the maximum wind speed and maximum shear datasets. Ultimately it was discovered that the maximum wind speeds follow a Gaussian distribution while the maximum shear values follow a lognormal distribution. These results were applied when calculating the averages and standard deviations needed for the historical and real-time PoV calculations. In addition to the requirements outlined in the original task plan, the AMU also included forecast sounding data from the Rapid Refresh model. This information provides further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours on the day of launch. The interactive graphical user interface (GUI) for this project was developed in
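The fitted distributions translate directly into a probability-of-violation calculation. A minimal sketch, using only the stdlib and hypothetical parameter values and thresholds (not the 30 OSSWF's actual limits): the Gaussian and lognormal survival functions give the chance of exceeding the wind-speed and shear constraints.

```python
import math

def pov_wind(mean_speed, sd_speed, speed_limit, mu_log, sigma_log, shear_limit):
    """Day-of-launch probability-of-violation sketch. Max wind speed is modeled
    as Gaussian and max 1000-ft shear as lognormal, matching the distributions
    the AMU fit; all numeric inputs here are hypothetical."""
    # P(speed > limit) for a Gaussian: survival function via the complementary
    # error function
    p_speed = 0.5 * math.erfc((speed_limit - mean_speed) / (sd_speed * math.sqrt(2)))
    # P(shear > limit) for a lognormal with log-space parameters mu_log, sigma_log
    p_shear = 0.5 * math.erfc((math.log(shear_limit) - mu_log) / (sigma_log * math.sqrt(2)))
    return p_speed, p_shear
```

By construction, a threshold equal to the distribution's median gives a PoV of 0.5, and the PoV falls as the threshold rises above the climatological mean.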
Constraints on the braneworld from compact stars
Energy Technology Data Exchange (ETDEWEB)
Felipe, R.G. [Instituto Politecnico de Lisboa, ISEL, Instituto Superior de Engenharia de Lisboa, Lisboa (Portugal); Instituto Superior Tecnico, Universidade de Lisboa, Departamento de Fisica, Centro de Fisica Teorica de Particulas, CFTP, Lisboa (Portugal); Paret, D.M. [Universidad de la Habana, Departamento de Fisica General, Facultad de Fisica, La Habana (Cuba); Martinez, A.P. [Instituto de Cibernetica, Matematica y Fisica (ICIMAF), La Habana (Cuba); Universidad Nacional Autonoma de Mexico, Instituto de Ciencias Nucleares, Mexico, Distrito Federal (Mexico)
2016-06-15
According to the braneworld idea, ordinary matter is confined on a three-dimensional space (brane) that is embedded in a higher-dimensional space-time where gravity propagates. In this work, after reviewing the limits coming from general relativity, finiteness of pressure and causality on the brane, we derive observational constraints on the braneworld parameters from the existence of stable compact stars. The analysis is carried out by solving numerically the brane-modified Tolman-Oppenheimer-Volkoff equations, using different representative equations of state to describe matter in the star interior. The cases of normal dense matter, pure quark matter and hybrid matter are considered. (orig.)
Intelligence Constraints on Terrorist Network Plots
Woo, Gordon
Since 9/11, the western intelligence and law enforcement services have managed to interdict the great majority of planned attacks against their home countries. Network analysis shows that there are important intelligence constraints on the number and complexity of terrorist plots. If too many terrorists are involved in plots at a given time, a tipping point is reached whereby it becomes progressively easier for the dots to be joined, for the conspirators to be arrested, and for the aggregate evidence to secure convictions. Implications of this analysis are presented for the campaign to win hearts and minds.
Constraints on Gauge Field Production during Inflation
DEFF Research Database (Denmark)
Nurmi, Sami; Sloth, Martin Snoager
2014-01-01
In order to gain new insights into gauge field couplings in the early universe, we consider the constraints on gauge field production during inflation imposed by requiring that their effect on the CMB anisotropies is subdominant. In particular, we calculate systematically the bispectrum of the primordial curvature perturbation induced by the presence of vector gauge fields during inflation. Using a model-independent parametrization in terms of magnetic non-linearity parameters, we calculate for the first time the contribution to the bispectrum from the cross correlation between the inflaton...
Modeling Network Transition Constraints with Hypergraphs
DEFF Research Database (Denmark)
Harrod, Steven
2011-01-01
Discrete time dynamic graphs are frequently used to model multicommodity flows or activity paths through constrained resources, but simple graphs fail to capture the interaction effects of resource transitions. The resulting schedules are not operationally feasible and return inflated objective values. A directed hypergraph formulation is derived to address railway network sequencing constraints, and an experimental problem sample is solved to estimate the magnitude of objective inflation when interaction effects are ignored. The model is used to demonstrate the value of advance scheduling of train paths on a busy North American railway.
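The key distinction, that a hyperarc couples several resource-time nodes at once where a simple arc consumes only one, can be shown with a minimal sketch. The data layout and names below are illustrative, not the paper's formulation:

```python
from collections import namedtuple

# A directed hyperarc has a SET of tail nodes and head nodes, so one train
# movement can atomically consume a track segment AND the next segment's
# entry slot, capturing the transition interaction a simple arc misses.
Hyperarc = namedtuple("Hyperarc", ["tails", "heads", "cost"])

def can_fire(arc, available):
    """A hyperarc is feasible only if every tail resource-time node is free."""
    return all(node in available for node in arc.tails)

# Move from segment A at time 1 into segment B, occupying B's slot at time 2
move = Hyperarc(tails={("segA", 1), ("segB", 2)}, heads={("segB", 3)}, cost=5)
```

With a simple graph, the move would check only `("segA", 1)`; the hypergraph formulation rejects it whenever the follow-on slot `("segB", 2)` is already taken, which is the source of the objective inflation measured in the paper.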
Analytic device including nanostructures
Di Fabrizio, Enzo M.; Fratalocchi, Andrea; Totero Gongora, Juan Sebastian; Coluccio, Maria Laura; Candeloro, Patrizio; Cuda, Gianni
2015-01-01
A device for detecting an analyte in a sample comprising: an array including a plurality of pixels, each pixel including a nanochain comprising: a first nanostructure, a second nanostructure, and a third nanostructure, wherein size of the first nanostructure is larger than that of the second nanostructure, and size of the second nanostructure is larger than that of the third nanostructure, and wherein the first nanostructure, the second nanostructure, and the third nanostructure are positioned on a substrate such that when the nanochain is excited by an energy, an optical field between the second nanostructure and the third nanostructure is stronger than an optical field between the first nanostructure and the second nanostructure, wherein the array is configured to receive a sample; and a detector arranged to collect spectral data from a plurality of pixels of the array.
Constraints and stability in vector theories with spontaneous Lorentz violation
International Nuclear Information System (INIS)
Bluhm, Robert; Gagne, Nolan L.; Potting, Robertus; Vrublevskis, Arturs
2008-01-01
Vector theories with spontaneous Lorentz violation, known as bumblebee models, are examined in flat spacetime using a Hamiltonian constraint analysis. In some of these models, Nambu-Goldstone modes appear with properties similar to photons in electromagnetism. However, depending on the form of the theory, additional modes and constraints can appear that have no counterparts in electromagnetism. An examination of these constraints and additional degrees of freedom, including their nonlinear effects, is made for a variety of models with different kinetic and potential terms, and the results are compared with electromagnetism. The Hamiltonian constraint analysis also permits an investigation of the stability of these models. For certain bumblebee theories with a timelike vector, suitable restrictions of the initial-value solutions are identified that yield ghost-free models with a positive Hamiltonian. In each case, the restricted phase space is found to match that of electromagnetism in a nonlinear gauge.
Saskatchewan resources. [including uranium
Energy Technology Data Exchange (ETDEWEB)
1979-09-01
The production of chemicals and minerals for the chemical industry in Saskatchewan is featured, with some discussion of resource taxation. The commodities mentioned include potash, fatty amines, uranium, heavy oil, sodium sulfate, chlorine, sodium hydroxide, sodium chlorate, and bentonite. Following the successful outcome of the Cluff Lake inquiry, the uranium industry is booming. Some developments and production figures for Gulf Minerals, Amok, Cenex, and Eldorado are mentioned.
Directory of Open Access Journals (Sweden)
T. S. Kasatkina
2015-01-01
The terminal control problem with fixed final time for second-order affine systems with state constraints is considered. A solution of this terminal problem is proposed for systems with scalar control in regular canonical form. It is shown that the initial terminal problem is equivalent to the search for an auxiliary function satisfying certain conditions. The design of this function consists of two stages. The first stage finds a function corresponding to the solution of the terminal control problem without state constraints; it is constructed as a fifth-degree polynomial in time, whose coefficients are determined by the boundary conditions. The second stage modifies the designed function if the trajectory corresponding to it does not satisfy the state constraints. The modification is realized by adding a supplementary polynomial to the current function, whose influence is controlled by varying a parameter value; the modification process may take several iterations. After the process terminates, a continuous control is found that solves the initial terminal problem. Using the presented scheme, the terminal control problem is solved for a system describing oscillations of a mathematical pendulum. The approach can be applied to terminal control problems with state constraints for affine systems with multi-dimensional control.
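The first-stage polynomial is straightforward to construct. A sketch of the generic quintic boundary-value fit (variable names are mine, not the paper's): the six coefficients of q(t) = Σ c_k t^k are fixed by position, velocity, and acceleration at t = 0 and t = T.

```python
import numpy as np

def quintic_coeffs(T, q0, dq0, ddq0, qT, dqT, ddqT):
    """Fifth-degree polynomial q(t) = sum c_k t^k meeting position, velocity,
    and acceleration boundary conditions at t = 0 and t = T. This is the
    first (unconstrained) stage only, before any state-constraint modification."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],        # q(0)   = q0
        [0, 1, 0,    0,      0,       0],        # q'(0)  = dq0
        [0, 0, 2,    0,      0,       0],        # q''(0) = ddq0
        [1, T, T**2, T**3,   T**4,    T**5],     # q(T)   = qT
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],   # q'(T)  = dqT
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],  # q''(T) = ddqT
    ], dtype=float)
    b = np.array([q0, dq0, ddq0, qT, dqT, ddqT], dtype=float)
    return np.linalg.solve(A, b)
```

For rest-to-rest motion from 0 to 1 over T = 1, this recovers the familiar quintic q(t) = 10t^3 - 15t^4 + 6t^5; the paper's second stage would then perturb such a polynomial until the resulting trajectory respects the state constraints.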
Creativity from Constraints in Engineering Design
DEFF Research Database (Denmark)
Onarheim, Balder
2012-01-01
This paper investigates the role of constraints in limiting and enhancing creativity in engineering design. Based on a review of the literature relating constraints to creativity, the paper presents a longitudinal participatory study from Coloplast A/S, a major international producer of disposable...... and ownership of formal constraints played a crucial role in defining their influence on creativity, along with the tacit constraints held by the designers. The designers were found to be highly constraint focused, and four main creative strategies for constraint manipulation were observed: blackboxing...
Directory of Open Access Journals (Sweden)
Teri Lawton
2017-05-01
The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background (figure-ground discrimination) improves vision and cognitive functioning in dyslexics, as well as in typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills more following neurotraining exercises than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12 weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency, and working memory, for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement
Lawton, Teri; Shelley-Tremblay, John
2017-01-01
The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination ( PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading ( Raz-Kids ( RK )). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement
Lawton, Teri; Shelley-Tremblay, John
2017-01-01
The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12 weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement
DEFF Research Database (Denmark)
Korzenevica, Marina
2016-01-01
Following the civil war of 1996–2006, there was a dramatic increase in the labor mobility of young men and the inclusion of young women in formal education, which led to the transformation of the political landscape of rural Nepal. Mobility and schooling represent a level of prestige that rural...... politics. It analyzes how formal education and mobility either challenge or reinforce traditional gendered norms which dictate a lowly position for young married women in the household and their absence from community politics. The article concludes that women are simultaneously excluded and included from...... community politics. On the one hand, their mobility and decision-making powers decrease with the increase in the labor mobility of men and their newly gained education is politically devalued when compared to the informal education that men gain through mobility, but on the other hand, schooling strengthens...
International Nuclear Information System (INIS)
Brookins, D.G.
1978-01-01
Results of a literature search of abundant data on lanthanide and actinide individual and joint systematics are presented. Covered were several papers/reports about uranium solution chemistry, uranium deposits, a natural fission reactor, rare-earth deposits, manganese nodules, bedded and dome salt deposits, and miscellaneous items. This literature search is not complete but represents the efforts of seven individuals attempting to gather data relevant to the objectives defined in this report. Many foreign articles, as well as many English-language articles, are absent. Approximately 800 articles were inspected; 69 are included in the References cited. The data search for actinides and lanthanides in natural rocks indicated that only limited segregation of the actinides U, Np, Pu, Am, and Cm from the lanthanides is possible should high-level waste be released from canisters stored in various geomedia. Supporting this were studies of Oklo and other uranium deposits, manganese nodules, monomineralic and concretion formation rates, and actinide and lanthanide transport in brines. The fact that some waste canisters may, under certain conditions, contain several critical masses of one or more actinides is countered by the facts that (a) most actinides have very short half-lives and would decay before release from canisters, (b) released actinides and lanthanides, although dispersed, would be transported and deposited as a group, thus preventing point concentration of any actinides, and (c) ²³⁵U has a much longer half-life than the other actinides, thus allowing greater time for possible reaccumulation and criticality; such a scenario would demand that ²³⁵U be segregated effectively from other elements in the lanthanide-actinide groups. No mechanism to do this is consistent with the natural occurrences studied or the theoretical Eh-pH diagrams considered
Self-Imposed Creativity Constraints
DEFF Research Database (Denmark)
Biskjaer, Michael Mose
2013-01-01
This dissertation epitomizes three years of research guided by the research question: how can we conceptualize creative self-binding as a resource in art and design processes? Concretely, the dissertation seeks to offer insight into the puzzling observation that highly skilled creative...... practitioners sometimes freely and intentionally impose rigid rules, peculiar principles, and other kinds of creative obstructions on themselves as a means to spur momentum in the process and reach a distinctly original outcome. To investigate this, the dissertation is composed of four papers (Part II) framed...... of analysis. Informed by the insight that constraints both enable and restrain creative agency, the dissertation’s main contention is that creative self-binding may profitably be conceptualized as the exercise of self-imposed creativity constraints. Thus, the dissertation marks an analytical move from vague...
Unitarity constraints on trimaximal mixing
International Nuclear Information System (INIS)
Kumar, Sanjeev
2010-01-01
When the neutrino mass eigenstate ν₂ is trimaximally mixed, the mixing matrix is called trimaximal. The middle column of the trimaximal mixing matrix is identical to tribimaximal mixing, and the other two columns are subject to unitarity constraints. This corresponds to a mixing matrix with four independent parameters in the most general case. Apart from the two Majorana phases, the mixing matrix has only one free parameter in the CP-conserving limit. Trimaximality results in an interesting interplay between mixing angles and CP violation. A notion of maximal CP violation naturally emerges here: CP violation is maximal for maximal 2-3 mixing. Similarly, there is a natural constraint on the deviation from maximal 2-3 mixing, which takes its maximal value in the CP-conserving limit.
Macroscopic constraints on string unification
International Nuclear Information System (INIS)
Taylor, T.R.
1989-03-01
The comparison of string theory with experiment requires a huge extrapolation from microscopic distances, of the order of the Planck length, up to macroscopic laboratory distances. Quantum effects give rise to large corrections to the macroscopic predictions of string unification. I discuss the model-independent constraints on the gravitational sector of string theory due to the inevitable existence of universal Fradkin-Tseytlin dilatons. 9 refs
Financial Constraints and Franchising Decisions
Kai-Uwe Kuhn; Francine Lafontaine; Ying Fan
2013-01-01
We study how the financial constraints of agents affect the behavior of principals in the context of franchising. We develop an empirical model of franchising starting with a principal-agent framework that emphasizes the role of franchisees' collateral from an incentive perspective. We estimate the determinants of chains' entry (into franchising) and growth decisions using data on franchised chains and data on local macroeconomic conditions. In particular, we use collateralizable housing weal...
Analysis of Space Tourism Constraints
Bonnal, Christophe
2002-01-01
Space tourism appears today as a new Eldorado in a relatively near future. Private operators are already proposing services for leisure trips in Low Earth Orbit, and some happy few have even tested them. But are these exceptional events really marking the dawn of a new space age? The constraints associated with space tourism are severe:
- the economic balance of space tourism is tricky; development costs of large manned ...
- the technical definition of such large vehicles is challenging, mainly when considering ...
- the physiological aptitude of passengers will have a major impact on the mission ...
- the orbital environment will also lead to mission constraints on aspects such as radiation ...
However, these constraints never appear as show-stoppers and have to be dealt with pragmatically:
- what are the recommendations one can make for future research in the field of space ...
- which typical roadmap shall one consider to develop this new market realistically?
- what are the synergies with the conventional missions and with the existing infrastructure ...
- how can a phased development start soon?
The paper proposes hints aimed at improving the credibility of space tourism and describes the orientations to follow in order to solve the major hurdles found in such an exciting development.
Infrared Constraint on Ultraviolet Theories
Energy Technology Data Exchange (ETDEWEB)
Tsai, Yuhsin [Cornell Univ., Ithaca, NY (United States)
2012-08-01
While our current paradigm of particle physics, the Standard Model (SM), has been extremely successful at explaining experiments, it is theoretically incomplete and must be embedded into a larger framework. In this thesis, we review the main motivations for theories beyond the SM (BSM) and the ways such theories can be constrained using low-energy physics. The hierarchy problem, neutrino mass and the existence of dark matter (DM) are the main reasons why the SM is incomplete. Two of the most plausible theories that may solve the hierarchy problem are the Randall-Sundrum (RS) models and supersymmetry (SUSY). RS models usually suffer from strong flavor constraints, while SUSY models produce extra degrees of freedom that need to be hidden from current experiments. To show the importance of infrared (IR) physics constraints, we discuss the flavor bounds on the anarchic RS model in both the lepton and quark sectors. For SUSY models, we discuss the difficulties in obtaining a phenomenologically allowed gaugino mass, its relation to R-symmetry breaking, and how to build a model that avoids this problem. For the neutrino mass problem, we discuss the idea of generating small neutrino masses using compositeness. By requiring successful leptogenesis and the existence of warm dark matter (WDM), we can set various constraints on the hidden composite sector. Finally, to give an example of model-independent bounds from collider experiments, we show how to constrain the DM–SM particle interactions using collider results with an effective coupling description.
Safety constraints applied to an adaptive Bayesian condition-based maintenance optimization model
International Nuclear Information System (INIS)
Flage, Roger; Coit, David W.; Luxhøj, James T.; Aven, Terje
2012-01-01
A model is described that determines an optimal inspection and maintenance scheme for a deteriorating unit with a stochastic degradation process with independent and stationary increments and for which the parameters are uncertain. This model and the resulting maintenance plans offer some distinct benefits compared to prior research because the uncertainty of the degradation process is accommodated by a Bayesian approach and two new safety constraints have been applied to the problem: (1) with a given subjective probability (degree of belief), the limiting relative frequency of one or more failures during a fixed time interval is bounded; or (2) the subjective probability of one or more failures during a fixed time interval is bounded. In the model, the parameter(s) of a condition-based inspection scheduling function and a preventive replacement threshold are jointly optimized upon each replacement and inspection so as to minimize the expected long-run cost per unit of time, while also respecting one of the specified safety constraints. A numerical example is included to illustrate the effect of imposing each of the two different safety constraints.
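The policy structure this abstract describes — periodic inspection of a degrading unit, a preventive-replacement threshold, and a cost-rate objective minimized subject to a bound on failure frequency — can be caricatured with a small Monte Carlo sketch. Everything below (exponential degradation increments, the cost figures, the function names) is an illustrative assumption, not the authors' model:

```python
import random

def simulate_cost_rate(threshold, failure_level=10.0, inspect_dt=1.0,
                       c_inspect=1.0, c_prevent=5.0, c_fail=50.0,
                       horizon=500.0, seed=0):
    """Estimate long-run cost per unit time and failure frequency for one
    policy.  Degradation accrues in independent, stationary exponential
    increments between inspections; crossing `threshold` triggers preventive
    replacement, crossing `failure_level` counts as a failure."""
    rng = random.Random(seed)
    t = x = cost = 0.0
    failures = 0
    while t < horizon:
        t += inspect_dt
        x += rng.expovariate(1.0)      # stationary independent increment
        cost += c_inspect
        if x >= failure_level:         # failure discovered at inspection
            cost += c_fail
            failures += 1
            x = 0.0                    # corrective replacement: good-as-new
        elif x >= threshold:           # preventive replacement
            cost += c_prevent
            x = 0.0
    return cost / horizon, failures / horizon

def best_threshold(max_fail_rate=0.05, candidates=(4, 5, 6, 7, 8, 9)):
    """Minimize estimated cost rate subject to the safety constraint that
    the limiting failure frequency stays below `max_fail_rate`."""
    feasible = []
    for th in candidates:
        rate, fail = simulate_cost_rate(th)
        if fail <= max_fail_rate:
            feasible.append((rate, th))
    return min(feasible)[1]
```

A fuller treatment along the paper's lines would place a prior on the increment-rate parameter, update it after each inspection, and re-optimize the threshold upon each replacement — the Bayesian, adaptive part of the model.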
Stochastic population dynamics under resource constraints
Energy Technology Data Exchange (ETDEWEB)
Gavane, Ajinkya S., E-mail: ajinkyagavane@gmail.com; Nigam, Rahul, E-mail: rahul.nigam@hyderabad.bits-pilani.ac.in [BITS Pilani Hyderabad Campus, Shameerpet, Hyd - 500078 (India)
2016-06-02
This paper investigates the population growth of a certain species in which every generation reproduces thrice over a predefined period of time, under certain constraints on the resources needed for survival of the population. We study the survival period of a species by randomizing the reproduction probabilities within a window at the same predefined ages, while the resources are produced by the working force of the population at a variable rate. This randomness in the reproduction rate makes the population growth stochastic in nature, and one cannot predict the exact form of the evolution. Hence we study the growth by running simulations for such a population and taking an ensemble average over 500 to 5000 such simulations, as per the need. While the population reproduces in a stochastic manner, we have implemented a constraint on the amount of resources available to the population. This is important to make the simulations more realistic. The rate of resource production is then tuned to find the rate which suits the survival of the species. We also compute the mean lifetime of the species corresponding to different resource production rates. A study of these outcomes in the parameter space defined by the reproduction probabilities and the rate of resource production is carried out.
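A minimal sketch of this kind of simulation follows — reproduction probabilities randomized within a window, resources produced by the population and consumed for survival, and a mean lifetime taken over an ensemble of runs. All parameters, the reproduction schedule, and the resource bookkeeping are illustrative assumptions, not the authors' setup:

```python
import random

def simulate_lifetime(resource_rate=0.8, p_window=(0.4, 0.6), horizon=200, seed=0):
    """One stochastic realization: each step every individual reproduces with
    a probability drawn uniformly from `p_window`, the population produces
    `resource_rate` resources per head and consumes one resource per head,
    and the species survives until resources run out."""
    rng = random.Random(seed)
    population, resources = 10, 50.0
    for t in range(1, horizon + 1):
        p = rng.uniform(*p_window)            # randomized reproduction probability
        population += sum(1 for _ in range(population) if rng.random() < p)
        resources += resource_rate * population - population  # production minus consumption
        if resources <= 0:
            return t                          # survival time of this realization
        if population > 100_000:              # runaway growth: resources clearly sufficient
            return horizon
    return horizon

def mean_lifetime(runs=500, **kwargs):
    # ensemble average over many stochastic realizations, as in the paper
    return sum(simulate_lifetime(seed=s, **kwargs) for s in range(runs)) / runs
```

Sweeping `resource_rate` and `p_window` then maps out the survival behaviour in the parameter space the abstract refers to.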
Non-technical constraints to eradication: the Italian experience.
Moda, Giuliana
2006-02-25
Although technical constraints to eradication of bovine tuberculosis are well-recognised, non-technical constraints can also delay progress towards eradication, leading to inefficiency and increased programme costs. This paper seeks to analyse the main non-technical constraints that can interfere with the successful implementation of tuberculosis eradication plans, based on experiences from an area of high tuberculosis prevalence in Regione Piemonte, Italy. The main social and economic constraints faced in the past 20 years are reviewed, including a social reluctance to recognise the importance of seeking eradication as the goal of disease control, effective communication of technical issues, the training and the organization of veterinary services, the relationship between the regional authority and farmers and their representatives, and data management and epidemiological reporting. The paper analyses and discusses the solutions that were applied in Regione Piemonte and the benefits that were obtained. Tuberculosis eradication plans are one of the most difficult tasks of the Veterinary Animal Health Services, and non-technical constraints must be considered when progress towards eradication is less than expected. Organizational and managerial resources can help to overcome social or economic obstacles, provided the veterinary profession is willing to address technical, but also non-technical, constraints to eradication.
Relaxations of semiring constraint satisfaction problems
CSIR Research Space (South Africa)
Leenen, L
2007-03-01
The Semiring Constraint Satisfaction Problem (SCSP) framework is a popular approach for the representation of partial constraint satisfaction problems. In this framework preferences can be associated with tuples of values of the variable domains...
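As a concrete illustration of the framework (not the authors' formulation): an SCSP fixes a semiring whose multiplicative operation combines the preferences of individual constraints and whose additive operation compares candidate solutions. A brute-force sketch instantiated with the fuzzy semiring ⟨[0,1], max, min⟩, with all names illustrative:

```python
from itertools import product

def best_solution(domains, constraints, times=min, plus=max, one=1.0):
    """Enumerate all assignments, combine each constraint's preference with
    the semiring `times`, and keep the assignment that `plus` prefers.
    Defaults give the fuzzy semiring ([0,1], max, min); `constraints` is a
    list of (scope, function) pairs mapping value tuples to preferences."""
    variables = sorted(domains)
    best_assign, best_pref = None, None
    for values in product(*(domains[v] for v in variables)):
        assign = dict(zip(variables, values))
        pref = one
        for scope, fn in constraints:
            pref = times(pref, fn(*(assign[v] for v in scope)))
        # `plus` acts as the comparison: pref is strictly better if it wins
        if best_pref is None or plus(best_pref, pref) != best_pref:
            best_assign, best_pref = assign, pref
    return best_assign, best_pref
```

Swapping in `times=lambda a, b: a + b`, `plus=min`, `one=0` would give the weighted semiring instead; partial satisfaction falls out because preferences need not be all-or-nothing.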
Instabilities constraint and relativistic mean field parametrization
International Nuclear Information System (INIS)
Sulaksono, A.; Kasmudin; Buervenich, T.J.; Reinhard, P.-G.; Maruhn, J.A.
2011-01-01
Two parameter sets (Set 1 and Set 2) of the standard relativistic mean field (RMF) model plus an additional vector isoscalar nonlinear term, constrained by a set of criteria determined by symmetric nuclear matter stabilities at high densities due to longitudinal and transversal particle–hole excitation modes, are investigated. In the latter parameter set, δ meson and isoscalar as well as isovector tensor contributions are included. The effects in selected finite nuclei and nuclear matter properties predicted by both parameter sets are systematically studied and compared with the ones predicted by well-known RMF parameter sets. The vector isoscalar nonlinear term addition and instability constraints have reasonably good effects on the high-density properties of the isoscalar sector of nuclear matter and certain finite nuclei properties. However, even though the δ meson and isovector tensor are included, the incompatibility with the constraints from some experimental data in certain nuclear properties at the saturation point, the excessive stiffness of the isovector nuclear matter equation of state at high densities, and the incorrect isotonic trend in the binding energies of finite nuclei are still encountered. It is shown that the problem may be remedied if we introduce additional nonlinear terms not only in the isovector but also in the isoscalar vector sector. (author)
An integrated optimum design approach for high speed prop-rotors including acoustic constraints
Chattopadhyay, Aditi; Wells, Valana; Mccarthy, Thomas; Han, Arris
1993-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop multilevel decomposition optimization process. The procedures involve the consideration of blade-aeroelastic aerodynamic performance, structural-dynamic design requirements, and acoustics. Further, since the design involves consideration of several different objective functions, multiobjective function formulation techniques are developed.
Theory including future not excluded
DEFF Research Database (Denmark)
Nagao, K.; Nielsen, H.B.
2013-01-01
We study a complex action theory (CAT) whose path runs over not only the past but also the future. We show that, if we regard a matrix element defined in terms of the future state at time T and the past state at time TA as an expectation value in the CAT, then we are allowed to have the Heisenberg equation......, Ehrenfest's theorem, and the conserved probability current density. In addition, we show that the expectation value at the present time t of a future-included theory for large T - t and large t - TA corresponds to that of a future-not-included theory with a proper inner product for large t - TA. Hence, the CAT...
Revisiting big-bang nucleosynthesis constraints on long-lived decaying particles
Kawasaki, Masahiro; Kohri, Kazunori; Moroi, Takeo; Takaesu, Yoshitaro
2018-01-01
We study the effects of long-lived massive particles, which decayed during the big-bang nucleosynthesis (BBN) epoch, on the primordial abundance of light elements. Compared to previous studies, (i) the reaction rates of standard BBN reactions are updated, (ii) the most recent observational data on the light element abundances and cosmological parameters are used, (iii) the effects of the interconversion of energetic nucleons at the time of inelastic scattering with background nuclei are considered, and (iv) the effects of the hadronic shower induced by energetic antinucleons are included. We compare the theoretical predictions of the primordial abundances of light elements with the latest observational constraints, and we derive upper bounds on the relic abundance of the decaying particle as a function of its lifetime. We also apply our analysis to an unstable gravitino, the superpartner of the graviton in supersymmetric theories, and obtain constraints on the reheating temperature after inflation.
The Ambiguous Role of Constraints in Creativity
DEFF Research Database (Denmark)
Biskjær, Michael Mose; Onarheim, Balder; Wiltschnig, Stefan
2011-01-01
The relationship between creativity and constraints is often described in the literature either in rather imprecise, general concepts or in relation to very specific domains. Cross-domain and cross-disciplinary takes on how the handling of constraints influences creative activities are rare. In t......-disciplinary research into the ambiguous role of constraints in creativity....
Learning and Parallelization Boost Constraint Search
Yun, Xi
2013-01-01
Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…
A general treatment of dynamic integrity constraints
de Brock, EO
This paper introduces a general, set-theoretic model for expressing dynamic integrity constraints, i.e., integrity constraints on the state changes that are allowed in a given state space. In a managerial context, such dynamic integrity constraints can be seen as representations of "real world"
Modeling external constraints: Applying expert systems to nuclear plants
International Nuclear Information System (INIS)
Beck, C.E.; Behera, A.K.
1993-01-01
Artificial Intelligence (AI) applications in nuclear plants have received much attention over the past decade. Specific applications that have been addressed include development of models and knowledge bases, plant maintenance, operations, procedural guidance, risk assessment, and design tools. This paper examines the issue of external constraints, with a focus on the use of AI and expert systems as design tools. It also provides several suggested methods for addressing these constraints within the AI framework. These methods include a State Matrix scheme, a layered structure for the knowledge base, and application of the dynamic parameter concept
Software-Enabled Project Management Techniques and Their Relationship to the Triple Constraints
Elleh, Festus U.
2013-01-01
This study investigated the relationship between software-enabled project management techniques and the triple constraints (time, cost, and scope). There was the dearth of academic literature that focused on the relationship between software-enabled project management techniques and the triple constraints (time, cost, and scope). Based on the gap…
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
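The commute-time quantity this abstract builds on has a standard closed form via the Moore-Penrose pseudoinverse of the graph Laplacian: C(i, j) = vol(G)·(L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ), which is how the Laplacian eigenspectrum enters. A generic sketch of that formula (not the authors' code):

```python
import numpy as np

def commute_times(adj):
    """Pairwise commute-time matrix for a weighted adjacency matrix `adj`,
    computed from the pseudoinverse of the graph Laplacian.  vol(G) is the
    total degree; the result is symmetric with a zero diagonal."""
    adj = np.asarray(adj, dtype=float)
    degree = adj.sum(axis=1)
    laplacian = np.diag(degree) - adj
    lp = np.linalg.pinv(laplacian)    # the Laplacian eigenspectrum enters here
    vol = degree.sum()                # vol(G) = sum of node degrees
    d = np.diag(lp)
    return vol * (d[:, None] + d[None, :] - 2 * lp)
```

On an unweighted path graph 0–1–2, for instance, the commute time between the two endpoints is twice that between adjacent nodes, reflecting the single longer path connecting them.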
Relaxation of the lower frit loading constraint for DWPF process control
International Nuclear Information System (INIS)
Brown, K.G.
2000-01-01
both prediction uncertainty (including bias) and measurement uncertainty to a confidence level of 95%. However, it was discovered during Integrated DWPF Melter System (IDMS) testing that under certain conditions, DWPF glasses were prone to phase separation resulting in glasses that had noticeably unpredictable and, at times, unacceptable leaching behavior. This document details an evaluation of the continued applicability of the low frit constraint for DWPF Slurry Mix Evaporator (SME) acceptability determinations
Formal Constraints on Memory Management for Composite Overloaded Operations
Directory of Open Access Journals (Sweden)
Damian W.I. Rouson
2006-01-01
The memory management rules for abstract data type calculus presented by Rouson, Morris & Xu [15] are recast as formal statements in the Object Constraint Language (OCL) and applied to the design of a thermal energy equation solver. One set of constraints eliminates memory leaks observed in composite overloaded expressions with three current Fortran 95/2003 compilers. A second set of constraints ensures economical memory recycling. The constraints are preconditions, postconditions and invariants on overloaded operators and the objects they receive and return. It is demonstrated that systematic run-time assertion checking inspired by the formal constraints facilitated the pinpointing of an exceptionally hard-to-reproduce compiler bug. It is further demonstrated that the interplay between OCL's modeling capabilities and Fortran's programming capabilities led to a conceptual breakthrough that greatly improved the readability of our code by facilitating operator overloading. The advantages and disadvantages of our memory management rules are discussed in light of other published solutions [11,19]. Finally, it is demonstrated that the run-time assertion checking has a negligible impact on performance.
Constraint Specialisation in Horn Clause Verification
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick
2015-01-01
We present a method for specialising the constraints in constrained Horn clauses with respect to a goal. We use abstract interpretation to compute a model of a query-answer transformation of a given set of clauses and a goal. The effect is to propagate the constraints from the goal top-down and propagate answer constraints bottom-up. Our approach does not unfold the clauses at all; we use the constraints from the model to compute a specialised version of each clause in the program. The approach is independent of the abstract domain and the constraints theory underlying the clauses. Experimental...
Constraint specialisation in Horn clause verification
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick
2017-01-01
We present a method for specialising the constraints in constrained Horn clauses with respect to a goal. We use abstract interpretation to compute a model of a query–answer transformed version of a given set of clauses and a goal. The constraints from the model are then used to compute a specialised version of each clause. The effect is to propagate the constraints from the goal top-down and propagate answer constraints bottom-up. The specialisation procedure can be repeated to yield further specialisation. The approach is independent of the abstract domain and the constraint theory...
Nuclear energy and external constraints
International Nuclear Information System (INIS)
Lattes, R.; Thiriet, L.
1983-01-01
The structural factors of this crisis probably predominate over factors arising out of the economic situation, even if explanations vary in this respect. In this article devoted to nuclear energy, a possible means of loosening external constraints, the current international economic environment is first outlined: the context in which the policies of industrialized countries, and therefore that of France, must be developed. An examination of the possible role of energy policies in general, and nuclear policies in particular, as an instrument of economic policy in providing a partial solution to this crisis will then enable us to quantitatively evaluate the effects of such policies at a national level
Legal, ethical, and economic constraints
International Nuclear Information System (INIS)
Libassi, F.P.; Donaldson, L.F.
1980-01-01
This paper considers the legal, ethical, and economic constraints to developing a comprehensive knowledge of the biological effects of ionizing radiation. These constraints are not fixed and immutable; rather they are determined by the political process. Political issues cannot be evaded. The basic objective of developing a comprehensive knowledge about the biological effects of ionizing radiation exists as an objective not only because we wish to add to the store of human knowledge but also because we have important use for that knowledge. It will assist our decision-makers to make choices that affect us all. These choices require both hard factual information and application of political judgment. Research supplies some of the hard factual information and should be as free as possible from political influence in its execution. At the same time, the political choices that must be made influence the direction and nature of the research program as a whole. Similarly, the legal, ethical, and economic factors that constrain our ability to expand knowledge through research reflect a judgment by political agents that values other than expansion of knowledge should be recognized and given effect
Updated constraints on the cosmic string tension
International Nuclear Information System (INIS)
Battye, Richard; Moss, Adam
2010-01-01
We reexamine the constraints on the cosmic string tension from cosmic microwave background (CMB) and matter power spectra, and also from limits on a stochastic background of gravitational waves provided by pulsar timing. We discuss the different approaches to modeling string evolution and radiation. In particular, we show that the unconnected segment model can describe CMB spectra expected from thin string (Nambu) and field theory (Abelian-Higgs) simulations using the computed values for the correlation length, rms string velocity and small-scale structure relevant to each variety of simulation. Applying the computed spectra in a fit to CMB and SDSS data we find that Gμ/c² < [...]×10⁻⁷ (2σ) if the Nambu simulations are correct and Gμ/c² < [...]×10⁻⁷ in the Abelian-Higgs case. The degeneracy between Gμ/c² and the power spectrum slope n_S is substantially reduced from previous work. Inclusion of constraints on the baryon density from big bang nucleosynthesis (BBN) implies that n_S < [...]. Allowing for variation in Gμ/c² and the loop production size, α, we find that Gμ/c² < [...]×10⁻⁷ for αc²/(ΓGμ) << 1 and Gμ/c² < [...]×10⁻¹¹/α for αc²/(ΓGμ) >> 1.
New formulation of Horava-Lifshitz quantum gravity as a master constraint theory
Energy Technology Data Exchange (ETDEWEB)
Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China); Yang Jinsong, E-mail: Yangksong@gmail.com [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China); Yu, Hoi-Lai, E-mail: hlyu@phys.sinica.edu.tw [Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan (China)
2011-07-04
Both projectable and non-projectable versions of Horava-Lifshitz gravity face serious challenges. In the non-projectable version, the constraint algebra is seemingly inconsistent. The projectable version lacks a local Hamiltonian constraint, thus allowing for an extra scalar mode which can be problematic. A new formulation of non-projectable Horava-Lifshitz gravity, naturally realized as a representation of the master constraint algebra studied by loop quantum gravity researchers, is presented. This yields a consistent canonical theory with first class constraints. It captures the essence of Horava-Lifshitz gravity in retaining only spatial diffeomorphisms (instead of full space-time covariance) as the physically relevant non-trivial gauge symmetry; at the same time the local Hamiltonian constraint needed to eliminate the extra mode is equivalently enforced by the master constraint.
Directory of Open Access Journals (Sweden)
Hea-Jung Kim
2017-06-01
This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with a stochastic (or uncertain) constraint on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust to heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with a stochastic constraint are intractable, a Bayesian method is pursued using a Markov chain Monte Carlo (MCMC) sampling approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using this prior. The paper also proposes an MCMC method for Bayesian inference on SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
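A minimal sketch of the kind of constrained posterior sampling involved, restricted to the plain log-normal member of the class: the hard truncation of the prior below is only a crude stand-in for the paper's two-stage MaxEnt prior, and all parameter values, thresholds, and the synthetic data are illustrative assumptions.

```python
import math, random
random.seed(0)

# synthetic log-normal failure times (true mu = 1.0, sigma = 0.5)
data = [math.exp(random.gauss(1.0, 0.5)) for _ in range(200)]

def loglik(mu, sigma):
    """Log-likelihood of log-normal failure times."""
    return sum(-math.log(t * sigma * math.sqrt(2 * math.pi))
               - (math.log(t) - mu) ** 2 / (2 * sigma ** 2) for t in data)

def reliability(mu, sigma, t0):
    """P(T > t0) for a log-normal failure-time model."""
    z = (math.log(t0) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

def log_post(mu, sigma, t0=1.5, r_min=0.5):
    # crude stand-in for a constrained prior: require R(t0) >= r_min
    if sigma <= 0:
        return float("-inf")
    if reliability(mu, sigma, t0) < r_min:
        return float("-inf")
    return loglik(mu, sigma)

# random-walk Metropolis over (mu, sigma)
mu, sigma = 0.5, 1.0
lp = log_post(mu, sigma)
samples = []
for _ in range(5000):
    mu_p, sigma_p = mu + random.gauss(0, 0.05), sigma + random.gauss(0, 0.05)
    lp_p = log_post(mu_p, sigma_p)
    if lp_p - lp > math.log(random.random()):   # accept/reject step
        mu, sigma, lp = mu_p, sigma_p, lp_p
    samples.append((mu, sigma))

post_mu = sum(m for m, _ in samples[2000:]) / len(samples[2000:])
print(f"posterior mean of mu ~ {post_mu:.2f}")
```

Proposals violating the reliability constraint receive zero posterior density and are rejected, so the chain samples only the constrained region, which is the basic mechanism a constrained prior enforces.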
Paul, Amit K; Hase, William L
2016-01-28
A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model, trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including possible multiple returns to CH4*. With this ZPE constraint, the dissociation of CH4* is exponential in time, as expected for intrinsic RRKM dynamics, and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical, not the quantum, RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important, and the rate constant from the ZPE-constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics and for connecting product and reactant/product quantum energy levels in chemical dynamics simulations.
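The return-and-retry procedure can be caricatured with a toy stochastic model; the energies, the trial rate, and the uniform energy partition below are arbitrary assumptions for illustration, not the actual trajectory dynamics of the paper.

```python
import random
random.seed(1)

ZPE_CH3 = 0.8   # hypothetical CH3 zero-point energy (arbitrary units)
E_TOTAL = 3.0   # hypothetical total internal energy of CH4*
K_TRIAL = 0.5   # hypothetical per-attempt dissociation rate (1/time unit)

def lifetime_with_zpe_constraint():
    """Accumulate time over repeated dissociation attempts until the
    nascent CH3 fragment carries at least its zero-point energy."""
    t = 0.0
    while True:
        t += random.expovariate(K_TRIAL)       # wait for the next attempt
        e_ch3 = random.uniform(0.0, E_TOTAL)   # energy landing in CH3
        if e_ch3 >= ZPE_CH3:
            return t                           # dissociates with ZPE intact
        # otherwise the trajectory is returned to CH4* and tries again

lifetimes = [lifetime_with_zpe_constraint() for _ in range(20000)]
k_eff = len(lifetimes) / sum(lifetimes)
print(f"effective rate constant ~ {k_eff:.3f} per time unit")
```

Because a geometric number of exponential waiting times is again exponentially distributed, the lifetime distribution in this toy model stays exponential with rate K_TRIAL times the probability of satisfying the ZPE check, mirroring the intrinsic RRKM behaviour the abstract describes.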
Cosmological constraints on the neutron lifetime
Energy Technology Data Exchange (ETDEWEB)
Salvati, L.; Pagano, L.; Melchiorri, A. [Physics Department, Università di Roma 'La Sapienza', Piazzale Aldo Moro 2, 00185, Rome (Italy); Consiglio, R., E-mail: laura.salvati@roma1.infn.it, E-mail: luca.pagano@roma1.infn.it, E-mail: rconsiglio@na.infn.it, E-mail: alessandro.melchiorri@roma1.infn.it [Physics Department, Università di Napoli 'Federico II', Complesso Universitario Monte S. Angelo, Via Cintia, I-80126 Napoli (Italy)
2016-03-01
We derive new constraints on the neutron lifetime based on the recent Planck 2015 observations of temperature and polarization anisotropies of the CMB. Under the assumption of standard Big Bang Nucleosynthesis, we show that Planck data constrain the neutron lifetime to τ_n = (907±69) s at 68% c.l. Moreover, by including the direct measurements of primordial helium abundance of Aver et al. (2015) and Izotov et al. (2014), we show that cosmological data provide the more stringent constraints τ_n = (875±19) s and τ_n = (921±11) s, respectively. The latter appears to be in tension with the neutron lifetime value quoted by the Particle Data Group (τ_n = (880.3±1.1) s). Future CMB surveys such as COrE+, in combination with a weak lensing survey such as EUCLID, could constrain the neutron lifetime to a precision of ∼6 s.
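The quoted tension can be made concrete with a quick significance calculation; this sketch assumes the two measurements are independent and Gaussian, which is the usual back-of-the-envelope convention rather than anything stated in the abstract.

```python
import math

def tension_sigma(v1, e1, v2, e2):
    """Gaussian tension between two independent measurements, in sigma."""
    return abs(v1 - v2) / math.sqrt(e1**2 + e2**2)

pdg = (880.3, 1.1)      # Particle Data Group value, seconds
izotov = (921.0, 11.0)  # CMB + Izotov et al. (2014) helium abundance
aver = (875.0, 19.0)    # CMB + Aver et al. (2015) helium abundance

print(f"Izotov vs PDG: {tension_sigma(*izotov, *pdg):.1f} sigma")
print(f"Aver   vs PDG: {tension_sigma(*aver, *pdg):.1f} sigma")
```

The Izotov-based constraint sits roughly 3.7σ from the PDG value, while the Aver-based one is fully consistent with it, which is why only the former is flagged as being in tension.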
Emerging carbon constraints for corporate risk management
International Nuclear Information System (INIS)
Busch, Timo; Hoffmann, Volker H.
2007-01-01
While discussions about global sustainability challenges abound, the financial risks they entail, though important, have received less attention. We suggest that corporate risk assessments should include sustainability-related aspects, especially in relation to the natural environment, and should encompass the flux of critical materials within a company's value chain. Such a comprehensive risk assessment takes into account input- as well as output-related factors. In this paper, we focus on the flux of carbon and define carbon constraints that emerge due to the disposition of fossil fuels in the input dimension and due to direct and indirect climate change effects in the output dimension. We review the literature regarding the financial consequences of carbon constraints at the macroeconomic, sector, and company levels. We conclude that: a) financial consequences appear to be asymmetrically distributed between and within sectors, b) the individual risk exposure of a company depends on the intensity of, and its dependency on, carbon-based materials and energy, and c) financial markets have only started to incorporate these aspects into their valuations. The paper ends with recommendations on how to incorporate our results into an integrated carbon risk management framework. (author)
Water Constraints in an Electric Sector Capacity Expansion Model
Energy Technology Data Exchange (ETDEWEB)
Macknick, Jordan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cohen, Stuart [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Newmark, Robin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Martinez, Andrew [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sullivan, Patrick [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tidwell, Vince [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-07-17
This analysis provides a description of the first U.S. national electricity capacity expansion model to incorporate water resource availability and costs as a constraint on the future development of the electricity sector. The Regional Energy Deployment System (ReEDS) model was modified to incorporate water resource availability constraints and costs in each of its 134 Balancing Area (BA) regions, along with differences in costs and efficiencies of cooling systems. Water resource availability and cost data are from recently completed research at Sandia National Laboratories (Tidwell et al. 2013b). Scenarios analyzed include a business-as-usual scenario without water constraints as well as four scenarios that include water constraints and allow different cooling systems and types of water resources to be utilized. This analysis provides insight into where water resource constraints could affect the choice, configuration, or location of new electricity technologies.
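The effect of a water budget on technology choice can be illustrated with a toy version of such a constrained capacity expansion; the technology names, costs, and water intensities below are invented for illustration and do not come from ReEDS or the Sandia data.

```python
from itertools import product

# hypothetical technology data: cost ($/MW) and water withdrawal (m^3/MW)
TECHS = {
    "gas_wet_cooling": {"cost": 1.0, "water": 10.0},
    "gas_dry_cooling": {"cost": 1.2, "water": 0.5},
    "wind":            {"cost": 1.5, "water": 0.0},
}
DEMAND = 100        # MW of new capacity needed in this balancing area
WATER_BUDGET = 400  # m^3 of water available in this balancing area

def cheapest_mix(step=10):
    """Enumerate capacity mixes on a coarse grid and return the least-cost
    mix that meets demand without exceeding the water budget."""
    names = list(TECHS)
    levels = range(0, DEMAND + step, step)
    best, best_cost = None, float("inf")
    for mix in product(levels, repeat=len(names)):
        if sum(mix) != DEMAND:
            continue  # must meet demand exactly
        water = sum(m * TECHS[n]["water"] for m, n in zip(mix, names))
        if water > WATER_BUDGET:
            continue  # water constraint binds here
        cost = sum(m * TECHS[n]["cost"] for m, n in zip(mix, names))
        if cost < best_cost:
            best, best_cost = dict(zip(names, mix)), cost
    return best, best_cost

mix, cost = cheapest_mix()
print(mix, cost)
```

Without the water budget the cheapest plan would be all wet-cooled gas; with it, the model is pushed toward dry cooling, which is the qualitative behaviour a water-constrained expansion model exhibits.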
Uniform discretizations: a quantization procedure for totally constrained systems including gravity
Energy Technology Data Exchange (ETDEWEB)
Campiglia, Miguel [Instituto de Fisica, Facultad de Ciencias, Igua 4225, esq. Mataojo, Montevideo (Uruguay); Di Bartolo, Cayetano [Departamento de Fisica, Universidad Simon BolIvar, Aptdo. 89000, Caracas 1080-A (Venezuela); Gambini, Rodolfo [Instituto de Fisica, Facultad de Ciencias, Igua 4225, esq. Mataojo, Montevideo (Uruguay); Pullin, Jorge [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States)
2007-05-15
We present a new method for the quantization of totally constrained systems, including general relativity. The method consists of constructing discretized theories that have a well defined and controlled continuum limit. The discrete theories are constraint-free and can be readily quantized. This provides a framework in which one can introduce a relational notion of time and that nevertheless approximates the theory of interest in a well defined fashion. The method is equivalent to the group averaging procedure for many systems where the latter makes sense, and provides a generalization otherwise. In the continuum limit it can be shown to contain, under certain assumptions, the 'master constraint' of the 'Phoenix project'. It also provides a correspondence principle with the classical theory that does not require considering the semiclassical limit.
Time Extensions of Petri Nets for Modelling and Verification of Hard Real-Time Systems
Directory of Open Access Journals (Sweden)
Tomasz Szmuc
2002-01-01
The main aim of the paper is to present time extensions of Petri nets appropriate for the modelling and analysis of hard real-time systems. It is assumed that the extensions must provide a model of time flow, an ability to force a transition to fire within a stated timing constraint (the so-called strong firing rule), and timing constraints represented by intervals. The survey includes extensions of classical Place/Transition Petri nets as well as those applied to high-level Petri nets. The expressiveness of each time extension is illustrated using a simple hard real-time system. The paper also includes a brief description of analysis and verification methods related to the extensions, and a survey of software tools supporting modelling and analysis of the considered Petri nets.
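The interval timing constraints and the strong firing rule can be sketched in a few lines; the classes, the tiny net, and the interval values below are hypothetical, not taken from any of the surveyed extensions or tools.

```python
# Minimal sketch of an interval time Petri net: each transition carries an
# [eft, lft] interval, and under the strong firing rule an enabled
# transition must fire no later than lft after becoming enabled.

class Transition:
    def __init__(self, name, pre, post, eft, lft):
        self.name, self.pre, self.post = name, pre, post
        self.eft, self.lft = eft, lft  # earliest/latest firing times

def enabled(marking, t):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in t.pre.items())

def fire(marking, t):
    """Consume input tokens and produce output tokens."""
    m = dict(marking)
    for p, n in t.pre.items():
        m[p] -= n
    for p, n in t.post.items():
        m[p] = m.get(p, 0) + n
    return m

def check_firing(t, enabled_at, fired_at):
    """Strong firing rule: the delay must lie within [eft, lft]."""
    delay = fired_at - enabled_at
    return t.eft <= delay <= t.lft

# a deadline monitor: 'work' must complete within 2..5 time units
work = Transition("work", pre={"ready": 1}, post={"done": 1}, eft=2, lft=5)
m0 = {"ready": 1}
assert enabled(m0, work)
m1 = fire(m0, work)
print(m1, check_firing(work, enabled_at=0, fired_at=4),
      check_firing(work, enabled_at=0, fired_at=6))
```

Firing at time 4 satisfies the interval [2, 5], while firing at time 6 violates the deadline, which is exactly the kind of hard real-time property these extensions let a verifier check.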