WorldWideScience

Sample records for average sinr constraints

  1. Tightness of Semidefinite Programming Relaxation to Robust Transmit Beamforming with SINR Constraints

    Directory of Open Access Journals (Sweden)

    Yanjun Wang

    2013-01-01

    Full Text Available This paper considers a multiuser transmit beamforming problem under uncertain channel state information (CSI) subject to SINR constraints in a downlink multiuser MISO system. A robust transmit beamforming formulation is proposed that minimizes the transmission power subject to worst-case signal-to-interference-plus-noise ratio (SINR) constraints at the receivers. The difficulty is that the worst-case SINR constraints correspond to an infinite number of nonconvex quadratic constraints. In this paper, a natural semidefinite programming (SDP) relaxation is proposed to solve the robust beamforming problem. The main contribution of this paper is to establish the tightness of the SDP relaxation under a proper assumption, meaning that the relaxation is guaranteed to yield rank-one solutions under that assumption. The SDP relaxation then provides globally optimal solutions of the original robust transmit beamforming problem under the proper assumption and norm-constrained CSI errors. Simulation results confirm the theoretical findings and also provide a counterexample whose solutions are not rank one; its existence shows that, unless an assumption such as the one proposed in this paper holds, the SDP relaxation cannot be guaranteed to yield rank-one solutions.
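
    As a minimal illustration of the quantity these worst-case constraints bound, the per-user downlink SINR for given channels and beamformers can be computed as follows (all dimensions and values below are assumed for the sketch, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 3, 4      # users and BS antennas (assumed sizes)
sigma2 = 0.1     # receiver noise power (assumed)

# rows of H are user channels h_k; columns of W are beamformers w_k
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

def sinr(H, W, sigma2):
    """SINR of user k: |h_k^H w_k|^2 / (sum_{j != k} |h_k^H w_j|^2 + sigma2)."""
    G = np.abs(H @ W) ** 2            # G[k, j] = |h_k^H w_j|^2
    signal = np.diag(G)
    interference = G.sum(axis=1) - signal
    return signal / (interference + sigma2)

print(sinr(H, W, sigma2))             # one SINR value per user
```

    The robust problem in the record additionally takes the worst case of this quantity over a norm-bounded CSI error around each h_k.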

  2. Energy Efficiency and SINR Maximization Beamformers for Spectrum Sharing With Sensing Information

    KAUST Repository

    Alabbasi, AbdulRahman; Rezki, Zouheir; Shihada, Basem

    2014-01-01

    … an underlaying communication using adaptive beamforming schemes combined with sensing information to achieve optimal energy-efficient systems. The proposed schemes maximize EE and SINR metrics subject to cognitive radio and quality-of-service constraints …

  3. Near-optimal Downlink precoding of a MISO system for a secondary network under the SINR constraints of a primary network

    KAUST Repository

    Park, Kihong; Alouini, Mohamed-Slim

    2013-01-01

    … signal-to-interference-plus-noise-ratio constraints on the primary network in order to guarantee the quality-of-service for the latter network. While the interference due to the secondary transmission in the conventional underlay CR approach may severely degrade the performance of the primary …

  4. Near-optimal Downlink precoding of a MISO system for a secondary network under the SINR constraints of a primary network

    KAUST Repository

    Park, Kihong

    2013-04-01

    In this paper, we study a multiple-input single-output cognitive radio (CR) system where only the primary base station (BS) has multiple antennas. We consider a rate maximization problem of the secondary network under signal-to-interference-plus-noise-ratio constraints on the primary network in order to guarantee the quality-of-service for the latter network. While the interference due to the secondary transmission in the conventional underlay CR approach may severely degrade the performance of the primary network, we propose a primary BS-aided approach in which the primary BS helps relay the secondary users' signals instead of allowing them to communicate with each other via a direct path. In addition, an algorithm to find a near-optimal beamforming solution at the primary BS is proposed. Finally, based on some selected numerical results, we show that the proposed scheme outperforms the conventional underlay CR configuration over a wide transmit power range. © 2013 IEEE.

  5. Decentralized SINR Balancing in Cognitive Radio Networks

    KAUST Repository

    Dhifallah, Oussama Najeeb

    2016-07-07

    This paper considers the downlink of a cognitive radio (CR) network formed by multiple primary and secondary transmitters, where each multi-antenna transmitter serves a pre-known set of single-antenna users. The paper assumes that the secondary and primary transmitters can simultaneously transmit their data over the same frequency bands, so as to achieve high system spectrum efficiency. The paper considers the downlink balancing problem of maximizing the minimum signal-to-interference-plus-noise ratio (SINR) of the secondary transmitters subject to both a total power constraint at the secondary transmitters and a maximum interference constraint at each primary user due to secondary transmissions. The paper proposes solving the problem using the alternating direction method of multipliers (ADMM), which leads to a distributed implementation through limited information exchange across the coupled secondary transmitters. The paper additionally proposes a solution that guarantees feasibility at each iteration. Simulation results demonstrate that the proposed solution converges to the centralized solution in a reasonable number of iterations.
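
    The ADMM machinery referenced here can be illustrated on a toy consensus problem: several nodes agree on a shared variable while exchanging only limited information, the same pattern used for the coupled secondary transmitters. This is a generic sketch with quadratic local costs and assumed values, not the paper's beamforming problem:

```python
import numpy as np

a = np.array([1.0, 4.0, 7.0, 10.0])   # local data held by 4 "transmitters" (assumed)
rho = 1.0                              # ADMM penalty parameter
x = np.zeros_like(a)                   # local copies of the shared variable
u = np.zeros_like(a)                   # scaled dual variables
z = 0.0                                # shared consensus variable

for _ in range(100):
    # local step: node i solves min_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
    x = (a + rho * (z - u)) / (1.0 + rho)
    # coordination step: only the sums x_i + u_i need to be exchanged
    z = np.mean(x + u)
    # dual update
    u = u + x - z

print(z)  # -> converges to mean(a) = 5.5
```

    Each node optimizes with purely local data; the averaging step is the only cross-node exchange, which is what makes the decentralized implementation possible.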

  6. SINR balancing in the downlink of cognitive radio networks with imperfect channel knowledge

    KAUST Repository

    Hanif, Muhammad Fainan; Smith, Peter J.; Alouini, Mohamed-Slim

    2010-01-01

    … an acceptable threshold with uncertain channel state information available at the CR base-station (BS). We optimize the beamforming vectors at the CR BS so that the worst user SINR is maximized and transmit power constraints at the CR BS and interference …

  7. SINR balancing in the downlink of cognitive radio networks with imperfect channel knowledge

    KAUST Repository

    Hanif, Muhammad Fainan

    2010-01-01

    In this paper we consider the problem of signal-to-interference-plus-noise ratio (SINR) balancing in the downlink of cognitive radio (CR) networks while simultaneously keeping interference levels at primary user (PU) receivers (RXs) below an acceptable threshold with uncertain channel state information available at the CR base-station (BS). We optimize the beamforming vectors at the CR BS so that the worst user SINR is maximized and transmit power constraints at the CR BS and interference constraints at the PU RXs are satisfied. With uncertainties in the channel bounded by a Euclidean ball, the semidefinite program (SDP) modeling the balancing problem is solved using the recently developed convex iteration technique without relaxing the rank constraints. Numerical simulations are conducted to show the effectiveness of the proposed technique in comparison to known approximations.

  8. Energy efficiency and SINR maximization beamformers for cognitive radio utilizing sensing information

    KAUST Repository

    Alabbasi, Abdulrahman

    2014-06-01

    In this paper we consider a cognitive radio multi-input multi-output environment in which we adapt our beamformer to maximize both the energy efficiency and the signal-to-interference-plus-noise ratio (SINR) metrics. Our design considers an underlaying communication using adaptive beamforming schemes combined with the sensing information to achieve an optimal energy-efficient system. The proposed schemes maximize the energy efficiency and SINR metrics subject to cognitive radio and quality-of-service constraints. Since the energy efficiency optimization problem is not convex, we transform it into a standard semidefinite programming (SDP) form to guarantee a globally optimal solution. An analytical solution is provided for one scheme, while the other scheme is left in a standard SDP form. Selected numerical results are used to quantify the impact of the sensing information on the proposed schemes compared to the benchmark ones.

  9. Energy Efficiency and SINR Maximization Beamformers for Spectrum Sharing With Sensing Information

    KAUST Repository

    Alabbasi, Abdulrahman

    2014-09-01

    In this paper, we consider a cognitive radio multi-input-multi-output environment, in which we adapt our beamformer to maximize both energy efficiency (EE) and signal-to-interference-plus-noise ratio (SINR) metrics. Our design considers an underlaying communication using adaptive beamforming schemes combined with sensing information to achieve optimal energy-efficient systems. The proposed schemes maximize EE and SINR metrics subject to cognitive radio and quality-of-service constraints. The analysis of the proposed schemes is classified into two categories based on knowledge of the secondary-transmitter-to-primary-receiver channel. Since the optimizations of EE and SINR problems are not convex problems, we transform them into a standard semidefinite programming (SDP) form to guarantee that the optimal solutions are global. An analytical solution is provided for one scheme, while the second scheme is left in a standard SDP form. Selected numerical results are used to quantify the impact of the sensing information on the proposed schemes compared to the benchmark ones.

  10. EFFECT OF MOBILITY ON SINR IN LONG TERM EVOLUTION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Jolly Parikh

    2016-03-01

    Full Text Available To meet the ongoing demands for high-speed broadband communications, network providers are opting for the next generation of mobile technologies like LTE and LTE-Advanced. Standardized by 3GPP, these technologies aim to meet the requirements of higher data rates, low latency, and wider mobility, in varying environments, without affecting the quality of service of a network. With higher mobility, various network performance parameters such as signal-to-interference-plus-noise ratio, throughput, and received signal strength indicator are affected. This paper highlights the effect of mobility on the signal-to-interference-plus-noise ratio (SINR) characteristics of an IMT-A system in various test environments: In-house (INH), Urban Micro (UMi), Urban Macro (UMa), Rural Macro (RMa), and Suburban Macro (SMa). Simulations have been carried out to obtain spatial plots and SINR vs. CDF plots in these test environments at different user equipment (UE) speeds, emphasizing the effects of UE speed on the fast-fading channel gains and the SINR of the system. Varying the UE speed from 0 km/hr to 360 km/hr increased the minimum SINR value required for acceptable performance. It was observed that, for the given system parameters, the minimum SINR required in the RMa environment increased from -5 dB to 1 dB, in the SMa environment from -6 dB to -2 dB, and in the UMa environment from -4 dB to 1 dB, when the UE speed was increased from 0 km/hr to 360 km/hr. To address the problem of poor SINR in high-mobility systems, 3GPP has introduced the technique of Moving Relays, which improves the SINR and hence the channel quality for UEs moving at high speeds in LTE systems.

  11. Generalized HARQ Protocols with Delayed Channel State Information and Average Latency Constraints

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Popovski, Petar

    2018-01-01

    In many practical wireless systems, the signal-to-interference-and-noise ratio (SINR) that is applicable to a certain transmission, referred to as channel state information (CSI), can only be learned after the transmission has taken place and is thereby delayed (outdated). In such systems, hybrid automatic repeat request (HARQ) protocols are often used to achieve high throughput with low latency. This paper puts forth the family of expandable message space (EMS) protocols that generalize the HARQ protocol and allow for rate adaptation based on delayed CSI at the transmitter (CSIT). Assuming a block …

  12. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas

    2015-09-07

    The capacity of the intensity-modulation direct-detection (IM-DD) free-space optical channel with both average and peak intensity constraints is studied. A new capacity lower bound is derived by using a truncated-Gaussian input distribution. Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high-SNR asymptotic capacity of the channel under either a peak or an average constraint is small. This leads to a simple approximation of the high SNR capacity. Additionally, a new capacity upper bound is derived using sphere-packing arguments. This bound is tight at high SNR for a channel with a dominant peak constraint.

  13. A Framework for Control System Design Subject to Average Data-Rate Constraints

    DEFF Research Database (Denmark)

    Silva, Eduardo; Derpich, Milan; Østergaard, Jan

    2011-01-01

    This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be ...

  14. Max-min SINR low complexity transceiver design for single cell massive MIMO

    KAUST Repository

    Sifaou, Houssem

    2016-08-11

    This work focuses on large scale multi-user MIMO systems in which the base station (BS) outfitted with M antennas communicates with K single antenna user equipments (UEs). In particular, we aim at designing the linear precoder and receiver that maximizes the minimum signal-to-interference-plus-noise ratio (SINR) subject to a given power constraint. To gain insights into the structure of the optimal precoder and receiver as well as to reduce the computational complexity for their implementation, we analyze the asymptotic regime where M and K grow large with a given ratio and make use of random matrix theory (RMT) tools to compute accurate approximations. Although simpler, the implementation of the asymptotic precoder and receiver requires fast inversions of large matrices in every coherence period. To overcome this issue, we apply the truncated polynomial expansion (TPE) technique to the precoding and receiving vector of each UE and make use of RMT to determine the optimal weighting coefficients that asymptotically solve the max-min SINR problem. Numerical results are used to show that the proposed TPE-based precoder and receiver almost achieve the same performance as the optimal ones while requiring a lower complexity.
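
    The TPE idea referenced in this record, replacing an exact matrix inverse with a low-order matrix polynomial applied to a vector, can be sketched as below. The weights here come from a simple truncated Neumann series rather than the RMT-optimized coefficients of the paper; the sizes and the regularization are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 64, 16   # BS antennas and UEs (assumed sizes)
# i.i.d. channel matrix; columns are UE channels
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2 * M)

A = H @ H.conj().T + np.eye(M)   # regularized matrix an RZF-type precoder must invert
h = H[:, 0]                      # channel of one UE

w_exact = np.linalg.solve(A, h)  # exact inversion: the costly per-coherence-period step

# TPE-style approximation: w ≈ alpha * sum_l (I - alpha*A)^l h, truncated to 25 terms
lam = np.linalg.eigvalsh(A)
alpha = 2.0 / (lam[0] + lam[-1])          # scaling that makes the series converge fastest
w_tpe = np.zeros_like(h)
r = h.copy()
for _ in range(25):
    w_tpe = w_tpe + alpha * r             # accumulate next polynomial term
    r = r - alpha * (A @ r)               # r <- (I - alpha*A) r
print(np.linalg.norm(w_tpe - w_exact) / np.linalg.norm(w_exact))  # small relative error
```

    The point is that only matrix-vector products with A are needed, which is what lowers the implementation complexity relative to a full inversion.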

  15. All ternary permutation constraint satisfaction problems parameterized above average have kernels with quadratic numbers of variables

    DEFF Research Database (Denmark)

    Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias

    2010-01-01

    A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.
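
    A concrete instance: Betweenness is a ternary Permutation-CSP in which a uniformly random ordering satisfies each triple with probability 1/3, so the "average" baseline that the parameterization measures against is m/3. The tiny instance below is invented for illustration:

```python
from itertools import permutations

# Betweenness: constraint (a, b, c) is satisfied by a linear order
# iff b lies between a and c (order a<b<c or c<b<a on these three).
V = [0, 1, 2, 3, 4]
constraints = [(0, 1, 2), (1, 2, 3), (0, 3, 4), (2, 0, 4), (3, 1, 4)]

def satisfied(order, cons):
    pos = {v: i for i, v in enumerate(order)}
    return sum(1 for a, b, c in cons
               if pos[a] < pos[b] < pos[c] or pos[c] < pos[b] < pos[a])

# exhaustive search over all 5! orderings of V
best = max(satisfied(p, constraints) for p in permutations(V))
average = len(constraints) / 3.0   # 2 of the 6 relative orders satisfy each triple
print(best, average)               # the optimum is always >= the average
```

    The kernelization result in the record concerns instances where the question is whether `best` exceeds `average` by a given parameter k.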

  16. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua; Aissa, Sonia

    2012-01-01

    … the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical …

  17. A waveform covariance matrix for high SINR and low side-lobe levels

    KAUST Repository

    Ahmed, Sajid; Alouini, Mohamed-Slim

    2013-01-01

    … signal-to-interference-plus-noise ratio (SINR) compared to MIMO-radar, while the gain in SINR is close to phased-array and the recently proposed phased-MIMO scheme. Transmitted waveforms with the proposed covariance matrix, at the receiver, significantly suppress the side-lobe levels compared …

  18. Energy efficiency and SINR maximization beamformers for cognitive radio utilizing sensing information

    KAUST Repository

    Alabbasi, AbdulRahman; Rezki, Zouheir; Shihada, Basem

    2014-01-01

    … communication using adaptive beamforming schemes combined with the sensing information to achieve an optimal energy-efficient system. The proposed schemes maximize the energy efficiency and SINR metrics subject to cognitive radio and quality-of-service …

  19. Receive antenna selection for underlay cognitive radio with instantaneous interference constraint

    KAUST Repository

    Hanif, Muhammad Fainan

    2015-06-01

    Receive antenna selection is a low-complexity scheme to reap diversity benefits. We analyze the performance of a receive antenna selection scheme in spectrum-sharing systems where the antenna that yields the highest signal-to-interference-plus-noise ratio (SINR) at the secondary receiver is selected to improve the performance of the secondary transmission. Exact and asymptotic behaviours of the received SINR are derived for both general and interference-limited scenarios over a general fading environment. These results are then applied to the outage and average bit error rate analysis when the secondary transmitter changes the transmit power in finite discrete levels to satisfy the instantaneous interference constraint at the primary receiver.
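
    The selection rule itself is a one-liner: pick the antenna whose instantaneous SINR is largest. A minimal sketch with assumed Rayleigh-faded amplitudes and assumed power values:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4                      # receive antennas at the secondary receiver (assumed)
h = rng.rayleigh(1.0, N)   # desired-signal amplitude per antenna
g = rng.rayleigh(1.0, N)   # interference amplitude per antenna
P, PI, sigma2 = 1.0, 0.5, 0.1   # signal power, interference power, noise (assumed)

sinr = P * h**2 / (PI * g**2 + sigma2)   # instantaneous per-antenna SINR
best = int(np.argmax(sinr))              # select the antenna with the largest SINR
print(best, sinr[best])
```

    The performance analysis in the record then characterizes the distribution of `sinr[best]`, i.e. the maximum of N such ratios.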

  20. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua

    2012-06-01

    Since the electromagnetic spectrum resource is becoming increasingly scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and improves only slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on the interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and are shown to be efficient tools for exact evaluation of system performance.
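
    For the dual-hop AF link analyzed in records of this kind, the standard variable-gain end-to-end SNR expression (a textbook result, not specific to this paper) combines the two hop SNRs and never exceeds the weaker hop:

```python
# End-to-end SNR of a dual-hop, CSI-assisted variable-gain AF relay link:
# gamma = g1*g2 / (g1 + g2 + 1), upper-bounded by min(g1, g2).
def af_e2e_snr(g1, g2):
    return g1 * g2 / (g1 + g2 + 1.0)

g1, g2 = 10.0, 20.0   # per-hop SNRs (assumed values)
g = af_e2e_snr(g1, g2)
print(g)              # 200/31 ≈ 6.45, below min(g1, g2) = 10
```

    The outage and error-rate results in the record follow from the distribution of this quantity when g1 and g2 are themselves random (fading) and power-constrained.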

  1. Distributed Max-SINR Speech Enhancement with Ad Hoc Microphone Arrays

    DEFF Research Database (Denmark)

    Tavakoli, Vincent Mohammad; Jensen, Jesper Rindom; Heusdens, Richard

    2017-01-01

    … (max-SINR) criterion is used with the primal-dual method of multipliers for distributed filtering. The paper investigates the convergence of the algorithm in both synchronous and asynchronous schemes, and also discusses some practical pros and cons. The applicability of the proposed method is demonstrated by means …

  2. A waveform covariance matrix for high SINR and low side-lobe levels

    KAUST Repository

    Ahmed, Sajid

    2013-05-01

    In this work, a waveform covariance matrix is proposed to exploit the benefits of both multiple-input multiple-output (MIMO) radar and phased-array radar. Our analytical results show that the proposed covariance matrix yields a gain in signal-to-interference-plus-noise ratio (SINR) compared to MIMO-radar, while the gain in SINR is close to that of phased-array and the recently proposed phased-MIMO scheme. At the receiver, transmitted waveforms with the proposed covariance matrix significantly suppress the side-lobe levels compared to phased-array, MIMO-radar, and phased-MIMO schemes. Moreover, in contrast to phased-MIMO, our proposed scheme allows the same power transmission from each antenna. Simulation results validate the analytical results. © 2013 IEEE.

  3. Research and development for the application of radioisotope technology in SINR

    International Nuclear Information System (INIS)

    Zhang Jiahua

    1987-01-01

    A brief systematic account of the research and development for the application of radioisotope technology at the Shanghai Institute of Nuclear Research (SINR) is presented. It comprehensively covers the following categories: 1. Radioisotopes produced by cyclotron; 2. Radioisotope-labelled compounds; 3. Radioisotopes as energy sources in converters; 4. Induced-radioisotope generation as a means for elemental analysis (activation analysis); 5. Radioisotopes coupled with electronic instruments for various applications; and 6. Special usages of some radioisotopes.

  4. Dual Regulation of Bacillus subtilis kinB Gene Encoding a Sporulation Trigger by SinR through Transcription Repression and Positive Stringent Transcription Control.

    Science.gov (United States)

    Fujita, Yasutaro; Ogura, Mitsuo; Nii, Satomi; Hirooka, Kazutake

    2017-01-01

    It is known that transcription of kinB, encoding a trigger for Bacillus subtilis sporulation, is under repression by SinR, a master repressor of biofilm formation, and under positive stringent transcription control depending on the adenine species at the transcription initiation nucleotide (nt). Deletion and base substitution analyses of the kinB promoter (PkinB) region using lacZ fusions indicated that either a 5-nt deletion (Δ5, nt -61/-57; +1 is the transcription initiation nt) or the substitution of G at nt -45 with A (G-45A) relieved kinB repression. Thus, we found a pair of SinR-binding consensus sequences (GTTCTYT; Y is T or C) in an inverted orientation (SinR-1) between nt -57/-42, which is most likely a SinR-binding site for kinB repression. This relief from SinR repression likely requires SinI, an antagonist of SinR. Surprisingly, we found that SinR is essential for positive stringent transcription control of PkinB. Electrophoretic mobility shift assay (EMSA) analysis indicated that SinR bound not only to SinR-1 but also to SinR-2 (nt -29/-8), consisting of another pair of SinR consensus sequences in a tandem repeat arrangement; the two sequences partially overlap the '-35' and '-10' regions of PkinB. Introduction of base substitutions (T-27C C-26T) in the upstream consensus sequence of SinR-2 affected positive stringent transcription control of PkinB, suggesting that SinR binding to SinR-2 likely causes this positive control. EMSA also implied that RNA polymerase and SinR are possibly bound together at SinR-2 to form a transcription initiation complex for kinB transcription. Thus, this work suggests that derepression of kinB from SinR repression by SinI, induced by Spo0A∼P, together with SinR-dependent positive stringent transcription control of kinB, might cooperatively induce effective sporulation, implying an intimate interplay among stringent response, sporulation, and biofilm formation.

  5. Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers

    KAUST Repository

    Sifaou, Houssem

    2016-12-28

    This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity of the linear precoder and receiver that maximize the minimum signal-to-interference-plus-noise ratio subject to a given power constraint. To this end, we consider the asymptotic regime in which M and K grow large with a given ratio. Tools from random matrix theory (RMT) are then used to compute, in closed form, accurate approximations for the parameters of the optimal precoder and receiver, when imperfect channel state information (modeled by the generic Gauss-Markov formulation) is available at the BS. The asymptotic analysis allows us to derive the asymptotically optimal linear precoder and receiver that are characterized by a lower complexity (due to the dependence on the large-scale components of the channel) and, possibly, by a better resilience to imperfect channel state information. However, the implementation of both is still challenging as it requires fast inversions of large matrices in every coherence period. To overcome this issue, we apply the truncated polynomial expansion (TPE) technique to the precoding and receiving vector of each UE and make use of RMT to determine the optimal weighting coefficients on a per-UE basis that asymptotically solve the max-min SINR problem. Numerical results are used to validate the asymptotic analysis in the finite system regime and to show that the proposed TPE transceivers efficiently mimic the optimal ones, while requiring much lower computational complexity.

  6. MIMO-radar Waveform Covariance Matrices for High SINR and Low Side-lobe Levels

    KAUST Repository

    Ahmed, Sajid

    2012-12-29

    MIMO-radar has better parametric identifiability, but compared to phased-array radar it shows a loss in signal-to-noise ratio due to non-coherent processing. To exploit the benefits of both MIMO-radar and phased-array, two transmit covariance matrices are found. Both covariance matrices yield a gain in signal-to-interference-plus-noise ratio (SINR) compared to MIMO-radar and have lower side-lobe levels (SLLs) compared to phased-array and MIMO-radar. Moreover, in contrast to the recently introduced phased-MIMO scheme, where each antenna transmits a different power, our proposed schemes allow the same power transmission from each antenna. The SLLs of the first proposed covariance matrix are higher than those of the phased-MIMO scheme, while the SLLs of the second proposed covariance matrix are lower. The first covariance matrix is generated using an auto-regressive process, which allows us to change the SINR and side-lobe levels by changing the auto-regressive parameter, while to generate the second covariance matrix the values of the sine function between 0 and π with a step size of π/n_T are used to form a positive-semidefinite Toeplitz matrix, where n_T is the number of transmit antennas. Simulation results validate our analytical results.
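
    The first construction can be sketched directly: an AR(1)-type Toeplitz covariance R[i,j] = rho^|i-j| is positive definite for |rho| < 1 and has a constant diagonal, which is what gives equal per-antenna transmit power; the values of n_T and rho below are assumed, and rho is the auto-regressive parameter that trades SINR against side-lobe levels:

```python
import numpy as np

n_T = 8      # number of transmit antennas (assumed)
rho = 0.5    # auto-regressive parameter (assumed)

# AR(1)-type Toeplitz covariance: R[i, j] = rho ** |i - j|
idx = np.abs(np.subtract.outer(np.arange(n_T), np.arange(n_T)))
R = rho ** idx

eigs = np.linalg.eigvalsh(R)
print(eigs.min() > 0)                 # positive definite -> a valid covariance
print(np.allclose(np.diag(R), 1.0))   # constant diagonal -> equal per-antenna power
```

    Any symmetric Toeplitz covariance shares the constant-diagonal property, so the second (sine-based) construction in the record enjoys the same equal-power feature.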

  7. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    Science.gov (United States)

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192

  8. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    Science.gov (United States)

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients-a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
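
    The core argument of the two records above, that a mixture of two speeds can beat any single intermediate speed when the energy-rate curve is non-convex, can be checked numerically with a toy, invented energy curve (not the metabolic data of the paper):

```python
# Toy non-convex metabolic power curve e(v) with two locally efficient speeds,
# loosely standing in for walking (near v=1) and running (near v=3).
def e(v):
    return (v - 1.0) ** 2 * (v - 3.0) ** 2 + 1.0

D, T = 2.0, 1.0          # required distance and allowed time -> average speed 2.0
steady = T * e(D / T)    # energy if the average speed is held the whole time

# 50/50 time split between v=1 and v=3 covers the same distance in the same time
t1 = t2 = T / 2
mixture = t1 * e(1.0) + t2 * e(3.0)

print(steady, mixture)   # -> 2.0 1.0: the walk-run mixture uses less energy
```

    Geometrically, the mixture cost lies on the chord of the energy curve, which sits below the curve exactly where the curve is non-convex, matching the paper's argument.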

  9. Uplink transmit beamforming design for SINR maximization with full multiuser channel state information

    Science.gov (United States)

    Xi, Songnan; Zoltowski, Michael D.

    2008-04-01

    Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, which is the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the original mathematically intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and with our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with a much smaller computational burden, accomplishes almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.
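
    As background for the SINR-maximization objective, the classical single-user max-SINR receive beamformer (weights proportional to the inverse interference-plus-noise covariance times the channel) illustrates the kind of closed form being sought. This generic sketch with assumed dimensions is not the paper's multiuser transmit algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 4                                                        # antennas (assumed)
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)     # desired channel
G = rng.standard_normal((M, 2)) + 1j * rng.standard_normal((M, 2))  # 2 interferers
Rin = G @ G.conj().T + np.eye(M)   # interference-plus-noise covariance

# Max-SINR weights: w ∝ Rin^{-1} h maximizes |w^H h|^2 / (w^H Rin w)
w = np.linalg.solve(Rin, h)

def sinr(w):
    return (np.abs(w.conj() @ h) ** 2 / (w.conj() @ Rin @ w)).real

# no other weight vector achieves a larger SINR
w_rand = rng.standard_normal(M) + 1j * rng.standard_normal(M)
print(sinr(w) >= sinr(w_rand) - 1e-12)   # -> True
```

    The achieved maximum equals h^H Rin^{-1} h, the standard Rayleigh-quotient optimum; the paper's contribution is obtaining a comparable closed form in the coupled multiuser transmit setting.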

  10. Sistem national de management al resurselor digitale în stiinta si tehnologie, bazat pe structuri GRID - SINRED

    CERN Document Server

    Banciu, D

    2007-01-01

    The CEEX project SINRED set out to define and build a unified national system for the management of digital resources in science and technology based on GRID structures. The consortium partners are: Universitatea din București; Universitatea Politehnica București; Universitatea Tehnică din Cluj-Napoca; Institutul Național de Informare și Documentare; Universitatea de Vest din Timișoara. The problems to be solved fall under the following specific objectives: defining and substantiating solutions for building a digital library based on the network of university, public, and academic libraries; defining methods and methodologies for creating a unified national system in the information-documentation field based on digital documents; analysing and testing ways of exploiting GRID technologies in the information-documentation field; and defining procedures for building digital databases in accordance with national and international norms and regulations in the field...

  11. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  12. Constraint Differentiation

    DEFF Research Database (Denmark)

    Mödersheim, Sebastian Alexander; Basin, David; Viganò, Luca

    2010-01-01

    We introduce constraint differentiation, a powerful technique for reducing search when model-checking security protocols using constraint-based methods. Constraint differentiation works by eliminating certain kinds of redundancies that arise in the search space when using constraints to represent...... results show that constraint differentiation substantially reduces search and considerably improves the performance of OFMC, enabling its application to a wider class of problems....

  13. Selection of new constraints

    International Nuclear Information System (INIS)

    Sugier, A.

    2003-01-01

    The selected new constraints should be consistent with the scale of concern, i.e. be expressed roughly as fractions or multiples of the average annual background. They should take into account risk considerations and include the values of the current limits, constraints and other action levels. The recommendation is to select four leading values for the new constraints: 500 mSv (single event or in a decade) as a maximum value, 0.01 mSv/year as a minimum value, and two intermediate values, 20 mSv/year and 0.3 mSv/year. This new set of dose constraints, representing basic minimum standards of protection for individuals that take into account the specificity of the exposure situations, is thus coherent with the current values found in ICRP Publications. A few warnings, however, need to be noted. ICRP no longer sets a multi-source limit. The coherence between the proposed value of the dose constraint (20 mSv/year) and the current occupational dose limit of 20 mSv/year holds only if the workers are exposed to one single source. When there is more than one source, it will be necessary to apportion. The value of 1000 mSv lifetime used for relocation can be expressed as an annual dose, which gives approximately 10 mSv/year and is coherent with the proposed dose constraint. (N.C.)

  14. Solar constraints

    International Nuclear Information System (INIS)

    Provost, J.

    1984-01-01

    Accurate tests of the theory of stellar structure and evolution are available from the Sun's observations. The solar constraints are reviewed, with a special attention to the recent progress in observing global solar oscillations. Each constraint is sensitive to a given region of the Sun. The present solar models (standard, low Z, mixed) are discussed with respect to neutrino flux, low and high degree five-minute oscillations and low degree internal gravity modes. It appears that actually there do not exist solar models able to fully account for all the observed quantities. (Auth.)

  15. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  16. The constraints

    International Nuclear Information System (INIS)

    Jones, P.M.S.

    1987-01-01

    There are considerable incentives for the use of nuclear power in preference to other sources for base-load electricity generation in most of the developed world. These are economic, strategic, environmental and climatic. However, there are two potential constraints which could hinder the development of nuclear power to its full economic potential: public opinion and financial regulations which distort the nuclear economic advantage. The concerns of the anti-nuclear lobby are over safety (especially following the Chernobyl accident), the management of radioactive waste, the potential effects of large-scale exposure of the population to radiation, and weapons proliferation. These are discussed. The financial constraint concerns two factors, the availability of funds and the perception of cost, both of which are discussed. (U.K.)

  17. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  18. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  19. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  20. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  1. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  2. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
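The distinction between mean-payoff and average-energy objectives can be illustrated on an ultimately periodic play (a toy empirical computation over a finite horizon, not the decision procedure of the paper):

```python
from itertools import islice, cycle, accumulate

def long_run_averages(prefix, cycle_weights, horizon=100000):
    """Approximate the mean-payoff (average weight per step) and the
    average-energy (average of the accumulated energy) of an ultimately
    periodic play: 'prefix' followed by 'cycle_weights' forever."""
    weights = prefix + list(islice(cycle(cycle_weights), horizon))
    energies = list(accumulate(weights))   # energy level after each step
    mean_payoff = energies[-1] / len(weights)
    average_energy = sum(energies) / len(energies)
    return mean_payoff, average_energy

# A play alternating +2 and -2: the energy oscillates between 2 and 0,
# so the mean-payoff is 0 while the average-energy is 1.
mp, ae = long_run_averages([], [2, -2])
print(round(mp, 3), round(ae, 3))   # 0.0 1.0
```

Two plays with the same mean-payoff can thus have different average-energy, which is why the objective is genuinely new.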

  3. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
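The quaternion barycenter the abstract refers to can be sketched as follows. For just two unit quaternions the renormalized barycenter coincides with the geodesic (Riemannian) midpoint, and the approximation error discussed in the paper only appears for several widely spread rotations (the axis and angles below are illustrative):

```python
import math

# Naive barycenter estimate of the mean rotation (illustrative
# rotations about the z-axis).

def quat_z(angle):
    """Unit quaternion (w, x, y, z) for a rotation by 'angle' about z."""
    return (math.cos(angle / 2), 0.0, 0.0, math.sin(angle / 2))

def barycenter_mean(quats):
    """Component-wise average of unit quaternions, renormalized."""
    s = [sum(c) / len(quats) for c in zip(*quats)]
    norm = math.sqrt(sum(x * x for x in s))
    return tuple(x / norm for x in s)

def rotation_angle(q):
    return 2 * math.atan2(math.sqrt(q[1] ** 2 + q[2] ** 2 + q[3] ** 2), q[0])

mean = barycenter_mean([quat_z(0.0), quat_z(math.pi / 2)])
print(math.degrees(rotation_angle(mean)))   # ~45 degrees
```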

  4. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
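Maxwell constraint counting as described above reduces to simple arithmetic for a body-bar network (a minimal sketch; the 6 DOF per body and 1 DOF removed per bar follow the body-bar convention the abstract uses, and the example numbers are hypothetical):

```python
# Maxwell constraint counting (MCC): a mean-field lower bound on the
# internal degrees of freedom of a body-bar network.

def maxwell_internal_dof(n_bodies, n_bars):
    """Each rigid body carries 6 DOF, each bar removes at most one,
    and 6 DOF always remain as global rigid-body motions."""
    return max(6 * n_bodies - 6 - n_bars, 0)

# 10 bodies joined by 40 bars: at least 14 internal DOF remain,
# so this count flags the network as under-constrained (floppy).
print(maxwell_internal_dof(10, 40))   # 14
```

Because MCC treats every bar as independent, it is only a lower bound: redundantly placed bars remove fewer DOF than this count assumes, which is exactly what the PG and VPG algorithms resolve.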

  5. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails: one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  6. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  7. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  8. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
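The classical symmetric pairwise-averaging gossip step can be sketched as follows. This symmetric variant preserves the sum of the values and therefore converges to the exact average; the convergence difficulty highlighted in the abstract concerns asynchronous, one-sided updates, which do not preserve the sum (a minimal simulation, not the authors' reinforcement-learning scheme):

```python
import random

# Symmetric pairwise gossip: two random nodes repeatedly replace
# both of their values with the pair's average.

def gossip_average(values, steps=20000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.sample(range(len(x)), 2)
        x[i] = x[j] = (x[i] + x[j]) / 2   # symmetric update preserves the sum
    return x

x = gossip_average([1.0, 2.0, 3.0, 10.0])
print(x)   # every entry converges to the global average, 4.0
```

Dropping the symmetry (e.g., only node i moves toward node j) breaks sum conservation, and the network then reaches consensus on a random value rather than the desired average.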

  9. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  10. Decentralized SINR Balancing in Cognitive Radio Networks

    KAUST Repository

    Dhifallah, Oussama Najeeb; Dahrouj, Hayssam; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    and primary transmitters can simultaneously transmit their data over the same frequency bands, so as to achieve a high system spectrum efficiency. The paper considers the downlink balancing problem of maximizing the minimum signal-to-interference-plus-noise

  11. Short-sale Constraints and Credit Runs

    DEFF Research Database (Denmark)

    Venter, Gyuri

    This paper studies how short-sale constraints affect the informational efficiency of market prices and the link between prices and economic activity. I show that under short-sale constraints security prices contain less information. However, short-sale constraints increase the informativeness ... the price of an asset the bank holds. I show that short-selling constraints in the financial market lead to the revival of self-fulfilling beliefs about the beliefs and actions of others, and create multiple equilibria. In the equilibrium where agents rely more on public information (i.e., the price ...), creditors with high private signals are more lenient to roll over debt, and a bank with lower asset quality remains solvent. This leads to higher allocative efficiency in the real economy. My result thus implies that the decrease in average informativeness due to short-sale constraints can be more than ...

  12. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
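The stated result can be written out as follows (the symbols are ours, not taken from the paper): for a variable $v$ and two weighting functions $w_1$ and $w_2$ with ratio $R = w_2/w_1$,

```latex
% Averages of v under the two weighting functions:
\bar{v}_i = \frac{\int v\, w_i \,\mathrm{d}x}{\int w_i \,\mathrm{d}x},
\qquad i = 1, 2,
% and the stated identity, with the covariance and the average of R
% both taken with respect to the weight w_1:
\bar{v}_2 - \bar{v}_1 = \frac{\operatorname{Cov}_1(v, R)}{\bar{R}_1}.
```

That is, reweighting shifts the average exactly in proportion to how the variable co-varies with the ratio of the weights.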

  13. Financing Constraints and Entrepreneurship

    OpenAIRE

    William R. Kerr; Ramana Nanda

    2009-01-01

    Financing constraints are one of the biggest concerns impacting potential entrepreneurs around the world. Given the important role that entrepreneurship is believed to play in the process of economic growth, alleviating financing constraints for would-be entrepreneurs is also an important goal for policymakers worldwide. We review two major streams of research examining the relevance of financing constraints for entrepreneurship. We then introduce a framework that provides a unified perspecti...

  14. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Valencia Posso, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...... reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus...

  15. Hamiltonian constraint in polymer parametrized field theory

    International Nuclear Information System (INIS)

    Laddha, Alok; Varadarajan, Madhavan

    2011-01-01

    Recently, a generally covariant reformulation of two-dimensional flat spacetime free scalar field theory known as parametrized field theory was quantized using loop quantum gravity (LQG) type ''polymer'' representations. Physical states were constructed, without intermediate regularization structures, by averaging over the group of gauge transformations generated by the constraints, the constraint algebra being a Lie algebra. We consider classically equivalent combinations of these constraints corresponding to a diffeomorphism and a Hamiltonian constraint, which, as in gravity, define a Dirac algebra. Our treatment of the quantum constraints parallels that of LQG and obtains the following results, expected to be of use in the construction of the quantum dynamics of LQG: (i) the (triangulated) Hamiltonian constraint acts only on vertices, its construction involves some of the same ambiguities as in LQG and its action on diffeomorphism invariant states admits a continuum limit, (ii) if the regulating holonomies are in representations tailored to the edge labels of the state, all previously obtained physical states lie in the kernel of the Hamiltonian constraint, (iii) the commutator of two (density weight 1) Hamiltonian constraints as well as the operator correspondent of their classical Poisson bracket converge to zero in the continuum limit defined by diffeomorphism invariant states, and vanish on the Lewandowski-Marolf habitat, (iv) the rescaled density 2 Hamiltonian constraints and their commutator are ill-defined on the Lewandowski-Marolf habitat despite the well-definedness of the operator correspondent of their classical Poisson bracket there, (v) there is a new habitat which supports a nontrivial representation of the Poisson-Lie algebra of density 2 constraints.

  16. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...

  17. Evaluating Distributed Timing Constraints

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  18. Theory of Constraints (TOC)

    DEFF Research Database (Denmark)

    Michelsen, Aage U.

    2004-01-01

    The thinking behind the Theory of Constraints, together with the planning principle Drum-Buffer-Rope. Also includes a sketch of The Thinking Process.

  19. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance rather than retrievals of the abundance itself is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
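The core bias is the arithmetic-geometric mean gap: exponentiating the mean of the logarithms yields the geometric mean, which never exceeds the linear mean (the abundance values below are made up for illustration):

```python
import math

# Arithmetic vs. geometric mean of retrieved abundances
# (values invented for illustration).
profiles = [1.0, 2.0, 4.0, 8.0]

linear_mean = sum(profiles) / len(profiles)
log_mean = math.exp(sum(math.log(v) for v in profiles) / len(profiles))

print(linear_mean, log_mean)   # 3.75 vs ~2.83: the log average is biased low
```

The larger the spread of the values, the larger this gap, which is consistent with the rule of thumb above that linear averaging is preferable under large natural variability.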

  20. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  1. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  2. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  3. Constraint-based reachability

    Directory of Open Access Journals (Sweden)

    Arnaud Gotlieb

    2013-02-01

    Full Text Available Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, array and memory manipulations with the fundamental notion of constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in constraint programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
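The idea of treating program constructs as constraints can be illustrated at toy scale (the program and domain below are hypothetical, and brute-force search over a finite domain stands in for the constraint filtering and abstraction the paper actually uses):

```python
# Toy reachability query: is the location guarded by the conditions
# below reachable, and for which input?

def reaches_target(x):
    # Encodes the path condition of the hypothetical program:
    # if (x > 3): y = 2 * x; if (y == 14): target()
    return x > 3 and 2 * x == 14

witnesses = [x for x in range(-50, 51) if reaches_target(x)]
print(witnesses)   # [7] -- the single input that reaches the target
```

A constraint solver reaches the same conclusion by propagation (from `2 * x == 14` it filters the domain of `x` to {7}, which also satisfies `x > 3`) without enumerating the domain.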

  4. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross-section formulas based on statistical nuclear models, and also plays an important role in determining many physical quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distribution of level widths and that of level positions is discussed extensively with an example based on ¹⁶⁸Er data. 19 figures, 2 tables

  5. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
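The sandwiching idea can be illustrated with a toy monotone chain. The example below is an assumption of this edit, not the paper's construction: a lazy reflecting random walk on {0, ..., n_max} updated with common random numbers, so that a lower chain started at 0 and an upper chain started at n_max dominate the target chain and their ergodic averages bracket the mean of a monotone function.

```python
import numpy as np

def coupled_step(x, u, n_max):
    """One step of a lazy reflecting random walk on {0, ..., n_max},
    driven by a common uniform random number u; monotone in the state x."""
    if u < 1.0 / 3.0:
        return max(x - 1, 0)
    if u < 2.0 / 3.0:
        return x
    return min(x + 1, n_max)

def sandwich_averages(f, n_max, n_steps, seed=0):
    """Ergodic averages of f along lower and upper dominating chains started
    at the extremes of the partial order and fed common randomness."""
    rng = np.random.default_rng(seed)
    lower, upper = 0, n_max
    lo_sum = up_sum = 0.0
    for _ in range(n_steps):
        u = rng.random()                 # common randomness preserves ordering
        lower = coupled_step(lower, u, n_max)
        upper = coupled_step(upper, u, n_max)
        lo_sum += f(lower)
        up_sum += f(upper)
    return lo_sum / n_steps, up_sum / n_steps

# For this chain the stationary law is uniform on {0, ..., 10}, so the mean is 5.
lo_avg, up_avg = sandwich_averages(lambda x: float(x), n_max=10, n_steps=300_000)
```

Because the update is monotone and both chains see the same randomness, `lo_avg <= up_avg` holds at every sample size, and both averages converge to the stationary mean without any burn-in decision.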

  6. Resources, constraints and capabilities

    NARCIS (Netherlands)

    Dhondt, S.; Oeij, P.R.A.; Schröder, A.

    2018-01-01

    Human and financial resources as well as organisational capabilities are needed to overcome the manifold constraints social innovators are facing. To unlock the potential of social innovation for the whole society, new (social) innovation-friendly environments and new governance structures

  7. Design with Nonlinear Constraints

    KAUST Repository

    Tang, Chengcheng

    2015-01-01

    The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh-based and spline-based representations, the application

  8. Dynamics and causality constraints

    International Nuclear Information System (INIS)

    Sousa, Manoelito M. de

    2001-04-01

    The physical meaning and the geometrical interpretation of causality implementation in classical field theories are discussed. Causality in field theory is a kinematical constraint dynamically implemented via solutions of the field equation, but in the limit of zero distance from the field sources part of these constraints carries a dynamical content that explains away old problems of classical electrodynamics, with deep implications for the nature of physical interactions. (author)

  9. Momentum constraint relaxation

    International Nuclear Information System (INIS)

    Marronetti, Pedro

    2006-01-01

    Full relativistic simulations in three dimensions invariably develop runaway modes that grow exponentially and are accompanied by violations of the Hamiltonian and momentum constraints. Recently, we introduced a numerical method (Hamiltonian relaxation) that greatly reduces the Hamiltonian constraint violation and helps improve the quality of the numerical model. We present here a method that controls the violation of the momentum constraint. The method is based on the addition of a longitudinal component to the traceless extrinsic curvature Ã^ij, generated by a vector potential w^i, as outlined by York. The components of w^i are relaxed to solve approximately the momentum constraint equations, slowly pushing the evolution towards the space of solutions of the constraint equations. We test this method with simulations of binary neutron stars in circular orbits and show that it effectively controls the growth of the aforementioned violations. We also show that a full numerical enforcement of the constraints, as opposed to the gentle correction of the momentum relaxation scheme, results in the development of instabilities that stop the runs shortly

  10. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  11. Misconceptions and constraints

    International Nuclear Information System (INIS)

    Whitten, M.; Mahon, R.

    2005-01-01

    In theory, the sterile insect technique (SIT) is applicable to a wide variety of invertebrate pests. However, in practice, the approach has been successfully applied to only a few major pests. Chapters in this volume address possible reasons for this discrepancy, e.g. Klassen, Lance and McInnis, and Robinson and Hendrichs. The shortfall between theory and practice is partly due to the persistence of some common misconceptions, but it is mainly due to one constraint, or a combination of constraints, that are biological, financial, social or political in nature. This chapter's goal is to dispel some major misconceptions, and view the constraints as challenges to overcome, seeing them as opportunities to exploit. Some of the common misconceptions include: (1) released insects retain residual radiation, (2) females must be monogamous, (3) released males must be fully sterile, (4) eradication is the only goal, (5) the SIT is too sophisticated for developing countries, and (6) the SIT is not a component of an area-wide integrated pest management (AW-IPM) strategy. The more obvious constraints are the perceived high costs of the SIT, and the low competitiveness of released sterile males. The perceived high up-front costs of the SIT, their visibility, and the lack of private investment (compared with alternative suppression measures) emerge as serious constraints. Failure to appreciate the true nature of genetic approaches, such as the SIT, may pose a significant constraint to the wider adoption of the SIT and other genetically-based tactics, e.g. transgenic genetically modified organisms (GMOs). Lack of support for the necessary underpinning strategic research also appears to be an important constraint. Hence the case for extensive strategic research in ecology, population dynamics, genetics, and insect behaviour and nutrition is a compelling one. Raising the competitiveness of released sterile males remains the major research objective of the SIT. (author)

  12. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  13. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
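The basic ARMA(1) graph-filter recursion can be sketched as follows. The graph (a 5-node path), the choice of the combinatorial Laplacian as shift operator, and the coefficient values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def arma1_graph_filter(L, x, psi, phi, n_iter=200):
    """ARMA(1) graph-filter recursion y <- psi * (L @ y) + phi * x.
    For |psi| * ||L|| < 1 it converges to y* = phi * (I - psi*L)^{-1} x,
    i.e. a rational graph-frequency response phi / (1 - psi * lambda)."""
    y = np.zeros_like(x, dtype=float)
    for _ in range(n_iter):
        y = psi * (L @ y) + phi * x      # each node needs only neighbor values
    return y

# 5-node path graph and its combinatorial Laplacian as the shift operator.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
x = np.array([1.0, -1.0, 2.0, 0.0, 1.0])          # a graph signal
y = arma1_graph_filter(L, x, psi=0.2, phi=1.0)
y_exact = np.linalg.solve(np.eye(5) - 0.2 * L, x)  # closed-form steady state
```

The recursion is distributed in the sense that each iteration only requires the product `L @ y`, i.e. exchanges between neighboring nodes, yet its fixed point matches the centralized solve.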

  14. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^−6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^−8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  15. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
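The core comparison behind the proposed estimator, taking equal fractions of the longest survivors in each arm, can be sketched on synthetic data. The function name, inputs, and toy numbers below are assumptions of this edit; the paper's actual estimator additionally handles bias correction and bootstrap inference, which are omitted here:

```python
import numpy as np

def balanced_sace_sketch(y_treat, t_treat, y_ctrl, t_ctrl, frac):
    """Mean longitudinal-outcome difference between the top `frac` fraction
    of longest-surviving patients in the treatment and control arms.
    A bare-bones sketch of the comparison, not the full estimator."""
    def top_mean(y, t):
        k = max(1, int(round(frac * len(t))))
        idx = np.argsort(t)[-k:]             # indices of the k longest survivors
        return float(np.mean(np.asarray(y)[idx]))
    return top_mean(y_treat, t_treat) - top_mean(y_ctrl, t_ctrl)

# Hypothetical toy data: outcomes y and survival times t for each arm.
effect = balanced_sace_sketch([3.0, 5.0, 7.0], [1.0, 2.0, 3.0],
                              [1.0, 2.0, 4.0], [3.0, 1.0, 2.0], frac=2/3)
```

Note that no monotonicity assumption enters the computation: both arms contribute the same fraction of their longest survivors, which is the balancing idea described in the abstract.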

  16. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  17. Occupational dose constraint

    International Nuclear Information System (INIS)

    Heilbron Filho, Paulo Fernando Lavalle; Xavier, Ana Maria

    2005-01-01

    The revision process of the international radiological protection regulations has resulted in the adoption of new concepts, such as practice, intervention, avoidable dose and restriction of dose (dose constraint). The latter deserves special mention, since it may involve an a priori reduction of the dose limits established both for the public and for occupationally exposed individuals, values that can be further reduced depending on the application of the optimization principle. Starting from the criteria adopted to define dose constraint values for the public, this article aims to present clearly a methodology to establish dose constraint values for occupationally exposed individuals, as well as an example of the application of this methodology to the practice of industrial radiography

  18. Psychological constraints on egalitarianism

    DEFF Research Database (Denmark)

    Kasperbauer, Tyler Joshua

    2015-01-01

    Debates over egalitarianism for the most part are not concerned with constraints on achieving an egalitarian society, beyond discussions of the deficiencies of egalitarian theory itself. This paper looks beyond objections to egalitarianism as such and investigates the relevant psychological processes motivating people to resist various aspects of egalitarianism. I argue for two theses, one normative and one descriptive. The normative thesis holds that egalitarians must take psychological constraints into account when constructing egalitarian ideals. I draw from non-ideal theories in political philosophy, which aim to construct moral goals with current social and political constraints in mind, to argue that human psychology must be part of a non-ideal theory of egalitarianism. The descriptive thesis holds that the most fundamental psychological challenge to egalitarian ideals comes from what...

  19. Constraint-based scheduling applying constraint programming to scheduling problems

    CERN Document Server

    Baptiste, Philippe; Nuijten, Wim

    2001-01-01

    Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...

  20. Constraints on Dbar uplifts

    International Nuclear Information System (INIS)

    Alwis, S.P. de

    2016-01-01

    We discuss constraints on KKLT/KKLMMT and LVS scenarios that use anti-branes to obtain an uplift to a de Sitter vacuum, coming from requiring the validity of an effective field theory description of the physics. We find that these are not always satisfied, or are hard to satisfy.

  1. Ecosystems emerging. 5: Constraints

    Czech Academy of Sciences Publication Activity Database

    Patten, B. C.; Straškraba, Milan; Jorgensen, S. E.

    2011-01-01

    Vol. 222, No. 16 (2011), pp. 2945-2972 ISSN 0304-3800 Institutional research plan: CEZ:AV0Z50070508 Keywords: constraint * epistemic * ontic Subject RIV: EH - Ecology, Behaviour Impact factor: 2.326, year: 2011 http://www.sciencedirect.com/science/article/pii/S0304380011002274

  2. Constraints and Ambiguity

    DEFF Research Database (Denmark)

    Dove, Graham; Biskjær, Michael Mose; Lundqvist, Caroline Emilie

    2017-01-01

    groups of students building three models each. We studied groups building with traditional plastic bricks and also using a digital environment. The building tasks students undertake, and our subsequent analysis, are informed by the role constraints and ambiguity play in creative processes. Based...

  3. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  4. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
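Conventional TDA, the comb filter the authors improve upon, reduces to segmenting and averaging when the true period is an integer number of samples. The sketch below illustrates that baseline on a synthetic signal (the signal and noise level are assumptions of this edit; the FTDA method itself, with its CZT step, is not shown):

```python
import numpy as np

def time_domain_average(signal, period):
    """Conventional TDA: reshape the signal into integer-length periods and
    average them; asynchronous noise is attenuated by about 1/sqrt(N).
    A non-integer true period would cause the period cutting error (PCE)
    that FTDA is designed to avoid."""
    n_seg = len(signal) // period
    return signal[:n_seg * period].reshape(n_seg, period).mean(axis=0)

rng = np.random.default_rng(0)
period, n_periods = 100, 50
clean = np.sin(2 * np.pi * np.arange(period) / period)    # one clean cycle
noisy = np.tile(clean, n_periods) + rng.normal(0.0, 0.5, period * n_periods)
recovered = time_domain_average(noisy, period)
```

With 50 periods, the residual noise standard deviation drops by roughly a factor of 7 relative to a single raw period, which is the 1/sqrt(N) behavior of the comb filter.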

  5. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  6. Graphical constraints: a graphical user interface for constraint problems

    OpenAIRE

    Vieira, Nelson Manuel Marques

    2015-01-01

    A constraint satisfaction problem is a classical artificial intelligence paradigm characterized by a set of variables (each variable with an associated domain of possible values), and a set of constraints that specify relations among subsets of these variables. Solutions are assignments of values to all variables that satisfy all the constraints. Many real world problems may be modelled by means of constraints. The range of problems that can use this representation is very diverse and embrace...

  7. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas; Morvan, Jean-Marie; Alouini, Mohamed-Slim

    2015-01-01

    Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high

  8. Distance Constraint Satisfaction Problems

    Science.gov (United States)

    Bodirsky, Manuel; Dalmau, Victor; Martin, Barnaby; Pinsker, Michael

    We study the complexity of constraint satisfaction problems for templates Γ that are first-order definable in (ℤ; suc), the integers with the successor relation. Assuming a widely believed conjecture from finite-domain constraint satisfaction (we require the tractability conjecture by Bulatov, Jeavons and Krokhin in the special case of transitive finite templates), we provide a full classification for the case that Γ is locally finite (i.e., the Gaifman graph of Γ has finite degree). We show that one of the following is true: the structure Γ is homomorphically equivalent to a structure with a certain majority polymorphism (which we call modular median) and CSP(Γ) can be solved in polynomial time; or Γ is homomorphically equivalent to a finite transitive structure; or CSP(Γ) is NP-complete.

  9. Constraint-based scheduling

    Science.gov (United States)

    Zweben, Monte

    1993-01-01

    The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
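The constraint-based iterative repair loop can be illustrated with a toy slot-assignment problem. The min-conflicts-style repair below is a generic sketch under assumed constraints ("conflicting tasks may not share a slot"), not GERRY's actual constraint encoding or repair heuristics:

```python
import random

def iterative_repair(tasks, conflicts, n_slots, max_steps=1000, seed=1):
    """Assign each task a time slot, then repeatedly pick a task whose
    'no shared slot with a conflicting task' constraint is violated and
    move it to the slot that minimizes its violations."""
    rng = random.Random(seed)
    slot = {t: rng.randrange(n_slots) for t in tasks}

    def violations(t, s):
        return sum(1 for u in conflicts.get(t, ()) if u != t and slot[u] == s)

    for _ in range(max_steps):
        violated = [t for t in tasks if violations(t, slot[t]) > 0]
        if not violated:
            break                              # all constraints satisfied
        t = rng.choice(violated)
        slot[t] = min(range(n_slots), key=lambda s: violations(t, s))
    return slot

# Three mutually conflicting tasks and three slots: a conflict-free schedule exists.
conflicts = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
schedule = iterative_repair(["A", "B", "C"], conflicts, n_slots=3)
```

Preference criteria would enter the same loop as soft constraints whose violations are weighted rather than required to reach zero.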

  10. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  11. Efficient Searching with Linear Constraints

    DEFF Research Database (Denmark)

    Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff

    2000-01-01

    We show how to preprocess a set S of points in ℝ^d into an external memory data structure that efficiently supports linear-constraint queries. Each query is in the form of a linear constraint x_d ≤ a_0 + ∑_{i=1}^{d−1} a_i x_i; the data structure must report all the points of S that satisfy the constraint. This pr...
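The query semantics amount to a halfspace filter. Below is a brute-force reference in Python, assuming the constraint form x_d ≤ a_0 + Σ a_i x_i; it illustrates what must be reported, not the external-memory structure itself:

```python
import numpy as np

def linear_constraint_query(points, a):
    """Brute-force reference for a linear-constraint (halfspace) query:
    report every point p with p_d <= a_0 + sum_{i=1}^{d-1} a_i * p_i.
    The external-memory structure answers the same query without
    scanning every point."""
    pts = np.asarray(points, dtype=float)
    coeffs = np.asarray(a, dtype=float)
    rhs = coeffs[0] + pts[:, :-1] @ coeffs[1:]   # a_0 + sum_i a_i * p_i
    return pts[pts[:, -1] <= rhs]

# Points in the plane and the constraint y <= 1 + 2x.
hits = linear_constraint_query([[0, 0], [0, 2], [1, 3], [1, 4]], [1.0, 2.0])
```

Any output-sensitive structure must match this reference output while touching far fewer than |S| points per query.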

  12. Deepening Contractions and Collateral Constraints

    DEFF Research Database (Denmark)

    Jensen, Henrik; Ravn, Søren Hove; Santoro, Emiliano

    and occasionally non-binding credit constraints. Easier credit access increases the likelihood that constraints become slack in the face of expansionary shocks, while contractionary shocks are further amplified due to tighter constraints. As a result, busts gradually become deeper than booms. Based...

  13. Exploring JLA supernova data with improved flux-averaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn [School of Physics and Astronomy, Sun Yat-Sen University, University Road (No. 2), Zhuhai (China)

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the figure of merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization; (2) flux-averaging JLA samples at z_cut ≥ 0.4 yields tighter DE constraints than the case without FA; (3) using FA can significantly reduce the redshift evolution of β; (4) the best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with the JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
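The FA step itself, averaging high-redshift SNe in flux rather than magnitude within redshift bins of width Δz above z_cut, can be sketched as follows. The toy numbers are assumptions of this edit, and the covariance propagation of the real analysis is omitted:

```python
import numpy as np

def flux_average(z, mu, z_cut=0.6, dz=0.06):
    """Flux-average SNe with z >= z_cut in redshift bins of width dz:
    distance moduli mu are converted to fluxes F ~ 10^(-mu/2.5), averaged
    per bin, and converted back; SNe below z_cut are left untouched.
    Covariance propagation is omitted in this sketch."""
    z, mu = np.asarray(z, float), np.asarray(mu, float)
    keep = z < z_cut
    out_z, out_mu = list(z[keep]), list(mu[keep])
    hi_z, hi_mu = z[~keep], mu[~keep]
    bins = np.floor((hi_z - z_cut) / dz).astype(int)
    for b in np.unique(bins):
        m = bins == b
        f_bar = np.mean(10.0 ** (-hi_mu[m] / 2.5))   # mean flux in the bin
        out_z.append(float(hi_z[m].mean()))
        out_mu.append(float(-2.5 * np.log10(f_bar)))
    return np.array(out_z), np.array(out_mu)

# Two low-z SNe pass through; three SNe fall into one high-z bin.
zz, mm = flux_average([0.1, 0.3, 0.62, 0.63, 0.65],
                      [38.0, 40.0, 42.0, 42.0, 44.0])
```

Averaging in flux rather than magnitude is what suppresses the (lensing-type) systematic scatter: magnitudes are logarithmic, so their direct average is biased for skewed flux distributions.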

  14. Design with Nonlinear Constraints

    KAUST Repository

    Tang, Chengcheng

    2015-12-10

    Most modern industrial and architectural designs need to satisfy the requirements of their targeted performance and respect the limitations of available fabrication technologies. At the same time, they should reflect the artistic considerations and personal taste of the designers, which cannot simply be formulated as optimization goals with single best solutions. This thesis aims at a general, flexible yet efficient computational framework for interactive creation, exploration and discovery of serviceable, constructible, and stylish designs. By formulating nonlinear engineering considerations as linear or quadratic expressions through the introduction of auxiliary variables, the constrained space can be efficiently accessed by the proposed algorithm, Guided Projection, under the guidance of aesthetic formulations. The approach is introduced through applications in different scenarios, and its effectiveness is demonstrated by examples that were difficult or even impossible to design computationally before. The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh-based and spline-based representations, the application is extended to developable surfaces, including origami with curved creases. Finally, general approaches to extend hard constraints and soft energies are discussed, followed by a concluding remark with an outlook on possible future studies.

  15. Searching for genomic constraints

    Energy Technology Data Exchange (ETDEWEB)

    Liò, P. [Cambridge Univ. (United Kingdom). Genetics Dept.]; Ruffo, S. [Florence Univ. (Italy). Fac. di Ingegneria, Dipt. di Energetica 'S. Stecco']

    1998-01-01

    The authors have analyzed general properties of very long DNA sequences belonging to simple and complex organisms, using different correlation methods. They have distinguished those base compositional rules that concern the entire genome, which they call 'genomic constraints', from the rules that depend on the 'external natural selection' acting on single genes, i.e. protein-centered constraints. They show that G + C content, purine/pyrimidine distributions and the biological complexity of the organism are the most important factors determining base compositional rules and genome complexity. Three main facts are reported here: bacteria with high G + C content have more restrictions on base composition than those with low G + C content; at constant G + C content, more complex organisms, ranging from prokaryotes to higher eukaryotes (e.g. human), display an increase of repeats 10-20 nucleotides long, which are also partly responsible for long-range correlations; word selection of length 3 to 10 is stronger in human and in bacteria, for two distinct reasons. With respect to previous studies, they have also compared the genomic sequence of the archaeon Methanococcus jannaschii with those of bacteria and eukaryotes: it sometimes shows an intermediate statistical behaviour.

  16. Searching for genomic constraints

    International Nuclear Information System (INIS)

    Lio', P.; Ruffo, S.

    1998-01-01

    The authors have analyzed general properties of very long DNA sequences belonging to simple and complex organisms, using different correlation methods. They have distinguished those base compositional rules that concern the entire genome, which they call 'genomic constraints', from the rules that depend on the 'external natural selection' acting on single genes, i.e. protein-centered constraints. They show that G + C content, purine/pyrimidine distributions and the biological complexity of the organism are the most important factors determining base compositional rules and genome complexity. Three main facts are reported here: bacteria with high G + C content have more restrictions on base composition than those with low G + C content; at constant G + C content, more complex organisms, ranging from prokaryotes to higher eukaryotes (e.g. human), display an increase of repeats 10-20 nucleotides long, which are also partly responsible for long-range correlations; word selection of length 3 to 10 is stronger in human and in bacteria, for two distinct reasons. With respect to previous studies, they have also compared the genomic sequence of the archaeon Methanococcus jannaschii with those of bacteria and eukaryotes: it sometimes shows an intermediate statistical behaviour

  17. Radio resource allocation over fading channels under statistical delay constraints

    CERN Document Server

    Le-Ngoc, Tho

    2017-01-01

    This SpringerBrief presents radio resource allocation schemes for buffer-aided communications systems over fading channels under statistical delay constraints, in terms of upper-bounded average delay or delay-outage probability. The Brief starts by considering a source-destination communications link with data arriving at the source transmission buffer. In the first scenario, the joint optimal data admission control and power allocation problem for throughput maximization is considered, where the source is assumed to operate under maximum power and average delay constraints. In the second scenario, optimal power allocation problems for energy harvesting (EH) communications systems under average-delay or delay-outage constraints are explored, where the EH source harvests random amounts of energy from renewable energy sources and stores the harvested energy in a battery during data transmission. Online resource allocation algorithms are developed when the statistical knowledge of the random channel fading, data arrivals...

  18. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  19. Supergravity constraints on monojets

    International Nuclear Information System (INIS)

    Nandi, S.

    1986-01-01

    In the standard model, supplemented by N = 1 minimal supergravity, all the supersymmetric particle masses can be expressed in terms of a few unknown parameters. The resulting mass relations, and the laboratory and cosmological bounds on these superpartner masses, are used to put constraints on the supersymmetric origin of the CERN monojets. The latest MAC data at PEP exclude scalar quarks with masses up to 45 GeV as the origin of these monojets. The cosmological bounds, for a stable photino, exclude the mass range necessary for the light gluino-heavy squark production interpretation. These difficulties can be avoided by going beyond the minimal supergravity theory. Irrespective of the monojets, the importance of a stable photino as the source of the cosmological dark matter is emphasized

  20. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Valencia, Frank Dan

    Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies...... temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronous timed reactive systems. The calculus is built upon a few basic ideas, but it captures several aspects of timed systems. As tcc, ntcc...... structures, robotic devices, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully-abstract for a substantial fragment...

  1. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...

  2. Social Constraints on Animate Vision

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2000-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  3. Modifier constraint in alkali borophosphate glasses using topological constraint theory

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiang [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zeng, Huidan, E-mail: hdzeng@ecust.edu.cn [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Jiang, Qi [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zhao, Donghui [Unifrax Corporation, Niagara Falls, NY 14305 (United States); Chen, Guorong [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Wang, Zhaofeng; Sun, Luyi [Department of Chemical & Biomolecular Engineering and Polymer Program, Institute of Materials Science, University of Connecticut, Storrs, CT 06269 (United States); Chen, Jianding [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)

    2016-12-01

    In recent years, composition-dependent properties of glasses have been successfully predicted using the topological constraint theory. The constraints of the glass network derive from two main parts: network formers and network modifiers. The constraints of the network formers can be calculated on the basis of the topological structure of the glass. However, the latter cannot be accurately calculated in this way, because of the existence of ionic bonds. In this paper, the constraints of the modifier ions in phosphate glasses were thoroughly investigated using the topological constraint theory. The results show that the constraints of the modifier ions increase gradually with the addition of alkali oxides. Furthermore, an improved topological constraint theory for borophosphate glasses is proposed by taking the composition-dependent constraints of the network modifiers into consideration. The proposed theory is subsequently evaluated by analyzing the composition dependence of the glass transition temperature in alkali borophosphate glasses. This method is expected to extend to other similar glass systems containing alkali ions.
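For the network-former part mentioned in this record, the constraint count follows standard Maxwell enumeration: an atom of coordination r contributes r/2 bond-stretching and (2r - 3) bond-bending constraints, with rigidity onset when constraints per atom equal the three translational degrees of freedom. A minimal sketch of that counting (illustrative only, not the paper's full modifier treatment):

```python
def constraints_per_atom(r):
    """Maxwell counting for a network-former atom of coordination r (r >= 2):
    r/2 bond-stretching constraints plus (2r - 3) bond-bending constraints."""
    return r / 2 + (2 * r - 3)

def mean_coordination_at_rigidity_threshold():
    """Solve n(r) = 3 (constraints = degrees of freedom in 3D):
    5r/2 - 3 = 3  =>  r = 2.4, the classic Phillips-Thorpe threshold."""
    return 2 * (3 + 3) / 5

print(constraints_per_atom(4))                     # 4-coordinated former: 2 + 5 = 7
print(mean_coordination_at_rigidity_threshold())   # 2.4
```

The paper's contribution is precisely that the modifier-ion term cannot be read off the topology this way and must be made composition-dependent.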

  4. Seismological Constraints on Geodynamics

    Science.gov (United States)

    Lomnitz, C.

    2004-12-01

    Earth is an open thermodynamic system radiating heat energy into space. A transition from geostatic earth models such as PREM to geodynamical models is needed. We discuss possible thermodynamic constraints on the variables that govern the distribution of forces and flows in the deep Earth. In this paper we assume that the temperature distribution is time-invariant, so that all flows vanish at steady state except for the heat flow J_q per unit area (Kuiken, 1994). Superscript 0 will refer to the steady state while x denotes the excited state of the system. We may write σ^0 = (J_q^0 · X_q^0)/T, where X_q is the conjugate force corresponding to J_q, and σ is the rate of entropy production per unit volume. Consider now what happens after the occurrence of an earthquake at time t = 0 and location (0,0,0). The earthquake introduces a stress drop ΔP(x,y,z) at all points of the system. Response flows are directed along the gradients toward the epicentral area, and the entropy production will increase with time as (Prigogine, 1947) σ^x(t) = σ^0 + α_1/(t+β) + α_2/(t+β)^2 + etc. A seismological constraint on the parameters may be obtained from Omori's empirical relation N(t) = p/(t+q), where N(t) is the number of aftershocks at time t following the main shock. It may be assumed that p/q ~ α_1/β times a constant. Another useful constraint is the Mexican-hat geometry of the seismic transient as obtained e.g. from InSAR radar interferometry. For strike-slip events such as Landers the distribution of ΔP is quadrantal, and an oval-shaped seismicity gap develops about the epicenter. A weak outer triggering maximum is found at a distance of about 17 fault lengths. Such patterns may be extracted from earthquake catalogs by statistical analysis (Lomnitz, 1996). Finally, the energy of the perturbation must be at least equal to the recovery energy. The total energy expended in an aftershock sequence can be found approximately by integrating the local contribution over

  5. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  6. Observational constraints on interstellar chemistry

    International Nuclear Information System (INIS)

    Winnewisser, G.

    1984-01-01

    The author points out presently existing observational constraints in the detection of interstellar molecular species and the limits they may cast on our knowledge of interstellar chemistry. The constraints which arise from the molecular side are summarised and some technical difficulties encountered in detecting new species are discussed. Some implications for our understanding of molecular formation processes are considered. (Auth.)

  7. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  8. Fixed Costs and Hours Constraints

    Science.gov (United States)

    Johnson, William R.

    2011-01-01

    Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…

  9. An Introduction to 'Creativity Constraints'

    DEFF Research Database (Denmark)

    Onarheim, Balder; Biskjær, Michael Mose

    2013-01-01

    Constraints play a vital role as both restrainers and enablers in innovation processes by governing what the creative agent/s can and cannot do, and what the output can and cannot be. Notions of constraints are common in creativity research, but current contributions are highly dispersed due to n...

  10. Constraint Programming for Context Comprehension

    DEFF Research Database (Denmark)

    Christiansen, Henning

    2014-01-01

    A close similarity is demonstrated between context comprehension, such as discourse analysis, and constraint programming. The constraint store takes the role of a growing knowledge base learned throughout the discourse, and a suitable con- straint solver does the job of incorporating new pieces...

  11. Linear-constraint wavefront control for exoplanet coronagraphic imaging systems

    Science.gov (United States)

    Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean

    2017-01-01

    A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets in a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are the two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently in the High Contrast Imaging Lab at Princeton University (HCIL), a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated using numerical simulation. Instead of only constraining the average contrast over the entire search area, the new controller constrains the electric field of each single pixel using linear programming, which could lead to significant increases in the speed of wavefront correction and also create more uniform dark holes. As a follow-up to this work, another linear-constraint controller, modified from EFC, is demonstrated theoretically and numerically, and laboratory verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with EFC and stroke minimization.
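The EFC baseline against which the record's linear-constraint controllers are compared is a regularized least-squares update on a linearized field model. A toy sketch with a random Jacobian standing in for the real DM response matrix (all names and sizes here are illustrative assumptions, not the HCIL implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not lab data): E is the estimated focal-plane
# electric field over dark-hole pixels, G the Jacobian mapping DM actuator
# commands u to field changes, so the linear model is E_new ≈ E + G @ u.
n_pix, n_act = 50, 12
E = rng.standard_normal(n_pix)
G = rng.standard_normal((n_pix, n_act))

# EFC-style update: Tikhonov-regularized least squares,
# u = -(G^T G + alpha I)^{-1} G^T E, trading contrast gain against stroke.
alpha = 1e-2
u = -np.linalg.solve(G.T @ G + alpha * np.eye(n_act), G.T @ E)

contrast_before = np.mean(np.abs(E) ** 2)
contrast_after = np.mean(np.abs(E + G @ u) ** 2)
print(contrast_after < contrast_before)  # True: the update reduces mean contrast
```

The linear-constraint variant in the record instead bounds the field pixel by pixel via linear programming rather than minimizing this single averaged objective.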

  12. Natural Constraints to Species Diversification.

    Directory of Open Access Journals (Sweden)

    Eric Lewitus

    2016-08-01

    Full Text Available Identifying modes of species diversification is fundamental to our understanding of how biodiversity changes over evolutionary time. Diversification modes are captured in species phylogenies, but characterizing the landscape of diversification has been limited by the analytical tools available for directly comparing phylogenetic trees of groups of organisms. Here, we use a novel, non-parametric approach and 214 family-level phylogenies of vertebrates representing over 500 million years of evolution to identify major diversification modes, to characterize phylogenetic space, and to evaluate the bounds and central tendencies of species diversification. We identify five principal patterns of diversification to which all vertebrate families hold. These patterns, mapped onto multidimensional space, constitute a phylogenetic space with distinct properties. Firstly, phylogenetic space occupies only a portion of all possible tree space, showing family-level phylogenies to be constrained to a limited range of diversification patterns. Secondly, the geometry of phylogenetic space is delimited by quantifiable trade-offs in tree size and the heterogeneity and stem-to-tip distribution of branching events. These trade-offs are indicative of the instability of certain diversification patterns and effectively bound speciation rates (for successful clades) within upper and lower limits. Finally, both the constrained range and geometry of phylogenetic space are established by the differential effects of macroevolutionary processes on patterns of diversification. Given these properties, we show that the average path through phylogenetic space over evolutionary time traverses several diversification stages, each of which is defined by a different principal pattern of diversification and directed by a different macroevolutionary process. The identification of universal patterns and natural constraints to diversification provides a foundation for understanding the

  13. Natural Constraints to Species Diversification.

    Science.gov (United States)

    Lewitus, Eric; Morlon, Hélène

    2016-08-01

    Identifying modes of species diversification is fundamental to our understanding of how biodiversity changes over evolutionary time. Diversification modes are captured in species phylogenies, but characterizing the landscape of diversification has been limited by the analytical tools available for directly comparing phylogenetic trees of groups of organisms. Here, we use a novel, non-parametric approach and 214 family-level phylogenies of vertebrates representing over 500 million years of evolution to identify major diversification modes, to characterize phylogenetic space, and to evaluate the bounds and central tendencies of species diversification. We identify five principal patterns of diversification to which all vertebrate families hold. These patterns, mapped onto multidimensional space, constitute a phylogenetic space with distinct properties. Firstly, phylogenetic space occupies only a portion of all possible tree space, showing family-level phylogenies to be constrained to a limited range of diversification patterns. Secondly, the geometry of phylogenetic space is delimited by quantifiable trade-offs in tree size and the heterogeneity and stem-to-tip distribution of branching events. These trade-offs are indicative of the instability of certain diversification patterns and effectively bound speciation rates (for successful clades) within upper and lower limits. Finally, both the constrained range and geometry of phylogenetic space are established by the differential effects of macroevolutionary processes on patterns of diversification. Given these properties, we show that the average path through phylogenetic space over evolutionary time traverses several diversification stages, each of which is defined by a different principal pattern of diversification and directed by a different macroevolutionary process. 
The identification of universal patterns and natural constraints to diversification provides a foundation for understanding the deep-time evolution of

  14. Natural Constraints to Species Diversification

    Science.gov (United States)

    Lewitus, Eric; Morlon, Hélène

    2016-01-01

    Identifying modes of species diversification is fundamental to our understanding of how biodiversity changes over evolutionary time. Diversification modes are captured in species phylogenies, but characterizing the landscape of diversification has been limited by the analytical tools available for directly comparing phylogenetic trees of groups of organisms. Here, we use a novel, non-parametric approach and 214 family-level phylogenies of vertebrates representing over 500 million years of evolution to identify major diversification modes, to characterize phylogenetic space, and to evaluate the bounds and central tendencies of species diversification. We identify five principal patterns of diversification to which all vertebrate families hold. These patterns, mapped onto multidimensional space, constitute a phylogenetic space with distinct properties. Firstly, phylogenetic space occupies only a portion of all possible tree space, showing family-level phylogenies to be constrained to a limited range of diversification patterns. Secondly, the geometry of phylogenetic space is delimited by quantifiable trade-offs in tree size and the heterogeneity and stem-to-tip distribution of branching events. These trade-offs are indicative of the instability of certain diversification patterns and effectively bound speciation rates (for successful clades) within upper and lower limits. Finally, both the constrained range and geometry of phylogenetic space are established by the differential effects of macroevolutionary processes on patterns of diversification. Given these properties, we show that the average path through phylogenetic space over evolutionary time traverses several diversification stages, each of which is defined by a different principal pattern of diversification and directed by a different macroevolutionary process. 
The identification of universal patterns and natural constraints to diversification provides a foundation for understanding the deep-time evolution of

  15. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  16. Vocabulary Constraint on Texts

    Directory of Open Access Journals (Sweden)

    C. Sutarsyah

    2008-01-01

    Full Text Available This case study was carried out in the English Education Department of State University of Malang. The aim of the study was to identify and describe the vocabulary in the reading text and to seek if the text is useful for reading skill development. A descriptive qualitative design was applied to obtain the data. For this purpose, some available computer programs were used to find the description of vocabulary in the texts. It was found that the 20 texts containing 7,945 words are dominated by low frequency words which account for 16.97% of the words in the texts. The high frequency words occurring in the texts were dominated by function words. In the case of word levels, it was found that the texts have very limited number of words from GSL (General Service List of English Words (West, 1953. The proportion of the first 1,000 words of GSL only accounts for 44.6%. The data also show that the texts contain too large proportion of words which are not in the three levels (the first 2,000 and UWL. These words account for 26.44% of the running words in the texts.  It is believed that the constraints are due to the selection of the texts which are made of a series of short-unrelated texts. This kind of text is subject to the accumulation of low frequency words especially those of content words and limited of words from GSL. It could also defeat the development of students' reading skills and vocabulary enrichment.

  17. The Effects of Average Revenue Regulation on Electricity Transmission Investment and Pricing

    OpenAIRE

    Isamu Matsukawa

    2005-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occur...

  18. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
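The setting described in this record is the nonlinear Schrödinger equation with a periodically modulated nonlinearity coefficient. In standard notation (a sketch of the generic form, not the authors' exact scaling):

```latex
i\,u_t + \tfrac{1}{2}\,u_{xx} + \gamma(t)\,|u|^2 u = 0,
\qquad
\gamma(t) = \gamma_0 + \gamma_1\!\left(t/\epsilon\right),
```

where γ₁ is a periodic modulation (realized physically via Feshbach resonance management) and ε is the fast modulation period; averaging over the fast time scale then yields an effective equation for the slowly varying soliton envelope.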

  19. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  20. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  1. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  2. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  3. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Full Text Available Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  4. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  5. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    Science.gov (United States)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
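The surplus bookkeeping described in this record can be illustrated with a toy simulation. The update rules below are simplified stand-ins, not the authors' convergence-guaranteed algorithm: each node keeps an integer state and an integer surplus, and because a directed send has no matched reverse send, it is the total sum(x) + sum(s), rather than the state sum alone, that stays invariant.

```python
import random

def toy_surplus_gossip(edges, x0, steps=5000, seed=7):
    """Toy quantized gossip on a digraph with a surplus ledger.

    x[i]: integer state; s[i]: integer surplus recording units created or
    destroyed by unmatched directed updates, so sum(x) + sum(s) is invariant.
    (Simplified illustration of the bookkeeping idea only.)
    """
    rng = random.Random(seed)
    x = dict(x0)
    s = {i: 0 for i in x}
    for _ in range(steps):
        i, j = rng.choice(edges)       # activate directed edge i -> j
        if x[i] > x[j] + 1:            # receiver steps toward the sender...
            x[j] += 1
            s[j] -= 1                  # ...and the ledger absorbs the new unit
        elif x[i] < x[j] - 1:
            x[j] -= 1
            s[j] += 1
        if s[i] != 0:                  # forward one unit of surplus (one bit)
            step = 1 if s[i] > 0 else -1
            s[i] -= step
            s[j] += step
    return x, s

ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # strongly connected digraph
x, s = toy_surplus_gossip(ring, {0: 0, 1: 10, 2: 2, 3: 7, 4: 1})
print(sum(x.values()) + sum(s.values()))  # 20: the conserved total
```

In the actual algorithm the surplus is not just a ledger but feeds back into the state updates, which is what yields exact quantized average consensus on arbitrary strongly connected digraphs.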

  6. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which

  7. Machine tongues. X. Constraint languages

    Energy Technology Data Exchange (ETDEWEB)

    Levitt, D.

    Constraint languages and programming environments will help the designer produce a lucid description of a problem domain, and then of particular situations and problems in it. Early versions of these languages were given descriptions of real-world domain constraints, like the operation of electrical and mechanical parts. More recently, the author has automated a vocabulary for describing musical jazz phrases, using a constraint language as a jazz improviser. General constraint languages will handle all of these domains. Once the model is in place, the system will connect built-in code fragments and algorithms to answer questions about situations; that is, to help solve problems. Bugs will surface not in code, but in designs themselves. 15 references.

  8. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  9. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  10. Fluid convection, constraint and causation

    Science.gov (United States)

    Bishop, Robert C.

    2012-01-01

    Complexity—nonlinear dynamics for my purposes in this essay—is rich with metaphysical and epistemological implications but is receiving sustained philosophical analysis only recently. I will explore some of the subtleties of causation and constraint in Rayleigh–Bénard convection as an example of a complex phenomenon, and extract some lessons for further philosophical reflection on top-down constraint and causation particularly with respect to causal foundationalism. PMID:23386955

  11. Receive antenna selection for underlay cognitive radio with instantaneous interference constraint

    KAUST Repository

    Hanif, Muhammad Fainan; Yang, Hongchuan; Alouini, Mohamed-Slim

    2015-01-01

    . These results are then applied to the outage and average bit error rate analysis when the secondary transmitter changes the transmit power in finite discrete levels to satisfy the instantaneous interference constraint at the primary receiver.

  12. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
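The bounce average itself is straightforward to evaluate numerically: it is the field-line integral of a quantity weighted by the time spent at each point, i.e. by 1/v∥. The sketch below is our own toy field model (B(s) = B0(1 + (s/L)²)), not the code described in the record:

```python
import math

# Numerical bounce average <f> = (integral of f ds/v_par) / (integral ds/v_par)
# between the turning points, with v_par = sqrt(2*(E - mu*B(s))/m).

def bounce_average(f, E, mu, B0=1.0, L=1.0, m=1.0, n=20000):
    s_turn = L * math.sqrt(E / (mu * B0) - 1.0)   # turning points where E = mu*B
    ds = 2.0 * s_turn / n
    num = den = 0.0
    for k in range(n):                            # midpoint rule keeps v_par > 0
        s = -s_turn + (k + 0.5) * ds
        v_par = math.sqrt(2.0 * (E - mu * B0 * (1.0 + (s / L) ** 2)) / m)
        num += f(s) * ds / v_par
        den += ds / v_par
    return num / den

# Bounce average of f(s) = s**2 for E = 2, mu = 1: the analytic value is 0.5,
# since the 1/v_par weighting emphasizes the turning points.
avg = bounce_average(lambda s: s * s, E=2.0, mu=1.0)
print(avg)  # close to 0.5
```

The midpoint rule handles the integrable 1/v∥ singularity at the turning points crudely but adequately, because the same error appears in numerator and denominator.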

  13. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
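The analytic machinery can be checked numerically in the simplest case: for a rank-2 tensor, the rotational (isotropic) average of any diagonal component equals trace/3. A Monte Carlo sketch of this check (our illustration, not the authors' scheme; the tensor values are invented):

```python
import numpy as np

# Monte Carlo rotational average of the (0,0) component of a rank-2 tensor,
# compared against the analytic isotropic average trace(T)/3.

rng = np.random.default_rng(42)

def random_rotation(rng):
    # QR of a Gaussian matrix gives a Haar-distributed orthogonal matrix;
    # the sign fixes make the distribution uniform and proper (det = +1).
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

T = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 0.5],
              [0.2, 0.0, 1.0]])

samples = []
for _ in range(10000):
    R = random_rotation(rng)
    samples.append((R @ T @ R.T)[0, 0])

print(np.mean(samples), np.trace(T) / 3)  # both close to 2.0
```

The higher-rank averages derived in the paper play the same role for multiphoton cross sections, where the analytic route is essential because brute-force sampling of high-rank tensors converges slowly.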

  14. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  15. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  16. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  17. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
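Of the families mentioned, ordered weighted averaging (OWA) is the easiest to illustrate: the weights attach to sorted positions ("the largest", "the second largest", ...) rather than to particular arguments, so special weight vectors recover the maximum, the minimum, and the arithmetic mean. A minimal sketch:

```python
# Ordered weighted averaging (OWA): sort the inputs in descending order,
# then take the weighted mean with position-attached weights.

def owa(weights, values):
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

x = [0.3, 0.9, 0.5]
print(owa([1, 0, 0], x))          # 0.9, the maximum
print(owa([0, 0, 1], x))          # 0.3, the minimum
print(owa([1/3, 1/3, 1/3], x))    # the arithmetic mean, about 0.567
```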

  18. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  19. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  20. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
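The iteration can be sketched in the spirit of the abstract (the details below are our assumption, not the authors' exact equations): each flow's average share of a WFQ link is proportional to its weight, but a flow never receives more than its input rate; capacity freed by such capped flows is redistributed among the remaining flows until the allocation is stable.

```python
# Iterative weighted bandwidth allocation for a WFQ-scheduled link
# (illustrative sketch; units are arbitrary, e.g. Mbit/s).

def wfq_average_bandwidth(link_speed, weights, input_rates):
    alloc = [0.0] * len(weights)
    capacity = link_speed
    active = set(range(len(weights)))
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        capped = [i for i in active if input_rates[i] <= share[i]]
        if not capped:                   # every remaining flow is bottlenecked
            for i in active:
                alloc[i] = share[i]
            break
        for i in capped:                 # capped flows keep their input rate
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
            active.remove(i)
    return alloc

# 10 Mbit/s link, weights 5:3:2, one light flow that only offers 1 Mbit/s:
print(wfq_average_bandwidth(10.0, [5, 3, 2], [9.0, 9.0, 1.0]))
# [5.625, 3.375, 1.0]
```

The light flow keeps its 1 Mbit/s, and the remaining 9 Mbit/s is split 5:3 between the two backlogged flows.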

  1. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  2. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  3. Developmental constraints on behavioural flexibility.

    Science.gov (United States)

    Holekamp, Kay E; Swanson, Eli M; Van Meter, Page E

    2013-05-19

    We suggest that variation in mammalian behavioural flexibility not accounted for by current socioecological models may be explained in part by developmental constraints. From our own work, we provide examples of constraints affecting variation in behavioural flexibility, not only among individuals, but also among species and higher taxonomic units. We first implicate organizational maternal effects of androgens in shaping individual differences in aggressive behaviour emitted by female spotted hyaenas throughout the lifespan. We then compare carnivores and primates with respect to their locomotor and craniofacial adaptations. We inquire whether antagonistic selection pressures on the skull might impose differential functional constraints on evolvability of skulls and brains in these two orders, thus ultimately affecting behavioural flexibility in each group. We suggest that, even when carnivores and primates would theoretically benefit from the same adaptations with respect to behavioural flexibility, carnivores may nevertheless exhibit less behavioural flexibility than primates because of constraints imposed by past adaptations in the morphology of the limbs and skull. Phylogenetic analysis consistent with this idea suggests greater evolutionary lability in relative brain size within families of primates than carnivores. Thus, consideration of developmental constraints may help elucidate variation in mammalian behavioural flexibility.

  4. Data assimilation with inequality constraints

    Science.gov (United States)

    Thacker, W. C.

    If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
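The expedient described here, solve the unconstrained analysis, then re-enforce any violated bounds as error-free data so that correlated background errors propagate the correction, can be sketched on a toy two-variable problem. All numbers below are invented for illustration; this is a sketch of the idea, not the paper's algorithm:

```python
import numpy as np

# Optimal-interpolation analysis: xa = xb + K (y - H xb),
# with gain K = B H^T (H B H^T + R)^{-1}.
def oi_analysis(xb, B, H, R, y):
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Enforce lower bounds by treating each active constraint as a (nearly)
# error-free pseudo-observation and re-analysing until no violation remains.
def constrained_oi(xb, B, H, R, y, lower, max_iter=10):
    H, R, y = H.copy(), R.copy(), y.copy()
    for _ in range(max_iter):
        xa = oi_analysis(xb, B, H, R, y)
        viol = np.where(xa < lower - 1e-6)[0]
        if viol.size == 0:
            return xa
        for k in viol:                         # active constraint -> exact datum
            row = np.zeros(len(xb)); row[k] = 1.0
            H = np.vstack([H, row])
            R = np.block([[R, np.zeros((R.shape[0], 1))],
                          [np.zeros((1, R.shape[1])), np.array([[1e-10]])]])
            y = np.append(y, lower[k])
    return xa

# Two positively correlated variables that must stay non-negative; the
# observation alone would push the first one below zero.
xb = np.array([0.2, 0.5])
B = np.array([[0.04, 0.03], [0.03, 0.04]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
y = np.array([-0.4])
xa = constrained_oi(xb, B, H, R, y, lower=np.zeros(2))
print(xa)  # approximately [0.0, 0.35]
```

Enforcing the bound on the first variable also pulls the second one down (from the unconstrained 0.14 it would otherwise keep drifting with) through the off-diagonal background covariance, exactly the correlated correction the paper discusses.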

  5. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
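A simplified version of this idea, reject whole maps whose defective area is too large, then average per pixel while pruning pixels measured by too few maps, can be sketched with NumPy. This is our sketch, not the authors' algorithm; thresholds and data are invented:

```python
import numpy as np

# Robust per-pixel averaging over a stack of phase maps with NaN defects.

def robust_phase_average(maps, max_bad_frac=0.2, min_samples=3):
    maps = np.asarray(maps, dtype=float)                 # (n_maps, H, W)
    bad_frac = np.isnan(maps).mean(axis=(1, 2))
    kept = maps[bad_frac <= max_bad_frac]                # drop defective maps
    count = np.sum(~np.isnan(kept), axis=0)              # samples per pixel
    mean = np.where(count >= min_samples, np.nanmean(kept, axis=0), np.nan)
    std = np.where(count >= min_samples, np.nanstd(kept, axis=0), np.nan)
    return mean, std

rng = np.random.default_rng(1)
good = [np.ones((8, 8)) + 0.01 * rng.standard_normal((8, 8)) for _ in range(5)]
bad = np.full((8, 8), np.nan)
bad[:2, :] = 1.0                                         # large-area void
mean, std = robust_phase_average(good + [bad])
print(np.nanmax(np.abs(mean - 1.0)) < 0.05)              # True: bad map rejected
```

A single large-area defect no longer spoils the result, and the per-pixel standard deviation is computed only where enough reliable samples remain.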

  6. Constraint programming and decision making

    CERN Document Server

    Kreinovich, Vladik

    2014-01-01

    In many application areas, it is necessary to make effective decisions under constraints. Several area-specific techniques are known for such decision problems; however, because these techniques are area-specific, it is not easy to apply each technique to other applications areas. Cross-fertilization between different application areas is one of the main objectives of the annual International Workshops on Constraint Programming and Decision Making. Those workshops, held in the US (El Paso, Texas), in Europe (Lyon, France), and in Asia (Novosibirsk, Russia), from 2008 to 2012, have attracted researchers and practitioners from all over the world. This volume presents extended versions of selected papers from those workshops. These papers deal with all stages of decision making under constraints: (1) formulating the problem of multi-criteria decision making in precise terms, (2) determining when the corresponding decision problem is algorithmically solvable; (3) finding the corresponding algorithms, and making...

  7. Stochastic population dynamics under resource constraints

    Energy Technology Data Exchange (ETDEWEB)

    Gavane, Ajinkya S., E-mail: ajinkyagavane@gmail.com; Nigam, Rahul, E-mail: rahul.nigam@hyderabad.bits-pilani.ac.in [BITS Pilani Hyderabad Campus, Shameerpet, Hyd - 500078 (India)

    2016-06-02

    This paper investigates the population growth of a certain species in which every generation reproduces three times over a predefined period, under certain constraints on the resources needed for survival of the population. We study the survival period of the species by randomizing the reproduction probabilities within a window at the same predefined ages, while the resources are produced by the working force of the population at a variable rate. This randomness in the reproduction rate makes the population growth stochastic in nature, so one cannot predict the exact form of evolution. Hence we study the growth by running simulations for such a population and taking an ensemble average over 500 to 5000 such simulations as needed. While the population reproduces in a stochastic manner, we have implemented a constraint on the amount of resources available to the population. This is important to make the simulations more realistic. The rate of resource production is then tuned to find the rate which suits the survival of the species. We also compute the mean lifetime of the species corresponding to different resource production rates. A study of these outcomes in the parameter space defined by the reproduction probabilities and the rate of resource production is carried out.
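The structure of such an ensemble experiment is easy to sketch. Everything below (initial population, reproduction window, consumption rule) is invented for illustration and is not the paper's model; the point is the pattern: one stochastic run returns a survival time, and the quantity of interest is its ensemble average at a given resource production rate.

```python
import random

# One stochastic run: individuals get three reproduction chances per step
# with a randomized probability; resources are produced by the living
# population and consumed by everyone. Returns the survival time.
def lifetime(production_rate, steps=200, seed=None):
    rng = random.Random(seed)
    pop, resources = 50, 100.0
    for t in range(steps):
        p = rng.uniform(0.4, 0.6)                 # randomized reproduction prob.
        births = sum(rng.random() < p for _ in range(3 * pop))
        resources += production_rate * pop        # produced by the working force
        resources -= pop + births                 # consumed for survival
        pop = births if resources > 0 else 0      # starvation kills the cohort
        if pop == 0:
            return t + 1
    return steps

# Ensemble-averaged survival time at one resource production rate.
mean_life = sum(lifetime(1.2, seed=k) for k in range(500)) / 500
print(mean_life)
```

Sweeping `production_rate` and plotting `mean_life` against it reproduces the kind of parameter-space study the abstract describes.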

  8. Design constraints for electron-positron linear colliders

    International Nuclear Information System (INIS)

    Mondelli, A.; Chernin, D.

    1991-01-01

    A prescription for examining the design constraints in the e⁺e⁻ linear collider is presented. By specifying limits on certain key quantities, an allowed region of parameter space can be presented, hopefully clarifying some of the design options. The model starts with the parameters at the interaction point (IP), where the expressions for the luminosity, the disruption parameter, beamstrahlung, and average beam power constitute four relations among eleven IP parameters. By specifying the values of five of these quantities, and using these relationships, the unknown parameter space can be reduced to a two-dimensional space. Curves of constraint can be plotted in this space to define an allowed operating region. An accelerator model, based on a modified, scaled SLAC structure, can then be used to derive the corresponding parameter space including the constraints derived from power consumption and wake field effects. The results show that longer, lower-gradient accelerators are advantageous.

  9. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  10. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
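The radial-averaging step can be sketched directly. This is our reconstruction of the idea for illustration, not the authors' code: each digitized outline is reduced to radial distances along equiangular rays from a ray centre, and the per-ray distances are then averaged across outlines in the same foot-length group.

```python
import math

# Reduce a digitized outline to an equiangular radial profile, then average
# profiles across outlines (illustrative sketch with synthetic circles).

def radial_profile(points, centre, n_rays=72):
    cx, cy = centre
    bins = [[] for _ in range(n_rays)]
    for x, y in points:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(ang / (2 * math.pi) * n_rays) % n_rays
        bins[k].append(math.hypot(x - cx, y - cy))
    return [sum(b) / len(b) if b else float("nan") for b in bins]

def average_profiles(profiles):
    n = len(profiles[0])
    return [sum(p[i] for p in profiles) / len(profiles) for i in range(n)]

# Two circular "outlines" of radius 30 and 32 around the same ray centre:
outlines = [[(r * math.cos(2 * math.pi * t / 100),
              r * math.sin(2 * math.pi * t / 100)) for t in range(100)]
            for r in (30.0, 32.0)]
profiles = [radial_profile(o, (0.0, 0.0)) for o in outlines]
print(average_profiles(profiles)[0])  # close to 31.0
```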

  11. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
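The core computation, a trailing moving average of a yearly series correlated with a second series, can be reproduced on synthetic numbers (no real index data; the toy series below is constructed so the correlation is perfect):

```python
# Trailing moving average and Pearson correlation, from scratch.

def moving_average(series, window):
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    na = sum((x - ma) ** 2 for x in a) ** 0.5
    nb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (na * nb)

misery = [float(x) for x in range(1930, 1960)]   # toy upward-trending series
literary = [m + 5.0 for m in misery[10:]]        # tracks the 11-year average
trailing = moving_average(misery, 11)            # 11-year trailing window
print(pearson(trailing, literary[:len(trailing)]))  # close to 1.0
```

In the paper the analogous step is sweeping the window length and locating the goodness-of-fit peak at 11 years.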

  13. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  14. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
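The gradient-based recipe can be sketched schematically: perturb the parameter with a random dither, form a gradient estimate from the perturbed measurements, and climb it. The sketch below is in the spirit of the book but uses a two-measurement gradient estimate of our own choosing; the map, gains, and dither are invented:

```python
import random

# Model-free extremum seeking with a random +/-1 dither and a symmetric
# two-measurement gradient estimate (schematic sketch).

def extremum_seek(J, theta0, a=0.1, k=0.05, steps=200, seed=0):
    rng = random.Random(seed)
    theta = theta0
    for _ in range(steps):
        eta = rng.choice((-1.0, 1.0))          # stochastic perturbation
        grad = (J(theta + a * eta) - J(theta - a * eta)) / (2 * a) * eta
        theta += k * grad                      # climb the gradient estimate
    return theta

# Unknown map with a maximum at theta = 2; the optimizer never sees its form.
J = lambda th: 3.0 - (th - 2.0) ** 2
print(extremum_seek(J, theta0=-1.0))           # converges near 2.0
```

Note that only measured values of J are used, which is the defining feature of extremum seeking as a real-time, model-free method.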

  15. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus to improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression relating the atmospheric structure constant to oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  16. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  17. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  18. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  19. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
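The elementwise trimmed average at the heart of the Trimmed Grassmann Average can be sketched directly (our minimal illustration; the trim fraction and data are invented): for each pixel, drop the largest and smallest fraction of values across observations before averaging, so a gross pixel outlier cannot drag the mean.

```python
import numpy as np

# Per-pixel trimmed average over a stack of observations.

def trimmed_average(X, trim=0.2):
    """X: (n_observations, n_pixels); trim: fraction cut from each tail."""
    X = np.sort(X, axis=0)
    k = int(trim * X.shape[0])
    return X[k:X.shape[0] - k].mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(5.0, 0.1, size=(100, 4))   # 100 observations of 4 "pixels"
X[0, 2] = 1e6                             # one grossly corrupted value
plain = X.mean(axis=0)
robust = trimmed_average(X, trim=0.1)
print(plain[2], robust[2])                # mean is ruined; trimmed stays near 5
```

This per-element robustness, combined with the subspace-averaging machinery of the full method, is what makes the approach scale to datasets where outliers are unavoidable.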

  20. Constraint elimination in dynamical systems

    Science.gov (United States)

    Singh, R. P.; Likins, P. W.

    1989-01-01

    Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
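The SVD-based elimination described above can be sketched in a few lines: the null space of the constraint Jacobian supplies a minimal set of unconstrained coordinates onto which the dynamics are projected. The matrices below (a hypothetical 3-DOF system with one linear velocity constraint) are purely illustrative, not from the paper.

```python
import numpy as np

# Hypothetical 3-DOF system with one linear velocity constraint A @ xdot = 0
M = np.diag([2.0, 1.0, 1.0])          # mass matrix
f = np.array([0.0, -9.81, 1.0])       # applied forces
A = np.array([[1.0, 1.0, 0.0]])       # constraint Jacobian (1 x 3)

# SVD of A: the rows of Vt beyond rank(A) span the null space of A
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
N = Vt[rank:].T                        # 3 x 2 null-space basis, A @ N = 0

# Project the dynamics onto the constraint-free subspace (minimum dimension):
# N^T (M xddot - f) = 0 with xddot = N uddot, since N^T A^T = 0
M_red = N.T @ M @ N                    # 2 x 2 reduced mass matrix
f_red = N.T @ f
u_ddot = np.linalg.solve(M_red, f_red)  # reduced accelerations
x_ddot = N @ u_ddot                     # back to full coordinates
```

The reduced system has exactly (number of DOF minus number of independent constraints) equations, and any acceleration reconstructed through `N` satisfies the constraint automatically.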

  1. Constraint Programming versus Mathematical Programming

    DEFF Research Database (Denmark)

    Hansen, Jesper

    2003-01-01

    Constraint Logic Programming (CLP) is a relatively new technique from the 1980s, with origins in Computer Science and Artificial Intelligence. Lately, much research has focused on ways of using CLP within the paradigm of Operations Research (OR) and vice versa. The purpose of this paper...

  2. Sterile neutrino constraints from cosmology

    DEFF Research Database (Denmark)

    Hamann, Jan; Hannestad, Steen; Raffelt, Georg G.

    2012-01-01

    The presence of light particles beyond the standard model's three neutrino species can profoundly impact the physics of decoupling and primordial nucleosynthesis. I review the observational signatures of extra light species, present constraints from recent data, and discuss the implications of possible sterile neutrinos with O(eV) masses for cosmology....

  3. Intertemporal consumption and credit constraints

    DEFF Research Database (Denmark)

    Leth-Petersen, Søren

    2010-01-01

    There is continuing controversy over the importance of credit constraints. This paper investigates whether total household expenditure and debt are affected by an exogenous increase in access to credit provided by a credit market reform that enabled Danish house owners to use housing equity...

  4. Financial Constraints: Explaining Your Position.

    Science.gov (United States)

    Cargill, Jennifer

    1988-01-01

    Discusses the importance of educating library patrons about the library's finances and the impact of budget constraints and the escalating cost of serials on materials acquisition. Steps that can be taken in educating patrons by interpreting and publicizing financial information are suggested. (MES)

  5. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
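A minimal numerical sketch of evidence-weighted model averaging as described above, using invented toy models with fixed parameters (so each model's evidence reduces to its likelihood; a complexity penalty would appear if parameters were integrated out):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.5, 1.0, size=50)  # invented observations

def log_evidence(data, mu, sigma):
    # With fixed parameters the marginal likelihood is just the likelihood
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                        - (data - mu) ** 2 / (2 * sigma ** 2)))

# Two candidate models of the environment (names and values are invented)
models = {"M1": (0.0, 1.0), "M2": (1.0, 1.0)}
logev = np.array([log_evidence(data, mu, s) for mu, s in models.values()])

# Posterior model probabilities under a uniform prior: softmax of log evidence
w = np.exp(logev - logev.max())
w /= w.sum()

# The model-averaged prediction weights each model's prediction by its evidence
pred = sum(wi * mu for wi, (mu, s) in zip(w, models.values()))
print(dict(zip(models, np.round(w, 3))), round(pred, 3))
```

Because the invented data sit between the two model means, neither model dominates and the averaged prediction lies between their individual predictions.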

  6. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  7. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  8. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  9. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  10. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  11. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  12. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies

  13. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no.

  14. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  15. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  16. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related with model averaging, and then evaluates two policies, i.e. West Development Drive in China and fiscal decentralization in U.S, using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  17. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10, 2010-01-01. On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  18. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  19. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
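For the EOQ model mentioned above, the average-cost criterion admits the classic closed forms; a quick sketch with invented parameter values:

```python
import math

def eoq_average_cost(K, D, h):
    """Classic EOQ under the average-cost criterion:
    optimal lot size Q* = sqrt(2*K*D/h), average cost AC(Q*) = sqrt(2*K*D*h)."""
    Q = math.sqrt(2 * K * D / h)
    return Q, math.sqrt(2 * K * D * h)

# Illustrative numbers (not from the paper): setup cost 100, demand 400/yr,
# holding cost 2 per unit per year
Q, ac = eoq_average_cost(K=100, D=400, h=2)
print(Q, ac)  # → 200.0 400.0
```

The NPV approach discounts the same cash flows instead of averaging them per unit time, which is why the two criteria agree only under certain conditions (e.g. small discount rates).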

  20. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  1. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  2. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  3. Constraints on cosmological parameters in power-law cosmology

    International Nuclear Information System (INIS)

    Rani, Sarita; Singh, J.K.; Altaibayeva, A.; Myrzakulov, R.; Shahalam, M.

    2015-01-01

    In this paper, we examine observational constraints on the power-law cosmology, essentially dependent on two parameters: H_0 (Hubble constant) and q (deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and 580 points of Union2.1 compilation data, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our study gives better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it agrees well with the Union2.1 compilation data but not with the H(z) data. However, the constraints obtained on the average values of H_0 and q using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform the statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → −1. Finally, we observe that although the power-law cosmology explains several prominent features of the evolution of the Universe, it fails in details
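In a power-law cosmology a ∝ t^α one has α = 1/(1+q) and hence H(z) = H_0 (1+z)^{1+q}, so observables such as the distance modulus follow directly from the two parameters; a sketch with illustrative (not fitted) parameter values:

```python
import numpy as np

def H(z, H0=70.0, q=-0.3):
    """Hubble rate in a power-law cosmology a ~ t^alpha, alpha = 1/(1+q):
    H(z) = H0 * (1 + z)**(1 + q)   [km/s/Mpc]"""
    return H0 * (1 + z) ** (1 + q)

def lum_dist(z, H0=70.0, q=-0.3, c=299792.458, n=2001):
    """Luminosity distance d_L = (1 + z) * c * integral_0^z dz'/H(z')  [Mpc]."""
    zs = np.linspace(0.0, z, n)
    f = 1.0 / H(zs, H0, q)
    integral = float(np.sum((f[1:] + f[:-1]) * np.diff(zs)) / 2.0)  # trapezoid
    return (1 + z) * c * integral

# Distance modulus at z = 0.5 for the illustrative H0 and q above
mu = 5 * np.log10(lum_dist(0.5)) + 25
print(round(mu, 2))
```

Comparing such predicted distance moduli against the supernova compilation points is the essence of the chi-square fits the paper performs.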

  4. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
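The recursion analysed above can be written S_k = (1 − α)S_{k−1} + α P_k, with P_k the raw periodogram of the k-th data segment; a sketch on simulated white noise (segment length and α are illustrative choices, not from the paper):

```python
import numpy as np

def exp_avg_psd(x, seg_len, alpha):
    """Exponentially averaged periodogram PSD estimate:
    S_k = (1 - alpha) * S_{k-1} + alpha * P_k
    over consecutive non-overlapping segments of length seg_len."""
    S = None
    for start in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[start:start + seg_len]
        P = np.abs(np.fft.rfft(seg)) ** 2 / seg_len  # raw periodogram
        S = P if S is None else (1 - alpha) * S + alpha * P
    return S

rng = np.random.default_rng(1)
x = rng.normal(size=4096)               # unit-variance white noise
S = exp_avg_psd(x, seg_len=256, alpha=0.2)
# For white noise the averaged PSD should be roughly flat near the variance
print(round(float(np.mean(S[1:-1])), 2))
```

A smaller α corresponds to a larger time constant, trading slower tracking for lower variance of the estimate, which is exactly the regime in which the abstract's Gaussian approximation applies.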

  5. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  6. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The breakdown of the average work productivity by the factors affecting it is carried out by means of the u-substitution method.

  7. Complementary Set Matrices Satisfying a Column Correlation Constraint

    OpenAIRE

    Wu, Di; Spasojevic, Predrag

    2006-01-01

    Motivated by the problem of reducing the peak to average power ratio (PAPR) of transmitted signals, we consider a design of complementary set matrices whose column sequences satisfy a correlation constraint. The design algorithm recursively builds a collection of $2^{t+1}$ mutually orthogonal (MO) complementary set matrices starting from a companion pair of sequences. We relate correlation properties of column sequences to that of the companion pair and illustrate how to select an appropriate...
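The recursive construction from a companion pair can be illustrated with a standard Golay complementary pair, whose aperiodic autocorrelations cancel at every nonzero shift (this sketch shows only the pair-doubling recursion, not the full mutually orthogonal matrix collection of the paper):

```python
import numpy as np

def aperiodic_acf(seq):
    """Aperiodic autocorrelation of a +/-1 sequence at shifts 0..n-1."""
    n = len(seq)
    return np.array([int(np.dot(seq[:n - k], seq[k:])) for k in range(n)])

def extend(a, b):
    """One recursion step: (a, b) -> (a|b, a|-b) doubles the pair length
    while preserving complementarity."""
    return np.concatenate([a, b]), np.concatenate([a, -b])

# Companion (Golay) pair of length 2, recursed 3 times to length 16
a, b = np.array([1, 1]), np.array([1, -1])
for _ in range(3):
    a, b = extend(a, b)

acf_sum = aperiodic_acf(a) + aperiodic_acf(b)
print(acf_sum[0], acf_sum[1:].any())  # → 32 False
```

The vanishing autocorrelation sum at all nonzero shifts is what bounds the PAPR of signals built from complementary sets.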

  8. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  9. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  10. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
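The TAMSD functional itself is straightforward to compute from a single trajectory; a sketch for simulated Brownian motion, for which the TAMSD grows linearly in the lag (trajectory length and seed are illustrative):

```python
import numpy as np

def tamsd(traj, lag):
    """Time-averaged mean-square displacement at a given lag:
    TAMSD(lag) = < (x(t + lag) - x(t))^2 >_t  along one trajectory."""
    d = traj[lag:] - traj[:-lag]
    return float(np.mean(d ** 2))

rng = np.random.default_rng(2)
# Brownian trajectory as a cumulative sum of unit-variance Gaussian steps,
# so the ensemble MSD at lag k is k by construction
traj = np.cumsum(rng.normal(size=100_000))

# For Brownian motion the TAMSD grows linearly with the lag time
print([round(tamsd(traj, k), 1) for k in (1, 2, 4)])
```

The statistical question studied in the paper is how much this single-trajectory estimate fluctuates around the ensemble value, which their Laplace-transform formula quantifies exactly.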

  11. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = −0.9 Y_SM − 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  12. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  13. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, the cross-sectional average length of life (CAL), which has been seen as less sensitive to period changes, has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  14. Creativity from Constraints in Engineering Design

    DEFF Research Database (Denmark)

    Onarheim, Balder

    2012-01-01

    This paper investigates the role of constraints in limiting and enhancing creativity in engineering design. Based on a review of literature relating constraints to creativity, the paper presents a longitudinal participatory study from Coloplast A/S, a major international producer of disposable...... and ownership of formal constraints played a crucial role in defining their influence on creativity – along with the tacit constraints held by the designers. The designers were found to be highly constraint focused, and four main creative strategies for constraint manipulation were observed: blackboxing...

  15. A compendium of chameleon constraints

    International Nuclear Information System (INIS)

    Burrage, Clare; Sakstein, Jeremy

    2016-01-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth-forces. The chameleon is a popular dark energy candidate and also arises in f ( R ) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth-forces it is not unobservable and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  16. A compendium of chameleon constraints

    Energy Technology Data Exchange (ETDEWEB)

    Burrage, Clare [School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD (United Kingdom); Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd St., Philadelphia, PA 19104 (United States)

    2016-11-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth-forces. The chameleon is a popular dark energy candidate and also arises in f ( R ) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth-forces it is not unobservable and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  17. Self-Imposed Creativity Constraints

    DEFF Research Database (Denmark)

    Biskjaer, Michael Mose

    2013-01-01

    Abstract This dissertation epitomizes three years of research guided by the research question: how can we conceptualize creative self-binding as a resource in art and design processes? Concretely, the dissertation seeks to offer insight into the puzzling observation that highly skilled creative...... practitioners sometimes freely and intentionally impose rigid rules, peculiar principles, and other kinds of creative obstructions on themselves as a means to spur momentum in the process and reach a distinctly original outcome. To investigate this the dissertation is composed of four papers (Part II) framed...... of analysis. Informed by the insight that constraints both enable and restrain creative agency, the dissertation’s main contention is that creative self- binding may profitably be conceptualized as the exercise of self-imposed creativity constraints. Thus, the dissertation marks an analytical move from vague...

  18. Unitarity constraints on trimaximal mixing

    International Nuclear Information System (INIS)

    Kumar, Sanjeev

    2010-01-01

    When the neutrino mass eigenstate ν₂ is trimaximally mixed, the mixing matrix is called trimaximal. The middle column of the trimaximal mixing matrix is identical to tribimaximal mixing and the other two columns are subject to unitarity constraints. This corresponds to a mixing matrix with four independent parameters in the most general case. Apart from the two Majorana phases, the mixing matrix has only one free parameter in the CP conserving limit. Trimaximality results in interesting interplay between mixing angles and CP violation. A notion of maximal CP violation naturally emerges here: CP violation is maximal for maximal 2-3 mixing. Similarly, there is a natural constraint on the deviation from maximal 2-3 mixing which takes its maximal value in the CP conserving limit.

  19. Macroscopic constraints on string unification

    International Nuclear Information System (INIS)

    Taylor, T.R.

    1989-03-01

    The comparison of string theory with experiment requires a huge extrapolation from the microscopic distances, of order of the Planck length, up to the macroscopic laboratory distances. The quantum effects give rise to large corrections to the macroscopic predictions of string unification. I discuss the model-independent constraints on the gravitational sector of string theory due to the inevitable existence of universal Fradkin-Tseytlin dilatons. 9 refs.

  20. Financial Constraints and Franchising Decisions

    OpenAIRE

    Kai-Uwe Kuhn; Francine Lafontaine; Ying Fan

    2013-01-01

    We study how the financial constraints of agents affect the behavior of principals in the context of franchising. We develop an empirical model of franchising starting with a principal-agent framework that emphasizes the role of franchisees' collateral from an incentive perspective. We estimate the determinants of chains' entry (into franchising) and growth decisions using data on franchised chains and data on local macroeconomic conditions. In particular, we use collateralizable housing weal...

  1. Analysis of Space Tourism Constraints

    Science.gov (United States)

    Bonnal, Christophe

    2002-01-01

    Space tourism appears today as a new Eldorado in a relatively near future. Private operators are already proposing services for leisure trips in Low Earth Orbit, and some happy few have even tested them. But are these exceptional events really marking the dawn of a new space age? The constraints associated with space tourism are severe: the economic balance of space tourism is tricky, given the development costs of large manned vehicles; the technical definition of such large vehicles is challenging; the physiological aptitude of passengers will have a major impact on the mission; and the orbital environment will also lead to mission constraints on aspects such as radiation. However, these constraints never appear as show-stoppers and have to be dealt with pragmatically: what recommendations can one make for future research in the field? Which typical roadmap shall one consider to develop this new market realistically? What are the synergies with conventional missions and with the existing infrastructure? How can a phased development start soon? The paper proposes hints aimed at improving the credibility of space tourism and describes the orientations to follow in order to solve the major hurdles found in such an exciting development.

  2. Infrared Constraint on Ultraviolet Theories

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Yuhsin [Cornell Univ., Ithaca, NY (United States)

    2012-08-01

    While our current paradigm of particle physics, the Standard Model (SM), has been extremely successful at explaining experiments, it is theoretically incomplete and must be embedded into a larger framework. In this thesis, we review the main motivations for theories beyond the SM (BSM) and the ways such theories can be constrained using low energy physics. The hierarchy problem, neutrino mass and the existence of dark matter (DM) are the main reasons why the SM is incomplete. Two of the most plausible theories that may solve the hierarchy problem are the Randall-Sundrum (RS) models and supersymmetry (SUSY). RS models usually suffer from strong flavor constraints, while SUSY models produce extra degrees of freedom that need to be hidden from current experiments. To show the importance of infrared (IR) physics constraints, we discuss the flavor bounds on the anarchic RS model in both the lepton and quark sectors. For SUSY models, we discuss the difficulties in obtaining a phenomenologically allowed gaugino mass, its relation to R-symmetry breaking, and how to build a model that avoids this problem. For the neutrino mass problem, we discuss the idea of generating small neutrino masses using compositeness. By requiring successful leptogenesis and the existence of warm dark matter (WDM), we can set various constraints on the hidden composite sector. Finally, to give an example of model independent bounds from collider experiments, we show how to constrain the DM–SM particle interactions using collider results with an effective coupling description.

  3. Isocurvature constraints on portal couplings

    Energy Technology Data Exchange (ETDEWEB)

    Kainulainen, Kimmo; Nurmi, Sami; Vaskonen, Ville [Department of Physics, University of Jyväskylä, P.O.Box 35 (YFL), FI-40014 University of Jyväskylä (Finland); Tenkanen, Tommi; Tuominen, Kimmo, E-mail: kimmo.kainulainen@jyu.fi, E-mail: sami.t.nurmi@jyu.fi, E-mail: tommi.tenkanen@helsinki.fi, E-mail: kimmo.i.tuominen@helsinki.fi, E-mail: ville.vaskonen@jyu.fi [Department of Physics, University of Helsinki P.O. Box 64, FI-00014, Helsinki (Finland)

    2016-06-01

    We consider portal models which are ultraweakly coupled with the Standard Model, and confront them with observational constraints on dark matter abundance and isocurvature perturbations. We assume the hidden sector to contain a real singlet scalar s and a sterile neutrino ψ coupled to s via a pseudoscalar Yukawa term. During inflation, a primordial condensate consisting of the singlet scalar s is generated, and its contribution to the isocurvature perturbations is imprinted onto the dark matter abundance. We compute the total dark matter abundance including the contributions from condensate decay and nonthermal production from the Standard Model sector. We then use the Planck limit on isocurvature perturbations to derive a novel constraint connecting dark matter mass and the singlet self coupling with the scale of inflation: m_DM/GeV ≲ 0.2 λ_s^{3/8} (H_*/10^{11} GeV)^{-3/2}. This constraint is relevant in most portal models ultraweakly coupled with the Standard Model and containing light singlet scalar fields.

  4. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  5. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  6. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  7. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.

  8. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  9. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
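    The TAMSD of a trajectory x(t) of duration T at lag Δ is δ²(Δ) = (1/(T−Δ)) ∫₀^{T−Δ} [x(t+Δ) − x(t)]² dt. A minimal discrete-time sketch of this estimator (the Brownian path, random seed, and lag grid below are illustrative choices, not taken from the paper):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a 1D trajectory at integer lag."""
    disp = x[lag:] - x[:-lag]      # increments x(t + lag) - x(t) over a sliding window
    return np.mean(disp ** 2)

rng = np.random.default_rng(0)
dt = 1.0
# Brownian path with diffusion coefficient D = 1: i.i.d. steps ~ N(0, 2*D*dt)
x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * dt), 10_000))

# E[TAMSD(lag)] = 2 * D * lag * dt for Brownian motion, so these ratios hover near 1
for lag in (1, 10, 100):
    print(lag, tamsd(x, lag) / (2.0 * lag * dt))
```

The fluctuations of this estimator around its mean 2DΔ are exactly what the paper characterizes via the Laplace transform of its probability density.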

  10. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  11. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three, due to Heckman and Thomas [Discrete Math 233 (2001), 233–237], to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  12. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  13. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with equal and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  14. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  15. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  16. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  17. Relaxations of semiring constraint satisfaction problems

    CSIR Research Space (South Africa)

    Leenen, L

    2007-03-01

    Full Text Available The Semiring Constraint Satisfaction Problem (SCSP) framework is a popular approach for the representation of partial constraint satisfaction problems. In this framework preferences can be associated with tuples of values of the variable domains...

  18. Transmission and capacity pricing and constraints

    International Nuclear Information System (INIS)

    Fusco, M.

    1999-01-01

    A series of overhead viewgraphs accompanied this presentation which discussed the following issues regarding the North American electric power industry: (1) capacity pricing transmission constraints, (2) nature of transmission constraints, (3) consequences of transmission constraints, and (4) prices as market evidence. Some solutions suggested for pricing constraints included the development of contingent contracts, back-up power in supply regions, and new line capacity construction. 8 tabs., 20 figs

  19. Ant colony optimization and constraint programming

    CERN Document Server

    Solnon, Christine

    2013-01-01

    Ant colony optimization is a metaheuristic which has been successfully applied to a wide range of combinatorial optimization problems. The author describes this metaheuristic and studies its efficiency for solving some hard combinatorial problems, with a specific focus on constraint programming. The text is organized into three parts. The first part introduces constraint programming, which provides high level features to declaratively model problems by means of constraints. It describes the main existing approaches for solving constraint satisfaction problems, including complete tree search

  20. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  1. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality
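    The spectrum-averaged beta energy the paper approximates is the first moment of the beta spectrum, ⟨E⟩ = ∫ E N(E) dE / ∫ N(E) dE. A hedged sketch on a toy allowed-shape spectrum (the Fermi function and shape corrections the paper treats carefully are deliberately omitted here; the endpoint Q and grid are illustrative):

```python
import numpy as np

def average_beta_energy(energies, spectrum):
    """Spectrum-averaged energy <E> = sum(E*N(E)) / sum(N(E)) on a uniform grid."""
    return float((energies * spectrum).sum() / spectrum.sum())

# Toy allowed-shape spectrum WITHOUT the Fermi function:
# N(E) ∝ p·W·(Q−E)², with total energy W = E + 1 and momentum p = sqrt(W² − 1),
# kinetic energies E in units of m_e c²
Q = 2.0                               # endpoint energy (illustrative)
E = np.linspace(0.0, Q, 2001)
W = E + 1.0
N = np.sqrt(W**2 - 1.0) * W * (Q - E)**2
print(average_beta_energy(E, N))      # mean beta energy, well below the endpoint Q
```

The mean sits at a fraction of the endpoint, which is why spectrum-averaged energies, not endpoints, are the quantity of interest for decay-heat calculations.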

  2. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process {X(t), t≥0} (in a stochastic setting, a fixed realization, i.e., a sample path, of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
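    For a finite discrete-time path, the equality between the time average of a function of the process and its expectation under the empirical (frequency) distribution is an algebraic identity; the paper's contribution is the conditions under which this survives the long-run limit. A small illustration (the toy path and function are arbitrary choices):

```python
from collections import Counter

# A fixed discrete-time sample path and a measurable function f (both arbitrary)
path = [1, 2, 2, 3, 1, 2, 3, 3, 3, 1]
f = lambda s: s * s

# Time average of f along the path
time_avg = sum(f(x) for x in path) / len(path)

# Expectation of f under the path's empirical frequency distribution
freq = {s: c / len(path) for s, c in Counter(path).items()}
freq_avg = sum(f(s) * p for s, p in freq.items())

print(time_avg, freq_avg)  # equal up to floating-point rounding
```

For an infinite horizon the two limits need not coincide without a condition like the paper's; the finite-path identity above is only the starting point.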

  3. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26 the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of the short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained on the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  4. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms

  5. The Ambiguous Role of Constraints in Creativity

    DEFF Research Database (Denmark)

    Biskjær, Michael Mose; Onarheim, Balder; Wiltschnig, Stefan

    2011-01-01

    The relationship between creativity and constraints is often described in the literature either in rather imprecise, general concepts or in relation to very specific domains. Cross-domain and cross-disciplinary takes on how the handling of constraints influences creative activities are rare. In t......-disciplinary research into the ambiguous role of constraints in creativity....

  6. Learning and Parallelization Boost Constraint Search

    Science.gov (United States)

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  7. A general treatment of dynamic integrity constraints

    NARCIS (Netherlands)

    de Brock, EO

    This paper introduces a general, set-theoretic model for expressing dynamic integrity constraints, i.e., integrity constraints on the state changes that are allowed in a given state space. In a managerial context, such dynamic integrity constraints can be seen as representations of "real world"

  8. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  9. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  10. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  11. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  12. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
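    The GEN itself is defined recursively over the whole graph; its basic building block is the weighted harmonic average, which is dominated by small (i.e., close) values, so a single strong connection keeps two nodes "close". A sketch of that building block only, not the paper's full recursive definition:

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w_i) / sum(w_i / x_i)."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have equal length")
    return sum(weights) / sum(w / x for x, w in zip(values, weights))

# With equal weights this reduces to the ordinary harmonic mean, 2ab/(a+b) for two values
print(weighted_harmonic_mean([2.0, 6.0], [1.0, 1.0]))  # ≈ 3.0
```

Note that the result (3.0) sits below the arithmetic mean (4.0): the small value 2.0 dominates, which is the asymmetry-friendly behavior the authors exploit.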

  13. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV

  14. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with less memory consumption in gait-based recognition.
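    A minimal sketch of the AGDI construction described above, accumulating absolute silhouette differences between adjacent frames (the array shapes, the toy sequence, and the mean normalization are assumptions; the 2DPCA feature-extraction step is omitted):

```python
import numpy as np

def average_gait_differential_image(frames):
    """AGDI: time-averaged absolute difference of adjacent binary silhouettes.

    frames: array of shape (T, H, W) with silhouette values in {0, 1}.
    """
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # |frame[t+1] - frame[t]|, shape (T-1, H, W)
    return diffs.mean(axis=0)                # accumulate over time into one (H, W) image

# Toy sequence: a one-pixel-wide vertical "limb" sweeping across a 4x4 image
frames = np.zeros((4, 4, 4))
for t in range(4):
    frames[t, :, t] = 1.0
agdi = average_gait_differential_image(frames)
print(agdi.shape)  # (4, 4)
```

Pixels that change often between frames (the moving parts) get large AGDI values, while static body regions contribute zeros, which is how the feature image retains both kinetic and static information.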

  15. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  16. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
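For orientation, in the special case of an electron initially at rest (β = 0) the kinematic variables listed above are tied together by the familiar single-electron Compton relation; the paper's result generalizes this by averaging the cross section over φ, θ, and τ for moving electrons:

```latex
\alpha_s = \frac{\alpha}{1 + \alpha\,(1 - \cos\theta)}
```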

  17. Secret-Key Agreement with Public Discussion subject to an Amplitude Constraint

    KAUST Repository

    Zorgui, Marwen; Rezki, Zouheir; Alomair, Basel; Alouini, Mohamed-Slim

    2016-01-01

    This paper considers the problem of secret-key agreement with public discussion subject to a peak power constraint A on the channel input. The optimal input distribution is proved to be discrete with finite support. To overcome the computationally heavy search for the optimal discrete distribution, several suboptimal schemes are proposed and shown numerically to perform close to the capacity. Moreover, lower and upper bounds for the secret-key capacity are provided and used to prove that the secret-key capacity converges, for asymptotically high values of A, to the secret-key capacity with an average power constraint A². Finally, when the amplitude constraint A is small (A → 0), the secret-key capacity is proved to be asymptotically equal to the capacity of the legitimate user with an amplitude constraint A and no secrecy constraint.

  18. Secret-Key Agreement with Public Discussion subject to an Amplitude Constraint

    KAUST Repository

    Zorgui, Marwen

    2016-04-06

    This paper considers the problem of secret-key agreement with public discussion subject to a peak power constraint A on the channel input. The optimal input distribution is proved to be discrete with finite support. To overcome the computationally heavy search for the optimal discrete distribution, several suboptimal schemes are proposed and shown numerically to perform close to the capacity. Moreover, lower and upper bounds for the secret-key capacity are provided and used to prove that the secret-key capacity converges, for asymptotically high values of A, to the secret-key capacity with an average power constraint A². Finally, when the amplitude constraint A is small (A → 0), the secret-key capacity is proved to be asymptotically equal to the capacity of the legitimate user with an amplitude constraint A and no secrecy constraint.

  19. Constraint Specialisation in Horn Clause Verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2015-01-01

    We present a method for specialising the constraints in constrained Horn clauses with respect to a goal. We use abstract interpretation to compute a model of a query-answer transformation of a given set of clauses and a goal. The effect is to propagate the constraints from the goal top-down and propagate answer constraints bottom-up. Our approach does not unfold the clauses at all; we use the constraints from the model to compute a specialised version of each clause in the program. The approach is independent of the abstract domain and the constraints theory underlying the clauses. Experimental...

  20. Constraint specialisation in Horn clause verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2017-01-01

    We present a method for specialising the constraints in constrained Horn clauses with respect to a goal. We use abstract interpretation to compute a model of a query–answer transformed version of a given set of clauses and a goal. The constraints from the model are then used to compute a specialised version of each clause. The effect is to propagate the constraints from the goal top-down and propagate answer constraints bottom-up. The specialisation procedure can be repeated to yield further specialisation. The approach is independent of the abstract domain and the constraint theory...

  1. Nuclear energy and external constraints

    International Nuclear Information System (INIS)

    Lattes, R.; Thiriet, L.

    1983-01-01

    The structural factors of this crisis probably predominate over factors arising out of the economic situation, even if explanations vary in this respect. In this article devoted to nuclear energy as a possible means of loosening external constraints, the current international economic environment is first outlined: the context in which the policies of industrialized countries, and therefore that of France, must be developed. An examination of the possible role of energy policies in general, and nuclear policies in particular, as instruments of economic policy providing a partial solution to this crisis then enables a quantitative evaluation of the effects of such policies at the national level [fr

  2. [Environmental efficiency evaluation under carbon emission constraint in Western China].

    Science.gov (United States)

    Rong, Jian-bo; Yan, Li-jiao; Huang, Shao-rong; Zhang, Ge

    2015-06-01

    This research used the SBM model based on undesirable outputs to measure the static environmental efficiency of Western China under a carbon emission constraint from 2000 to 2012. The researchers also utilized the Malmquist index to further analyze the change tendency of environmental efficiency. Additionally, Tobit regression analysis was used to study the factors relevant to environmental efficiency. Practical solutions to improve environmental quality in Western China were put forward. The study showed that in Western China, environmental efficiency with the carbon emission constraint was significantly lower than that without it, and the difference could be described as an inverse U-shaped curve, increasing at first and then decreasing. Guangxi and Inner Mongolia were the only two provinces that maintained effective environmental efficiency levels under the carbon emission constraint throughout the period, while the five provinces of Guizhou, Gansu, Qinghai, Ningxia and Xinjiang did not. Furthermore, Ningxia had the lowest level of environmental efficiency, with scores between 0.281 and 0.386. Although the environmental efficiency of most provinces was at an ineffective level, environmental efficiency was gradually improving at an average speed of 6.6%. Excessive CO2 emission and a large amount of energy consumption were the primary factors causing environmental inefficiency in Western China, and energy intensity had the most negative impact on environmental efficiency. The increase of import and export trade reduced environmental efficiency significantly in Western China, while the increase of foreign direct investment had a positive effect on it.

  3. Developmental constraint of insect audition

    Directory of Open Access Journals (Sweden)

    Strauß Johannes

    2006-12-01

    Full Text Available Background: Insect ears contain very different numbers of sensory cells, from only one sensory cell in some moths to thousands of sensory cells, e.g. in cicadas. These differences still await functional explanation, and especially the large numbers in cicadas remain puzzling. Insects of the different orders have distinct developmental sequences for the generation of auditory organs. These sensory cells might have different functions depending on the developmental stage. Here we propose that constraints arising during development are also important for the design of insect ears and might influence cell numbers in the adults. Presentation of the hypothesis: We propose that the functional requirements of the subadult stages determine the adult complement of sensory units in the auditory system of cicadas. The hypothetical larval sensory organ should function as a vibration receiver, representing a functional caenogenesis. Testing the hypothesis: Experiments at different levels have to be designed to test the hypothesis. Firstly, the neuroanatomy of the larval sense organ should be analyzed in detail. Secondly, its function should be unraveled neurophysiologically and behaviorally. Thirdly, the persistence of the sensory cells and the rebuilding of the sensory organ in the adult should be investigated. Implications of the hypothesis: Usually, the evolution of insect ears is viewed with respect to physiological and neuronal mechanisms of sound perception. This view should be extended to the development of sense organs. Functional requirements during postembryonic development may act as constraints on the evolution of adult organs, as exemplified by the auditory system of cicadas.

  4. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-CW (pulse repetition frequencies in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  5. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
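The central relation the abstract describes — the derivative of the free energy A along a generalized coordinate ξ equals minus the average of the instantaneous force acting on ξ — can be written, in illustrative notation (not necessarily the authors' exact symbols), as:

```latex
\frac{dA}{d\xi} = -\left\langle F_{\xi} \right\rangle_{\xi},
\qquad
\Delta A = -\int_{\xi_{1}}^{\xi_{2}} \left\langle F_{\xi} \right\rangle_{\xi}\, d\xi
```

Integrating the average force along the coordinate then yields the free energy change between two values of ξ, which is how the method is applied in the two test cases mentioned.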

  6. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O((n^1.5/√(log n)) log ε⁻¹) radio transmissions, which yields a √(n/log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
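For context, the "standard gossip" baseline that the geographic scheme improves on is pairwise randomized averaging: in each round a random node averages its value with a random neighbor. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import random
import statistics

def randomized_gossip_average(values, neighbors, rounds, seed=0):
    # Baseline pairwise gossip: each round, a uniformly random node i
    # picks a random neighbor j and both replace their values with the
    # pairwise average. The sum of all values is conserved, so the
    # network converges toward the true average.
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Ring of 8 nodes -- the topology for which the paper reports a factor-n gain.
n = 8
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
values = [float(i) for i in range(n)]
out = randomized_gossip_average(values, neighbors, rounds=2000)
```

The slow mixing the abstract mentions shows up here as the large number of rounds needed before the spread of `out` shrinks on a ring; the geographic scheme reduces that cost by routing values across the network instead of diffusing them locally.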

  7. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  8. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    A concept for determining average LET (linear energy transfer) values, i.e., ordinary moments of LET in the absorbed dose distribution versus LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values, one can obtain coefficients of the expression that can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It is shown that for numerical estimation of certain effects dependent on the LET of radiation, it is not necessary to know the full dose distribution but only a number of parameters of the distribution, i.e., the LET moments. (author)

  9. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  10. Precoding Design for Single-RF Massive MIMO Systems: A Large System Analysis

    KAUST Repository

    Sifaou, Houssem

    2016-08-26

    This work revisits a recently proposed precoding design for massive multiple-input multiple-output (MIMO) systems that is based on the use of an instantaneous total power constraint. The main advantages of this technique lie in its suitability to the recently proposed single radio frequency (RF) MIMO transmitter, coupled with a very high power efficiency. Such features have been proven using simulations for uncorrelated channels. Based on tools from random matrix theory, we propose in this work to analyze the performance of this precoder for more involved channels accounting for spatial correlation. The obtained expressions are then optimized in order to maximize the signal-to-interference-plus-noise ratio (SINR). Simulation results are provided in order to illustrate the performance of the optimized precoder in terms of peak-to-average power ratio (PAPR) and SINR. © 2012 IEEE.

  11. Thermomechanical constraints and constitutive formulations in thermoelasticity

    Directory of Open Access Journals (Sweden)

    Baek S.

    2003-01-01

    Full Text Available We investigate three classes of constraints in a thermoelastic body: (i) a deformation-temperature constraint, (ii) a deformation-entropy constraint, and (iii) a deformation-energy constraint. These constraints are obtained as limits of unconstrained thermoelastic materials, and we show that constraints (ii) and (iii) are equivalent. By using a limiting procedure, we show that for constraint (i) the entropy plays the role of a Lagrange multiplier, while for (ii) and (iii) the absolute temperature plays the role of the Lagrange multiplier. We further demonstrate that the governing equations for materials subject to constraint (i) are identical to those of an unconstrained material whose internal energy is an affine function of the entropy, while those for materials subject to constraints (ii) and (iii) are identical to those of an unconstrained material whose Helmholtz potential is affine in the absolute temperature. Finally, we model the thermoelastic response of a peroxide-cured vulcanizate of natural rubber and show that imposing the constraint in which the volume change depends only on the internal energy leads to very good predictions (compared to experimental results) of the stress and temperature response under isothermal and isentropic conditions.

  12. Constraints on communication in classrooms for the deaf.

    Science.gov (United States)

    Matthews, T J; Reich, C F

    1993-03-01

    One explanation for the relatively low scholastic achievement of deaf students is the character of communication in the classroom. Unlike aural communication methods, line-of-sight methods share the limitation that the receiver of the message must look at the sender. To assess the magnitude of this constraint, we measured the amount of time signers were looked at by potential receivers in typical secondary school classes for the deaf. Videotaped segments indicated that on average the messages sent by teachers and students were seen less than half the time. Students frequently engaged in collateral conversations. The constraints of line-of-sight communication are profound and should be addressed by teaching techniques, classroom layout, and possibly, the use of computer-communication technology.

  13. A RED modified weighted moving average for soft real-time application

    Directory of Open Access Journals (Sweden)

    Domanśka Joanna

    2014-09-01

    Full Text Available The popularity of TCP/IP has resulted in an increase in the usage of best-effort networks for real-time communication. Much effort has been spent to ensure quality of service for soft real-time traffic over IP networks. The Internet Engineering Task Force has proposed some architecture components, such as Active Queue Management (AQM). The paper investigates the influence of the weighted moving average on packet waiting time reduction for an AQM mechanism: the RED algorithm. The proposed method for computing the average queue length is based on a difference equation (a recursive equation). Depending on a particular optimality criterion, proper parameters of the modified weighted moving average function can be chosen. This change allows reducing the number of violations of timing constraints and makes better use of this mechanism for soft real-time transmissions. The optimization problem is solved through simulations performed in OMNeT++ and later verified experimentally on a Linux implementation.
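For context, the filter that the modified weighted moving average replaces is RED's classic exponentially weighted moving average of the instantaneous queue length q, avg ← (1 − w)·avg + w·q. A minimal sketch of that baseline (parameter values are illustrative; the paper's modification uses a different recursive difference equation):

```python
def red_ewma(samples, w=0.002, avg0=0.0):
    # Classic RED average-queue-length filter: on each packet arrival,
    # blend the previous average with the instantaneous queue length q.
    # Small w makes the average react slowly to bursts, which is exactly
    # the latency the modified filter aims to reduce.
    avg = avg0
    history = []
    for q in samples:
        avg = (1.0 - w) * avg + w * q
        history.append(avg)
    return history

# A queue that jumps to 100 packets and stays there: the EWMA creeps
# toward 100 over thousands of samples.
hist = red_ewma([100] * 5000, w=0.002)
```

The slow rise of `hist` toward the true queue length illustrates why RED's drop decisions lag behind the instantaneous queue, motivating alternative averaging functions for soft real-time traffic.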

  14. From physical dose constraints to equivalent uniform dose constraints in inverse radiotherapy planning

    International Nuclear Information System (INIS)

    Thieke, Christian; Bortfeld, Thomas; Niemierko, Andrzej; Nill, Simeon

    2003-01-01

    Optimization algorithms in inverse radiotherapy planning need information about the desired dose distribution. Usually the planner defines physical dose constraints for each structure of the treatment plan, either in the form of minimum and maximum doses or as dose-volume constraints. The concept of equivalent uniform dose (EUD) was designed to describe dose distributions with a higher clinical relevance. In this paper, we present a method to consider the EUD as an optimization constraint by using the method of projections onto convex sets (POCS). In each iteration of the optimization loop, for the actual dose distribution of an organ that violates an EUD constraint, a new dose distribution is calculated that satisfies the EUD constraint, leading to voxel-based physical dose constraints. The new dose distribution is found by projecting the current one onto the convex set of all dose distributions fulfilling the EUD constraint. The algorithm is easy to integrate into existing inverse planning systems, and it allows the planner to choose between physical and EUD constraints separately for each structure. A clinical case of a head and neck tumor is optimized using three different sets of constraints: physical constraints for all structures; physical constraints for the target and EUD constraints for the organs at risk; and EUD constraints for all structures. The results show that the POCS method converges stably and that the given EUD constraints are closely met.
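A minimal sketch of the quantity involved: a common formulation of the EUD is the generalized mean EUD = (mean(dᵢᵃ))^(1/a) over voxel doses dᵢ. The snippet below computes it and enforces an EUD limit by uniform rescaling — a deliberate simplification for illustration; the paper's method instead projects the dose distribution onto the convex set of distributions satisfying the constraint. Function names and numbers are illustrative:

```python
import numpy as np

def eud(dose, a):
    # Generalized-mean equivalent uniform dose of a voxel dose vector:
    # EUD = ( mean(d_i ** a) ) ** (1 / a).
    dose = np.asarray(dose, dtype=float)
    return (np.mean(dose ** a)) ** (1.0 / a)

def scale_to_eud_limit(dose, a, eud_max):
    # If the organ's EUD exceeds its limit, rescale the dose vector so
    # the limit is met exactly. This works because the EUD is
    # positively homogeneous of degree 1; the paper's POCS projection
    # is a more refined per-voxel correction than uniform scaling.
    dose = np.asarray(dose, dtype=float)
    current = eud(dose, a)
    if current <= eud_max:
        return dose
    return dose * (eud_max / current)

# Toy organ-at-risk with four voxels and a large-a (serial-organ) EUD.
organ_dose = np.array([10.0, 20.0, 30.0, 40.0])
constrained = scale_to_eud_limit(organ_dose, a=8.0, eud_max=25.0)
```

In the POCS loop described above, the corrected dose vector would then serve as the voxel-based physical constraint for the next optimization iteration.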

  15. Metric approach to quantum constraints

    International Nuclear Information System (INIS)

    Brody, Dorje C; Hughston, Lane P; Gustavsson, Anna C T

    2009-01-01

    A framework for deriving equations of motion for constrained quantum systems is introduced and a procedure for its implementation is outlined. In special cases, the proposed new method, which takes advantage of the fact that the space of pure states in quantum mechanics has both a symplectic structure and a metric structure, reduces to a quantum analogue of the Dirac theory of constraints in classical mechanics. Explicit examples involving spin-1/2 particles are worked out in detail: in the first example, our approach coincides with a quantum version of the Dirac formalism, while the second example illustrates how a situation that cannot be treated by Dirac's approach can nevertheless be dealt with in the present scheme.

  16. Physical activity participation and constraints among athletic training students.

    Science.gov (United States)

    Stanek, Justin; Rogers, Katherine; Anderson, Jordan

    2015-02-01

    Researchers have examined the physical activity (PA) habits of certified athletic trainers; however, none have looked specifically at athletic training students. To assess PA participation and constraints to participation among athletic training students. Cross-sectional study. Entry-level athletic training education programs (undergraduate and graduate) across the United States. Participants were 1125 entry-level athletic training students. Self-reported PA participation, including a calculated PA index based on a typical week. Leisure constraints and demographic data were also collected. Only 22.8% (252/1105) of athletic training students were meeting the American College of Sports Medicine recommendations for PA through moderate-intensity cardiorespiratory exercise. Although 52.3% (580/1105) were meeting the recommendations through vigorous-intensity cardiorespiratory exercise, 60.5% (681/1125) were meeting the recommendations based on the combined total of moderate or vigorous cardiorespiratory exercise. In addition, 57.2% (643/1125) of respondents met the recommendations for resistance exercise. Exercise habits of athletic training students appear to be better than the national average and similar to those of practicing athletic trainers. Students reported structural constraints such as lack of time due to work or studies as the most significant barrier to exercise participation. Athletic training students experienced similar constraints to PA participation as practicing athletic trainers, and these constraints appeared to influence their exercise participation during their entry-level education. Athletic training students may benefit from a greater emphasis on work-life balance during their entry-level education to promote better health and fitness habits.

  17. Cosmographic Constraints and Cosmic Fluids

    Directory of Open Access Journals (Sweden)

    Salvatore Capozziello

    2013-12-01

    Full Text Available The problem of reproducing dark energy effects is reviewed here with particular interest devoted to cosmography. We summarize some of the most relevant cosmological models, based on the assumption that the corresponding barotropic equations of state evolve as the universe expands, giving rise to the accelerated expansion. We describe in detail the ΛCDM (Λ-Cold Dark Matter) and ωCDM models, considering also some specific examples, e.g., Chevallier–Polarski–Linder, the Chaplygin gas and the Dvali–Gabadadze–Porrati cosmological model. Finally, we consider the cosmological consequences of f(R) and f(T) gravities and their impact on the framework of cosmography. Keeping these considerations in mind, we point out the model-independent procedure related to cosmography, showing how to match the series of cosmological observables to the free parameters of each model. We critically discuss the role played by cosmography as a selection criterion to check whether a particular model passes or does not pass cosmological constraints. In so doing, we find cosmological bounds by fitting the luminosity distance expansion in the redshift, z, adopting the recent Union 2.1 dataset of supernovae, combined with the baryonic acoustic oscillation and the cosmic microwave background measurements. We perform cosmographic analyses, imposing different priors on the present value of the Hubble rate. In addition, we compare our results with recent PLANCK limits, showing that the ΛCDM and ωCDM models seem to be favored with respect to other dark energy models. However, we show that cosmographic constraints on f(R) and f(T) cannot discriminate between extensions of General Relativity and dark energy models, leading to a disadvantageous degeneracy problem.
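The model-independent fit described above rests on the low-redshift Taylor expansion of the luminosity distance. To lowest orders, in standard cosmographic notation (H₀ the present Hubble rate, q₀ the present deceleration parameter; higher orders bring in the jerk and snap parameters), it reads:

```latex
d_L(z) = \frac{c}{H_0}\left[\, z + \frac{1}{2}\left(1 - q_0\right) z^{2} + \mathcal{O}(z^{3}) \,\right]
```

Fitting this series to supernova distance data constrains q₀ and the higher-order parameters directly, without committing to a specific dark energy model.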

  18. Causality Constraints in Conformal Field Theory

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Causality places nontrivial constraints on QFT in Lorentzian signature, for example fixing the signs of certain terms in the low energy Lagrangian. In d-dimensional conformal field theory, we show how such constraints are encoded in crossing symmetry of Euclidean correlators, and derive analogous constraints directly from the conformal bootstrap (analytically). The bootstrap setup is a Lorentzian four-point function corresponding to propagation through a shockwave. Crossing symmetry fixes the signs of certain log terms that appear in the conformal block expansion, which constrains the interactions of low-lying operators. As an application, we use the bootstrap to rederive the well known sign constraint on the (∂φ)⁴ coupling in effective field theory, from a dual CFT. We also find constraints on theories with higher spin conserved currents. Our analysis is restricted to scalar correlators, but we argue that similar methods should also impose nontrivial constraints on the interactions of spinni...

  19. Constraint Embedding for Multibody System Dynamics

    Science.gov (United States)

    Jain, Abhinandan

    2009-01-01

    This paper describes a constraint embedding approach for the handling of local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert the system into tree-topology systems. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics and avoiding the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms, the extensions for handling embedded constraints, and concludes with some examples of such constraints.

  20. Use of dose constraints in public exposure

    International Nuclear Information System (INIS)

    Tageldein, Amged

    2015-02-01

    An overview of dose constraints in public exposure has been carried out in this project. The establishment, development and application of the concept of dose constraints are reviewed with regard to public exposure. The role of dose constraints in the process of optimization of radiation protection is described, and it is shown that the concept of dose constraints, along with many other concepts of radiation protection, is widely applied in the optimization of exposure to radiation. Since the establishment of dose constraints as a concept in radiation protection, the International Commission on Radiological Protection (ICRP) has published a number of documents that provide detailed guidance related to radiation protection and safety of public exposure to ionizing radiation. This work provides an overview of such publications and related documents, with special emphasis on the optimization of public exposure using dose constraints. (au)

  1. Causality constraints in conformal field theory

    Energy Technology Data Exchange (ETDEWEB)

    Hartman, Thomas; Jain, Sachin; Kundu, Sandipan [Department of Physics, Cornell University, Ithaca, New York (United States)

    2016-05-17

    Causality places nontrivial constraints on QFT in Lorentzian signature, for example fixing the signs of certain terms in the low energy Lagrangian. In d-dimensional conformal field theory, we show how such constraints are encoded in crossing symmetry of Euclidean correlators, and derive analogous constraints directly from the conformal bootstrap (analytically). The bootstrap setup is a Lorentzian four-point function corresponding to propagation through a shockwave. Crossing symmetry fixes the signs of certain log terms that appear in the conformal block expansion, which constrains the interactions of low-lying operators. As an application, we use the bootstrap to rederive the well known sign constraint on the (∂ϕ)⁴ coupling in effective field theory, from a dual CFT. We also find constraints on theories with higher spin conserved currents. Our analysis is restricted to scalar correlators, but we argue that similar methods should also impose nontrivial constraints on the interactions of spinning operators.

  2. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impacts of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) are explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization Shift Keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at the average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.

  3. Constraint-based Word Segmentation for Chinese

    DEFF Research Database (Denmark)

    Christiansen, Henning; Bo, Li

    2014-01-01

    …ad-hoc and statistically based methods. In this paper, we show experiments implementing different approaches to CWSP in the framework of CHR Grammars [Christiansen, 2005], which provides a constraint solving approach to language analysis. CHR Grammars are based upon Constraint Handling Rules, CHR [Frühwirth, 1998, 2009], which is a declarative, high-level programming language for the specification and implementation of constraint solvers.

  4. Stability Constraints for Robust Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Amanda G. S. Ottoni

    2015-01-01

    This paper proposes an approach for the robust stabilization of systems controlled by MPC strategies. Uncertain SISO linear systems with box-bounded parametric uncertainties are considered. The proposed approach delivers some constraints on the control inputs which impose sufficient conditions for the convergence of the system output. These stability constraints can be included in the set of constraints dealt with by existing MPC design strategies, in this way leading to the “robustification” of the MPC.

  5. Some cosmological constraints on gauge theories

    International Nuclear Information System (INIS)

    Schramm, D.N.

    1983-01-01

    In these lectures, a review is made of various constraints cosmology may place on gauge theories. Particular emphasis is placed on those constraints obtainable from Big Bang Nucleosynthesis, with only brief mention made of Big Bang Baryosynthesis. There is also a considerable discussion of astrophysical constraints on masses and lifetimes of neutrinos with specific mention of the 'missing mass (light)' problem of galactic dynamics. (orig./HSI)

  6. Generalized Pauli constraints in small atoms

    DEFF Research Database (Denmark)

    Schilling, Christian; Altunbulak, Murat; Knecht, Stefan

    2018-01-01

    Investigations have found evidence that these constraints are exactly saturated in several physically relevant systems, e.g., in a certain electronic state of the beryllium atom. It has been suggested that, in such cases, the constraints, rather than the details of the Hamiltonian, dictate the system's qualitative behavior. Here, we revisit this question with state-of-the-art numerical methods for small atoms. We find that the constraints are, in fact, not exactly saturated, but that they lie much closer to the surface defined by the constraints than the geometry of the problem would suggest.

  7. Production Team Maintenance: Systemic Constraints Impacting Implementation

    National Research Council Canada - National Science Library

    Moore, Terry

    1997-01-01

    … Identified constraints included: integrating the PTM positioning strategy into the AMC corporate strategic planning process, manpower modeling simulator limitations, labor force authorizations and decentralization…

  8. Review of Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    S. Fukano, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and masses…

  9. Toward an automaton Constraint for Local Search

    Directory of Open Access Journals (Sweden)

    Jun He

    2009-10-01

    We explore the idea of using finite automata to implement new constraints for local search (this is already a successful technique in constraint-based global search). We show how it is possible to maintain incrementally the violations of a constraint and its decision variables from an automaton that describes a ground checker for that constraint. We establish the practicality of our approach on real-life personnel rostering problems, and show that it is competitive with the approach of [Pralong, 2007].
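
    The core idea, deriving a constraint's violation count from an automaton that ground-checks it, can be sketched as a dynamic program over DFA states: the violation of an assignment is the minimum number of variable changes needed to reach an accepting run. The sketch below is illustrative only; the toy "no two consecutive night shifts" rule and all names are hypothetical, not taken from the paper.

```python
def violations(dfa, start, accepting, word, alphabet):
    """Minimum number of letter (variable) changes that make `word`
    accepted by the DFA: a constraint-violation measure for local search."""
    INF = float("inf")
    cost = {start: 0}  # reachable state -> cheapest repair cost so far
    for symbol in word:
        nxt = {}
        for state, c in cost.items():
            for a in alphabet:
                t = dfa.get((state, a))
                if t is None:
                    continue  # missing transition: the checker rejects here
                step = c + (1 if a != symbol else 0)
                if step < nxt.get(t, INF):
                    nxt[t] = step
        cost = nxt
    return min((c for s, c in cost.items() if s in accepting), default=INF)

# Toy ground checker: "no two consecutive 'n' (night shifts)".
# States: 0 = last was not 'n', 1 = last was 'n'; 'nn' has no transition.
dfa = {(0, "d"): 0, (0, "n"): 1, (1, "d"): 0}
print(violations(dfa, 0, {0, 1}, "nnn", "nd"))  # a single repair suffices
```

    Incremental maintenance in a real solver would update `cost` only along the changed positions rather than rescanning the whole word.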

  10. Notes on Timed Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Valencia, Frank D.

    2004-01-01

    A constraint is a piece of (partial) information on the values of the variables of a system. Concurrent constraint programming (ccp) is a model of concurrency in which agents (also called processes) interact by telling and asking information (constraints) to and from a shared store (a constraint), and tccp extends this to specify and program reactive systems. This note provides a comprehensive introduction to the background for and central notions from the theory of tccp. Furthermore, it surveys recent results on a particular tccp calculus, ntcc, and it provides a classification of the expressive power of various tccp languages.

  11. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
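
    Schematically, the asymptotic equality described above takes the following form (a sketch under assumed notation, not reproduced verbatim from the paper: μ is a Gaussian measure on a Hilbert space H with covariation operator B, scaled by a small parameter α):

```latex
\int_{H} f(\psi)\, d\mu_{\alpha}(\psi)
  \;=\; \frac{\alpha}{2}\,\operatorname{Tr}\!\big[\,B\, f''(0)\,\big] \;+\; o(\alpha),
\qquad \alpha \to 0 .
```

    Identifying a density matrix ρ with a normalization of B, and an observable with the second derivative f''(0), turns the right-hand side into the von Neumann trace formula Tr[ρA], which is the "dequantization" correspondence the abstract describes.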

  12. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on the trapping efficiency.
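
    For the APL part, a small pure-Python sketch shows the quantity being computed: the mean hop distance over all ordered node pairs. The toy graph below is an illustrative tree-like topology chosen for the example, not the generalized dual dendrimer analyzed in the paper.

```python
from collections import deque

def bfs_distances(adj, src):
    """Breadth-first search returning hop distances from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def average_path_length(adj):
    """Mean shortest-path length over all ordered node pairs."""
    n = len(adj)
    total = sum(d for u in adj for d in bfs_distances(adj, u).values())
    return total / (n * (n - 1))

# Toy two-level branched tree: hub 0 joined to 1, 2, 3,
# each branch node carrying two leaves.
dendrimer = {
    0: [1, 2, 3],
    1: [0, 4, 5], 2: [0, 6, 7], 3: [0, 8, 9],
    4: [1], 5: [1], 6: [2], 7: [2], 8: [3], 9: [3],
}
print(average_path_length(dendrimer))  # -> 2.6
```

    Repeating this for growing generations of the structure would expose the logarithmic APL scaling the abstract reports.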

  13. Optimal Stopping with Information Constraint

    International Nuclear Information System (INIS)

    Lempa, Jukka

    2012-01-01

    We study the optimal stopping problem proposed by Dupuis and Wang (Adv. Appl. Probab. 34:141–157, 2002). In this maximization problem of the expected present value of the exercise payoff, the underlying dynamics follow a linear diffusion. The decision maker is not allowed to stop at any time she chooses but rather on the jump times of an independent Poisson process. Dupuis and Wang (Adv. Appl. Probab. 34:141–157, 2002), solve this problem in the case where the underlying is a geometric Brownian motion and the payoff function is of American call option type. In the current study, we propose a mild set of conditions (covering the setup of Dupuis and Wang in Adv. Appl. Probab. 34:141–157, 2002) on both the underlying and the payoff and build and use a Markovian apparatus based on the Bellman principle of optimality to solve the problem under these conditions. We also discuss the interpretation of this model as optimal timing of an irreversible investment decision under an exogenous information constraint.

  14. Rf system modeling for the high average power FEL at CEBAF

    International Nuclear Information System (INIS)

    Merminga, L.; Fugitt, J.; Neil, G.; Simrock, S.

    1995-01-01

    High beam loading and energy recovery compounded by use of superconducting cavities, which requires tight control of microphonic noise, place stringent constraints on the linac rf system design of the proposed high average power FEL at CEBAF. Longitudinal dynamics imposes off-crest operation, which in turn implies a large tuning angle to minimize power requirements. Amplitude and phase stability requirements are consistent with demonstrated performance at CEBAF. A numerical model of the CEBAF rf control system is presented and the response of the system is examined under large parameter variations, microphonic noise, and beam current fluctuations. Studies of the transient behavior lead to a plausible startup and recovery scenario

  15. Linear determining equations for differential constraints

    International Nuclear Information System (INIS)

    Kaptsov, O V

    1998-01-01

    A construction of differential constraints compatible with partial differential equations is considered. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the classical determining equations used in the search for admissible Lie operators. As applications of this approach equations of an ideal incompressible fluid and non-linear heat equations are discussed

  16. Optimal Portfolio Choice with Wash Sale Constraints

    DEFF Research Database (Denmark)

    Astrup Jensen, Bjarne; Marekwica, Marcel

    2011-01-01

    We analytically solve the portfolio choice problem in the presence of wash sale constraints in a two-period model with one risky asset. Our results show that wash sale constraints can heavily affect portfolio choice of investors with unrealized losses. The trading behavior of such investors...

  17. Freedom and constraint analysis and optimization

    NARCIS (Netherlands)

    Brouwer, Dannis Michel; Boer, Steven; Aarts, Ronald G.K.M.; Meijaard, Jacob Philippus; Jonker, Jan B.

    2011-01-01

    Many mathematical and intuitive methods for constraint analysis of mechanisms have been proposed. In this article we compare three methods. Method one is based on Grüblers equation. Method two uses an intuitive analysis method based on opening kinematic loops and evaluating the constraints at the

  18. Network Design with Node Degree Balance Constraints

    DEFF Research Database (Denmark)

    Pedersen, Michael Berliner; Crainic, Teodor Gabriel

    This presentation discusses an extension to the network design model where, in addition to the flow conservation constraints, there are also constraints that require design conservation. This means that the number of arcs entering and leaving a node must be the same. As will be shown, the model has ...

  19. Constraint solving for direct manipulation of features

    NARCIS (Netherlands)

    Lourenco, D.; Oliveira, P.; Noort, A.; Bidarra, R.

    2006-01-01

    In current commercial feature modeling systems, support for direct manipulation of features is not commonly available. This is partly due to the strong reliance of such systems on constraints, but also to the lack of speed of current constraint solvers. In this paper, an approach to the optimization

  20. A Temporal Concurrent Constraint Programming Calculus

    DEFF Research Database (Denmark)

    Palamidessi, Catuscia; Valencia Posso, Frank Darwin

    2001-01-01

    The tcc model is a formalism for reactive concurrent constraint programming. In this paper we propose a model of temporal concurrent constraint programming which adds to tcc the capability of modeling asynchronous and non-deterministic timed behavior. We call this tcc extension the ntcc calculus...

  1. Modifier constraints in alkali ultraphosphate glasses

    DEFF Research Database (Denmark)

    Rodrigues, B.P.; Mauro, J.C.; Yue, Yuanzheng

    2014-01-01

    In applying the recently introduced concept of cationic constraint strength [J. Chem. Phys. 140, 214501 (2014)] to bond constraint theory (BCT) of binary phosphate glasses in the ultraphosphate region of xR₂O–(1−x)P₂O₅ (with x ≤ 0.5 and R = {Li, Na, Cs}), we demonstrate a fundamental limitation …

  2. Specifying Dynamic and Deontic Integrity Constraints

    NARCIS (Netherlands)

    Wieringa, Roelf J.; Meyer, John-Jules; Weigand, Hans

    In the dominant view of knowledge bases (KBs), a KB is a set of facts (atomic sentences) and integrity constraints (ICs). An IC is then a sentence which must at least be consistent with the other sentences in the KB. This view obliterates the distinction between, for example, the constraint that …

  3. Solar system constraints on disformal gravity theories

    International Nuclear Information System (INIS)

    Ip, Hiu Yan; Schmidt, Fabian; Sakstein, Jeremy

    2015-01-01

    Disformal theories of gravity are scalar-tensor theories where the scalar couples derivatively to matter via the Jordan frame metric. These models have recently attracted interest in the cosmological context since they admit accelerating solutions. We derive the solution for a static isolated mass in generic disformal gravity theories and transform it into the parameterised post-Newtonian form. This allows us to investigate constraints placed on such theories by local tests of gravity. The tightest constraints come from preferred-frame effects due to the motion of the Solar System with respect to the evolving cosmological background field. The constraints we obtain improve upon the previous solar system constraints by two orders of magnitude, and constrain the scale of the disformal coupling for generic models to ℳ ≳ 100 eV. These constraints render all disformal effects irrelevant for cosmology.

  4. Revisiting the simplicity constraints and coherent intertwiners

    International Nuclear Information System (INIS)

    Dupuis, Maite; Livine, Etera R

    2011-01-01

    In the context of loop quantum gravity and spinfoam models, the simplicity constraints are essential in that they allow one to write general relativity as a constrained topological BF theory. In this work, we apply the recently developed U(N) framework for SU(2) intertwiners to the issue of imposing the simplicity constraints on spin network states. More particularly, we focus on solving them on individual intertwiners in the 4D Euclidean theory. We review the standard way of solving the simplicity constraints using coherent intertwiners and we explain how these fit within the U(N) framework. Then we show how these constraints can be written as a closed u(N) algebra and we propose a set of U(N) coherent states that solves all the simplicity constraints weakly for an arbitrary Immirzi parameter.

  5. Functional Sites Induce Long-Range Evolutionary Constraints in Enzymes.

    Directory of Open Access Journals (Sweden)

    Benjamin R Jack

    2016-05-01

    Functional residues in proteins tend to be highly conserved over evolutionary time. However, to what extent functional sites impose evolutionary constraints on nearby or even more distant residues is not known. Here, we report pervasive conservation gradients toward catalytic residues in a dataset of 524 distinct enzymes: evolutionary conservation decreases approximately linearly with increasing distance to the nearest catalytic residue in the protein structure. This trend encompasses, on average, 80% of the residues in any enzyme, and it is independent of known structural constraints on protein evolution such as residue packing or solvent accessibility. Further, the trend exists in both monomeric and multimeric enzymes and irrespective of enzyme size and/or location of the active site in the enzyme structure. By contrast, sites in protein-protein interfaces, unlike catalytic residues, are only weakly conserved and induce only minor rate gradients. In aggregate, these observations show that functional sites, and in particular catalytic residues, induce long-range evolutionary constraints in enzymes.

  6. Distance and slope constraints: adaptation and variability in golf putting.

    Science.gov (United States)

    Dias, Gonçalo; Couceiro, Micael S; Barreiros, João; Clemente, Filipe M; Mendes, Rui; Martins, Fernando M

    2014-07-01

    The main objective of this study is to understand the adaptation to external constraints and the effects of variability in a golf putting task. We describe the adaptation of relevant variables of golf putting to the distance to the hole and to the addition of a slope. The sample consisted of 10 adult male volunteers (33.80 ± 11.89 years), right-handed and highly skilled golfers with an average handicap of 10.82. Each player performed 30 putts at distances of 2, 3 and 4 meters (90 trials in Condition 1). The participants also performed 90 trials, at the same distances, with a constraint imposed by a slope (Condition 2). The results indicate that the players change some parameters to adjust to the task constraints, namely the duration of the backswing phase, the speed of the club head and the acceleration at the moment of impact with the ball. The effects of different golf putting distances in the no-slope condition on different kinematic variables suggest a linear adjustment to distance variation that was not observed in the slope condition.

  7. FPGA Dynamic Power Minimization through Placement and Routing Constraints

    Directory of Open Access Journals (Sweden)

    Deepak Agarwal

    2006-08-01

    Field-programmable gate arrays (FPGAs) are pervasive in embedded systems requiring low power utilization. A novel power optimization methodology for reducing the dynamic power consumed by the routing of FPGA circuits by modifying the constraints applied to existing commercial tool sets is presented. The power optimization techniques influence commercial FPGA Place and Route (PAR) tools by translating power goals into standard throughput- and placement-based constraints. The Low-Power Intelligent Tool Environment (LITE) is presented, which was developed to support the experimentation of power models and power optimization algorithms. The generated constraints seek to implement one of four power optimization approaches: slack minimization, clock tree paring, N-terminal net colocation, and area minimization. In an experimental study, we optimize the dynamic power of circuits mapped into 0.12 μm Xilinx Virtex-II FPGAs. Results show that several optimization algorithms can be combined on a single design, and power is reduced by up to 19.4%, with an average power savings of 10.2%.
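
    The savings above target routing (switched-capacitance) power; the first-order CMOS model behind such estimates can be sketched as follows. All numeric values are illustrative placeholders, not measurements from the paper.

```python
def dynamic_power(activity, c_switched, v_dd, freq_hz):
    """First-order CMOS dynamic power: P = a * C * V^2 * f (watts)."""
    return activity * c_switched * v_dd ** 2 * freq_hz

# Constraint-driven placement shortens routed nets, lowering the
# switched capacitance C while activity, voltage, and clock stay fixed.
before = dynamic_power(0.15, 120e-12, 1.5, 100e6)  # hypothetical baseline
after = dynamic_power(0.15, 97e-12, 1.5, 100e6)    # hypothetical shorter nets
saving = 1 - after / before
print(round(saving, 3))
```

    Because P scales linearly in C, any fractional reduction in routed capacitance translates one-for-one into dynamic routing power savings.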

  8. Few-body hypernuclear constraints

    International Nuclear Information System (INIS)

    Gibson, B.F.

    1993-01-01

    Since the discovery of the first hyperfragment in a balloon-flown emulsion stack some two score years ago, physicists have worked to understand how the addition of the strangeness degree of freedom alters the picture of nuclei and the baryon-baryon force. Because the Λ and Σ masses differ markedly from those of the proton and neutron, SU(3) symmetry is broken. How it is broken is a question of importance to the fundamental understanding of the baryon-baryon interaction. New dynamical symmetries, forbidden by the Pauli principle in conventional nuclei, appear. Three-body forces play a more significant role. A binding anomaly in A = 5 as well as a possible spin inversion between ground and excited states in A = 4 appear. Surprisingly narrow structure near the threshold for Σ production has been reported in the ⁴He(K⁻, π⁻) spectrum, while no corresponding structure is observed in the companion ⁴He(K⁻, π⁺) spectrum; this has been interpreted as evidence for a Σ⁴He bound state. Finally, the reported observation of ΛΛ-hypernuclei, in particular ΛΛ⁶He, bears directly upon the possibilities for the prediction of a bound H particle, the S = −2 dibaryon. Although it is not feasible to invert the analysis and determine the interaction from the data on few-body systems, it is possible to utilize these data to constrain the models, provided one is careful. The author will explore briefly the constraints which the few-body data impose and the level of understanding that has been achieved.

  9. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    Title 20, Employees' Benefits, 2010-04-01 edition; … Disability Insurance (1950– ); Computing Primary Insurance Amounts, Average-Monthly-Wage Method of Computing Primary Insurance Amounts. § 404.221 Computing your average monthly wage. (a) General. Under the average…

  10. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  11. A Survey of Myanmar Rice Production and Constraints

    Directory of Open Access Journals (Sweden)

    T.A.A. Naing

    2008-10-01

    Although modern high-yielding varieties were introduced into Myanmar in the early 1980s, the national average rice grain yield has stagnated at 3.2–3.4 t ha⁻¹. To identify yield constraints, input intensities and the general practices of rice cultivation in Myanmar, a survey was conducted during the wet seasons of 2001 and 2002. A total of 98 farmers from five townships in Upper Myanmar and 16 in Lower Myanmar, representing the most important areas of rice production, were questioned on their management practices, yields, and perceived yield constraints over the previous four years. There was a recent decrease in the overall average rate of fertilizer application, an increase in the prevalence of rice-legume cropping systems, and only localized insect pest or disease problems. Additionally, rice yields were found to be higher in Upper Myanmar, likely the result of more suitable weather conditions, better irrigation, and ready market access. Furthermore, a number of critical factors affecting production are identified and possible solutions discussed.

  12. Constraint Handling Rules with Binders, Patterns and Generic Quantification

    NARCIS (Netherlands)

    Serrano, Alejandro; Hage, J.

    2017-01-01

    Constraint Handling Rules provide descriptions for constraint solvers. However, they fall short when those constraints specify some binding structure, like higher-rank types in a constraint-based type inference algorithm. In this paper, the term syntax of constraints is replaced by λ-tree syntax, in

  13. Averaged null energy condition and difference inequalities in quantum field theory

    International Nuclear Information System (INIS)

    Yurtsever, U.

    1995-01-01

    For a large class of quantum states, all local (pointwise) energy conditions widely used in relativity are violated by the renormalized stress-energy tensor of a quantum field. In contrast, certain nonlocal positivity constraints on the quantum stress-energy tensor might hold quite generally, and this possibility has received considerable attention in recent years. In particular, it is now known that the averaged null energy condition, the condition that the null-null component of the stress-energy tensor integrated along a complete null geodesic is non-negative for all states, holds quite generally in a wide class of spacetimes for a minimally coupled scalar field. Apart from the specific class of spacetimes considered (mainly two-dimensional spacetimes and four-dimensional Minkowski space), the most significant restriction on this result is that the null geodesic over which the average is taken must be achronal. Recently, Ford and Roman have explored this restriction in two-dimensional flat spacetime, and discovered that in a flat cylindrical space, although the stress-energy tensor itself fails to satisfy the averaged null energy condition (ANEC) along the (nonachronal) null geodesics, when the "Casimir-vacuum" contribution is subtracted from the stress-energy tensor the resulting tensor does satisfy the ANEC inequality. Ford and Roman name this class of constraints on the quantum stress-energy tensor "difference inequalities." Here I give a proof of the difference inequality for a minimally coupled massless scalar field in an arbitrary (globally hyperbolic) two-dimensional spacetime, using the same techniques as those we relied on to prove the ANEC in an earlier paper with Wald. I begin with an overview of averaged energy conditions in quantum field theory.
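
    In symbols, the two conditions discussed above read schematically as follows, for a complete null geodesic γ with tangent k^μ and affine parameter λ (notation assumed for illustration):

```latex
% Averaged null energy condition (ANEC) in the state \psi:
\int_{\gamma} \langle T_{\mu\nu} \rangle_{\psi}\, k^{\mu} k^{\nu}\, d\lambda \;\ge\; 0 .

% Difference inequality: the Casimir-vacuum expectation is subtracted first:
\int_{\gamma} \big( \langle T_{\mu\nu} \rangle_{\psi}
  - \langle T_{\mu\nu} \rangle_{\mathrm{vac}} \big)\, k^{\mu} k^{\nu}\, d\lambda \;\ge\; 0 .
```

    The second form is weaker than the first exactly when the vacuum contribution is negative, which is why it can hold on the nonachronal geodesics of the cylinder where the plain ANEC fails.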

  14. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been many traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used in 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency.
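
    As a rough, non-Bayesian stand-in for the crash-frequency models above, a log-link Poisson regression can be fitted by Newton/IRLS in a few lines. This is an illustration of the model class on simulated data; the coefficients and sample size are made up, and it is not the paper's Bayesian Poisson-lognormal estimation.

```python
import numpy as np

def fit_poisson(X, y, iters=25):
    """Newton/IRLS fit of a log-link Poisson regression: E[y] = exp(X @ beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)          # current fitted means
        H = X.T @ (mu[:, None] * X)    # Fisher information (Poisson: var = mean)
        beta += np.linalg.solve(H, X.T @ (y - mu))  # Newton step
    return beta

# Simulated segment data: intercept plus one covariate (e.g., log volume).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1]))
print(fit_poisson(X, y))  # estimates should be near (0.5, 0.3)
```

    A Poisson-lognormal model additionally puts a normally distributed random effect on the log mean to absorb overdispersion, which is what the Bayesian fits in the study provide.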

  15. QCD unitarity constraints on Reggeon Field Theory

    Energy Technology Data Exchange (ETDEWEB)

    Kovner, Alex [Physics Department, University of Connecticut, 2152 Hillside Road, Storrs, CT 06269 (United States); Levin, Eugene [Departamento de Física, Universidad Técnica Federico Santa María, and Centro Científico-Tecnológico de Valparaíso, Avda. España 1680, Casilla 110-V, Valparaíso (Chile); Department of Particle Physics, Tel Aviv University, Tel Aviv 69978 (Israel); Lublinsky, Michael [Physics Department, Ben-Gurion University of the Negev, Beer Sheva 84105 (Israel); Physics Department, University of Connecticut, 2152 Hillside Road, Storrs, CT 06269 (United States)

    2016-08-04

    We point out that the s-channel unitarity of QCD imposes meaningful constraints on a possible form of the QCD Reggeon Field Theory. We show that neither the BFKL nor JIMWLK nor Braun's Hamiltonian satisfy the said constraints. In a toy, zero transverse dimensional case we construct a model that satisfies the analogous constraint and show that at infinite energy it indeed tends to a “black disk limit”, as opposed to the model with triple Pomeron vertex only, routinely used as a toy model in the literature.

  16. QCD unitarity constraints on Reggeon Field Theory

    International Nuclear Information System (INIS)

    Kovner, Alex; Levin, Eugene; Lublinsky, Michael

    2016-01-01

    We point out that the s-channel unitarity of QCD imposes meaningful constraints on a possible form of the QCD Reggeon Field Theory. We show that neither the BFKL nor JIMWLK nor Braun’s Hamiltonian satisfy the said constraints. In a toy, zero transverse dimensional case we construct a model that satisfies the analogous constraint and show that at infinite energy it indeed tends to a “black disk limit”, as opposed to the model with triple Pomeron vertex only, routinely used as a toy model in the literature.

  17. Liquidity Constraints and Fiscal Stabilization Policy

    DEFF Research Database (Denmark)

    Kristoffersen, Mark Strøm

    It is often claimed that the presence of liquidity constrained households enhances the need for and the effects of fiscal stabilization policies. This paper studies this in a model of a small open economy with liquidity constrained households. The results show that the consequences of liquidity constraints are more complex than previously thought: the optimal stabilization policy in case of productivity shocks is independent of the liquidity constraints, and the presence of liquidity constraints tends to reduce the need for an active policy stabilizing productivity shocks.

  18. Use of dose constraints for occupational exposure

    International Nuclear Information System (INIS)

    Kaijage, Tunu

    2015-02-01

    The use of dose constraints for occupational exposure was reviewed in this project. The role of dose constraints as used in optimization of protection of workers was described. Different issues to be considered in application of the concept and challenges associated with their implementation were also discussed. The situation where dose constraints could be misinterpreted as dose limits is also explained, as the two are clearly differentiated by the International Commission on Radiological Protection (ICRP) Publication 103. Moreover, recommendations to all parties responsible for protection and safety of workers were discussed. (au)

  19. Constraint satisfaction problems CSP formalisms and techniques

    CERN Document Server

    Ghedira, Khaled

    2013-01-01

    A Constraint Satisfaction Problem (CSP) consists of a set of variables, a domain of values for each variable and a set of constraints. The objective is to assign a value for each variable such that all constraints are satisfied. CSPs continue to receive increased attention because of both their high complexity and their omnipresence in academic, industrial and even real-life problems. This is why they are the subject of intense research in both artificial intelligence and operations research. This book introduces the classic CSP and details several extensions/improvements of both formalisms a

  20. Expressing Model Constraints Visually with VMQL

    DEFF Research Database (Denmark)

    Störrle, Harald

    2011-01-01

    OCL is the de facto standard language for expressing constraints and queries on UML models. However, OCL expressions are very difficult to create, understand, and maintain, even with the sophisticated tool support now available. In this paper, we propose to use the Visual Model Query Language (VMQL) for specifying constraints on UML models. We examine VMQL's usability by controlled experiments and its expressiveness by a representative sample. We conclude that VMQL is less expressive than OCL, although expressive enough for most of the constraints in the sample. In terms of usability, however, VMQL...

  1. Dose constraints, what are they now?

    International Nuclear Information System (INIS)

    Lazo, T.

    2005-01-01

    The concept of a source-related dose constraint was first introduced in ICRP Publication 60. The idea was to provide a number that individual exposures from a single, specific source should not exceed, and below which optimisation of protection should take place. Dose constraints were applied to occupational and public exposures from practices. In order to simplify and clarify the ICRP's recommendations, the latest draft, RPO5, presents dose constraints again, and with the same meaning as in Publication 60. However, the dose constraints are now applied in all situations, not just practices. This new approach does provide simplification, in that a single concept is applied to all types of exposures (normal situations, accident situations, and existing situations). However, the approach and numerical values that are selected by regulatory authorities for the application of the concept, particularly in normal situations which are also subject to dose limits, will be crucial to the implementation of the system of radiological protection. (author)

  2. Biological constraints do not entail cognitive closure.

    Science.gov (United States)

    Vlerick, Michael

    2014-12-01

    From the premise that our biology imposes cognitive constraints on our epistemic activities, a series of prominent authors--most notably Fodor, Chomsky and McGinn--have argued that we are cognitively closed to certain aspects and properties of the world. Cognitive constraints, they argue, entail cognitive closure. I argue that this is not the case. More precisely, I detect two unwarranted conflations at the core of arguments deriving closure from constraints. The first is a conflation of what I will refer to as 'representation' and 'object of representation'. The second confuses the cognitive scope of the assisted mind for that of the unassisted mind. Cognitive closure, I conclude, cannot be established from pointing out the (uncontroversial) existence of cognitive constraints. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. institutional and resource constraints that inhibit contractor ...

    African Journals Online (AJOL)


    Keywords: Institutions; small-scale contractor performance; sugar industry. ABSTRACT ..... diverse cultural settings, women, specifically widowed or single women, have a .... constraints on business growth, such as the work limitations placed.

  4. Constraint theory multidimensional mathematical model management

    CERN Document Server

    Friedman, George J

    2017-01-01

    Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...

  5. Route constraints model based on polychromatic sets

    Science.gov (United States)

    Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu

    2018-03-01

    With the development of unmanned aerial vehicle (UAV) technology, the fields of its application are constantly expanding. The mission planning of UAV is especially important, and the planning result directly influences whether the UAV can accomplish the task. In order to make the results of mission planning for unmanned aerial vehicle more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, constraints among the equipment of UAV are complex, and the equipment has strong diversity and variability, which makes these constraints difficult to be described. In order to solve the above problem, this paper, referring to the polychromatic sets theory used in the advanced manufacturing field to describe complex systems, presents a mission constraint model of UAV based on polychromatic sets.

  6. Constraint-based Attribute and Interval Planning

    Science.gov (United States)

    Jonsson, Ari; Frank, Jeremy

    2013-01-01

    In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we de ne compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

  7. Automated constraint placement to maintain pile shape

    KAUST Repository

    Hsu, Shu-Wei; Keyser, John

    2012-01-01

    structure. Next, for stabilizing the structure, we pick suitable objects from those passing the equilibrium analysis and then restrict their DOFs by managing the insertion of constraints on them. The method is suitable for controlling stacking behavior

  8. Cosmological constraints on Brans-Dicke theory.

    Science.gov (United States)

    Avilez, A; Skordis, C

    2014-07-04

    We report strong cosmological constraints on the Brans-Dicke (BD) theory of gravity using cosmic microwave background data from Planck. We consider two types of models. First, the initial condition of the scalar field is fixed to give the same effective gravitational strength Geff today as the one measured on Earth, GN. In this case, the BD parameter ω is constrained to ω>692 at the 99% confidence level, an order of magnitude improvement over previous constraints. In the second type, the initial condition for the scalar is a free parameter leading to a somewhat stronger constraint of ω>890, while Geff is constrained to 0.981<Geff/GN<1.285 at the 99% confidence level. Our constraints go beyond the BD theory and are valid for any Horndeski theory, the most general second-order scalar-tensor theory, which approximates the BD theory on cosmological scales. In this sense, our constraints place strong limits on possible modifications of gravity that might explain cosmic acceleration.

  9. CONSTRAINTS TO USE OF MOBILE TELEPHONY FOR ...

    African Journals Online (AJOL)

    Key words: Constraints, mobile telephony, frequency, farmers and telecommunications service ... efficient sharing of agricultural information ... calls on the mobile phone without the need .... adequate training on the use of mobile .... Job Market.

  10. Modernizing China's Military: Opportunities and Constraints

    National Research Council Canada - National Science Library

    Crane, Keith; Cliff, Roger; Medeiros, Evan; Mulvenon, James; Overholt, William

    2005-01-01

    The purpose of this study is to assess future resource constraints on, and potential domestic economic and industrial contributions to, the ability of the Chinese military to become a significant threat to U.S. forces by 2025...

  11. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...
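    The SAA idea, replacing the expectation in the objective by an average over drawn scenarios and optimizing the resulting deterministic problem, can be sketched on a much simpler stochastic program. The newsvendor instance below is purely illustrative (the paper's model is a two-stage supply chain with an SSD constraint):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical newsvendor instance: unit cost c, unit price p, random demand D
c, p = 1.0, 3.0
demand_samples = rng.exponential(scale=100.0, size=50_000)  # the "sample" in SAA

def saa_expected_cost(q, demand):
    # Sample average of the cost: purchase cost minus sales revenue
    sales = np.minimum(q, demand)
    return np.mean(c * q - p * sales)

# Optimize the sample-average objective over a grid of order quantities
grid = np.linspace(0.0, 400.0, 401)
costs = [saa_expected_cost(q, demand_samples) for q in grid]
q_saa = grid[int(np.argmin(costs))]

# Analytic optimum: critical fractile F^{-1}((p-c)/p) of the Exp(100) demand
q_star = -100.0 * np.log(1.0 - (p - c) / p)
print(q_saa, q_star)  # the SAA solution approaches the true optimum
```

    As the sample grows, the SAA minimizer converges to the true stochastic optimum, which is the property the paper exploits after converting its two-stage model to a one-stage equivalent.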

  12. Optimal capital stock and financing constraints

    OpenAIRE

    Saltari, Enrico; Giuseppe, Travaglini

    2011-01-01

    In this paper we show that financing constraints affect the optimal level of capital stock even when the financing constraint is ineffective. This happens when the firm rationally anticipates that access to external financing resources may be rationed in the future. We will show that with these expectations, the optimal investment policy is to invest less in any given period, thereby lowering the desired optimal capital stock in the long run.

  13. Credit Constraints, Political Instability, and Capital Accumulation

    OpenAIRE

    Risto Herrala; Rima Turk-Ariss

    2013-01-01

    We investigate the complex interactions between credit constraints, political instability, and capital accumulation using a novel approach based on Kiyotaki and Moore’s (1997) theoretical framework. Drawing on a unique firm-level data set from Middle-East and North Africa (MENA), empirical findings point to a large and significant effect of credit conditions on capital accumulation and suggest that continued political unrest worsens credit constraints. The results support the view that financ...

  14. Cyclic labellings with constraints at two distances

    OpenAIRE

    Leese, R; Noble, S D

    2004-01-01

    Motivated by problems in radio channel assignment, we consider the vertex-labelling of graphs with non-negative integers. The objective is to minimise the span of the labelling, subject to constraints imposed at graph distances one and two. We show that the minimum span is (up to rounding) a piecewise linear function of the constraints, and give a complete specification, together with associated optimal assignments, for trees and cycles.
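    As a concrete instance of labelling with constraints at two distances, the sketch below brute-forces the classic L(2,1) special case (labels differ by at least 2 across edges and by at least 1 at distance two) on small cycles; the minimum span of any cycle under these constraints is known to be 4. This special case is chosen for illustration and is an assumption, not the paper's general setting:

```python
from itertools import product

def min_span_cycle_L21(n):
    """Smallest maximum label in an L(2,1)-labelling of the cycle C_n:
    labels differ by >= 2 on edges and by >= 1 at distance two."""
    dist1 = [(i, (i + 1) % n) for i in range(n)]
    dist2 = [(i, (i + 2) % n) for i in range(n)]
    for span in range(2 * n):                 # try spans 0, 1, 2, ...
        for lab in product(range(span + 1), repeat=n):
            if all(abs(lab[u] - lab[v]) >= 2 for u, v in dist1) and \
               all(abs(lab[u] - lab[v]) >= 1 for u, v in dist2):
                return span
    return None

print(min_span_cycle_L21(5))  # known result: the span of any cycle is 4
```

    Exhaustive search is only feasible for tiny graphs; the paper's contribution is the piecewise linear dependence of the minimum span on the two separation parameters, with explicit optimal assignments for trees and cycles.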

  15. Portfolios with nonlinear constraints and spin glasses

    Science.gov (United States)

    Gábor, Adrienn; Kondor, I.

    1999-12-01

    In a recent paper Galluccio, Bouchaud and Potters demonstrated that a certain portfolio problem with a nonlinear constraint maps exactly onto finding the ground states of a long-range spin glass, with the concomitant nonuniqueness and instability of the optimal portfolios. Here we put forward geometric arguments that lead to qualitatively similar conclusions, without recourse to the methods of spin glass theory, and give two more examples of portfolio problems with convex nonlinear constraints.

  16. Future Cosmological Constraints From Fast Radio Bursts

    Science.gov (United States)

    Walters, Anthony; Weltman, Amanda; Gaensler, B. M.; Ma, Yin-Zhe; Witzemann, Amadeus

    2018-03-01

    We consider the possible observation of fast radio bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogs of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo analysis, with which we forecast and compare with existing constraints in the flat ΛCDM model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a big challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+H0 data, we find the biggest improvement comes in the Ω_bh² constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter, Ω_k, shows some improvement when combined with current constraints. When FRBs are combined with future baryon acoustic oscillation (BAO) data from 21 cm Intensity Mapping, we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, Ω_bh², which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate redshift universe, independent of high-redshift CMB data.

  17. Generalized Pauli constraints in small atoms

    Science.gov (United States)

    Schilling, Christian; Altunbulak, Murat; Knecht, Stefan; Lopes, Alexandre; Whitfield, James D.; Christandl, Matthias; Gross, David; Reiher, Markus

    2018-05-01

    The natural occupation numbers of fermionic systems are subject to nontrivial constraints, which include and extend the original Pauli principle. A recent mathematical breakthrough has clarified their mathematical structure and has opened up the possibility of a systematic analysis. Early investigations have found evidence that these constraints are exactly saturated in several physically relevant systems, e.g., in a certain electronic state of the beryllium atom. It has been suggested that, in such cases, the constraints, rather than the details of the Hamiltonian, dictate the system's qualitative behavior. Here, we revisit this question with state-of-the-art numerical methods for small atoms. We find that the constraints are, in fact, not exactly saturated, but that they lie much closer to the surface defined by the constraints than the geometry of the problem would suggest. While the results seem incompatible with the statement that the generalized Pauli constraints drive the behavior of these systems, they suggest that the qualitatively correct wave-function expansions can in some systems already be obtained on the basis of a limited number of Slater determinants, which is in line with numerical evidence from quantum chemistry.

  18. University Course Timetabling using Constraint Programming

    Directory of Open Access Journals (Sweden)

    Hadi Shahmoradi

    2017-03-01

    University course timetabling is a challenging and time-consuming task that shapes the overall structure of the timetable in every academic environment. The problem deals with many factors, such as the number of lessons, classes, teachers, students, and working times, which are governed by hard and soft constraints. The aim of solving this problem is to assign courses and classes to teachers and students so that the restrictions hold. In this paper, a constraint programming method is proposed to satisfy the maximum number of constraints and expectations in order to address the university timetabling problem. For minimizing the penalty of soft constraints, a cost function is introduced, and the AHP method is used for calculating its coefficients. The proposed model is tested on the Department of Management, University of Isfahan dataset using OPL on the IBM ILOG CPLEX Optimization Studio platform. A statistical analysis shows the performance of the proposed approach in satisfying all hard constraints, while the satisfaction degree of the soft constraints is at the maximum desirable level. The running time of the model is less than 20 minutes, which is significantly better than non-automated approaches.
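    The hard/soft split at the heart of such formulations can be sketched with a toy exhaustive search: hard constraints (no room or teacher clash) filter assignments, while a soft cost ranks the survivors. The instance data and the early-slot preference below are illustrative assumptions, not the paper's model or dataset:

```python
from itertools import product

# Toy timetabling instance (hypothetical data, not the Isfahan dataset):
# each course needs a (slot, room); hard constraints forbid room clashes
# and a teacher lecturing twice in one slot; a soft cost prefers early slots.
courses = {"C1": "T1", "C2": "T1", "C3": "T2"}   # course -> teacher
slots, rooms = [0, 1, 2], ["R1", "R2"]

def valid(assign):
    pairs = list(assign.items())
    for i, (c1, (s1, r1)) in enumerate(pairs):
        for c2, (s2, r2) in pairs[i + 1:]:
            if (s1, r1) == (s2, r2):                     # room clash
                return False
            if s1 == s2 and courses[c1] == courses[c2]:  # teacher clash
                return False
    return True

def penalty(assign):
    return sum(slot for slot, _ in assign.values())      # soft: early is better

best = min(
    (dict(zip(courses, combo))
     for combo in product(product(slots, rooms), repeat=len(courses))),
    key=lambda a: penalty(a) if valid(a) else float("inf"),
)
print(best, penalty(best))
```

    A real solver replaces this enumeration with constraint propagation and branching, which is what makes department-scale instances tractable within the reported 20 minutes.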

  19. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  20. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
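    The averaged quantity is the harmonic mean of the input quadrature variances; a minimal numerical illustration (the variance values are chosen arbitrarily, not taken from the experiment):

```python
# Harmonic mean of two quadrature variances, the quantity realized
# (probabilistically) by the optical averaging scheme described above.
def harmonic_mean(v1, v2):
    return 2.0 / (1.0 / v1 + 1.0 / v2)

# Averaging a squeezed variance (0.5) with an anti-squeezed one (2.0):
print(harmonic_mean(0.5, 2.0))  # 0.8, below the arithmetic mean of 1.25
```

    The harmonic mean is always dominated by the smaller variance, which is why the scheme is useful for combining squeezed states.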

  1. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
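    The two-hypothesis ancestor of such procedures is Wald's sequential probability ratio test, which likewise trades a fixed sample size for thresholds on a likelihood ratio. The sketch below (with illustrative Bernoulli hypotheses and error rates, not the paper's signal models) shows the test typically stopping long before a comparable fixed-length sample would be exhausted:

```python
import math
import random

random.seed(2)

# Binary sequential probability ratio test (Wald's SPRT), the two-hypothesis
# special case of the multialternative procedures discussed above.
# H0: Bernoulli(p0) vs H1: Bernoulli(p1); thresholds from error rates.
p0, p1, alpha, beta = 0.3, 0.7, 0.01, 0.01
upper = math.log((1 - beta) / alpha)
lower = math.log(beta / (1 - alpha))

def sprt(sample):
    llr = 0.0
    for n, x in enumerate(sample, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(sample)

# Data truly from H1; the test usually stops after only a handful of draws
data = [1 if random.random() < p1 else 0 for _ in range(1000)]
decision, n_used = sprt(data)
print(decision, n_used)
```

    The reduction of the observation length to a fraction of the fixed-length requirement mirrors the factor-of-four saving reported in the abstract.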

  2. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  3. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  4. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  5. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  6. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  7. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  8. Reduction Of Constraints For Coupled Operations

    International Nuclear Information System (INIS)

    Raszewski, F.; Edwards, T.

    2009-01-01

    The homogeneity constraint was implemented in the Defense Waste Processing Facility (DWPF) Product Composition Control System (PCCS) to help ensure that the current durability models would be applicable to the glass compositions being processed during DWPF operations. While the homogeneity constraint is typically an issue at lower waste loadings (WLs), it may impact the operating windows for DWPF operations, where the glass forming systems may be limited to lower waste loadings based on fissile or heat load limits. In the sludge batch 1b (SB1b) variability study, application of the homogeneity constraint at the measurement acceptability region (MAR) limit eliminated much of the potential operating window for DWPF. As a result, Edwards and Brown developed criteria that allowed DWPF to relax the homogeneity constraint from the MAR to the property acceptance region (PAR) criterion, which opened up the operating window for DWPF operations. These criteria are defined as: (1) use the alumina constraint as currently implemented in PCCS (Al 2 O 3 (ge) 3 wt%) and add a sum of alkali constraint with an upper limit of 19.3 wt% (ΣM 2 O (le) 19.3 wt%), or (2) increase the lower limit of the Al 2 O 3 constraint to 4 wt% (Al 2 O 3 (ge) 4 wt%). Herman et al. previously demonstrated that these criteria could be used to replace the homogeneity constraint for future sludge-only batches. The compositional region encompassing coupled operations flowsheets could not be bounded, as these flowsheets were unknown at the time. With the initiation of coupled operations at DWPF in 2008, the need to revisit the homogeneity constraint was realized. This constraint was specifically addressed through the variability study for SB5, where it was shown that the homogeneity constraint could be ignored if the alumina and alkali constraints were imposed. Additional benefit could be gained if the homogeneity constraint could be replaced by the Al 2 O 3 and sum of alkali constraint for future coupled operations processing based on projections from Revision 14 of
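    The relaxed criteria quoted above reduce to a simple composition check. The sketch below encodes option (1), an alumina floor plus an alkali-sum ceiling; the composition values are illustrative, not actual DWPF glass data:

```python
# Sketch of the relaxed acceptability check described above: the homogeneity
# constraint is replaced by an alumina floor plus an alkali-sum ceiling.
# All quantities are in wt%; the example composition is hypothetical.
AL2O3_MIN = 3.0          # Al2O3 >= 3 wt%  (option 1)
SUM_ALKALI_MAX = 19.3    # sum of alkali oxides (ΣM2O) <= 19.3 wt%

def passes_relaxed_criteria(al2o3, alkali_oxides):
    return al2o3 >= AL2O3_MIN and sum(alkali_oxides) <= SUM_ALKALI_MAX

# A glass with 4.2 wt% Al2O3 and Na2O + Li2O + K2O = 14.1 wt% passes
print(passes_relaxed_criteria(4.2, [9.5, 3.2, 1.4]))  # True
```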

  9. Two-agent scheduling in open shops subject to machine availability and eligibility constraints

    Directory of Open Access Journals (Sweden)

    Ling-Huey Su

    2015-09-01

    Purpose: The aim of this article is to develop a new mathematical formulation and a new heuristic for the problem of preemptive two-agent scheduling in open shops subject to machine maintenance and eligibility constraints. Design/methodology: Using the ideas of minimum cost flow networks and constraint programming, a heuristic and a network-based linear program are proposed to solve the problem. Findings: Computational experiments show that the heuristic generates a good-quality schedule, deviating 0.25% on average from the optimum, and that the network-based linear programming model can solve problems with up to 110 jobs and 10 machines when the constraint that each operation be processed on at most one machine at a time is ignored. In order to satisfy this constraint, a time-consuming Constraint Programming step is proposed. For n = 80 and m = 10, the average execution time for the combined models (the linear programming model combined with Constraint Programming) exceeds two hours; the heuristic algorithm we developed is therefore efficient and needed. Practical implications: A practical application occurs in TFT-LCD and e-paper manufacturing, wherein units go through a series of diagnostic tests that do not have to be performed in any specified order. Originality/value: The main contributions of the article are to split the time horizon into many time intervals and use a dispatching rule for each time interval in the heuristic algorithm, and to combine the minimum cost flow network with Constraint Programming to solve the problem optimally.

  10. Constraint-Muse: A Soft-Constraint Based System for Music Therapy

    Science.gov (United States)

    Hölzl, Matthias; Denker, Grit; Meier, Max; Wirsing, Martin

    Monoidal soft constraints are a versatile formalism for specifying and solving multi-criteria optimization problems with dynamically changing user preferences. We have developed a prototype tool for interactive music creation, called Constraint Muse, that uses monoidal soft constraints to ensure that a dynamically generated melody harmonizes with input from other sources. Constraint Muse provides an easy to use interface based on Nintendo Wii controllers and is intended to be used in music therapy for people with Parkinson’s disease and for children with high-functioning autism or Asperger’s syndrome.

  11. Trade-off between multiple constraints enables simultaneous formation of modules and hubs in neural systems.

    Directory of Open Access Journals (Sweden)

    Yuhan Chen

    The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between physical cost and the functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatory optimization of wiring cost and processing efficiency constraints, using a control parameter α, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization in a broad range of α, resulting from spatial clustering of network nodes. Hubs emerged due to the competition of the two constraints, and their positions were close to, and partly coincided with, the real hubs in a range of α values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of a trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real

  12. Fuzzy Constraint-Based Agent Negotiation

    Institute of Scientific and Technical Information of China (English)

    Menq-Wen Lin; K. Robert Lai; Ting-Jung Yu

    2005-01-01

    Conflicts between two or more parties arise for various reasons and from various perspectives. Thus, resolution of conflicts frequently relies on some form of negotiation. This paper presents a general problem-solving framework for modeling multi-issue multilateral negotiation using fuzzy constraints. Agent negotiation is formulated as a distributed fuzzy constraint satisfaction problem (DFCSP). Fuzzy constraints are thus used to naturally represent each agent's desires involving imprecision and human conceptualization, particularly when lexical imprecision and subjective matters are concerned. On the other hand, based on fuzzy constraint-based problem-solving, our approach enables an agent not only to systematically relax fuzzy constraints to generate a proposal, but also to employ fuzzy similarity to select the alternative according to its acceptability to the opponents. The goal of this problem-solving is to reach an agreement that benefits all agents with a high satisfaction degree of fuzzy constraints, and to move towards the deal more quickly since the search focuses only on the feasible solution space. An application to multilateral negotiation of travel planning is provided to demonstrate the usefulness and effectiveness of our framework.
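
    The relax-then-select loop described above can be sketched as follows; the threshold schedule, the offer encoding, and the min-aggregation of satisfaction degrees are illustrative assumptions of this sketch, not the paper's exact algorithm:

    ```python
    def negotiate(offers, constraints):
        # Sketch of fuzzy-constraint proposal generation: each constraint
        # maps an offer to a satisfaction degree in [0, 1]. Start by
        # demanding full satisfaction and relax the threshold stepwise
        # until some offer satisfies every fuzzy constraint at that level.
        for level in (k / 10 for k in range(10, -1, -1)):
            feasible = [o for o in offers
                        if all(c(o) >= level for c in constraints)]
            if feasible:
                # Among feasible offers, propose the one with the highest
                # overall (min-aggregated) satisfaction degree.
                best = max(feasible, key=lambda o: min(c(o) for c in constraints))
                return best, level
        return None, 0.0

    # Hypothetical single-issue example: a price between two agents' ideals.
    cheap = lambda price: max(0.0, 1.0 - price / 100)      # buyer's constraint
    profitable = lambda price: min(1.0, price / 100)       # seller's constraint
    offer, level = negotiate([30, 50, 80], [cheap, profitable])
    print(offer, level)   # 50 is the first offer feasible for both, at level 0.5
    ```

    Restricting the search to offers that remain feasible at the current threshold is what lets the agents "focus only on the feasible solution space" as the abstract puts it.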

  13. Pair Production Constraints on Superluminal Neutrinos Revisited

    International Nuclear Information System (INIS)

    Brodsky, Stanley

    2012-01-01

    We revisit the pair creation constraint on superluminal neutrinos considered by Cohen and Glashow in order to clarify which types of superluminal models are constrained. We show that a model in which the superluminal neutrino is effectively lightlike can evade the Cohen-Glashow constraint. In summary, any model for which the CG pair production process operates is excluded because such timelike neutrinos would not be detected by OPERA or other experiments. However, a superluminal neutrino which is effectively lightlike with fixed p^2 can evade the Cohen-Glashow constraint because of energy-momentum conservation. The coincidence involved in explaining the SN1987A constraint certainly makes such a picture improbable - but it is still intrinsically possible. The lightlike model is appealing in that it does not violate Lorentz symmetry in particle interactions, although one would expect Hughes-Drever tests to turn up a violation eventually. Other evasions of the CG constraints are also possible; perhaps, e.g., the neutrino takes a 'short cut' through extra dimensions or suffers anomalous acceleration in matter. Irrespective of the OPERA result, Lorentz-violating interactions remain possible, and ongoing experimental investigation of such possibilities should continue.

  14. Diffusion Processes Satisfying a Conservation Law Constraint

    Directory of Open Access Journals (Sweden)

    J. Bakosi

    2014-01-01

    Full Text Available We investigate coupled stochastic differential equations governing N nonnegative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires a set of fluctuating variables to be nonnegative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the nonnegativity and the unit-sum conservation law constraints are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner's generalized Dirichlet processes.
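
    A minimal numerical illustration of such a process is the neutral Wright-Fisher diffusion named in the abstract: its diffusion matrix B_ij = x_i(δ_ij − x_j) is coupled and nonlinear, and its increments sum to zero, so the unit-sum law is preserved. The clipping and renormalization guards below are numerical assumptions of this sketch, not part of the exact process:

    ```python
    import numpy as np

    def wf_step(x, dt, rng):
        # One Euler-Maruyama step of the neutral Wright-Fisher diffusion.
        # The covariance matrix x_i (delta_ij - x_j) annihilates the
        # all-ones vector, so the components of dW sum to zero and the
        # unit-sum conservation law is preserved along the path.
        cov = np.diag(x) - np.outer(x, x)
        w, v = np.linalg.eigh(cov)                  # cov is PSD, rank n-1
        dW = v @ (np.sqrt(np.clip(w, 0.0, None) * dt)
                  * rng.standard_normal(len(x)))
        x_new = np.clip(x + dW, 0.0, None)          # numerical guard at zero
        return x_new / x_new.sum()                  # guard against round-off

    rng = np.random.default_rng(42)
    x = np.array([0.5, 0.3, 0.2])
    for _ in range(2000):
        x = wf_step(x, 1e-4, rng)
    # x remains a nonnegative vector summing to one
    print(x, x.sum())
    ```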

  15. Faddeev-Jackiw quantization and constraints

    International Nuclear Information System (INIS)

    Barcelos-Neto, J.; Wotzasek, C.

    1992-01-01

    In a recent Letter, Faddeev and Jackiw have shown that the reduction of constrained systems to canonical, first-order form can bring some new insight into research in this field. For symplectic manifolds the geometrical structure, called the Dirac or generalized bracket, is obtained directly from the inverse of the nonsingular symplectic two-form matrix. In the case of nonsymplectic manifolds, this two-form is degenerate and cannot be inverted to provide the generalized brackets. This singular behavior of the symplectic matrix is indicative of the presence of constraints that have to be carefully considered to yield consistent results. One has two possible routes to treat this problem: Dirac has taught us how to implement the constraints into the potential part (Hamiltonian) of the canonical Lagrangian, leading to the well-known Dirac brackets, which are consistent with the constraints and can be mapped into quantum commutators (modulo ordering terms). The second route, suggested by Faddeev and Jackiw, and followed in this paper, is to implement the constraints directly into the canonical part of the first-order Lagrangian, using the fact that the consistency condition for the stability of the constrained manifold is linear in the time derivative. This algorithm may lead to an invertible symplectic two-form matrix from which the Dirac brackets are readily obtained. This algorithm is used in this paper to investigate some aspects of the quantization of constrained systems with first- and second-class constraints in the symplectic approach

  16. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time- and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium- and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium- and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
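
    The difference between spot and boxcar hourly values can be reproduced with a few lines of synthetic data; the amplitudes and periods below are illustrative, not observatory figures:

    ```python
    import numpy as np

    # One day of synthetic 1-min data standing in for continuous variation:
    # a diurnal wave plus a 70-min oscillation that hourly values cannot resolve.
    t = np.arange(24 * 60)                        # time in minutes
    field = (40.0 * np.sin(2 * np.pi * t / (24 * 60))
             + 10.0 * np.sin(2 * np.pi * t / 70))

    spot = field[::60]                            # instantaneous top-of-hour samples
    boxcar = field.reshape(24, 60).mean(axis=1)   # simple 1-h averages

    # Spot values keep the full amplitude of the fast signal but alias it;
    # the boxcar average attenuates it (amplitude distortion) instead.
    print(np.ptp(spot), np.ptp(boxcar))
    ```

    The boxcar's smaller peak-to-peak range is the amplitude distortion the abstract describes, while the spot series folds the unresolved 70-min wave onto lower frequencies (aliasing).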

  17. Latin hypercube sampling with inequality constraints

    International Nuclear Information System (INIS)

    Iooss, B.; Petelet, M.; Asserin, O.; Loredo, A.

    2010-01-01

    In some studies requiring predictive and CPU-time-consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has shown its robustness capabilities. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists in performing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature. (authors)
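
    A sketch of the permutation idea for a single monotonic constraint x0 < x1 between two sampled variables (the repair strategy below is a simplified stand-in for the authors' algorithm):

    ```python
    import numpy as np

    def lhs(n, d, rng):
        # Standard Latin hypercube: one point in each of n strata per variable.
        u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
        for j in range(d):
            u[:, j] = u[rng.permutation(n), j]
        return u

    def constrained_lhs(n, rng, max_tries=200):
        # cLHS sketch: repair an ordinary LHS by swapping column-1 values
        # between rows until every sample honours x0 < x1. Swapping within
        # a column permutes values among rows, so each variable keeps its
        # one-dimensional stratification.
        for _ in range(max_tries):
            x = lhs(n, 2, rng)
            ok = True
            for i in range(n):
                if x[i, 0] < x[i, 1]:
                    continue
                for k in range(n):   # partner whose swap fixes i, keeps k valid
                    if x[k, 1] > x[i, 0] and x[i, 1] > x[k, 0]:
                        x[[i, k], 1] = x[[k, i], 1]
                        break
                else:
                    ok = False       # no repairing swap found; redraw the LHS
                    break
            if ok and np.all(x[:, 0] < x[:, 1]):
                return x
        raise RuntimeError("no feasible permutation found")

    x = constrained_lhs(10, np.random.default_rng(3))
    # every sample honours x0 < x1, and each column still covers all 10 strata
    ```

    Permutations are the natural repair move precisely because they leave the marginal stratification of each variable untouched, which is the defining property of an LHS.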

  18. Lorentz violation. Motivation and new constraints

    International Nuclear Information System (INIS)

    Liberati, S.; Maccione, L.

    2009-09-01

    We review the main theoretical motivations and observational constraints on Planck scale suppressed violations of Lorentz invariance. After introducing the problems related to the phenomenological study of quantum gravitational effects, we discuss the main theoretical frameworks within which possible departures from Lorentz invariance can be described. In particular, we focus on the framework of Effective Field Theory, describing several possible ways of including Lorentz violation therein and discussing their theoretical viability. We review the main low energy effects that are expected in this framework. We discuss the current observational constraints on such a framework, focusing on those achievable through high-energy astrophysics observations. In this context we present a summary of the most recent and strongest constraints on QED with Lorentz violating non-renormalizable operators. Finally, we discuss the present status of the field and its future perspectives. (orig.)

  19. WMAP constraints on the Cardassian model

    International Nuclear Information System (INIS)

    Sen, A.A.; Sen, S.

    2003-01-01

    We investigate the constraints on the Cardassian model using the recent results from the Wilkinson microwave anisotropy probe for the locations of the peaks of the cosmic microwave background (CMB) anisotropy spectrum. We find that the model is consistent with the recent observational data for a certain range of the model parameter n and the cosmological parameters. We find that the Cardassian model is favored compared to the ΛCDM model for a higher spectral index (n_s ≈ 1) together with a lower value of the Hubble parameter h (h ≤ 0.71). But for smaller values of n_s, both ΛCDM and Cardassian models are equally favored. Also, irrespective of supernova constraints, CMB data alone predict the current acceleration of the Universe in this model. We have also studied the constraint on σ_8, the rms density fluctuation at the 8 h⁻¹ Mpc scale

  20. Some general constraints on identical band symmetries

    International Nuclear Information System (INIS)

    Guidry, M.W.; Strayer, M.R.; Wu, C.; Feng, D.H.

    1993-01-01

    We argue on general grounds that nearly identical bands observed for superdeformation and less frequently for normal deformation must be explicable in terms of a symmetry having a microscopic basis. We assume that the unknown symmetry is associated with a Lie algebra generated by terms bilinear in fermion creation and annihilation operators. Observed features of these bands and the general properties of Lie groups are then used to place constraints on acceptable algebras. Additional constraints are placed by assuming that the collective spectrum is associated with a dynamical symmetry, and examining the subgroup structure required by phenomenology. We observe that the requisite symmetry cannot be unitary, and that the simplest known group structures consistent with these minimal criteria are associated with the Ginocchio algebras employed in the fermion dynamical symmetry model. However, our arguments are general in nature, and we propose that they imply model-independent constraints on any candidate explanation for identical bands

  1. Lorentz violation. Motivation and new constraints

    Energy Technology Data Exchange (ETDEWEB)

    Liberati, S. [Scuola Internazionale Superiore di Studi Avanzati SISSA, Trieste (Italy); Istituto Nazionale di Fisica Nucleare INFN, Sezione di Trieste (Italy); Maccione, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2009-09-15

    We review the main theoretical motivations and observational constraints on Planck scale suppressed violations of Lorentz invariance. After introducing the problems related to the phenomenological study of quantum gravitational effects, we discuss the main theoretical frameworks within which possible departures from Lorentz invariance can be described. In particular, we focus on the framework of Effective Field Theory, describing several possible ways of including Lorentz violation therein and discussing their theoretical viability. We review the main low energy effects that are expected in this framework. We discuss the current observational constraints on such a framework, focusing on those achievable through high-energy astrophysics observations. In this context we present a summary of the most recent and strongest constraints on QED with Lorentz violating non-renormalizable operators. Finally, we discuss the present status of the field and its future perspectives. (orig.)

  2. Constraints and spandrels of interareal connectomes

    Science.gov (United States)

    Rubinov, Mikail

    2016-12-01

    Interareal connectomes are whole-brain wiring diagrams of white-matter pathways. Recent studies have identified modules, hubs, module hierarchies and rich clubs as structural hallmarks of these wiring diagrams. An influential current theory postulates that connectome modules are adequately explained by evolutionary pressures for wiring economy, but that the other hallmarks are not explained by such pressures and are therefore less trivial. Here, we use constraint network models to test these postulates in current gold-standard vertebrate and invertebrate interareal-connectome reconstructions. We show that empirical wiring-cost constraints inadequately explain connectome module organization, and that simultaneous module and hub constraints induce the structural byproducts of hierarchies and rich clubs. These byproducts, known as spandrels in evolutionary biology, include the structural substrate of the default-mode network. Our results imply that currently standard connectome characterizations are based on circular analyses or double dipping, and we emphasize an integrative approach to future connectome analyses for avoiding such pitfalls.

  3. Ring power balance observing plasma stability constraints

    International Nuclear Information System (INIS)

    Campbell, R.B.; Logan, B.G.

    1982-01-01

    Ring power balance is performed for an E-ring stabilized tandem mirror reactor, taking into account constraints imposed by plasma stability. The two most important criteria are the stability of the core interchange and hot electron interchange modes. The former determines the ring thickness, the latter determines the minimum hot electron temperature; both quantities are important for power balance. The combination of the hot electron interchange constraint and the fact that the barrier density is low places the operating point on the synchrotron dominated branch of power balance. The reference case considered here requires a reasonable 34 MW of heating power deposited in the rings. We also have examined the sensitivity of the required ring power on uncertainties in the numerical coefficients of the stability constraints. We have found that the heating power is strongly affected

  4. Effective constraint algebras with structure functions

    International Nuclear Information System (INIS)

    Bojowald, Martin; Brahma, Suddhasattwa

    2016-01-01

    This article presents the result that fluctuations and higher moments of a state, by themselves, do not imply quantum corrections in structure functions of constrained systems. Moment corrections are isolated from other types of quantum effects, such as factor-ordering choices and regularization, by introducing a new condition with two parts: (i) having a direct (or faithful) quantization of the classical structure functions, (ii) free of factor-ordering ambiguities. In particular, it is assumed that the classical constraints can be quantized in an anomaly free way, so that properties of the resulting constraint algebras can be derived. If the two-part condition is not satisfied, effective constraints can still be evaluated, but quantum effects may be stronger. Consequences for canonical quantum gravity, whose structure functions encode space–time structure, are discussed. In particular, deformed algebras found in models of loop quantum gravity provide reliable information even in the Planck regime. (paper)

  5. Managing Constraint Generators in Retail Design Processes

    DEFF Research Database (Denmark)

    Münster, Mia Borch; Haug, Anders

    Retail design concepts are complex designs meeting functional and aesthetic demands. During a design process a retail designer has to consider various constraint generators such as stakeholder interests, physical limitations and restrictions. Obviously the architectural site, legislators... and landlords need to be considered, as well as the interests of the client and brand owner. Furthermore, the users need to be taken into account in order to develop interesting and functional shopping and working environments. Finally, suppliers and competitors may influence the design with regard... Based on six case studies of fashion store design projects, the present paper addresses this gap and sheds light on the types of constraints generated by the relevant constraint generators. The paper shows that in the cases studied...

  6. Molecular dynamics simulations on PGLa using NMR orientational constraints

    Energy Technology Data Exchange (ETDEWEB)

    Sternberg, Ulrich, E-mail: ulrich.sternberg@partner.kit.edu; Witter, Raiker [Tallinn University of Technology, Technomedicum (Estonia)

    2015-11-15

    NMR data obtained by solid state NMR from anisotropic samples are used as orientational constraints in molecular dynamics simulations for determining the structure and dynamics of the PGLa peptide within a membrane environment. For the simulation the recently developed molecular dynamics with orientational constraints technique (MDOC) is used. This method introduces orientation dependent pseudo-forces into the COSMOS-NMR force field. Acting during a molecular dynamics simulation these forces drive molecular rotations, re-orientations and folding in such a way that the motional time-averages of the tensorial NMR properties are consistent with the experimentally measured NMR parameters. This MDOC strategy does not depend on the initial choice of atomic coordinates, and is in principle suitable for any flexible and mobile kind of molecule; and it is of course possible to account for flexible parts of peptides or their side-chains. MDOC has been applied to the antimicrobial peptide PGLa and a related dimer model. With these simulations it was possible to reproduce most NMR parameters within the experimental error bounds. The alignment, conformation and order parameters of the membrane-bound molecule and its dimer were directly derived with MDOC from the NMR data. Furthermore, this new approach yielded for the first time the distribution of segmental orientations with respect to the membrane and the order parameter tensors of the dimer systems. It was demonstrated that the deuterium splittings measured at a peptide-to-lipid ratio of 1/50 are consistent with a membrane-spanning orientation of the peptide.

  7. Constraints on food chain length arising from regional metacommunity dynamics

    Science.gov (United States)

    Calcagno, Vincent; Massol, François; Mouquet, Nicolas; Jarne, Philippe; David, Patrice

    2011-01-01

    Classical ecological theory has proposed several determinants of food chain length, but the role of metacommunity dynamics has not yet been fully considered. By modelling patchy predator–prey metacommunities with extinction–colonization dynamics, we identify two distinct constraints on food chain length. First, finite colonization rates limit predator occupancy to a subset of prey-occupied sites. Second, intrinsic extinction rates accumulate along trophic chains. We show how both processes concur to decrease maximal and average food chain length in metacommunities. This decrease is mitigated if predators track their prey during colonization (habitat selection) and can be reinforced by top-down control of prey vital rates (especially extinction). Moreover, top-down control of colonization and habitat selection can interact to produce a counterintuitive positive relationship between perturbation rate and food chain length. Our results show how novel limits to food chain length emerge in spatially structured communities. We discuss the connections between these constraints and the ones commonly discussed, and suggest ways to test for metacommunity effects in food webs. PMID:21367786
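
    The two limits described, colonization-limited habitat and extinction accumulating along the chain, can be caricatured with a Levins-type patch-occupancy calculation; the rates and the linear extinction accumulation below are illustrative assumptions of this sketch, not the authors' model:

    ```python
    def chain_occupancies(c, e, max_levels=50):
        # Toy Levins chain: each trophic level colonises only patches its
        # prey occupies, and its effective extinction rate grows along the
        # chain because a local population is also lost whenever any level
        # below it goes locally extinct.
        # Levins equilibrium with available habitat h: p* = h - E/c.
        occupancies, habitat = [], 1.0
        for level in range(1, max_levels + 1):
            habitat -= (e * level) / c
            if habitat <= 0:
                break                  # chain cannot support another level
            occupancies.append(habitat)
        return occupancies

    occ = chain_occupancies(c=1.0, e=0.125)
    print(occ)   # [0.875, 0.625, 0.25]: occupancy shrinks up the chain
    ```

    Occupancy declines faster and faster up the chain, so a fourth level finds no viable habitat: the food chain length is capped at three here, which is the metacommunity limit the abstract identifies.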

  8. Coverage-based constraints for IMRT optimization

    Science.gov (United States)

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Full Text Available Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\hat{d}, \hat{v}) of covering a specific target volume fraction \hat{v} with a certain dose \hat{d} . Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
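
    The coverage probability q(d̂, v̂) over explicit error scenarios can be estimated directly; the scenario model below (a Gaussian per-scenario setup error plus voxel noise around a nominal 60 Gy target dose) is a hypothetical stand-in, not the paper's dose calculation:

    ```python
    import numpy as np

    def coverage_probability(scenario_doses, d_hat, v_hat):
        # q(d, v): fraction of error scenarios in which at least a fraction
        # v_hat of target voxels receives a dose of at least d_hat.
        covered = (scenario_doses >= d_hat).mean(axis=1)   # per-scenario fraction
        return float((covered >= v_hat).mean())

    # 500 hypothetical error scenarios over 200 target voxels: a rigid
    # per-scenario setup error plus independent voxel-level noise.
    rng = np.random.default_rng(7)
    doses = 60.0 + rng.normal(0.0, 2.0, (500, 1)) + rng.normal(0.0, 1.0, (500, 200))
    q = coverage_probability(doses, d_hat=57.0, v_hat=0.95)
    print(q)
    ```

    A coverage-based constraint would then require, say, q ≥ 0.9 during optimization, rather than weighting a coverage objective against competing objectives, which is the sensitivity the note warns about.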

  9. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  10. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  11. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to perform sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
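
    The headline number can be checked against the information-theoretic lower bound for comparison sorting: a decision tree sorting 8 distinct elements has 8! leaves, so its average depth cannot be below log2(8!):

    ```python
    import math

    n_perms = math.factorial(8)          # 40320 leaves, one per permutation
    avg_depth = 620160 / n_perms         # minimum average depth from the abstract
    entropy_bound = math.log2(n_perms)   # information-theoretic lower bound
    print(avg_depth, entropy_bound)      # ≈ 15.3810 vs ≈ 15.2992
    ```

    The proven optimum of about 15.381 comparisons on average sits only a fraction of a comparison above the entropy bound of about 15.299, which is why pinning down the exact value required the dynamic-programming machinery the abstract describes.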

  12. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  13. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe

  14. Judgement of Design Scheme Based on Flexible Constraint in ICAD

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The concept of the flexible constraint is proposed in this paper. The solution of a flexible constraint lies within a specified range, and may differ between instances of the same design scheme. The paper emphasizes how to evaluate and optimize a design scheme with flexible constraints, based on the satisfaction degree function defined on the flexible constraints. The concept of the flexible constraint is used to resolve constraint conflicts and to optimize designs in complicated constraint-based assembly design by the PFM parametrization assembly design system. An instance of gear-box design is used to verify the optimization method.
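
    One common way to realize a satisfaction degree function for a flexible constraint is a trapezoidal membership; the shape and the gear-box parameters below are illustrative assumptions, since the paper's exact function is not given here:

    ```python
    def satisfaction(value, hard_lo, ideal_lo, ideal_hi, hard_hi):
        # Trapezoidal satisfaction degree for one flexible constraint:
        # 1 inside the ideal range, falling linearly to 0 at the hard limits.
        if value <= hard_lo or value >= hard_hi:
            return 0.0
        if value < ideal_lo:
            return (value - hard_lo) / (ideal_lo - hard_lo)
        if value > ideal_hi:
            return (hard_hi - value) / (hard_hi - ideal_hi)
        return 1.0

    def scheme_score(values, constraints):
        # Judge a design scheme instance by its weakest flexible constraint.
        return min(satisfaction(v, *c) for v, c in zip(values, constraints))

    # Hypothetical gear-box instance: transmission ratio and centre distance.
    constraints = [(2.0, 3.0, 4.0, 5.0), (80.0, 100.0, 120.0, 150.0)]
    print(scheme_score([3.5, 110.0], constraints))   # 1.0 (both ideal)
    print(scheme_score([2.5, 135.0], constraints))   # 0.5 (weakest constraint wins)
    ```

    Scoring by the minimum over constraints makes every instance of a scheme comparable on one scale, which is what the satisfaction-degree evaluation and optimization described above rely on.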

  15. q-Virasoro constraints in matrix models

    Energy Technology Data Exchange (ETDEWEB)

    Nedelin, Anton [Dipartimento di Fisica, Università di Milano-Bicocca and INFN, sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden); Zabzine, Maxim [Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden)

    2017-03-20

    The Virasoro constraints play an important role in the study of matrix models and in understanding the relation between matrix models and CFTs. Recently, localization calculations in supersymmetric gauge theories have produced new families of matrix models, and we have very limited knowledge about these matrix models. We concentrate on the elliptic generalization of the hermitian matrix model, which corresponds to the calculation of the partition function on S³×S¹ for a vector multiplet. We derive the q-Virasoro constraints for this matrix model. We also observe some interesting algebraic properties of the q-Virasoro algebra.

  16. Constraints on reusability of learning objects

    DEFF Research Database (Denmark)

    May, Michael; Hussmann, Peter Munkebo; Jensen, Anne Skov

    2010-01-01

    It is the aim of this paper to discuss some didactic constraints on the use and reuse of digital modular learning objects. Engineering education is used as the specific context of use with examples from courses in introductory electronics and mathematics. Digital multimedia and modular learning....... Constraints on reuse arise from the nature of conceptual understanding in higher education and the functionality of learning objects within present technologies. We will need didactic as well as technical perspectives on learning objects in designing for understanding....

  17. Constraints on hadronically decaying dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Garny, Mathias [Technische Univ. Muenchen, Garching (Germany). Physik-Department; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Ibarra, Alejandro [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Tran, David [Technische Univ. Muenchen, Garching (Germany). Physik-Department; Minnesota Univ., Minneapolis, MN (United States). School of Physics and Astronomy

    2012-05-15

    We present general constraints on dark matter stability in hadronic decay channels derived from measurements of cosmic-ray antiprotons. We analyze various hadronic decay modes in a model-independent manner by examining the lowest-order decays allowed by gauge and Lorentz invariance for scalar and fermionic dark matter particles and present the corresponding lower bounds on the partial decay lifetimes in those channels. We also investigate the complementarity between hadronic and gamma-ray constraints derived from searches for monochromatic lines in the sky, which can be produced at the quantum level if the dark matter decays into quark-antiquark pairs at leading order.

  18. Constraints on hadronically decaying dark matter

    International Nuclear Information System (INIS)

    Garny, Mathias; Ibarra, Alejandro; Tran, David; Minnesota Univ., Minneapolis, MN

    2012-05-01

    We present general constraints on dark matter stability in hadronic decay channels derived from measurements of cosmic-ray antiprotons. We analyze various hadronic decay modes in a model-independent manner by examining the lowest-order decays allowed by gauge and Lorentz invariance for scalar and fermionic dark matter particles and present the corresponding lower bounds on the partial decay lifetimes in those channels. We also investigate the complementarity between hadronic and gamma-ray constraints derived from searches for monochromatic lines in the sky, which can be produced at the quantum level if the dark matter decays into quark-antiquark pairs at leading order.

  19. Orthology and paralogy constraints: satisfiability and consistency

    OpenAIRE

    Lafond, Manuel; El-Mabrouk, Nadia

    2014-01-01

Background: A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a ...

  20. What's in a country average? Wealth, gender, and regional inequalities in immunization in India.

    Science.gov (United States)

    Pande, Rohini P; Yazbeck, Abdo S

    2003-12-01

    Recent attention to Millennium Development Goals by the international development community has led to the formation of targets to measure country-level achievements, including achievements on health status indicators such as childhood immunization. Using the example of immunization in India, this paper demonstrates the importance of disaggregating national averages for a better understanding of social disparities in health. Specifically, the paper uses data from the India National Family Health Survey 1992-93 to analyze socioeconomic, gender, urban-rural and regional inequalities in immunization in India for each of the 17 largest states. Results show that, on average, southern states have better immunization levels and lower immunization inequalities than many northern states. Wealth and regional inequalities are correlated with overall levels of immunization in a non-linear fashion. Gender inequalities persist in most states, including in the south, and seem unrelated to overall immunization or the levels of other inequalities measured here. This suggests that the gender differentials reflect deep-seated societal factors rather than health system issues per se. The disaggregated information and analysis used in this paper allows for setting more meaningful targets than country averages. Additionally, it helps policy makers and planners to understand programmatic constraints and needs by identifying disparities between sub-groups of the population, including strong and weak performers at the state and regional levels.

  1. BDDC Algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

    KAUST Repository

    Oh, Duk-Soon; Widlund, Olof B.; Zampini, Stefano; Dohrmann, Clark R.

    2017-01-01

    A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced.

  2. BDDC Algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

    KAUST Repository

    Oh, Duk-Soon

    2017-06-13

    A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced.

  3. Optimal power allocation of a sensor node under different rate constraints

    KAUST Repository

    Ayala Solares, Jose Roberto

    2012-06-01

    The optimal transmit power of a sensor node while satisfying different rate constraints is derived. First, an optimization problem with an instantaneous transmission rate constraint is addressed. Next, the optimal power is analyzed, but now with an average transmission rate constraint. The optimal solution for a class of fading channels, in terms of system parameters, is presented and a suboptimal solution is also proposed for an easier, yet efficient, implementation. Insightful asymptotical analysis for both schemes, considering a Rayleigh fading channel, are shown. Finally, the optimal power allocation for a sensor node in a cognitive radio environment is analyzed where an optimum solution for a class of fading channels is again derived. In all cases, numerical results are provided for either Rayleigh or Nakagami-m fading channels. © 2012 IEEE.

  4. Distance measurements from supernovae and dark energy constraints

    International Nuclear Information System (INIS)

    Wang Yun

    2009-01-01

Constraints on dark energy from current observational data are sensitive to how distances are measured from Type Ia supernova (SN Ia) data. We find that flux averaging of SNe Ia can be used to test the presence of unknown systematic uncertainties and yields more robust distance measurements from SNe Ia. We have applied this approach to the nearby+SDSS+ESSENCE+SNLS+HST set of 288 SNe Ia and the 'Constitution' set of 397 SNe Ia. Combining the SN Ia data with cosmic microwave background anisotropy data from Wilkinson Microwave Anisotropy Probe 5 yr observations, the Sloan Digital Sky Survey baryon acoustic oscillation measurements, the data of 69 gamma-ray bursts (GRBs), and the Hubble constant measurement from the Hubble Space Telescope project SHOES, we measure the dark energy density function X(z) ≡ ρ_X(z)/ρ_X(0) as a free function of redshift (assumed to be a constant at z > 1 or z > 1.5). Without the flux averaging of SNe Ia, the combined data using the Constitution set of SNe Ia seem to indicate a deviation from a cosmological constant at ∼95% confidence level at 0 < z ≤ 0.75; X(z) is consistent with a cosmological constant at 98% confidence level for z ≤ 0.75 using the combined data with 288 SNe Ia from nearby+SDSS+ESSENCE+SNLS+HST, independent of the assumptions about X(z ≥ 1). We quantify dark energy constraints without assuming a flat Universe using the dark energy figure of merit for both X(z) and a dark energy equation-of-state linear in the cosmic scale factor.

  5. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders, Williams syndrome, achondroplasia and Sotos syndrome, as well as from a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
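
The depth-coordinate averaging described above can be sketched in a few lines (toy data; real use assumes the scans are registered on a common (x, y) grid, and all names and values here are illustrative, not the study's software):

```python
# Minimal sketch of depth-map averaging: for registered 3D scans sampled on a
# common (x, y) grid, average the corresponding z (depth) values cell by cell.

def average_depth_maps(scans):
    """scans: list of equally sized 2D lists of z-coordinates."""
    rows, cols = len(scans[0]), len(scans[0][0])
    n = len(scans)
    return [[sum(s[r][c] for s in scans) / n for c in range(cols)]
            for r in range(rows)]

# Two hypothetical 2x2 depth maps standing in for registered facial scans:
face_a = [[1.0, 2.0], [3.0, 4.0]]
face_b = [[3.0, 2.0], [1.0, 0.0]]
print(average_depth_maps([face_a, face_b]))  # [[2.0, 2.0], [2.0, 2.0]]
```

Because only z-values are combined, no warping or interpolation is involved, which matches the method's stated trade-off: simple averaging, but no colour information.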

  6. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
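
The note's central observation, that each classical average is the intercept of a least-squares regression on a suitably transformed response, can be sketched as follows (a hypothetical illustration, not the authors' examples):

```python
import math

def intercept_only_ols(y):
    """OLS with a constant regressor only: minimizing sum((y_i - b)**2)
    gives b = (1'y)/(1'1), i.e. the arithmetic mean of y."""
    return sum(y) / len(y)

data = [2.0, 8.0, 4.0]

arith = intercept_only_ols(data)                                   # regress y on a constant
geom = math.exp(intercept_only_ols([math.log(v) for v in data]))   # regress log(y), then exp
harm = 1.0 / intercept_only_ols([1.0 / v for v in data])           # regress 1/y, then invert

print(arith, geom, harm)  # 4.666..., 4.0, 3.428...
```

Weighted averages follow the same pattern with weighted least squares in place of OLS.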

  7. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  8. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  9. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  10. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  11. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  12. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  13. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  14. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

In this paper the average energy and magnetic moment conservation laws in the drift theory of charged-particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  15. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
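
As a quick sanity check on the quoted value, the fraction can be evaluated exactly and compared with the information-theoretic lower bound (a sketch; the bound 620160/8! itself is taken from the abstract):

```python
from fractions import Fraction
from math import factorial, log2

# The abstract's exact minimum average depth for sorting 8 pairwise
# different elements:
min_avg_depth = Fraction(620160, factorial(8))   # reduces to 323/21
print(min_avg_depth, float(min_avg_depth))

# Entropy lower bound on average depth of any comparison-based sorter:
print(log2(factorial(8)))  # log2(8!) is roughly 15.3
```

The optimal average depth 323/21 ≈ 15.38 sits just above the entropy bound log2(8!) ≈ 15.30, as it must.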

  16. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  17. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  18. Vandenberg Air Force Base Upper Level Wind Launch Weather Constraints

    Science.gov (United States)

    Shafer, Jaclyn A.; Wheeler, Mark M.

    2012-01-01

    The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF tasked the Applied Meteorology Unit (AMU) to analyze VAFB sounding data with the goal of determining the probability of violating (PoV) their upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a tool that will calculate the PoV of each constraint on the day of launch. In order to calculate the probability of exceeding each constraint, the AMU collected and analyzed historical data from VAFB. The historical sounding data were retrieved from the National Oceanic and Atmospheric Administration Earth System Research Laboratory archive for the years 1994-2011 and then stratified into four sub-seasons: January-March, April-June, July-September, and October-December. The maximum wind speed and 1000-ft shear values for each sounding in each subseason were determined. To accurately calculate the PoV, the AMU determined the theoretical distributions that best fit the maximum wind speed and maximum shear datasets. Ultimately it was discovered that the maximum wind speeds follow a Gaussian distribution while the maximum shear values follow a lognormal distribution. These results were applied when calculating the averages and standard deviations needed for the historical and real-time PoV calculations. In addition to the requirements outlined in the original task plan, the AMU also included forecast sounding data from the Rapid Refresh model. This information provides further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours on day of launch. The interactive graphical user interface (GUI) for this project was developed in
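
The distribution fits described above turn into PoV numbers via survival functions: Gaussian for maximum wind speed, lognormal for maximum shear. A minimal sketch with hypothetical fitted parameters and thresholds (not the AMU's actual tool or Minuteman III limits):

```python
import math

def pov_gaussian(threshold, mu, sigma):
    """P(X > threshold) for X ~ Normal(mu, sigma): Gaussian survival function."""
    z = (threshold - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

def pov_lognormal(threshold, mu_log, sigma_log):
    """P(X > threshold) when ln X ~ Normal(mu_log, sigma_log)."""
    return pov_gaussian(math.log(threshold), mu_log, sigma_log)

# Illustrative parameters only; real values come from the sub-seasonal fits:
p_wind = pov_gaussian(120.0, 80.0, 25.0)   # max wind speed vs. a 120-kt threshold
p_shear = pov_lognormal(0.05, -4.0, 0.6)   # max 1000-ft shear vs. a threshold
print(p_wind, p_shear)
```

In the real tool the (mu, sigma) pairs would be re-estimated per sub-season from the 1994-2011 sounding archive, and on day of launch from forecast soundings.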

  19. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, the increase of polynomial degree does not require to change the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
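
The first-order case discussed above can be sketched directly: detrend with a simple moving average, measure the residual standard deviation sigma(n) over several window sizes n, and read H off the log-log slope (an illustrative implementation, not the authors' code; the toy series is a seeded random walk):

```python
import math
import random

def dma_hurst(y, windows):
    """First-order detrending moving average: sigma(n) ~ n**H.
    Returns the least-squares slope of log sigma vs. log n."""
    logs_n, logs_s = [], []
    for n in windows:
        resid2, count = 0.0, 0
        for i in range(n - 1, len(y)):
            ma = sum(y[i - n + 1:i + 1]) / n   # backward moving average of width n
            resid2 += (y[i] - ma) ** 2
            count += 1
        logs_n.append(math.log(n))
        logs_s.append(math.log(math.sqrt(resid2 / count)))
    m = len(logs_n)
    xbar, ybar = sum(logs_n) / m, sum(logs_s) / m
    num = sum((x - xbar) * (s - ybar) for x, s in zip(logs_n, logs_s))
    den = sum((x - xbar) ** 2 for x in logs_n)
    return num / den

random.seed(0)
walk, z = [], 0.0
for _ in range(5000):
    z += random.gauss(0.0, 1.0)
    walk.append(z)

H = dma_hurst(walk, [4, 8, 16, 32, 64])
print(round(H, 2))  # typically close to 0.5 for ordinary Brownian motion
```

Higher-order variants replace the moving average with a moving polynomial fit of the stated degree, over the same window, which is exactly the property the abstract highlights.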

  20. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.
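
For concreteness, the q-average discussed above is the normalized (escort) expectation of nonextensive statistical mechanics; a minimal sketch with illustrative numbers:

```python
def q_average(values, probs, q):
    """Normalized (escort) q-expectation:
    <A>_q = sum_i p_i**q * A_i / sum_i p_i**q; q = 1 recovers the ordinary mean."""
    weights = [p ** q for p in probs]
    return sum(w * a for w, a in zip(weights, values)) / sum(weights)

A = [1.0, 2.0, 3.0]
p = [0.5, 0.3, 0.2]
print(q_average(A, p, 1.0))  # ordinary expectation, 1.7
print(q_average(A, p, 2.0))  # q > 1 weights high-probability states more heavily
```

The instability result concerns how this quantity responds to small deformations of p; the definition itself is the standard one.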

  1. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
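
The pre-averaging step itself can be sketched as a weighted sum of returns over every overlapping block of kn consecutive observations (a sketch only, using the weight g(x) = min(x, 1 − x) common in this literature; the bias-corrected realized-volatility estimator and the bootstrap built on top of these quantities are omitted):

```python
def preaveraged_returns(r, kn):
    """Pre-averaged returns over all overlapping blocks of kn consecutive
    observations, with weight function g(x) = min(x, 1 - x)."""
    g = [min(j / kn, 1.0 - j / kn) for j in range(1, kn)]
    return [sum(w * r[i + j] for j, w in enumerate(g))
            for i in range(len(r) - kn + 2)]

# Toy high-frequency returns; in practice kn grows slowly with the sample size n.
print(preaveraged_returns([0.01, -0.02, 0.015, 0.005], 3))
```

The overlap between consecutive blocks is what makes the resulting series kn-dependent, which is precisely why a blockwise bootstrap is needed.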

  2. Reduction of Constraints: Applicability of the Homogeneity Constraint for Macrobatch 3

    International Nuclear Information System (INIS)

    Peeler, D.K.

    2001-01-01

The Product Composition Control System (PCCS) is used to determine the acceptability of each batch of Defense Waste Processing Facility (DWPF) melter feed in the Slurry Mix Evaporator (SME). This control system imposes several constraints on the composition of the contents of the SME to define acceptability. These constraints relate process or product properties to composition via prediction models. A SME batch is deemed acceptable if its sample composition measurements lead to acceptable property predictions after accounting for modeling, measurement and analytic uncertainties. The baseline document guiding the use of these data and models is "SME Acceptability Determination for DWPF Process Control (U)" by Brown and Postles [1996]. A minimum of three PCCS constraints support the prediction of the glass durability from a given SME batch. The Savannah River Technology Center (SRTC) is reviewing all of the PCCS constraints associated with durability. The purpose of this review is to revisit these constraints in light of the additional knowledge gained since the beginning of radioactive operations at DWPF and to identify any supplemental studies needed to amplify this knowledge so that redundant or overly conservative constraints can be eliminated or replaced by more appropriate constraints.

  3. Loosening Psychometric Constraints on Educational Assessments

    Science.gov (United States)

    Kane, Michael T.

    2017-01-01

    In response to an argument by Baird, Andrich, Hopfenbeck and Stobart (2017), Michael Kane states that there needs to be a better fit between educational assessment and learning theory. In line with this goal, Kane will examine how psychometric constraints might be loosened by relaxing some psychometric "rules" in some assessment…

  4. Fish farmers' perceptions of constraints affecting aquaculture ...

    African Journals Online (AJOL)

    The study focused on fish farmers' perceptions of constraints affecting aquaculture development in Akwa-Ibom State of Nigeria. Random sampling procedure was used to select 120 respondents from whom primary data was collected. Data analysis was with the aid of descriptive statistics. Results show that fish farming ...

  5. Confluence Modulo Equivalence in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Kirkeby, Maja Hanne

    2015-01-01

    Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the advantages of confluence, which include various optimization techniques and simplified...

  6. Domain general constraints on statistical learning.

    Science.gov (United States)

    Thiessen, Erik D

    2011-01-01

    All theories of language development suggest that learning is constrained. However, theories differ on whether these constraints arise from language-specific processes or have domain-general origins such as the characteristics of human perception and information processing. The current experiments explored constraints on statistical learning of patterns, such as the phonotactic patterns of an infants' native language. Infants in these experiments were presented with a visual analog of a phonotactic learning task used by J. R. Saffran and E. D. Thiessen (2003). Saffran and Thiessen found that infants' phonotactic learning was constrained such that some patterns were learned more easily than other patterns. The current results indicate that infants' learning of visual patterns shows the same constraints as infants' learning of phonotactic patterns. This is consistent with theories suggesting that constraints arise from domain-general sources and, as such, should operate over many kinds of stimuli in addition to linguistic stimuli. © 2011 The Author. Child Development © 2011 Society for Research in Child Development, Inc.

  7. Constraint-Referenced Analytics of Algebra Learning

    Science.gov (United States)

    Sutherland, Scot M.; White, Tobin F.

    2016-01-01

    The development of the constraint-referenced analytics tool for monitoring algebra learning activities presented here came from the desire to firstly, take a more quantitative look at student responses in collaborative algebra activities, and secondly, to situate those activities in a more traditional introductory algebra setting focusing on…

  8. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

    We study a robust maximization problem from terminal wealth and consumption under a convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle

  9. Prospects and Constraints of Household Irrigation Practices ...

    African Journals Online (AJOL)

    Constraints and prospects of hand dug wells related to household irrigation were assessed in Hayelom watershed (~1045 ha), by evaluating groundwater suitability for irrigation, soil quality and impact of intervention. 181 hand dug wells have come into existence in the watershed due to intervention and benefiting about ...

  10. Primordial black holes survive SN lensing constraints

    Science.gov (United States)

    García-Bellido, Juan; Clesse, Sébastien; Fleury, Pierre

    2018-06-01

It has been claimed in [arXiv:1712.02240] that massive primordial black holes (PBH) cannot constitute all of the dark matter (DM), because their gravitational-lensing imprint on the Hubble diagram of type Ia supernovae (SN) would be incompatible with present observations. In this note, we critically review those constraints and find several caveats in the analysis. First of all, the constraints on the fraction α of PBH in matter seem to be driven by a very restrictive choice of priors on the cosmological parameters. In particular, the degeneracy between Ωm and α was ignored; fixing Ωm thus transferred the constraining power of SN magnitudes to α. Furthermore, by considering more realistic physical sizes for the type-Ia supernovae, we find an effect on the SN lensing magnification distribution that leads to significantly looser constraints. Moreover, considering a wide mass spectrum of PBH, such as a lognormal distribution, further softens the constraints from SN lensing. Finally, we find that the fraction of PBH that could constitute DM today is bounded by f_PBH < 1.09 (1.38) for the JLA (Union 2.1) catalogs, and thus it is perfectly compatible with an all-PBH dark matter scenario in the LIGO band.

  11. MatrixPlot: visualizing sequence constraints

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Stærfeldt, Hans Henrik; Lund, Ole

    1999-01-01

MatrixPlot is a program for making high-quality matrix plots, such as mutual information plots of sequence alignments and distance matrices of sequences with known three-dimensional coordinates. The user can add information...

  12. Constraint-induced movement therapy after stroke

    NARCIS (Netherlands)

    Kwakkel, G.; Veerbeek, J.M.; van Wegen, E.E.H.; Wolf, S.L.

    2015-01-01

    Constraint-induced movement therapy (CIMT) was developed to overcome upper limb impairments after stroke and is the most investigated intervention for the rehabilitation of patients. Original CIMT includes constraining of the non-paretic arm and task-oriented training. Modified versions also apply

  13. Institutional and resource constraints that inhibit contractor ...

    African Journals Online (AJOL)

    Results show that contractors face institutional constraints (work allocation limitations, lack of performance incentives and high transaction costs, such as negotiation costs, the risk of a loss in work and contract default risk), cash flow problems, poor physical infrastructure and a lack of labour. It is expected that the promotion ...

  14. Sustainability constraints on UK bioenergy development

    International Nuclear Information System (INIS)

    Thornley, Patricia; Upham, Paul; Tomei, Julia

    2009-01-01

    Use of bioenergy as a renewable resource is increasing in many parts of the world and can generate significant environmental, economic and social benefits if managed with due regard to sustainability constraints. This work reviews the environmental, social and economic constraints on key feedstocks for UK heat, power and transport fuel. Key sustainability constraints include greenhouse gas savings achieved for different fuels, land availability, air quality impacts and facility siting. Applying those constraints, we estimate that existing technologies would facilitate a sustainability constrained level of medium-term bioenergy/biofuel supply to the UK of 4.9% of total energy demand, broken down into 4.3% of heat demands, 4.3% of electricity, and 5.8% of transport fuel. This suggests that attempts to increase the supply above these levels could have counterproductive sustainability impacts in the absence of compensating technology developments or identification of additional resources. The barriers that currently prevent this level of supply being achieved have been analysed and classified. This suggests that the biggest policy impacts would be in stimulating the market for heat demand in rural areas, supporting feedstock prices in a manner that incentivised efficient use/maximum greenhouse gas savings and targeting investment capital that improves yield and reduces land-take.

  15. Industrial capacity is not a constraint

    International Nuclear Information System (INIS)

    Walske, C.

    1977-01-01

    The improved rate at which nuclear power plants are likely to be ordered in the next two years will still be well below the annual level needed to meet official planning assumptions. Industry's capability is not a constraint but the government should be more positive on nuclear power, licensing and the fuel cycle. (author)

  16. Hours Constraints Within and Between Jobs

    NARCIS (Netherlands)

    Euwals, R.W.

    1997-01-01

    In the empirical literature on labour supply, several models are developed to incorporate constraints on working hours. These models do not address the question to which extent working hours are constrained within and between jobs. In this paper I investigate the effect of individual changes in

  17. Management practices and production constraints of central ...

    African Journals Online (AJOL)

    management practices of central highland goats and their major constraints. ... tance to improve the goat production potential and livelihood of the farmers in the study ... ing the productivity and income from keeping goats, there is a study gap in ..... and day time, possibly increasing the chance of getting contagious diseases.

  18. Near-Optimal Fingerprinting with Constraints

    Directory of Open Access Journals (Sweden)

    Gulyás Gábor György

    2016-10-01

Several recent studies have demonstrated that people show large behavioural uniqueness. This has serious privacy implications, as most individuals become increasingly re-identifiable in large datasets, or can be tracked while they are browsing the web, using only a couple of their attributes, known as their fingerprints. Often, the success of these attacks depends on explicit constraints on the number of attributes learnable about individuals, i.e., the size of their fingerprints. These constraints can be budget constraints as well as technical constraints imposed by the data holder. For instance, Apple restricts the number of applications that can be called by another application on iOS in order to mitigate the potential privacy threats of leaking the list of installed applications on a device. In this work, we address the problem of identifying the attributes (e.g., smartphone applications) that can serve as a fingerprint of users given constraints on the size of the fingerprint. We give the best fingerprinting algorithms in general and evaluate their effectiveness on several real-world datasets. Our results show that current privacy guards limiting the number of attributes that can be queried about individuals are insufficient to mitigate their potential privacy risks in many practical cases.
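
A simple greedy baseline for the constrained fingerprint-selection problem described above (illustrative only; the paper's near-optimal algorithms refine this idea, and the toy data are hypothetical):

```python
def greedy_fingerprint(records, k):
    """Greedily choose up to k attribute indices maximizing the number of
    distinct fingerprints over the records (a common baseline heuristic)."""
    n_attrs = len(records[0])
    chosen = []
    for _ in range(k):
        best, best_count = None, -1
        for a in range(n_attrs):
            if a in chosen:
                continue
            distinct = len({tuple(r[i] for i in chosen + [a]) for r in records})
            if distinct > best_count:
                best, best_count = a, distinct
        chosen.append(best)
        if best_count == len(records):  # every record already uniquely identified
            break
    return chosen

# Toy dataset: rows are users, columns are binary attributes (hypothetical).
users = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
print(greedy_fingerprint(users, 2))  # here two attributes make all users unique
```

The size constraint k plays the role of the budget or technical limit (e.g., the number of application queries permitted on a device).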

  19. Language-universal constraints on speech segmentation

    NARCIS (Netherlands)

    Norris, D.; McQueen, J.M.; Cutler, A.; Butterfield, S.; Kearns, R.K.

    2001-01-01

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC; Norris, McQueen, Cutler & Butterfield, 1997) is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavors parses which leave an impossible residue

  20. Neuroplasticity in Constraint-Induced Movement Therapy

    DEFF Research Database (Denmark)

    Blicher, Jakob; Near, Jamie; Næss-Schmidt, Erhard

    2014-01-01

    In healthy subjects, decreasing GABA facilitates motor learning[1]. Recent studies, using PET[2], TMS[3-5], and pharmacological challenges[6], have pointed indirectly to a decrease in neuronal inhibitory activity after stroke. Therefore, we hypothesize that a suppression of GABA levels post stroke might be beneficial to motor recovery during Constraint-Induced Movement Therapy (CIMT).

  1. Optimal Environmental Policy Differentials under Emissions Constraints

    NARCIS (Netherlands)

    Florax, R.J.G.M.; Mulatu, A.; Withagen, C.A.A.M.

    2007-01-01

    Is there a case for preferential treatment of the exposed sector in an economy when compliance to an aggregate emissions constraint induced by an international environmental agreement is mandatory? This question is being debated in many countries in the context of the implementation of the Kyoto

  2. An examination of constraints to wilderness visitation

    Science.gov (United States)

    Gary T. Green; J. Michael Bowker; Cassandra Y. Johnson; H. Ken Cordell; Xiongfei Wang

    2007-01-01

    Certain social groups appear notably less in wilderness visitation surveys than their population proportion. This study examines whether different social groups in American society (minorities, women, rural dwellers, low income and less educated populations) perceive more constraints to wilderness visitation than other groups. Logistic regressions were fit to data from...

  3. Groundwater for sustainable development opportunities and constraints

    International Nuclear Information System (INIS)

    Abdel Rahman Attia, F.

    1999-01-01

    This paper discusses water resources availability and demand; concept and constraints of sustainable development; ground water protection. Water issues specific for arid zones and the network on ground water protection in the Arab region are discussed. Recommendations on ground water protection in arid zones are given

  4. Borrowing constraints, multiple equilibria and monetary policy

    NARCIS (Netherlands)

    Assenza, T.

    2007-01-01

    The appealing feature of Kiyotaki and Moore's Financial Accelerator model (Kiyotaki and Moore, 1997, 2002) is the linkage of asset price changes and borrowing constraints. This framework therefore is the natural vehicle to explore the net worth channel of the monetary transmission mechanism. In the

  5. Choice within Constraints: Mothers and Schooling.

    Science.gov (United States)

    David, Miriam; Davies, Jackie; Edwards, Rosalind; Reay, Diane; Standing, Kay

    1997-01-01

    Explores, from a feminist perspective, the discourses of choice regarding how women make their choices as consumers in the education marketplace. It argues that mothers as parents are not free to choose but act within a range of constraints, i.e., their choices are limited by structural and moral possibilities in a patriarchal and racist society.…

  6. Cognitive Dissonance Reduction as Constraint Satisfaction.

    Science.gov (United States)

    Shultz, Thomas R.; Lepper, Mark R.

    1996-01-01

    It is argued that the reduction of cognitive dissonance can be viewed as a constraint satisfaction problem, and a computational model of the process of consonance seeking is proposed. Simulations from this model matched psychological findings from the insufficient justification and free-choice paradigms of cognitive dissonance theory. (SLD)

  7. Data Driven Constraints for the SVM

    DEFF Research Database (Denmark)

    Darkner, Sune; Clemmensen, Line Katrine Harder

    2012-01-01

    We propose a generalized data driven constraint for support vector machines exemplified by classification of paired observations in general and specifically on the human ear canal. This is particularly interesting in dynamic cases such as tissue movement or pathologies developing over time. Assum...

  8. Reinforcement, Behavior Constraint, and the Overjustification Effect.

    Science.gov (United States)

    Williams, Bruce W.

    1980-01-01

    Four levels of the behavior constraint-reinforcement variable were manipulated: attractive reward, unattractive reward, request to perform, and a no-reward control. Only the unattractive reward and request groups showed the performance decrements that suggest the overjustification effect. It is concluded that reinforcement does not cause the…

  9. Affordability Constraints in Major Defense Acquisitions

    Science.gov (United States)

    2016-11-01

    memo, does not provide a detailed recipe for those who must produce quantitative affordability constraints. Enclosure 8 of the January 7, 2015 version...3.0's full title includes "Achieving Dominant Capabilities...

  10. Perceptual Constraints on Infant Memory Retrieval.

    Science.gov (United States)

    Gerhardstein, Peter; Liu, Jane; Rovee-Collier, Carolyn

    1998-01-01

    Three experiments examined characteristics of a stimulus-cueing retrieval from long-term memory for 3-month olds. Used mobiles displaying either Qs (feature-present stimuli) or Os (feature-absent stimuli) and tested 24 hours later. Findings indicated that target-distractor similarity constraints, whether or not a feature-present stimulus, would…

  11. On the canonical treatment of Lagrangian constraints

    International Nuclear Information System (INIS)

    Barbashov, B.M.

    2001-01-01

    The canonical treatment of dynamic systems with manifest Lagrangian constraints proposed by Berezin is applied to concrete examples: a special Lagrangian linear in velocities, relativistic particles in proper time gauge, a relativistic string in orthonormal gauge, and the Maxwell field in the Lorentz gauge

  12. CONSTRAINTS AND PROBLEMS OF INTERNET SERVICES IN ...

    African Journals Online (AJOL)

    In spite of the benefits of the Internet to learning, teaching and research a number of difficulties still bedevil the provision of services in Nigeria. The objective of this study was to examine the constraints and problems of Internet Services at the Obafemi Awolowo University, Ile-Ife. Questionnaires were administered to ...

  13. Ability or Finances as Constraints on Entrepreneurship?

    DEFF Research Database (Denmark)

    Andersen, Steffen; Meisner Nielsen, Kasper

    2012-01-01

    We use a natural experiment in Denmark to test the hypothesis that aspiring entrepreneurs face financial constraints because of low entrepreneurial quality. We identify 304 constrained entrepreneurs who start a business after receiving windfall wealth and examine the performance of these marginal...

  15. Quantum centipedes with strong global constraint

    Science.gov (United States)

    Grange, Pascal

    2017-06-01

    A centipede made of N quantum walkers on a one-dimensional lattice is considered. The distance between two consecutive legs is either one or two lattice spacings, and a global constraint is imposed: the maximal distance between the first and last leg is N + 1. This is the strongest global constraint compatible with walking. For an initial value of the wave function corresponding to a localized configuration at the origin, the probability law of the first leg of the centipede can be expressed in closed form in terms of Bessel functions. The dispersion relation and the group velocities are worked out exactly. The maximal group velocity goes to zero when N goes to infinity, which is in contrast with the behaviour of group velocities of quantum centipedes without global constraint, which were recently shown by Krapivsky, Luck and Mallick to give rise to ballistic spreading of the extremal wave front at non-zero velocity in the large-N limit. The corresponding Hamiltonians are implemented numerically, based on a block structure of the space of configurations corresponding to compositions of the integer N. The growth of the maximal group velocity when the strong constraint is gradually relaxed is explored, and observed to be linear in the density of gaps allowed in the configurations. Heuristic arguments are presented to infer that the large-N limit of the globally constrained model can yield finite group velocities provided the allowed number of gaps is a finite fraction of N.

  16. Egalitarian Risk Sharing under Liquidity Constraints

    NARCIS (Netherlands)

    Koster, M.; Boonen, T.

    2014-01-01

    Undertaking joint projects in practice involves a lot of uncertainty, especially when it comes to the final costs. This paper addresses the problem of sharing realized costs among the participants, subject to their individual liquidity constraints. If all cost levels can be accounted for, and if the

  17. Constraints on the CP-Violating MSSM

    CERN Document Server

    Arbey, A; Godbole, R M; Mahmoudi, F

    2016-01-01

    We discuss the prospects for observing CP violation in the MSSM with six CP-violating phases, using a geometric approach to maximise CP-violating observables subject to the experimental upper bounds on electric dipole moments. We consider constraints from Higgs physics, flavour physics, the dark matter relic density and spin-independent scattering cross section with matter.

  18. Constraints To Effective Community Development Projects Among ...

    African Journals Online (AJOL)

    The study focused on the perceived constraints to effective community development projects among rural households in Calabar agricultural zone of Cross River State, Nigeria. Data were collected with the aid of structured questionnaire from 104 randomly selected respondents in the study area. Data analysis was by the ...

  19. Nuclear safety: an operational constraint or necessity

    International Nuclear Information System (INIS)

    Gauvenet, A.

    1983-01-01

    Different aspects of nuclear safety in the operation of power stations are analysed. There is always a danger that safety is considered a constraint at the operator level, but it is essential that human factors and working conditions be taken into consideration

  20. Tilapia culture in Kuwait: constraints and solutions

    OpenAIRE

    Ridha, M.T.

    2006-01-01

    Tilapia farming in Kuwait is in its early stages. Slow growth, high production cost and poor demand are the major constraints to the expansion of tilapia culture in Kuwait. This article presents some suggestions for overcoming these problems to improve the economic feasibility of tilapia culture in Kuwait.

  1. Constraints on low energy Compton scattering amplitudes

    International Nuclear Information System (INIS)

    Raszillier, I.

    1979-04-01

    We derive the constraints and correlations of fairly general type for Compton scattering amplitudes at energies below photoproduction threshold and fixed momentum transfer, following from (an upper bound on) the corresponding differential cross section above photoproduction threshold. The derivation involves the solution of an extremal problem in a certain space of vector - valued analytic functions. (author)

  2. A Microkernel Architecture for Constraint Programming

    OpenAIRE

    Michel, Laurent; Van Hentenryck, Pascal

    2014-01-01

    This paper presents a microkernel architecture for constraint programming organized around a small number of core functionalities and minimal interfaces. The architecture contrasts with the monolithic nature of many implementations. Experimental results indicate that the software engineering benefits are not incompatible with runtime efficiency.

  4. Confluence Modulo Equivalence in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Kirkeby, Maja Hanne

    2014-01-01

    Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the ad- vantages of confluence, which include various optimization techniques and simplified...

  5. Constraint Embedding Technique for Multibody System Dynamics

    Science.gov (United States)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra formulation of the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus, in essence, the new technique allows conversion of a system with
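
    The three-step correction (a)-(c) can be sketched under strong simplifying assumptions: a diagonal mass matrix and a single scalar acceleration-level closure constraint A·a = b. This is an illustrative outline of the free-then-correct idea, not the spatial-operator-algebra implementation described above:

    ```python
    def constrained_accel(masses, forces, A, b):
        """Free accelerations, then a closure-constraint correction (steps a-c)."""
        n = len(masses)
        # (a) ignore the closure constraint: unconstrained accelerations a_free = M^-1 f
        a_free = [forces[i] / masses[i] for i in range(n)]
        # (b) correction force magnitude (Lagrange multiplier) enforcing A . a = b
        A_Minv_At = sum(A[i] * A[i] / masses[i] for i in range(n))
        lam = (b - sum(A[i] * a_free[i] for i in range(n))) / A_Minv_At
        # (c) correct the free accelerations with A^T * lam mapped through M^-1
        return [a_free[i] + A[i] * lam / masses[i] for i in range(n)]

    # Two bodies whose accelerations must match (a toy "closure": a1 - a2 = 0)
    a = constrained_accel(masses=[1.0, 2.0], forces=[3.0, -1.0], A=[1.0, -1.0], b=0.0)
    ```

    After the correction step, the accelerations satisfy the closure constraint exactly, which is the point of step (c).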

  6. SOLAR WIND PROTONS AT 1 AU: TRENDS AND BOUNDS, CONSTRAINTS AND CORRELATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Hellinger, Petr; Trávníček, Pavel M., E-mail: petr.hellinger@asu.cas.cz [Astronomical Institute, AS CR, Bocni II/1401,CZ-14100 Prague (Czech Republic)

    2014-03-20

    The proton temperature anisotropy in the solar wind exhibits apparent bounds which are compatible with the theoretical constraints imposed by temperature-anisotropy-driven kinetic instabilities. Recent statistical analyses based on conditional averaging indicate that near these theoretical constraints the solar wind protons typically have enhanced temperatures and a weaker collisionality. Here we carefully analyze the solar wind data and show that these results are a consequence of superposition of multiple correlations in the solar wind, namely, they mostly result from the correlation between the proton temperature and the solar wind velocity and from the superimposed anti-correlation between the proton temperature anisotropy and the proton parallel beta in the fast solar wind. Colder and more collisional data are distributed around temperature isotropy whereas hotter and less collisional data have a wider range of the temperature anisotropy anti-correlated with the proton parallel beta with signatures of constraints owing to the temperature-anisotropy-driven instabilities. However, most of the hot and weakly collisional data, including the hottest and least collisional ones, lie far from the marginal stability regions. Consequently, we conclude that there is no clear relation between the enhanced temperatures and instability constraints and that the conditional averaging used for these analyses must be used carefully and needs to be well tested.

  7. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we equivalently convert it into a one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
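
    The core SAA step, replacing an expectation with a sample mean before optimizing, can be sketched on a toy one-stage problem. The newsvendor-style cost, its coefficients, and the demand distribution below are invented for illustration and are not from the paper:

    ```python
    import random

    random.seed(0)

    # Toy cost: holding cost 1 per unit over demand, shortage cost 4 per unit under.
    def cost(q, d):
        return 1.0 * max(q - d, 0.0) + 4.0 * max(d - q, 0.0)

    # Draw one sample of the random demand; the SAA objective is the sample mean cost.
    demand = [random.gauss(100.0, 20.0) for _ in range(5000)]

    def saa_objective(q):
        return sum(cost(q, d) for d in demand) / len(demand)

    # Minimize the sample average over a grid of candidate order quantities.
    q_best = min(range(50, 151), key=saa_objective)
    ```

    With holding/shortage costs 1 and 4, the true optimum sits near the 80th percentile of demand, so `q_best` should land around 117; as the sample grows, the SAA minimizer converges to the true one, which is the convergence behavior the abstract describes.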

  8. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
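
    The prefix-code case above, where the minimum average depth exceeds the entropy lower bound by at most one, can be checked numerically with a Huffman construction. The average codeword length is computed via the standard sum-of-merge-weights identity:

    ```python
    import heapq
    import math

    def huffman_average_length(probs):
        """Average codeword length of an optimal binary prefix code.
        Uses the fact that the average depth equals the sum of all merge weights."""
        heap = list(probs)
        heapq.heapify(heap)
        total = 0.0
        while len(heap) > 1:
            a = heapq.heappop(heap)
            b = heapq.heappop(heap)
            total += a + b
            heapq.heappush(heap, a + b)
        return total

    probs = [0.4, 0.2, 0.2, 0.1, 0.1]
    L = huffman_average_length(probs)          # 2.2 for this distribution
    H = -sum(p * math.log2(p) for p in probs)  # entropy lower bound, about 2.12
    # H <= L < H + 1, matching the bound quoted above
    ```

    For this distribution the entropy is about 2.12 bits while the optimal average depth is 2.2, within one of the lower bound as stated.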

  9. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate various processes in studies of plume dispersion
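
    The Turner-style power law referred to above relates concentrations at two averaging times by C(t) = C_ref · (t_ref / t)^p. A minimal sketch follows; the exponent value is an assumption (values around 0.17-0.2 are commonly quoted), not taken from this paper:

    ```python
    def adjust_concentration(c_ref, t_ref_min, t_min, p=0.17):
        """Power-law rescaling of a concentration from averaging time t_ref to t.
        The exponent p is an assumed typical value, not from the source."""
        return c_ref * (t_ref_min / t_min) ** p

    # A 15 min average of 100 (arbitrary units) rescaled to a 60 min average
    c_60 = adjust_concentration(100.0, 15.0, 60.0)
    ```

    Longer averaging times smooth out concentration peaks, so the rescaled 60 min value comes out below the 15 min reference (about 79 here).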

  10. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... Vi = the volume of gasoline produced or imported in batch i; Si = the sulfur content of batch i determined under § 80.330; n = the number of batches of gasoline produced or imported during the averaging period; i = individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
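
    The variable definitions above describe a volume-weighted average sulfur level, Sa = Σ(Vi·Si)/ΣVi, over the batches in the averaging period; a minimal sketch with invented batch figures:

    ```python
    def annual_average_sulfur(batches):
        """Volume-weighted average: sum(Vi * Si) / sum(Vi) over all batches i."""
        total_vs = sum(v * s for v, s in batches)
        total_v = sum(v for v, _ in batches)
        return total_vs / total_v

    # (volume in gallons, sulfur in ppm) -- invented figures
    avg = annual_average_sulfur([(10_000, 30.0), (5_000, 15.0)])  # -> 25.0 ppm
    ```

    The larger batch pulls the average toward its own sulfur content, which is exactly what volume weighting is meant to capture.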

  11. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. ... (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
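
    Average fuel economy under these provisions is a sales-weighted harmonic mean of model-type fuel economies rather than an arithmetic mean; a minimal sketch with invented production numbers:

    ```python
    def fleet_average_mpg(groups):
        """Harmonic (sales-weighted) mean: total vehicles / sum(n_i / mpg_i)."""
        total = sum(n for n, _ in groups)
        return total / sum(n / mpg for n, mpg in groups)

    # Equal sales of a 20 mpg and a 40 mpg model average about 26.7 mpg, not 30
    avg = fleet_average_mpg([(1000, 20.0), (1000, 40.0)])
    ```

    The harmonic mean weights fuel *consumed* rather than miles-per-gallon directly, which is why the fleet average sits below the arithmetic mean.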

  12. Toward making the constraint hypersurface an attractor in free evolution

    International Nuclear Information System (INIS)

    Fiske, David R.

    2004-01-01

    When constructing numerical solutions to systems of evolution equations subject to a constraint, one must decide what role the constraint equations will play in the evolution system. In one popular choice, known as free evolution, a simulation is treated as a Cauchy problem, with the initial data constructed to satisfy the constraint equations. These initial data are then evolved via the evolution equations with no further enforcement of the constraints. The discretized evolution equations, however, introduce constraint-violating modes at the level of truncation error, and these modes behave in a formalism-dependent way. This paper presents a generic method for incorporating the constraint equations into the evolution equations so that the off-constraint dynamics are biased toward the constraint-satisfying solutions

  13. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically suited to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  14. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  15. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  16. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  17. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the data-processing complexity of the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small-scale specimen using a time average microscopic TV holography system.
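
    The central object in the HT method is the analytic signal, whose modulus gives the fringe amplitude and whose argument gives the phase. The sketch below builds the analytic signal with a plain O(N²) DFT so it stays dependency-free; the synthetic amplitude-modulated fringe is invented:

    ```python
    import cmath
    import math

    def analytic_signal(x):
        """Analytic signal via the DFT: zero negative frequencies, double positive ones."""
        n = len(x)
        X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
             for k in range(n)]
        H = [0.0] * n
        H[0] = 1.0
        half = n // 2
        if n % 2 == 0:
            H[half] = 1.0
            for k in range(1, half):
                H[k] = 2.0
        else:
            for k in range(1, half + 1):
                H[k] = 2.0
        Xa = [X[k] * H[k] for k in range(n)]
        return [sum(Xa[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
                for t in range(n)]

    # Synthetic fringe: slowly varying amplitude on a fast carrier
    n = 64
    amp = [1.5 + 0.5 * math.cos(2 * math.pi * t / n) for t in range(n)]
    x = [amp[t] * math.cos(2 * math.pi * 8 * t / n) for t in range(n)]
    envelope = [abs(z) for z in analytic_signal(x)]  # recovers amp almost exactly
    ```

    Because the modulation is much slower than the carrier, the sidebands stay on the positive-frequency side and the envelope is recovered essentially exactly; in practice a library FFT would replace the O(N²) loops.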

  18. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e + e - -annihilation at high energies tends to unity

  19. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software....

  20. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics

  1. Constraints and Creativity in NPD - Testing the Impact of 'Late Constraints'

    DEFF Research Database (Denmark)

    Onarheim, Balder; Valgeirsdóttir, Dagný

    The aim of the presented work is to investigate how the timing of project constraints can influence the creativity of the output in New Product Development (NPD) projects. When seeking to produce a creative output, is it beneficial to know all constraints when initiating a project? An experiment was conducted, involving 12 teams of industrial designers from three different countries, each team working on two 30-minute design tasks. In one condition all constraints were given at the start, and in the other one new radical constraint was added after 12 minutes. The output from all 24 tasks was assessed for creativity using the Consensual Assessment Technique (CAT), and a comparative within-subjects analysis found no significant difference between the two conditions. Controlling for task and assessor, a small but non-significant effect was found in favor of the 'late constraint' condition. Thus...

  2. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
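
    The Hargreaves step mentioned above can be sketched with the standard Hargreaves-Samani form; the coefficient 0.0023 and the 17.8 °C offset are from the usual formulation, and the sample inputs are invented:

    ```python
    import math

    def hargreaves_pet(t_mean, t_max, t_min, ra):
        """Hargreaves-Samani reference evapotranspiration (mm/day).
        ra is extraterrestrial radiation expressed as mm/day of evaporation equivalent."""
        return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

    # Invented monthly averages: 20 C mean, 28/12 C max/min, ra = 15 mm/day
    pet = hargreaves_pet(20.0, 28.0, 12.0, 15.0)
    ```

    Applied cell by cell to the gridded temperature and radiation maps, this yields the atmospheric evaporative demand layer that is then subtracted from precipitation in the water balance.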

  3. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  4. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  5. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel, providing time-averaged compensation. The method suits portable instruments. (orig.)

  6. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
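The central observable here, the time averaged MSD, has a simple definition for a discrete series: average the squared increment over all windows of a given lag. A minimal sketch (our own illustration, not the authors' code) for a one-dimensional series:

```python
def time_averaged_msd(x, lag):
    """Time-averaged mean squared displacement of a 1-D series at a given lag:
    the squared increment x[t+lag] - x[t], averaged over all start times t."""
    n = len(x)
    if lag <= 0 or lag >= n:
        raise ValueError("lag must satisfy 0 < lag < len(x)")
    return sum((x[t + lag] - x[t]) ** 2 for t in range(n - lag)) / (n - lag)
```

For a ballistic series x(t) = t this gives lag squared; for financial data one would apply it to (log-)prices, and the ageing and delay variants mentioned in the abstract restrict which windows enter the average.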

  7. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  8. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  9. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one point per 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
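The quoted 36 dB figure is consistent with coherently averaging the maximum 2¹² = 4096 sweeps: the signal adds linearly with the number of sweeps while uncorrelated noise adds as its square root, so the amplitude S/N improves by √N. A back-of-the-envelope check (ours, not from the paper):

```python
import math

def snr_improvement_db(n_sweeps):
    """Amplitude S/N improvement (dB) from coherently averaging n sweeps:
    signal grows as n, uncorrelated noise as sqrt(n), so S/N grows as sqrt(n)."""
    return 20 * math.log10(math.sqrt(n_sweeps))

# 2**12 sweeps, the instrument's maximum sweep-count setting
print(round(snr_improvement_db(2 ** 12), 1))  # → 36.1
```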

  10. The average-shadowing property and topological ergodicity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic

  11. Constraint qualifications and optimality conditions for optimization problems with cardinality constraints

    Czech Academy of Sciences Publication Activity Database

    Červinka, Michal; Kanzow, Ch.; Schwartz, A.

    2016-01-01

    Roč. 160, č. 1 (2016), s. 353-377 ISSN 0025-5610 R&D Projects: GA ČR GAP402/12/1309; GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords : Cardinality constraints * Constraint qualifications * Optimality conditions * KKT conditions * Strongly stationary points Subject RIV: BA - General Mathematics Impact factor: 2.446, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/cervinka-0461165.pdf

  12. Medical image segmentation by means of constraint satisfaction neural network

    International Nuclear Information System (INIS)

    Chen, C.T.; Tsao, C.K.; Lin, W.C.

    1990-01-01

    This paper applies the concept of constraint satisfaction neural network (CSNN) to the problem of medical image segmentation. Constraint satisfaction (or constraint propagation), the procedure to achieve global consistency through local computation, is an important paradigm in artificial intelligence. CSNN can be viewed as a three-dimensional neural network, with the two-dimensional image matrix as its base, augmented by various constraint labels for each pixel. These constraint labels can be interpreted as the connections and the topology of the neural network. Through parallel and iterative processes, the CSNN will approach a solution that satisfies the given constraints thus providing segmented regions with global consistency

  13. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out.

  14. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Personnel monitoring data collected between 1985 and 1991 for personnel working in the health area were studied, giving a general overview of the change in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses across the same sectors in different hospitals. (C.G.C.)

  15. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  16. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying the dependence between random variables are used, with the dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
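The EWMA statistic underlying such a chart is a standard recursion, z_i = λx_i + (1-λ)z_{i-1}. The sketch below is our own illustration with arbitrary in-control parameters; the paper's copula-dependent exponential observations are not reproduced. It flags a sample as out of control when the statistic leaves the time-dependent control limits:

```python
def ewma_chart(samples, lam=0.2, mu0=1.0, sigma=1.0, L=2.7):
    """EWMA control-chart statistics with an out-of-control flag per sample.

    z_i = lam * x_i + (1 - lam) * z_{i-1}, starting from z_0 = mu0.
    Control limits use the exact time-dependent EWMA variance
    sigma^2 * (lam / (2 - lam)) * (1 - (1 - lam)^(2i)).
    Returns a list of (z_i, out_of_control) pairs.
    """
    z = mu0
    result = []
    for i, x in enumerate(samples, start=1):
        z = lam * x + (1 - lam) * z
        var = sigma ** 2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i))
        half_width = L * var ** 0.5
        result.append((z, abs(z - mu0) > half_width))
    return result
```

An in-control constant stream at mu0 never signals, while a large sustained shift does; the Average Run Length studied in the paper is the expected number of samples until the first such signal.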

  17. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  18. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  19. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  20. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  1. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is of interest in the research of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain their asymptotic formula for average distances. To derive the formula, we develop a technique of finite patterns of integrals of geodesic distance on self-similar measures for the Sierpinski tetrahedron.
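For any finite graph, the quantity whose asymptotics the paper derives can be computed directly by breadth-first search over all node pairs. This is a generic sketch of the definition only; constructing the actual Sierpinski-tetrahedron skeleton networks requires the paper's recursive construction, which is not reproduced here:

```python
from collections import deque

def average_distance(adj):
    """Average geodesic (shortest-path) distance over all node pairs of an
    unweighted, connected graph given as an adjacency dict {node: [neighbors]}."""
    nodes = list(adj)
    total, ordered_pairs = 0, 0
    for source in nodes:
        # BFS from each source gives shortest-path distances to all other nodes.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        ordered_pairs += len(nodes) - 1
    # Each unordered pair is counted twice in both sums, so the ratio is the
    # average over unordered pairs as well.
    return total / ordered_pairs
```

For a triangle the average distance is 1, and for a three-node path it is 4/3.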

  2. Optimisation and constraints - a view from ICRP

    International Nuclear Information System (INIS)

    Dunster, H.J.

    1994-01-01

    The optimisation of protection has been the major policy underlying the recommendations of the International Commission on Radiological Protection for more than 20 years. In earlier forms, the concept can be traced back to 1951. Constraints are more recent, appearing in their present form only in the 1990 recommendations of the Commission. The requirement to keep all exposures as low as reasonably achievable applies to both normal and potential exposures. The policy and the techniques are well established for normal exposures, i.e. exposures that are certain to occur. The application to potential exposures, i.e. exposures that have a probability of occurring that is less than unity, is more difficult and is still under international discussion. Constraints are needed to limit the inequity associated with the use of collective dose in cost-benefit analysis and to provide a margin to protect individuals who may be exposed to more than one source. (author)

  3. Constraints on stellar evolution from pulsations

    International Nuclear Information System (INIS)

    Cox, A.N.

    1984-01-01

    Consideration of the many types of intrinsic variable stars, that is, those that pulsate, reveals that perhaps a dozen classes can indicate some constraints that affect the results of stellar evolution calculations, or some interpretations of observations. Many of these constraints are not very strong or may not even be well defined yet. The author discusses the case for six classes: classical Cepheids with their measured Wesselink radii, the observed surface effective temperatures of the known eleven double-mode Cepheids, the pulsation periods and measured surface effective temperatures of three R CrB variables, the delta Scuti variable VZ Cnc with a very large ratio of its two observed periods, the nonradial oscillations of the Sun, and the period ratios of the newly discovered double-mode RR Lyrae variables. (Auth.)

  4. Microbial diversity arising from thermodynamic constraints

    Science.gov (United States)

    Großkopf, Tobias; Soyer, Orkun S

    2016-01-01

    The microbial world displays an immense taxonomic diversity. This diversity is manifested also in a multitude of metabolic pathways that can utilise different substrates and produce different products. Here, we propose that these observations directly link to thermodynamic constraints that inherently arise from the metabolic basis of microbial growth. We show that thermodynamic constraints can enable coexistence of microbes that utilise the same substrate but produce different end products. We find that this thermodynamics-driven emergence of diversity is most relevant for metabolic conversions with low free energy as seen for example under anaerobic conditions, where population dynamics is governed by thermodynamic effects rather than kinetic factors such as substrate uptake rates. These findings provide a general understanding of the microbial diversity based on the first principles of thermodynamics. As such they provide a thermodynamics-based framework for explaining the observed microbial diversity in different natural and synthetic environments. PMID:27035705

  5. Microbial diversity arising from thermodynamic constraints.

    Science.gov (United States)

    Großkopf, Tobias; Soyer, Orkun S

    2016-11-01

    The microbial world displays an immense taxonomic diversity. This diversity is manifested also in a multitude of metabolic pathways that can utilise different substrates and produce different products. Here, we propose that these observations directly link to thermodynamic constraints that inherently arise from the metabolic basis of microbial growth. We show that thermodynamic constraints can enable coexistence of microbes that utilise the same substrate but produce different end products. We find that this thermodynamics-driven emergence of diversity is most relevant for metabolic conversions with low free energy as seen for example under anaerobic conditions, where population dynamics is governed by thermodynamic effects rather than kinetic factors such as substrate uptake rates. These findings provide a general understanding of the microbial diversity based on the first principles of thermodynamics. As such they provide a thermodynamics-based framework for explaining the observed microbial diversity in different natural and synthetic environments.

  6. Locality constraints and 2D quasicrystals

    International Nuclear Information System (INIS)

    Socolar, J.E.S.

    1990-01-01

    The plausible assumption that long-range interactions between atoms are negligible in a quasicrystal leads to the study of tilings that obey constraints on the local configurations of tiles. The theory of such constraints (called matching rules) for 2D quasicrystal tilings is reviewed here. Different types of matching rules are defined and examples of tilings obeying them are given where known. The role of tile decoration is discussed and is shown to be significant in at least two cases (octagonal and dodecagonal duals of periodic 4-grids and 6-grids). A new result is introduced: a constructive procedure is described for generating weak matching rules for tilings with N-fold symmetry, for any N that is either a prime number or twice a prime number. The physics associated with weak matching rules, results on local growth rules, and the case of icosahedral symmetry are all briefly discussed. (author). 29 refs, 4 figs

  7. Neutrino mass constraints on β decay

    International Nuclear Information System (INIS)

    Ito, Takeyasu M.; Prezeau, Gary

    2005-01-01

    Using the general connection between the upper limit on the neutrino mass and the upper limits on certain types of non-standard-model interactions that can generate loop corrections to the neutrino mass, we derive constraints on some non-standard-model d → u e⁻ ν interactions. When cast into limits on n → p e⁻ ν coupling constants, our results yield constraints on scalar and tensor weak interactions improved by more than an order of magnitude over the current experimental limits. When combined with the existing limits, our results yield |C_S/C_V| ≲ 5×10⁻³, |C′_S/C_V| ≲ 5×10⁻³, |C_T/C_A| ≲ 10⁻², and |C′_T/C_A| ≲ 10⁻²

  8. Constraints from jet calculus on quark recombination

    International Nuclear Information System (INIS)

    Jones, L.M.; Lassila, K.E.; Willen, D.

    1979-01-01

    Within the QCD jet calculus formalism, we deduce an equation describing recombination of quarks and antiquarks into mesons within a quark or gluon jet. This equation relates the recombination function R(x₁, x₂, x) used in current literature to the fragmentation function for producing that same meson out of the parton initiating the jet. We submit currently used recombination functions to our consistency test, taking as input mainly the u-quark fragmentation data into π⁺ mesons, but also s-quark fragmentation into K⁻ mesons. The constraint is well satisfied at large Q² for large moments. Our results depend on one parameter, Q₀², the constraint equation being satisfied for small values of this parameter.

  9. Financial Constraints and Nominal Price Rigidities

    DEFF Research Database (Denmark)

    Menno, Dominik Francesco; Balleer, Almut; Hristov, Nikolay

    This paper investigates how financial market imperfections and the frequency of price adjustment interact. Based on new firm-level evidence for Germany, we document that financially constrained firms adjust prices more often than their unconstrained counterparts, both upwards and downwards. We show that these empirical patterns are consistent with a partial equilibrium menu-cost model with a working capital constraint. We then use the model to show how the presence of financial frictions changes profits and the price distribution of firms compared to a model without financial frictions. Our results suggest that tighter financial constraints are associated with higher nominal rigidities, higher prices and lower output. Moreover, in response to aggregate shocks, aggregate price rigidity moves substantially, the response of inflation is dampened, while output reacts more in the presence of financial frictions.

  10. Automated constraint placement to maintain pile shape

    KAUST Repository

    Hsu, Shu-Wei

    2012-11-01

    We present a simulation control to support art-directable stacking designs by automatically adding constraints to stabilize the stacking structure. We begin by adapting equilibrium analysis in a local scheme to find "stable" objects of the stacking structure. Next, to stabilize the structure, we pick suitable objects from those passing the equilibrium analysis and then restrict their DOFs by managing the insertion of constraints on them. The method is suitable for controlling stacking behavior at large scale. Results show that our control method can be used in varied ways for creating plausible animation. In addition, the method can be easily implemented as a plug-in for existing simulation solvers without changing their fundamental operations. © 2012 ACM.

  11. Fundamental constraints on some event data

    International Nuclear Information System (INIS)

    Watson, I.A.

    1986-01-01

    A modified version of Searle's theory of the structure of human action is explained and applied to man-machine interaction. The comprehensiveness of the theory is demonstrated, in particular its explanation of human performance and its consistency with current theories of human error, for which it provides an overall setting. The importance of the mental component of human error is highlighted, as are the constraints that this places on the collection, analysis and use of human error data. Examples are given to illustrate and apply the theory, ranging from considerations of the tenuousness of the link between safety goals and data to simple valve operations. Two approaches which recognise the constraints shown by the theory are explained. (orig./DG)

  12. Constraints on backreaction in dust universes

    International Nuclear Information System (INIS)

    Raesaenen, Syksy

    2006-01-01

    We study backreaction in dust universes using exact equations which do not rely on perturbation theory, concentrating on theoretical and observational constraints. In particular, we discuss the recent suggestion (Kolb et al 2005 Preprint hep-th/0503117) that superhorizon perturbations could explain present-day accelerated expansion as a useful example which can be ruled out. We note that a backreaction explanation of late-time acceleration will have to involve spatial curvature and subhorizon perturbations

  13. Constraints on Large-Block Shareholders

    OpenAIRE

    Clifford G. Holderness; Dennis P. Sheehan

    1998-01-01

    Corporate managers who own a majority of the common stock in their company or who represent another firm owning such an interest appear to be less constrained than managers of diffusely held firms, yet their power to harm minority shareholders must be circumscribed by some organizational or legal arrangements. Empirical investigations reveal that boards of directors in majority-owned firms are little different from firms with diffuse stock ownership. Another source of constraints on a majorit...

  14. Constraints on fermion mixing with exotics

    International Nuclear Information System (INIS)

    Nardi, E.; Tommasini, D.

    1991-11-01

    We analyze the constraints on the mixing angles of the standard fermions with new heavy particles with exotic SU(2) x U(1) quantum number assignments (left-handed singlets or right-handed doublets), that appear in many extensions of the electroweak theory. The updated Charged Current and Neutral Current experimental data, including also the recent Z-peak measurements, are considered. The results of the global analysis of all these data are then presented

  15. On the covariantization of the Chiral constraints

    International Nuclear Information System (INIS)

    Wotzasek, Clovis; Abreu, E.M.C. de; Neves, C.

    1994-01-01

    We show that a complete covariantization of the chiral constraint in the Floreanini-Jackiw model necessitates an infinite number of auxiliary Wess-Zumino fields; otherwise the covariantization is only partial and unable to remove the nonlocality in the chiral boson operator. We comment on recent works that claim to obtain covariantization through the use of the Batalin-Fradkin-Tyutin method, which uses just one Wess-Zumino field. (author)

  16. Hours Constraints Within and Between Jobs

    OpenAIRE

    Euwals, R.W.

    1997-01-01

    In the empirical literature on labour supply, several models are developed to incorporate constraints on working hours. These models do not address the question to which extent working hours are constrained within and between jobs. In this paper I investigate the effect of individual changes in labour supply preferences on actual working hours. The availability of subjective information on the individual’s preferred working hours gives direct measures on the degree of adjustment of working ho...

  17. Prominent Constraints Faced by Government Managers.

    Science.gov (United States)

    1983-06-01

    organizations have varied cultures and missions. There has been little research on the identification of these constraints and the effective...Additionally, in the current environment they are responsible to set the rate schedule for various services provided to NAVAIR, and to market ...NAVAIR. They are paid on the basis of work to be performed. This allows a better cost accounting system, but results in some marketing behavior by the

  18. Work Hours Constraints: Impacts and Policy Implications

    OpenAIRE

    Constant, Amelie F.; Otterbach, Steffen

    2011-01-01

    If individuals reveal their preference as consumers, then they are taken seriously. What happens if individuals, as employees, reveal their preferences in working hours? And what happens if there is a misalignment between actual hours worked and preferred hours, the so-called work hours constraints? How does this affect the productivity of workers, their health, and overall life satisfaction? Labor supply and corresponding demand are fundamental to production. Labor economists know for long t...

  19. Embedded System Synthesis under Memory Constraints

    DEFF Research Database (Denmark)

    Madsen, Jan; Bjørn-Jørgensen, Peter

    1999-01-01

    This paper presents a genetic algorithm to solve the system synthesis problem of mapping a time-constrained single-rate system specification onto a given heterogeneous architecture which may contain irregular interconnection structures. The synthesis is performed under memory constraints, that is, the algorithm takes into account the memory size of processors and the size of interface buffers of communication links, and in particular the complicated interplay of these. The presented algorithm is implemented as part of the LYCOS cosynthesis system.

  20. Least Squares Problems with Absolute Quadratic Constraints

    Directory of Open Access Journals (Sweden)

    R. Schöne

    2012-01-01

    Full Text Available This paper analyzes linear least squares problems with absolute quadratic constraints. We develop a generalized theory following Bookstein's conic-fitting and Fitzgibbon's direct ellipse-specific fitting. Under simple preconditions, it can be shown that a minimum always exists and can be determined by a generalized eigenvalue problem. This problem is numerically reduced to an eigenvalue problem by multiplications of Givens' rotations. Finally, four applications of this approach are presented.
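The reduction at the heart of this approach (a least squares problem with an absolute quadratic constraint recast as a generalized eigenvalue problem) can be sketched in a few lines. Solving the generalized eigenproblem directly with `scipy.linalg.eig`, rather than via the Givens-rotation reduction the abstract describes, is an assumption of this sketch, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import eig

def constrained_lsq(D, C):
    """Minimize ||D a||^2 subject to a^T C a = 1.

    With S = D^T D, stationary points satisfy the generalized
    eigenvalue problem S a = lam * C a; the minimum is attained at
    the smallest positive eigenvalue.
    """
    S = D.T @ D
    vals, vecs = eig(S, C)
    vals = np.real(vals)
    # keep finite, positive eigenvalues (needed so a^T C a = 1 is attainable)
    ok = np.isfinite(vals) & (vals > 0)
    i = np.argmin(np.where(ok, vals, np.inf))
    a = np.real(vecs[:, i])
    # rescale so the quadratic constraint holds exactly
    a /= np.sqrt(a @ C @ a)
    return a
```

For an identity constraint matrix this reduces to the ordinary smallest-eigenvector solution, which makes it easy to check numerically.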

  1. Information Constraints and Financial Aid Policy

    OpenAIRE

    Judith Scott-Clayton

    2012-01-01

    One justification for public support of higher education is that prospective students, particularly those from underprivileged groups, lack complete information about the costs and benefits of a college degree. Beyond financial considerations, students may also lack information about what they need to do academically to prepare for and successfully complete college. Yet until recently, college aid programs have typically paid little attention to students' information constraints, and the comp...

  2. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    Science.gov (United States)

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighborhood agents at each time step, due to the digital communication channels with energy constraints. A novel event-triggered dynamic encoder and decoder for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized, and is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
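The core idea of consensus over quantized links can be illustrated with a much-simplified sketch: an undirected graph and a static uniform quantizer stand in for the paper's directed topology and event-triggered dynamic encoder/decoder, and all names here are illustrative:

```python
import numpy as np

def quantize(x, step):
    """Uniform quantizer standing in for the paper's dynamic
    encoder/decoder: agents exchange only these coarse symbols."""
    return step * np.round(x / step)

def quantized_consensus(x0, neighbors, step=0.01, gain=0.3, iters=500):
    """Toy average consensus where each agent only sees quantized
    neighbour states. Because the graph is undirected and updates are
    symmetric, the network average is preserved exactly, and states
    converge to a neighbourhood of it whose size scales with `step`."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        q = quantize(x, step)
        # each agent moves toward its neighbours' quantized states
        x = x + gain * np.array(
            [sum(q[j] - q[i] for j in neighbors[i]) for i in range(len(x))]
        )
    return x
```

On a six-node ring with initial states 0..5, the states settle within a few quantization steps of the true average 2.5.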

  3. Estimating Crustal Properties Directly from Satellite Tracking Data by Using a Topography-based Constraint

    Science.gov (United States)

    Goossens, S. J.; Sabaka, T. J.; Genova, A.; Mazarico, E. M.; Nicholas, J. B.; Neumann, G. A.; Lemoine, F. G.

    2017-12-01

    The crust of a terrestrial planet is formed by differentiation processes in its early history, followed by magmatic evolution of the planetary surface. It is further modified through impact processes. Knowledge of the crustal structure can thus place constraints on the planet's formation and evolution. In particular, the average bulk density of the crust is a fundamental parameter in geophysical studies, such as the determination of crustal thickness, studies of the mechanisms of topography support, and the planet's thermo-chemical evolution. Yet even with in-situ samples available, the crustal density is difficult to determine unambiguously, as exemplified by the results for the Gravity Recovery and Interior Laboratory (GRAIL) mission, which found an average crustal density for the Moon that was lower than generally assumed. The GRAIL results were possible owing to the combination of its high-resolution gravity and high-resolution topography obtained by the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter (LRO), and high correlations between the two datasets. The crustal density can be determined from its contribution to the gravity field of a planet, but at long wavelengths flexure effects can dominate. On the other hand, short-wavelength gravity anomalies are difficult to measure, and are either not determined well enough (other than at the Moon), or their power is suppressed by the standard 'Kaula' regularization constraint applied during inversion of the gravity field from satellite tracking data. We introduce a new constraint that has infinite variance in one direction, called x_a. For constraint damping factors that go to infinity, it can be shown that the solution x becomes equal to a scale factor times x_a. This scale factor is completely determined by the data, and we call our constraint rank-minus-1 (RM1). If we choose x_a to be topography-induced gravity, then we can estimate the average bulk crustal density directly from the data.

  4. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
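The normalization step described above (Beer-Lambert attenuation plus division by the saturated image) can be sketched as follows; this toy version omits the paper's beam-hardening, geometric and scattering corrections, and the function name is illustrative:

```python
import numpy as np

def relative_saturation(I, I_dry, I_sat):
    """Per-pixel relative saturation from neutron transmission images,
    assuming ideal Beer-Lambert attenuation: I = I_dry * exp(-mu * t_w),
    so the water thickness at a pixel is proportional to -ln(I / I_dry).
    Dividing by the value at full saturation gives relative saturation."""
    thickness = -np.log(I / I_dry)            # proportional to water thickness
    thickness_sat = -np.log(I_sat / I_dry)    # thickness at satiation
    return np.clip(thickness / thickness_sat, 0.0, 1.0)
```

A column holding half the saturated water thickness attenuates the beam half as strongly (in log space) and so maps to a relative saturation of 0.5.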

  5. Cosmological constraints with clustering-based redshifts

    Science.gov (United States)

    Kovetz, Ely D.; Raccanelli, Alvise; Rahman, Mubdi

    2017-07-01

    We demonstrate that observations lacking reliable redshift information, such as photometric and radio continuum surveys, can produce robust measurements of cosmological parameters when empowered by clustering-based redshift estimation. This method infers the redshift distribution based on the spatial clustering of sources, using cross-correlation with a reference data set with known redshifts. Applying this method to the existing Sloan Digital Sky Survey (SDSS) photometric galaxies, and projecting to future radio continuum surveys, we show that sources can be efficiently divided into several redshift bins, increasing their ability to constrain cosmological parameters. We forecast constraints on the dark-energy equation of state and on local non-Gaussianity parameters. We explore several pertinent issues, including the trade-off between including more sources and minimizing the overlap between bins, the shot-noise limitations on binning, and the predicted performance of the method at high redshifts; most importantly, we pay special attention to possible degeneracies with the galaxy bias. Remarkably, we find that once this technique is implemented, constraints on dynamical dark energy from the SDSS imaging catalogue can be competitive with, or better than, those from the spectroscopic BOSS survey and even future planned experiments. Further, constraints on primordial non-Gaussianity from future large-sky radio-continuum surveys can outperform those from the Planck cosmic microwave background experiment and rival those from future spectroscopic galaxy surveys. The application of this method thus holds tremendous promise for cosmology.

  6. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    Science.gov (United States)

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used in the SU problem. One of the constraints added to NMF is a sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed; each pixel in the hyperspectral image is considered a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and then the update equations for the fractional abundance and signature matrices are obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, compared with distributed unmixing at SNR = 25 dB.
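The sparsity-constrained NMF objective underlying such unmixing methods can be sketched with standard multiplicative updates. This centralized toy version uses the commonly cited L1/2-penalty update term (0.5 * lam * H**-0.5 in the denominator) and omits the paper's distributed diffusion-LMS machinery; all names and defaults are illustrative:

```python
import numpy as np

def l_half_nmf(X, r, lam=0.01, iters=300, eps=1e-9, seed=0):
    """Sketch of sparsity-constrained unmixing:
        min ||X - W H||_F^2 + lam * ||H||_{1/2},  W, H >= 0,
    where W holds endmember signatures and H fractional abundances,
    solved by multiplicative updates (centralized, not distributed)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
        # the 0.5 * lam * H**-0.5 term implements the L1/2 sparsity penalty
        H *= (W.T @ X) / (W.T @ W @ H + 0.5 * lam * H ** -0.5 + eps)
    return W, H
```

On synthetic data generated from a known low-rank factorization, the updates keep both factors nonnegative and drive the reconstruction error down.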

  7. Constraints based analysis of extended cybernetic models.

    Science.gov (United States)

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Current constraints on the cosmic growth history

    International Nuclear Information System (INIS)

    Bean, Rachel; Tangmatitham, Matipon

    2010-01-01

    We present constraints on the cosmic growth history with recent cosmological data, allowing for deviations from ΛCDM as might arise if cosmic acceleration is due to modifications to general relativity or inhomogeneous dark energy. We combine measures of the cosmic expansion history, from Type Ia supernovae, baryon acoustic oscillations, and the cosmic microwave background (CMB), with constraints on the growth of structure from recent galaxy, CMB, and weak lensing surveys along with integrated Sachs-Wolfe-galaxy cross-correlations. Deviations from ΛCDM are parameterized by phenomenological modifications to the Poisson equation and the relationship between the two Newtonian potentials. We find modifications that are present at the time the CMB is formed are tightly constrained through their impact on the well-measured CMB acoustic peaks. By contrast, constraints on late-time modifications to the growth history, as might arise if modifications are related to the onset of cosmic acceleration, are far weaker, but remain consistent with ΛCDM at the 95% confidence level. For these late-time modifications we find that differences in the evolution on large and small scales could provide an interesting signature by which to search for modified growth histories with future wide angular coverage, large scale structure surveys.

  9. Use of dose constraints in medical exposure

    International Nuclear Information System (INIS)

    Mutanga, N. V. T.

    2013-04-01

    Medical-related radiation is the largest source of controllable radiation exposure to humans and it accounts for more than 95% of radiation exposure from man-made sources. Medical exposure to radiation is exposure incurred by patients as part of their own medical or dental diagnosis or treatment; by persons, other than those occupationally exposed, knowingly, while voluntarily helping in the support and comfort of patients; and by volunteers in a programme of biomedical research involving their exposure. Because it is planned exposure, medical exposure has to conform to a set of principles of protection that apply equally to all controllable exposure situations: the principle of justification, the principle of optimisation of protection, and the principle of application of limits on maximum doses in planned situations. In this study the concept of dose constraints is scrutinized to see if it can be applied in medical exposures, and the benefits of such restrictions are considered. Dose constraints can only be applied to exposure of persons voluntarily helping in the support and comfort of patients, as well as volunteers in a programme of biomedical research. There are no dose constraints for patients, but the concept of reference levels applies. (au)

  10. Constraints on grand unified superstring theories

    International Nuclear Information System (INIS)

    Ellis, J.; Lopez, J.L.; Nanopoulos, D.V.; Houston Advanced Research Center

    1990-01-01

    We evaluate some constraints on the construction of grand unified superstring theories (GUSTs) using higher level Kac-Moody algebras on the world-sheet. In the most general formulation of the heterotic string in four dimensions, an analysis of the basic GUST model-building constraints, including a realistic hidden gauge group, reveals that there are no E6 models and any SO(10) models can only exist at level-5. Also, any such SU(5) models can exist only for levels 4 ≤ k ≤ 19. These SO(10) and SU(5) models risk having many large, massless, phenomenologically troublesome representations. We also show that with a suitable hidden sector gauge group, it is possible to avoid free light fractionally charged particles, which are endemic to string-derived models. We list all such groups and their representations for the flipped SU(5)xU(1) model. We conclude that a sufficiently binding hidden sector gauge group becomes a basic model-building constraint. (orig.)

  11. Constraints on stellar evolution from pulsations

    International Nuclear Information System (INIS)

    Cox, A.N.

    1983-01-01

    Consideration of the many types of intrinsic variable stars, that is, those that pulsate, reveals that perhaps a dozen classes can indicate some constraints that affect the results of stellar evolution calculations, or some interpretations of observations. Many of these constraints are not very strong or may not even be well defined yet. In this review we discuss only the case for six classes: classical Cepheids with their measured Wesselink radii, the observed surface effective temperatures of the known eleven double-mode Cepheids, the pulsation periods and measured surface effective temperatures of three R CrB variables, the delta Scuti variable VZ Cnc with a very large ratio of its two observed periods, the nonradial oscillations of our sun, and the period ratios of the newly discovered double-mode RR Lyrae variables. Unfortunately, the present state of knowledge about the exact compositions; mass loss and its dependence on the mass, radius, luminosity, and composition; and internal mixing processes, as well as sometimes the more basic parameters such as luminosities and surface effective temperatures, prevents us from applying strong constraints for every case where currently the possibility exists.

  12. New constraints for canonical general relativity

    International Nuclear Information System (INIS)

    Reisenberger, M.P.

    1995-01-01

    Ashtekar's canonical theory of classical complex Euclidean GR (no Lorentzian reality conditions) is found to be invariant under the full algebra of infinitesimal 4-diffeomorphisms, but non-invariant under some finite proper 4-diffeos when the densitized dreibein, E^a_i, is degenerate. The breakdown of 4-diffeo invariance appears to be due to the inability of the Ashtekar Hamiltonian to generate births and deaths of E flux loops (leaving open the possibility that a new 'causality condition' forbidding the birth of flux loops might justify the non-invariance of the theory). A fully 4-diffeo invariant canonical theory in Ashtekar's variables, derived from Plebanski's action, is found to have constraints that are stronger than Ashtekar's for rank E < 2. The corresponding Hamiltonian generates births and deaths of E flux loops. It is argued that this implies a finite amplitude for births and deaths of loops in the physical states of quantum GR in the loop representation, thus modifying this (partly defined) theory substantially. Some of the new constraints are second class, leading to difficulties in quantization in the connection representation. This problem might be overcome in a very nice way by transforming to the classical loop variables, or the 'Faraday line' variables of Newman and Rovelli, and then solving the offending constraints. Note that, though motivated by quantum considerations, the present paper is classical in substance. (orig.)

  13. Tail Risk Constraints and Maximum Entropy

    Directory of Open Access Journals (Sweden)

    Donald Geman

    2015-06-01

    Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.
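The interaction between entropy maximization and a left-tail constraint can be illustrated with a toy discrete-scenario version: find the maximum-entropy probability vector over return scenarios whose total probability of a loss beyond a threshold is capped. This is a sketch under stated assumptions (discrete scenarios, a generic `scipy` solver), not the paper's construction:

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy_with_tail_cap(returns, threshold, alpha):
    """Maximum-entropy distribution over discrete return scenarios,
    subject to P(return < threshold) <= alpha. Entropy maximization
    spreads probability uniformly within the tail group (up to the
    cap alpha) and within the non-tail group (sharing 1 - alpha)."""
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    tail = returns < threshold  # scenarios in the left tail

    def neg_entropy(p):
        return np.sum(p * np.log(p + 1e-12))

    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "ineq", "fun": lambda p: alpha - p[tail].sum()},
    ]
    res = minimize(neg_entropy, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x
```

With two tail scenarios capped at total probability 0.1, each ends up near 0.05 and the remaining scenarios share the rest uniformly.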

  14. Constraint theory, singular lagrangians and multitemporal dynamics

    International Nuclear Information System (INIS)

    Lusanna, L.

    1988-01-01

    Singular Lagrangians and constraint theory permeate theoretical physics, as shown by the relevance of gauge theories, string models and general relativity. Their study used finite-dimensional models as a guide to develop the theory, but their main use was in classical field theory, due to the necessity of understanding their quantization. The covariant quantization of singular Lagrangians led to the BRST approach and to the theory of the effective action. On the other hand, their phase-space formulation culminated in the BFV approach for first class, second class and reducible constraints. It, in turn, gave new insights into the theory of singular Lagrangians and constraints and into their cohomological aspects. However, the Hamiltonian approach to field theory is highly nontrivial and open to criticism due to its problems with locality, geometry and manifest covariance, and its canonical quantization has still to be developed, because there is no proof of the renormalizability of the Schroedinger representation of field theory. This paper discusses how, notwithstanding these developments, there is still a large amount of ambiguity at every level of the theory.

  15. Orthology and paralogy constraints: satisfiability and consistency.

    Science.gov (United States)

    Lafond, Manuel; El-Mabrouk, Nadia

    2014-01-01

    A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided into two parts: (1) Is C satisfiable, i.e., can we find an event-labeled gene tree G inducing C? (2) Is there such a G which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm in the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.

  16. Canonical and D-transformations in Theories with Constraints

    OpenAIRE

    Gitman, Dmitri M.

    1995-01-01

    A class of transformations in a super phase space (we call them D-transformations) is described, which play, in theories with second-class constraints, the role that ordinary canonical transformations play in theories without constraints.

  17. Branch and bound algorithms to solve semiring constraint satisfaction problems

    CSIR Research Space (South Africa)

    Leenen, L

    2008-12-01

    The Semiring Constraint Satisfaction Problem (SCSP) framework is a popular approach for the representation of partial constraint satisfaction problems. Considerable research has been done in solving SCSPs, but limited work has been done in building...

  18. Differential constraints and exact solutions of nonlinear diffusion equations

    International Nuclear Information System (INIS)

    Kaptsov, Oleg V; Verevkin, Igor V

    2003-01-01

    The differential constraints are applied to obtain explicit solutions of nonlinear diffusion equations. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the determining equations used in the search for classical Lie symmetries.

  19. A combined constraint handling framework: an empirical study

    DEFF Research Database (Denmark)

    Si, Chengyong; Hu, Junjie; Lan, Tian

    2017-01-01

    This paper presents a new combined constraint handling framework (CCHF) for solving constrained optimization problems (COPs). The framework combines promising aspects of different constraint handling techniques (CHTs) in different situations with consideration of problem characteristics. In order...

  20. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
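The dosimetry chain described above (pixel values to glandular rates via a conversion curve, then averaging, then a glandularity-dependent dose conversion) can be sketched as follows. Simple interpolation stands in for the paper's neural-network conversion, and all table values are illustrative, not clinical data:

```python
import numpy as np

def average_glandular_dose(pixels, cal_pixel, cal_rate, kerma,
                           dgn_rate, dgn_factor):
    """Toy version of the dosimetry chain: map pixel values to per-pixel
    glandular rates via a calibration curve (np.interp standing in for
    the paper's neural network), average them over the breast region,
    then convert incident air kerma to average glandular dose with a
    glandularity-dependent conversion factor."""
    rates = np.interp(pixels, cal_pixel, cal_rate)     # per-pixel glandular rate (%)
    mean_rate = rates.mean()
    dgn = np.interp(mean_rate, dgn_rate, dgn_factor)   # kerma-to-dose factor
    return mean_rate, kerma * dgn
```

With a linear calibration, pixels at 25 and 75 average to a glandular rate of 50%, and a mid-table conversion factor then scales the incident kerma to the dose estimate.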