WorldWideScience

Sample records for variance-driven time gaps

  1. Impact of time-inhomogeneous jumps and leverage type effects on returns and realised variances

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the effect of time-inhomogeneous jumps and leverage type effects on realised variance calculations when the logarithmic asset price is given by a Lévy-driven stochastic volatility model. In such a model, the realised variance is an inconsistent estimator of the integrated...

  2. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. However, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments, that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  3. Study on variance-to-mean method as subcriticality monitor for accelerator driven system operated with pulse-mode

    International Nuclear Information System (INIS)

    Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu

    2003-01-01

    Two types of variance-to-mean method were developed for a subcritical system driven by a periodic, pulsed neutron source, and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt neutron decay constant could be measured by these methods. From this fact, it was concluded that the present variance-to-mean methods have potential for use as a subcriticality monitor for the future accelerator-driven system operated in pulse mode. (author)
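
    The variance-to-mean (Feynman-Y) statistic underlying such subcriticality monitors can be sketched as follows. This is a minimal illustration with hypothetical gate widths and event rates, and a Poisson sanity check rather than the authors' KUCA configuration:

```python
import numpy as np

def feynman_y(counts):
    """Variance-to-mean ratio minus one for a sequence of gate counts.
    A pure Poisson source gives Y ~ 0; correlated fission chains give Y > 0."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

def y_curve(event_times, gate_widths, t_total):
    """Feynman-Y as a function of gate width: bin detection times into
    consecutive gates of width T and evaluate Y for each T."""
    ys = []
    for T in gate_widths:
        edges = np.arange(0.0, t_total + T, T)
        gate_counts, _ = np.histogram(event_times, bins=edges)
        ys.append(feynman_y(gate_counts))
    return np.array(ys)

rng = np.random.default_rng(0)
# Uncorrelated (Poisson-like) source as a sanity check: Y should be near zero.
t_total = 1000.0
events = np.sort(rng.uniform(0.0, t_total, size=50_000))
ys = y_curve(events, gate_widths=[0.1, 0.5, 1.0], t_total=t_total)
print(ys)  # all close to 0 for uncorrelated events
```

    For a subcritical multiplying medium the Y curve saturates with gate width at a rate set by the prompt neutron decay constant, which is what the monitors described above fit for.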

  4. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  5. Regression analysis for bivariate gap time with missing first gap time data.

    Science.gov (United States)

    Huang, Chia-Hui; Chen, Yi-Hau

    2017-01-01

    We consider ordered bivariate gap times in which data on the first gap time are unobservable. This study is motivated by the HIV infection and AIDS study, where the initial HIV contraction time is unavailable, but the diagnosis times for HIV and AIDS are available. We are interested in studying the risk factors for the gap time between initial HIV contraction and HIV diagnosis, and for the gap time between HIV and AIDS diagnoses. The association between the two gap times is also of interest. Accordingly, in the data analysis we face a two-fold complexity: data on the first gap time are completely missing, and the second gap time is subject to induced informative censoring due to dependence between the two gap times. We propose a modeling framework for regression analysis of bivariate gap times under this data complexity. The estimating equations for the covariate effects on, as well as the association between, the two gap times are derived through maximum likelihood and suitable counting processes. Large-sample properties of the resulting estimators are developed by martingale theory. Simulations are performed to examine the performance of the proposed analysis procedure. An application to data from the HIV and AIDS study mentioned above is reported for illustration.

  6. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that is paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
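
    As a rough illustration of distinguishing unimodal from multimodal ensemble distributions, one can screen samples with Sarle's bimodality coefficient. This cheap heuristic is an assumption of this sketch, a stand-in for, not a reproduction of, the classifier and confidence metrics proposed in the paper:

```python
import numpy as np

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient as a cheap modality screen:
    values above ~5/9 suggest a bimodal or multimodal distribution.
    Small-sample correction omitted for clarity."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std(ddof=0)
    skew = np.mean(z**3)
    kurt = np.mean(z**4)            # Pearson kurtosis (normal = 3)
    return (skew**2 + 1.0) / kurt

rng = np.random.default_rng(1)
unimodal = rng.normal(0.0, 1.0, 5000)
bimodal = np.concatenate([rng.normal(-3, 0.5, 2500), rng.normal(3, 0.5, 2500)])
print(bimodality_coefficient(unimodal))  # ~0.33, below the 5/9 threshold
print(bimodality_coefficient(bimodal))   # ~0.90, above the 5/9 threshold
```

    Applied per grid location, such a screen flags the high-variance locations whose ensemble spread reflects divergent trends rather than a single tendency.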

  7. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  8. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  9. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  10. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
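
    A minimal sketch of the Fisher-then-Box-Cox pipeline discussed here, assuming a toy sliding-window connectivity series and a fixed Box-Cox exponent; in practice the exponent would be fitted (e.g. with scipy.stats.boxcox), and the window length and signal model below are illustrative:

```python
import numpy as np

def sliding_window_corr(x, y, w):
    """Dynamic connectivity estimate: Pearson correlation of two signals
    in a sliding window of length w."""
    r = np.empty(len(x) - w + 1)
    for t in range(len(r)):
        r[t] = np.corrcoef(x[t:t + w], y[t:t + w])[0, 1]
    return r

def fisher_z(r):
    """Fisher transformation, the standard variance-stabilizing step."""
    return np.arctanh(np.clip(r, -0.999, 0.999))

def boxcox(z, lam):
    """Box-Cox transform with a fixed lambda; the series is shifted to be
    strictly positive first, since Box-Cox requires positive input."""
    shifted = z - z.min() + 1e-6
    return (shifted**lam - 1.0) / lam if lam != 0 else np.log(shifted)

rng = np.random.default_rng(2)
n = 500
common = rng.normal(size=n)              # shared signal inducing correlation
x = common + 0.8 * rng.normal(size=n)
y = common + 0.8 * rng.normal(size=n)
r = sliding_window_corr(x, y, w=40)
z = fisher_z(r)                          # Fisher-transformed connectivity series
zb = boxcox(z, lam=0.5)                  # additional BC step studied in the paper
print(r.mean(), z.std(), zb.std())
```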

  11. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability...... that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending...... on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....

  12. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the local variances thus obtained.
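
    The compounding idea can be illustrated as follows, assuming a toy series whose local variance is drawn from a Gamma law; the actual parameter distribution is what the paper determines empirically, so the Gamma choice here is purely an assumption of the sketch:

```python
import numpy as np

def local_variances(series, window):
    """Decompose a time series into non-overlapping windows and return the
    local (per-window) variance -- the compounding parameter of interest."""
    n = len(series) // window
    chunks = series[:n * window].reshape(n, window)
    return chunks.var(axis=1, ddof=1)

# Toy nonstationary series: Gaussian on short horizons, but with a variance
# that changes from window to window, so long-horizon statistics deviate
# from Gaussian.
rng = np.random.default_rng(3)
n_windows, window = 400, 250
sigmas2 = rng.gamma(shape=2.0, scale=0.5, size=n_windows)  # assumed variance law
series = np.concatenate([rng.normal(0, np.sqrt(s), window) for s in sigmas2])

v = local_variances(series, window)      # empirical parameter distribution
z = (series - series.mean()) / series.std()
# Compounding a Gaussian over a variance distribution produces heavy tails:
print(np.mean(z**4))                     # long-horizon kurtosis exceeds 3
```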

  13. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit forms.

  14. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  15. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  16. Dynamic Maternal Gradients Control Timing and Shift-Rates for Drosophila Gap Gene Expression

    Science.gov (United States)

    Verd, Berta; Crombach, Anton

    2017-01-01

    Pattern formation during development is a highly dynamic process. In spite of this, few experimental and modelling approaches take into account the explicit time-dependence of the rules governing regulatory systems. We address this problem by studying dynamic morphogen interpretation by the gap gene network in Drosophila melanogaster. Gap genes are involved in segment determination during early embryogenesis. They are activated by maternal morphogen gradients encoded by bicoid (bcd) and caudal (cad). These gradients decay at the same time-scale as the establishment of the antero-posterior gap gene pattern. We use a reverse-engineering approach, based on data-driven regulatory models called gene circuits, to isolate and characterise the explicitly time-dependent effects of changing morphogen concentrations on gap gene regulation. To achieve this, we simulate the system in the presence and absence of dynamic gradient decay. Comparison between these simulations reveals that maternal morphogen decay controls the timing and limits the rate of gap gene expression. In the anterior of the embryo, it affects peak expression and leads to the establishment of smooth spatial boundaries between gap domains. In the posterior of the embryo, it causes a progressive slow-down in the rate of gap domain shifts, which is necessary to correctly position domain boundaries and to stabilise the spatial gap gene expression pattern. We use a newly developed method for the analysis of transient dynamics in non-autonomous (time-variable) systems to understand the regulatory causes of these effects. By providing a rigorous mechanistic explanation for the role of maternal gradient decay in gap gene regulation, our study demonstrates that such analyses are feasible and reveal important aspects of dynamic gene regulation which would have been missed by a traditional steady-state approach. More generally, it highlights the importance of transient dynamics for understanding complex regulatory

  17. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit forms.

  18. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and that the optimal strategies depend not only on total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
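
    A minimal simulation sketch of the CEV dynamics assumed in this model, using a plain Euler-Maruyama scheme; the parameter values are illustrative and the zero-absorption handling is a simplification, not part of the paper's analytical treatment:

```python
import numpy as np

def simulate_cev(s0, mu, sigma, beta, dt, n_steps, n_paths, seed=6):
    """Euler-Maruyama simulation of the CEV price process
        dS = mu * S dt + sigma * S**(beta + 1) dW,
    where beta is the elasticity parameter (beta = 0 recovers
    geometric Brownian motion)."""
    rng = np.random.default_rng(seed)
    s = np.full(n_paths, float(s0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s = s + mu * s * dt + sigma * np.maximum(s, 0.0)**(beta + 1) * dw
        s = np.maximum(s, 0.0)   # crude absorption at zero for beta < 0
    return s

s_T = simulate_cev(s0=1.0, mu=0.05, sigma=0.2, beta=-0.5,
                   dt=1 / 252, n_steps=252, n_paths=20_000)
print(s_T.mean())  # close to exp(mu * T) = exp(0.05) ~ 1.051
```

    Negative beta makes volatility rise as the price falls, which is the feature driving the elasticity sensitivity of the efficient frontier reported above.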

  19. Axisymmetrical particle-in-cell/Monte Carlo simulation of narrow gap planar magnetron plasmas. I. Direct current-driven discharge

    International Nuclear Information System (INIS)

    Kondo, Shuji; Nanbu, Kenichi

    2001-01-01

    An axisymmetrical particle-in-cell/Monte Carlo simulation is performed for modeling a direct current-driven planar magnetron discharge. The axisymmetrical structure of plasma parameters such as plasma density, electric field, and electron and ion energy is examined in detail. The effects of applied voltage and magnetic field strength on the discharge are also clarified. The model apparatus has a narrow target-anode gap of 20 mm to keep the computational time manageable. This resulted in current densities that are very low compared with actual experimental results for wider target-anode gaps. The current-voltage characteristics show a negative slope, in contrast with many experimental results. However, this is understandable from Gu and Lieberman's similarity equation. The negative slope appears to be due to the narrow gap.

  20. Time Consistent Strategies for Mean-Variance Asset-Liability Management Problems

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2013-01-01

    This paper studies optimal time-consistent investment strategies in multiperiod asset-liability management problems under the mean-variance criterion. By applying the time-consistent model of Chen et al. (2013) and employing a dynamic programming technique, we derive two time-consistent policies for asset-liability management problems, in a market with and without a riskless asset, respectively. We show that the presence of liability does affect the optimal strategy. More specifically, liability leads to a parallel shift of the optimal time-consistent investment policy. Moreover, for an arbitrarily risk-averse investor (under the variance criterion) with liability, the time-diversification effects can be ignored in a market with a riskless asset; however, they should be considered in a market without any riskless asset.

  1. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  2. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Vol. 7, No. 3 (2013), pp. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others: AVČR and CONACyT (CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  3. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
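
    The recursive-versus-direct comparison can be reproduced in miniature with an AR(1) Monte Carlo; the design below (process, sample size, horizon, replication count) is an illustrative assumption, not the paper's experimental setup:

```python
import numpy as np

def fit_slope(x, y):
    """OLS slope through the origin (the AR processes here are zero-mean)."""
    return float(x @ y / (x @ x))

def experiment(phi=0.9, n=200, h=5, reps=2000, seed=4):
    """Monte Carlo comparison of recursive vs direct multistep forecasting
    of an AR(1) process. Returns (bias, variance) of each strategy's
    h-step coefficient estimate; the true h-step coefficient is phi**h."""
    rng = np.random.default_rng(seed)
    rec, direct = [], []
    for _ in range(reps):
        e = rng.normal(size=n + 1)
        x = np.empty(n + 1)
        x[0] = e[0]
        for t in range(1, n + 1):
            x[t] = phi * x[t - 1] + e[t]
        phi1 = fit_slope(x[:-1], x[1:])   # recursive: one-step model, iterated
        rec.append(phi1**h)
        phih = fit_slope(x[:-h], x[h:])   # direct: separate model for horizon h
        direct.append(phih)
    rec, direct = np.array(rec), np.array(direct)
    true = phi**h
    return (rec.mean() - true, rec.var()), (direct.mean() - true, direct.var())

(rb, rv), (db, dv) = experiment()
print("recursive bias/var:", rb, rv)
print("direct    bias/var:", db, dv)
```

    With a correctly specified model, the direct estimator pays a variance premium for its robustness, one of the bias/variance trade-offs the paper quantifies.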

  4. Gap Year: Time off, with a Plan

    Science.gov (United States)

    Torpey, Elka Maria

    2009-01-01

    A gap year allows people to step off the usual educational or career path and reassess their future. According to people who have taken a gap year, the time away can be well worth it. This article can help a person decide whether to take a gap year and how to make the most of their time off. It describes what a gap year is, including its pros and…

  5. Application of variance reduction technique to nuclear transmutation system driven by accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    In Japan, the basic policy is to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economic efficiency and safety of disposal in strata can be expected. The Japan Atomic Energy Research Institute proposed a hybrid-type transmutation system, in which a high-intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target fuel transmutation system are outlined. The conceptual figures of both systems are shown. As the method of analysis, version 2.70 of the LAHET Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When carrying out the analysis of an accelerator-driven subcritical core in the energy range below 20 MeV, a variance reduction technique must be applied. (K.I.)

  6. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    Science.gov (United States)

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  7. Time-Consistent Strategies for a Multiperiod Mean-Variance Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Huiling Wu

    2013-01-01

    It has remained prevalent in past years to obtain precommitment strategies for Markowitz's mean-variance portfolio optimization problems, but not much is known about their time-consistent counterparts. This paper takes a step toward investigating the time-consistent Nash equilibrium strategies for a multiperiod mean-variance portfolio selection problem. Under the assumption that the risk aversion is, respectively, a constant and a function of the current wealth level, we obtain explicit expressions for the time-consistent Nash equilibrium strategy and the equilibrium value function. Many interesting properties of the time-consistent results are identified through numerical sensitivity analysis and by comparing them with the classical precommitment solutions.

  8. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    Science.gov (United States)

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.

  9. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  10. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation
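
    For reference, the static single-period frontier that such dynamic results generalize admits a closed form; the sketch below is a textbook computation with hypothetical inputs, not the paper's continuous-time derivation:

```python
import numpy as np

def frontier_variance(mu, cov, target):
    """Minimum portfolio variance achieving a target expected return,
    via the classical closed form for the static mean-variance frontier:
        sigma^2(m) = (a - 2*b*m + c*m^2) / (a*c - b^2),
    with a = mu' S^-1 mu, b = mu' S^-1 1, c = 1' S^-1 1."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a = mu @ inv @ mu
    b = mu @ inv @ ones
    c = ones @ inv @ ones
    return (a - 2 * b * target + c * target**2) / (a * c - b**2)

# Hypothetical three-asset inputs (expected returns and covariance matrix).
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])

# The global minimum-variance return is b/c; variance rises on either side.
inv = np.linalg.inv(cov)
ones = np.ones(3)
m_gmv = (mu @ inv @ ones) / (ones @ inv @ ones)
print(frontier_variance(mu, cov, m_gmv),
      frontier_variance(mu, cov, m_gmv + 0.02))
```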

  11. Construction and properties of a topological index for periodically driven time-reversal invariant 2D crystals

    Directory of Open Access Journals (Sweden)

    D. Carpentier

    2015-07-01

    We present mathematical details of the construction of a topological invariant for periodically driven two-dimensional lattice systems with time-reversal symmetry and quasienergy gaps, which was proposed recently by some of us. The invariant is represented by a gap-dependent Z2-valued index that is simply related to the Kane–Mele invariants of quasienergy bands but contains an extra information. As a byproduct, we prove new expressions for the two-dimensional Kane–Mele invariant relating the latter to Wess–Zumino amplitudes and the boundary gauge anomaly.

  12. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  13. Mean-Variance portfolio optimization when each asset has individual uncertain exit-time

    Directory of Open Access Journals (Sweden)

    Reza Keykhaei

    2016-12-01

    Full Text Available The standard Markowitz Mean-Variance optimization model is a single-period portfolio selection approach where the exit-time (or the time-horizon) is deterministic. In this paper we study the Mean-Variance portfolio selection problem with uncertain exit-time when each asset has an individual uncertain exit-time, which generalizes Markowitz's model. We provide some conditions under which the optimal portfolio of the generalized problem is independent of the exit-time distributions. Also, it is shown that under some general circumstances, the sets of optimal portfolios in the generalized model and the standard model are the same.

  14. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging, based on comparing the amplitude variance in certain windows, and use it to suppress artifacts as well as to find the correct location of passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, showing that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model, both of which show reasonably good results.

  15. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.

    2017-05-26

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging, based on comparing the amplitude variance in certain windows, and use it to suppress artifacts as well as to find the correct location of passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, showing that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model, both of which show reasonably good results.

  16. On discrete stochastic processes with long-lasting time dependence in the variance

    Science.gov (United States)

    Queirós, S. M. D.

    2008-11-01

    In this manuscript, we analytically and numerically study statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory of q_m-exponential form (e_{q_m=1}^x = e^x). Specifically, we inspect the self-correlation function of squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
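    A minimal ARCH(1) sketch illustrates the heteroskedastic mechanism the abstract builds on; the paper's generator replaces the single-lag recursion below with a long-lasting q-exponential memory kernel, and all parameter values here are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, n = 0.2, 0.5, 5000           # illustrative ARCH(1) parameters
x = np.zeros(n)
var = np.full(n, a0 / (1 - a1))      # start at the stationary variance
for t in range(1, n):
    var[t] = a0 + a1 * x[t - 1] ** 2     # variance remembers the last value
    x[t] = rng.normal() * np.sqrt(var[t])

# ARCH returns are uncorrelated, but their squares are not, and the
# marginal distribution is leptokurtic (positive excess kurtosis).
excess_kurtosis = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3.0
print(excess_kurtosis > 0)
```

For ARCH(1) with these parameters the theoretical excess kurtosis is 6·a1²/(1 − 3·a1²) = 6, so the fat tails are easy to see even in a short sample.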

  17. Defining the Costs of Reusable Flexible Ureteroscope Reprocessing Using Time-Driven Activity-Based Costing.

    Science.gov (United States)

    Isaacson, Dylan; Ahmad, Tessnim; Metzler, Ian; Tzou, David T; Taguchi, Kazumi; Usawachintachit, Manint; Zetumer, Samuel; Sherer, Benjamin; Stoller, Marshall; Chi, Thomas

    2017-10-01

    Careful decontamination and sterilization of reusable flexible ureteroscopes used in ureterorenoscopy cases prevent the spread of infectious pathogens to patients and technicians. However, inefficient reprocessing and unavailability of ureteroscopes sent out for repair can contribute to expensive operating room (OR) delays. Time-driven activity-based costing (TDABC) was applied to describe the time and costs involved in reprocessing. Direct observation and timing were performed for all steps in the reprocessing of reusable flexible ureteroscopes following operative procedures. The estimated time for each step by which damaged ureteroscopes identified during reprocessing are sent for repair was characterized through interviews with purchasing analyst staff. Process maps were created for reprocessing and repair, detailing individual step times and their variances. Cost data for labor and disposables used were applied to calculate per-minute and average step costs. Ten ureteroscopes were followed through reprocessing. Ureteroscope reprocessing averaged 229.0 ± 74.4 minutes, whereas sending a ureteroscope for repair required an estimated 143 minutes per repair. Most steps demonstrated low variance between timed observations. Ureteroscope drying was the longest and highest-variance step at 126.5 ± 55.7 minutes and was highly dependent on manual air flushing through the ureteroscope working channel and on ureteroscope positioning in the drying cabinet. Reprocessing costs totaled $96.13 per episode, including labor and disposable items. Utilizing TDABC delineates the full spectrum of costs associated with ureteroscope reprocessing and identifies areas for process improvement to drive value-based care. At our institution, ureteroscope drying was one clearly identified target area. Implementing training in ureteroscope drying technique could save up to 2 hours per reprocessing event, potentially preventing expensive OR delays.
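    The TDABC arithmetic itself is simple: each process-map step's observed time is multiplied by the per-minute cost of the resource performing it, and direct disposable costs are added. The step names, times, and rates below are illustrative assumptions, not the study's actual figures ($96.13 per episode, 229 minutes on average).

```python
# Time-driven activity-based costing: episode cost = sum over steps of
# (step minutes x per-minute resource rate) + direct costs.
steps = [
    # (step, minutes, cost_per_minute) -- all values illustrative
    ("bedside precleaning",      5, 0.60),
    ("manual cleaning",         35, 0.60),
    ("automated reprocessing",  45, 0.25),
    ("drying",                 125, 0.10),   # the longest, highest-variance step
    ("packaging and storage",   10, 0.60),
]
disposables = 12.00                          # per-episode supply cost (assumed)

def tdabc_cost(steps, direct_costs):
    return sum(minutes * rate for _, minutes, rate in steps) + direct_costs

print(round(tdabc_cost(steps, disposables), 2))  # 65.75
```

Because each step carries its own time and rate, shortening a single step (e.g., drying) translates directly into a per-episode saving.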

  18. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling

    Science.gov (United States)

    Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng

    2016-01-01

    Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974

  19. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  20. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    Full Text Available To solve the problem that dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirement of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. The Allan variances of the signal within the time-variant window are then computed to obtain the DAVAR of the FOG signal and describe the dynamic characteristics of the time-varying FOG signal. Additionally, a performance evaluation index of the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using different fixed window lengths, the proposed method identifies the change of the FOG signal with time effectively and improves the performance evaluation index by at least 30%.
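    A fixed-window DAVAR sketch shows the core computation; the paper's contribution is to let a fuzzy controller adapt the window length from the signal's derivatives, whereas the window below is fixed for clarity. The signal and all parameters are assumed.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate samples y at cluster size m."""
    n = len(y) // m
    a = y[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(a) ** 2)

def davar(y, window, m):
    """Dynamic Allan variance: Allan variance inside a sliding window."""
    half = window // 2
    return np.array([allan_variance(y[max(0, k - half): k + half], m)
                     for k in range(half, len(y) - half)])

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 2000)
y[1000:] *= 3.0                      # variance jump the DAVAR should reveal
d = davar(y, window=400, m=10)
print(bool(d[-1] > d[0]))            # later windows show larger Allan variance
```

With a fixed window there is a trade-off: a short window tracks the jump quickly but is noisy, while a long window is stable but slow, which is exactly what the time-variant window length is meant to resolve.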

  1. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic

  2. Discrimination of shot-noise-driven Poisson processes by external dead time - Application of radioluminescence from glass

    Science.gov (United States)

    Saleh, B. E. A.; Tavolacci, J. T.; Teich, M. C.

    1981-01-01

    Ways in which dead time can be used to constructively enhance or diminish the effects of point processes that display bunching in the shot-noise-driven doubly stochastic Poisson point process (SNDP) are discussed. Interrelations between photocount bunching arising in the SNDP and the antibunching character arising from dead-time effects are investigated. It is demonstrated that the dead-time-modified count mean and variance for an arbitrary doubly stochastic Poisson point process can be obtained from the Laplace transform of the single-fold and joint-moment-generating functions for the driving rate process. The theory is in good agreement with experimental values for radioluminescence radiation in fused silica, quartz, and glass, and the process has many applications in pulse, particle, and photon detection.

  3. Just-in-time Database-Driven Web Applications

    Science.gov (United States)

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  4. A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2015-01-01

    Full Text Available We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, the interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solution of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Compared with existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability cannot be fully hedged.

  5. Controllable Absorption and Dispersion Properties of an RF-driven Five-Level Atom in a Double-Band Photonic-Band-Gap Material

    International Nuclear Information System (INIS)

    Ding Chunling; Li Jiahua; Yang Xiaoxue

    2011-01-01

    The probe absorption-dispersion spectra of a radio-frequency (RF)-driven five-level atom embedded in a photonic crystal are investigated by considering the isotropic double-band photonic-band-gap (PBG) reservoir. In the model used, the two transitions are coupled by the upper and lower bands of the PBG material, respectively, leading to some curious phenomena. Numerical simulations are performed for the optical spectra. It is found that when one transition frequency is inside the band gap and the other is outside the gap, three peaks emerge in the absorption spectra. However, when both transition frequencies lie inside or outside the band gap, the spectra display four absorption profiles. In particular, two sharp peaks appear in the spectra when both transition frequencies lie inside the band gap. The influences of the intensity and frequency of the RF-driven field on the absorptive and dispersive response are analyzed under different band-edge positions. It is found that a transparency window appears in the absorption spectrum, accompanied by a very steep variation of the dispersion profile, when the system parameters are adjusted. These results show that the absorption-dispersion properties of the system depend strongly on the RF-induced quantum interference and the density of states (DOS) of the PBG reservoir.

  6. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  7. Output gap uncertainty and real-time monetary policy

    Directory of Open Access Journals (Sweden)

    Francesco Grigoli

    2015-12-01

    Full Text Available Output gap estimates are subject to a wide range of uncertainty owing principally to the difficulty in distinguishing between cycle and trend in real time. We show that country desks tend to overestimate economic slack, especially during recessions, and that uncertainty in initial output gap estimates persists for several years. Only a small share of output gap revisions is predictable based on output dynamics, data quality, and policy frameworks. We also show that for a group of Latin American inflation targeters the prescriptions from monetary policy rules are subject to large changes due to revised output gap estimates. These revisions explain a sizable proportion of the deviation of inflation from target, suggesting that this information is not accounted for in real-time policy decisions.

  8. Electron mobility variance in the presence of an electric field: Electron-phonon field-induced tunnel scattering

    International Nuclear Information System (INIS)

    Melkonyan, S.V.

    2012-01-01

    The problem of electron mobility variance is discussed. It is established that in equilibrium semiconductors the mobility variance is infinite, and that the cause of this divergence is the threshold of phonon emission. The theory of electron-phonon interaction in the presence of an electric field is developed. A new mechanism of electron scattering, called electron-phonon field-induced tunnel (FIT) scattering, is observed. The effect of electron-phonon FIT scattering is explained in terms of penetration of the electron wave function into the semiconductor band gap in the presence of an electric field. New and more general expressions for the electron-non-polar optical phonon scattering probability and relaxation time are obtained. The results show that FIT transitions are of principal importance for the mobility fluctuation theory: the mobility variance becomes finite.

  9. Speckle-scale focusing in the diffusive regime with time reversal of variance-encoded light (TROVE)

    Science.gov (United States)

    Judkewitz, Benjamin; Wang, Ying Min; Horstmeyer, Roarke; Mathy, Alexandre; Yang, Changhuei

    2013-04-01

    Focusing of light in the diffusive regime inside scattering media has long been considered impossible. Recently, this limitation has been overcome with time reversal of ultrasound-encoded light (TRUE), but the resolution of this approach is fundamentally limited by the large number of optical modes within the ultrasound focus. Here, we introduce a new approach, time reversal of variance-encoded light (TROVE), which demixes these spatial modes by variance encoding to break the resolution barrier imposed by the ultrasound. By encoding individual spatial modes inside the scattering sample with unique variances, we effectively uncouple the system resolution from the size of the ultrasound focus. This enables us to demonstrate optical focusing and imaging with diffuse light at an unprecedented, speckle-scale lateral resolution of ~5 µm.

  10. Speckle-scale focusing in the diffusive regime with time-reversal of variance-encoded light (TROVE).

    Science.gov (United States)

    Judkewitz, Benjamin; Wang, Ying Min; Horstmeyer, Roarke; Mathy, Alexandre; Yang, Changhuei

    2013-04-01

    Focusing of light in the diffusive regime inside scattering media has long been considered impossible. Recently, this limitation has been overcome with time reversal of ultrasound-encoded light (TRUE), but the resolution of this approach is fundamentally limited by the large number of optical modes within the ultrasound focus. Here, we introduce a new approach, time reversal of variance-encoded light (TROVE), which demixes these spatial modes by variance-encoding to break the resolution barrier imposed by the ultrasound. By encoding individual spatial modes inside the scattering sample with unique variances, we effectively uncouple the system resolution from the size of the ultrasound focus. This enables us to demonstrate optical focusing and imaging with diffuse light at unprecedented, speckle-scale lateral resolution of ~ 5 μm.

  11. Noise-Driven Phenotypic Heterogeneity with Finite Correlation Time in Clonal Populations.

    Directory of Open Access Journals (Sweden)

    UnJin Lee

    Full Text Available There has been increasing awareness in the wider biological community of the key role that clonal phenotypic heterogeneity plays in phenomena such as cellular bet-hedging and decision making, as in the case of the phage-λ lysis/lysogeny and B. subtilis competence/vegetative pathways. Here, we report on the effect of stochasticity in growth rate, cellular memory/intermittency, and its relation to phenotypic heterogeneity. We first present a linear stochastic differential model with finite auto-correlation time, where a randomly fluctuating growth rate with a negative average is shown to result in exponential growth for sufficiently large fluctuations in growth rate. We then present a non-linear stochastic self-regulation model where the loss of coherent self-regulation and an increase in noise can induce a shift from bounded to unbounded growth. An important consequence of these models is that while the average change in phenotype may not differ for various parameter sets, the variance of the resulting distributions may change considerably. This demonstrates the necessity of understanding the influence of variance and heterogeneity within seemingly identical clonal populations, while providing a mechanism for varying functional consequences of such heterogeneity. Our results highlight the importance of a paradigm shift from a deterministic to a probabilistic view of clonality in understanding selection as an optimization problem on noise-driven processes, resulting in a wide range of biological implications, from robustness to environmental stress to the development of drug resistance.

  12. Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.

    Science.gov (United States)

    Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong

    2018-03-01

    Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. The flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practice one may want to exploit the fact that CPs often take the form of task-time matrices. This paper presents a new method that parses complex BPMN models and aligns traces to the models heuristically. A case study on variance analysis is undertaken, using a CP from practice and two large sets of patient data from an electronic medical record (EMR) database. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data is feasible, whereas that was not the case for existing analysis techniques. We also provide meaningful insights for further improvement.

  13. Band Gap Distortion in Semiconductors Strongly Driven by Intense Mid-Infrared Laser Fields

    Science.gov (United States)

    Kono, J.; Chin, A. H.

    2000-03-01

    Crystalline solids non-resonantly driven by intense time-periodic electric fields are predicted to exhibit unusual band-gap distortion.(e.g., Y. Yacoby, Phys. Rev. 169, 610 (1968); L.C.M. Miranda, Solid State Commun. 45, 783 (1983); J.Z. Kaminski, Acta Physica Polonica A 83, 495(1993).) Such non-perturbative effects have not been observed to date because of the unavoidable sample damage due to the very high intensity required using conventional lasers ( 1 eV photon energy). Here, we report the first clear evidence of laser-induced bandgap shrinkage in semiconductors under intense mid-infrared (MIR) laser fields. The use of long-wavelength light reduces the required intensity and prohibits strong interband absorption, thereby avoiding the damage problem. The significant sub-bandgap absorption persists only during the existence of the MIR laser pulse, indicating the virtual nature of the effect. We show that this particular example of non-perturbative behavior, known as the dynamical Franz-Keldysh effect, occurs when the effective ponderomotive potential energy is comparable to the photon energy of the applied field. This work was supported by ONR, NSF, JST and NEDO.

  14. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
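    Step (2), estimating random error variance from replicate data, amounts to pooling the within-item variances (the "within" mean square of a one-way analysis of variance). The measurement values below are invented for illustration.

```python
import numpy as np

# Replicate measurements of the same items: the spread within each item
# estimates the random error variance. (Comparing item means with standard
# reference values would estimate systematic error instead.)
replicates = np.array([
    [10.1, 10.3, 10.2],   # item 1 measured three times (illustrative)
    [ 9.8,  9.7,  9.9],   # item 2
    [10.5, 10.4, 10.6],   # item 3
])
# Pooled within-item variance, using the unbiased (ddof=1) estimator per item
random_error_var = replicates.var(axis=1, ddof=1).mean()
print(round(random_error_var, 4))  # 0.01
```

With unequal replicate counts the per-item variances would be weighted by their degrees of freedom rather than averaged directly.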

  15. Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump

    International Nuclear Information System (INIS)

    Kharroubi, Idris; Lim, Thomas; Ngoupeyou, Armand

    2013-01-01

    In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory

  16. Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump

    Energy Technology Data Exchange (ETDEWEB)

    Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr [Université Paris Dauphine, CEREMADE, CNRS UMR 7534 (France); Lim, Thomas, E-mail: lim@ensiie.fr [Université d’Evry and ENSIIE, Laboratoire d’Analyse et Probabilités (France); Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr [Université Paris 7, Laboratoire de Probabilités et Modèles Aléatoires (France)

    2013-12-15

    In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.

  17. Multi-Period Mean-Variance Portfolio Selection with Uncertain Time Horizon When Returns Are Serially Correlated

    Directory of Open Access Journals (Sweden)

    Ling Zhang

    2012-01-01

    Full Text Available We study a multi-period mean-variance portfolio selection problem with an uncertain time horizon and serial correlations. Firstly, we embed the nonseparable multi-period optimization problem into a separable quadratic optimization problem with uncertain exit time by employing the embedding technique of Li and Ng (2000). Then we convert the latter into an optimization problem with deterministic exit time. Finally, using the dynamic programming approach, we explicitly derive the optimal strategy and the efficient frontier for the dynamic mean-variance optimization problem. A numerical example with an AR(1) return process is also presented, which shows that both the uncertainty of exit time and the serial correlations of returns have significant impacts on the optimal strategy and the efficient frontier.

  18. Influence of Iatrogenic Gaps, Cement Type, and Time on Microleakage of Cast Posts Using Spectrophotometer and Glucose Filtration Measurements.

    Science.gov (United States)

    Al-Madi, Ebtissam M; Al-Saleh, Samar A; Al-Khudairy, Reem I; Aba-Hussein, Taibah W

    2018-04-06

    To determine the influence of iatrogenic gaps, type of cement, and time on microleakage of cast posts using spectrophotometer and glucose filtration measurements. Forty-eight single-rooted teeth were divided into eight groups of six teeth each. Teeth were instrumented and obturated, and a cast post was fabricated. In addition to two control groups (positive and negative), six experimental groups were prepared: in four groups, an artificial 2- to 3-mm gap was created between the post and residual gutta percha (GP), and two groups were prepared with intimate contact between the post and residual GP. Posts were cemented with either zinc phosphate cement or resin cement. Leakage through the post after 1, 8, 14, and 20 days was measured using a glucose penetration model with two different reading methods. Mixed analysis of variance tests were performed to analyze the data. The presence of a gap between the apical end of the post and the most coronal portion of the GP remaining in the root canal after post space preparation increased microleakage significantly. However, microleakage was significantly less when the gap was refilled with GP compared to no gap. There was no difference in leakage between the luting cements used. It was concluded that none of the cements were able to prevent microleakage. However, the addition of GP to residual GP did improve the sealing ability.

  19. Heterogeneous network epidemics: real-time growth, variance and extinction of infection.

    Science.gov (United States)

    Ball, Frank; House, Thomas

    2017-09-01

    Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that is numerically fast to solve compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution: in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
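
    The closed-form results in this record are for a multitype branching process on the configuration model; reproducing them here would be lengthy. As a hedged single-type illustration (made-up rates λ and μ, not the paper's model), the sketch below checks a Monte Carlo estimate of the mean number of infectious individuals against the branching-process formula E[I(t)] = e^{(λ-μ)t}:

```python
import math
import random

def infectious_at(t_end, lam, mu, n0, rng):
    """Gillespie simulation of a linear birth-death (branching) process.

    Each infectious individual transmits at rate `lam` and recovers at
    rate `mu`: the standard approximation of early SIR spread while
    infections are rare relative to the population size.
    """
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(n * (lam + mu))  # waiting time to next event
        if t >= t_end:
            break
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n

rng = random.Random(42)
lam, mu, t_end = 0.6, 0.3, 3.0
runs = [infectious_at(t_end, lam, mu, 1, rng) for _ in range(5000)]

emp_mean = sum(runs) / len(runs)
ana_mean = math.exp((lam - mu) * t_end)  # E[I(t)] = e^{(lam - mu) t}
# Var[I(t)] = ((lam + mu)/(lam - mu)) * e^{(lam-mu)t} * (e^{(lam-mu)t} - 1)
ana_var = ((lam + mu) / (lam - mu)) * ana_mean * (ana_mean - 1.0)
```

    The Monte Carlo mean converges slowly, which is the paper's point: the analytic mean, variance and extinction probabilities come at negligible cost by comparison.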

  20. Electromagnetic Properties Analysis on Hybrid-driven System of Electromagnetic Motor

    Science.gov (United States)

    Zhao, Jingbo; Han, Bingyuan; Bei, Shaoyi

    2018-01-01

    The hybrid-driven system, composed of permanent magnets and electromagnets, applied in the electromagnetic motor was analyzed. An equivalent magnetic circuit was used to establish mathematical models of the hybrid-driven system, and from these models the air-gap flux, air-gap magnetic flux density, and electromagnetic force were derived. Taking the air-gap magnetic flux density and electromagnetic force as the main research objects, the hybrid-driven system was studied, and its electromagnetic properties under different working current modes were examined. The results show that the hybrid-driven system can improve the air-gap magnetic flux density and electromagnetic force more effectively while guaranteeing output stability; the effectiveness and feasibility of the hybrid-driven system are verified, providing a theoretical basis for its design.

  1. The link between response time and preference, variance and processing heterogeneity in stated choice experiments

    DEFF Research Database (Denmark)

    Campbell, Danny; Mørkbak, Morten Raun; Olsen, Søren Bøye

    2018-01-01

    In this article we utilize the time respondents require to answer a self-administered online stated preference survey. While the effects of response time have been previously explored, this article proposes a different approach that explicitly recognizes the highly equivocal relationship between response time and respondents' choices. In particular, we attempt to disentangle preference, variance and processing heterogeneity and explore whether response time helps to explain these three types of heterogeneity. For this, we divide the data (ordered by response time) into approximately equal-sized subsets, and then derive different class membership probabilities for each subset. We estimate a large number of candidate models and subsequently conduct a frequentist-based model averaging approach using information criteria to derive weights of evidence for each model. Our findings show a clear link between response time and utility coefficients, error variance and processing strategies. Our results thus emphasize the importance of considering response time when modeling stated choice data.

  2. On robust multi-period pre-commitment and time-consistent mean-variance portfolio optimization

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2017-01-01

    We consider robust pre-commitment and time-consistent mean-variance optimal asset allocation strategies that are required to perform well also in a worst-case scenario regarding the development of the asset price. We show that worst-case scenarios for both strategies can be found by

  3. Diffusion-advection within dynamic biological gaps driven by structural motion

    Science.gov (United States)

    Asaro, Robert J.; Zhu, Qiang; Lin, Kuanpo

    2018-04-01

    To study the significance of advection in the transport of solutes, or particles, within thin biological gaps (channels), we examine theoretically the process driven by stochastic fluid flow caused by random thermal structural motion, and we compare it with transport via diffusion. The model geometry chosen resembles the synaptic cleft; this choice is motivated by the cleft's readily modeled structure, which allows for well-defined mechanical and physical features that control the advection process. Our analysis defines a Péclet-like number, AD, that quantifies the ratio of time scales of advection versus diffusion. The analysis also defines another parameter, AM, which quantifies the full potential extent of advection in the absence of diffusion. These parameters provide a clear and compact description of the interplay among the well-defined structural, geometric, and physical properties vis-à-vis the advection versus diffusion process. For example, it is found that AD ~ 1/R², where R is the cleft diameter and hence diffusion distance. This curious, and perhaps unexpected, result follows from the dependence on R of the structural motion that drives fluid flow. AM, on the other hand, is directly related to (essentially proportional to) the energetic input into structural motion, and thereby to fluid flow, as well as to the mechanical stiffness of the cleftlike structure. Our model analysis thus provides unambiguous insight into the prospect of competition of advection versus diffusion within biological gaplike structures. The importance of the random, versus a regular, nature of structural motion and of the resulting transient nature of advection under random motion is made clear in our analysis. Further, by quantifying the effects of geometric and physical properties on the competition between advection and diffusion, our results clearly demonstrate the important role that metabolic energy (ATP) plays in this competitive process.

  4. A Mean-Variance Explanation of FDI Flows to Developing Countries

    DEFF Research Database (Denmark)

    Sunesen, Eva Rytter

    country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regional factors capture the interdependence between countries. The model implies that FDI is driven...

  5. Physics of energetic particle-driven instabilities in the START spherical tokamak

    International Nuclear Information System (INIS)

    McClements, K.G.; Gryaznevich, M.P.; Akers, R.J.; Appel, L.C.; Counsell, G.F.; Roach, C.M.; Sharapov, S.E.; Majeski, R.

    1999-01-01

    The recent use of neutral beam injection (NBI) in the UKAEA small tight aspect ratio tokamak (START) has provided the first opportunity to study experimentally the physics of energetic ions in spherical tokamak (ST) plasmas. In such devices the ratio of major radius to minor radius R0/a is of order unity. Several distinct classes of NBI-driven instability have been observed at frequencies up to 1 MHz during START discharges. These observations are described, and possible interpretations are given. Equilibrium data, corresponding to times of beam-driven wave activity, are used to compute continuous shear Alfven spectra: toroidicity and high plasma beta give rise to wide spectral gaps, extending up to frequencies of several times the Alfven gap frequency. In each of these gaps Alfvenic instabilities could, in principle, be driven by energetic ions. Chirping modes observed at high beta in this frequency range have bandwidths comparable to or greater than the gap widths. Instability drive in START is provided by beam ion pressure gradients (as in conventional tokamaks), and also by positive gradients in beam ion velocity distributions, which arise from velocity-dependent charge exchange losses. It is shown that fishbone-like bursts observed at a few tens of kHz can be attributed to internal kink mode excitation by passing beam ions, while narrow-band emission at several hundred kHz may be due to excitation of fast Alfven (magnetosonic) eigenmodes. In the light of our understanding of energetic particle-driven instabilities in START, the possible existence of such instabilities in larger STs is discussed. (author)

  6. Analysis of rhythmic variance - ANORVA. A new simple method for detecting rhythms in biological time series

    Directory of Open Access Journals (Sweden)

    Peter Celec

    2004-01-01

    Full Text Available Cyclic variations of variables are ubiquitous in biomedical science. A number of methods for detecting rhythms have been developed, but they are often difficult to interpret. A simple procedure for detecting cyclic variations in biological time series and quantification of their probability is presented here. Analysis of rhythmic variance (ANORVA) is based on the premise that the variance in groups of data from rhythmic variables is low when a time distance of one period exists between the data entries. A detailed stepwise calculation is presented, including data entry and preparation, variance calculation, and difference testing. An example for the application of the procedure is provided, and a real dataset of the number of papers published per day in January 2003 using selected keywords is compared to randomized datasets. Randomized datasets show no cyclic variations. The number of papers published daily, however, shows a clear and significant (p<0.03) circaseptan (period of 7 days) rhythm, probably of social origin.
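
    The grouping logic described in this record is easy to prototype. Below is a minimal, hedged sketch of the ANORVA idea (not the authors' exact procedure; the synthetic dataset and the shuffle-based empirical p-value are illustrative assumptions):

```python
import random
import statistics

def anorva_score(series, period):
    """Mean within-group variance when the series is folded at `period`.

    ANORVA's premise: entries separated by exactly one period fall into
    the same group, so for the true period the groups have low variance.
    """
    groups = [series[start::period] for start in range(period)]
    return statistics.mean(statistics.pvariance(g) for g in groups if len(g) > 1)

def anorva(series, periods, n_random=200, seed=0):
    """Empirical p-value per candidate period versus randomized datasets:
    the fraction of shuffled copies scoring at least as low as the data."""
    rng = random.Random(seed)
    results = {}
    for period in periods:
        observed = anorva_score(series, period)
        shuffled = list(series)
        hits = 0
        for _ in range(n_random):
            rng.shuffle(shuffled)
            if anorva_score(shuffled, period) <= observed:
                hits += 1
        results[period] = hits / n_random
    return results

# Synthetic daily counts with a circaseptan (7-day) rhythm plus noise
rng = random.Random(1)
data = [10.0 + (5.0 if day % 7 == 5 else 0.0) + rng.random() for day in range(84)]
scores = anorva(data, periods=[5, 6, 7, 8])
```

    Folding at the true 7-day period puts each phase into its own group, so the within-group variance collapses and the shuffles almost never beat it, while wrong periods score like randomized data.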

  7. Detection of rheumatoid arthritis by evaluation of normalized variances of fluorescence time correlation functions

    Science.gov (United States)

    Dziekan, Thomas; Weissbach, Carmen; Voigt, Jan; Ebert, Bernd; MacDonald, Rainer; Bahner, Malte L.; Mahler, Marianne; Schirner, Michael; Berliner, Michael; Berliner, Birgitt; Osel, Jens; Osel, Ilka

    2011-07-01

    Fluorescence imaging using the dye indocyanine green as a contrast agent was investigated in a prospective clinical study for the detection of rheumatoid arthritis. Normalized variances of correlated time series of fluorescence intensities describing the bolus kinetics of the contrast agent in certain regions of interest were analyzed to differentiate healthy from inflamed finger joints. These values are determined using a robust, parameter-free algorithm. We found that the normalized variance of correlation functions improves the differentiation between healthy joints of volunteers and joints with rheumatoid arthritis of patients by about 10% compared to, e.g., ratios of areas under the curves of raw data.

  8. On the Formation of Multiple Concentric Rings and Gaps in Protoplanetary Disks

    Science.gov (United States)

    Bae, Jaehan; Zhu, Zhaohuan; Hartmann, Lee

    2017-12-01

    As spiral waves driven by a planet in a gaseous disk steepen into a shock, they deposit angular momentum, opening a gap in the disk. This has been well studied using both linear theory and numerical simulations, but so far only for the primary spiral arm: the one directly attached to the planet. Using 2D hydrodynamic simulations, we show that the secondary and tertiary arms driven by a planet can also open gaps as they steepen into shocks. The depths of the secondary/tertiary gaps in surface density grow with time in a low-viscosity disk (α = 5 × 10^-5), so even low-mass planets (e.g., super-Earth or mini-Neptune-mass) embedded in the disk can open multiple observable gaps, provided that sufficient time has passed. Applying our results to the HL Tau disk, we show that a single 30 Earth-mass planet embedded in the ring at 68.8 au (B5) can reasonably well reproduce the positions of the two major gaps at 13.2 and 32.3 au (D1 and D2), and roughly reproduce two other major gaps at 64.2 and 74.7 au (D5 and D6) seen in the mm continuum. The positions of secondary/tertiary gaps are found to be sensitive to the planetary mass and the disk temperature profile, so with accurate observational measurements of the temperature structure, the positions of multiple gaps can be used to constrain the mass of the planet. We also comment on the gaps seen in the TW Hya and HD 163296 disks.

  9. Time structure measurement of the ATLAS RPC gap current

    CERN Document Server

    Aielli, G; The ATLAS collaboration

    2010-01-01

    The current absorbed by an RPC represents the sum of the charge delivered in the gas by the ionizing events affecting the gap, integrated by the electrodes' time constant. This is typically of the order of tens of ms, thus dominating the gas discharge time scale and characterizing the granular structure observed in the current signal. In most cases this structure is considered as noise to be further integrated to observe the average gap current, often used as a detector monitoring parameter or to precisely measure the effects of the uncorrelated background rate. A remarkable case is given if a large number of particles passes through the detector within one integration time constant, producing a current peak clearly detectable above the average noise. The ATLAS RPC system is equipped with a dedicated current monitoring based on an ADC capable of reading out the average value as well as the transient peaks of the currents above a given threshold. A study of such data was used to spot the gap HV noise, to monitor the...

  10. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil. Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  11. Gender Gaps in High School Students' Homework Time

    Science.gov (United States)

    Gershenson, Seth; Holt, Stephen B.

    2015-01-01

    Gender differences in human capital investments made outside of the traditional school day suggest that males and females consume, respond to, and form habits relating to education differently. We document robust, statistically significant one-hour weekly gender gaps in secondary students' non-school study time using time diary data from the…

  12. A Four-Gap Glass-RPC Time-of-Flight Array with 90 ps Time Resolution

    CERN Document Server

    Akindinov, A; Formenti, F; Golovine, V; Klempt, W; Kluge, A; Martemyanov, A N; Martinengo, P; Pinhão, J; Smirnitsky, A V; Spegel, M; Szymanski, P; Zalipska, J

    2001-01-01

    In this paper, we describe the performance of a prototype developed in the context of the ALICE time-of-flight research and development system. The detector module consists of a 32-channel array of 3 x 3 cm2 glass resistive plate chamber (RPC) cells, each of which has four accurately spaced gaps of 0.3 mm thickness arranged as a pair of double-gap resistive plate chambers. Operated with a nonflammable gas mixture at atmospheric pressure, the system achieved a time resolution of 90 ps at 98% efficiency with good uniformity and moderate crosstalk. This result shows the feasibility of large-area high-resolution time-of-flight systems based on RPCs at affordable cost.

  13. Age-dependent changes in mean and variance of gene expression across tissues in a twin cohort.

    Science.gov (United States)

    Viñuela, Ana; Brown, Andrew A; Buil, Alfonso; Tsai, Pei-Chien; Davies, Matthew N; Bell, Jordana T; Dermitzakis, Emmanouil T; Spector, Timothy D; Small, Kerrin S

    2018-02-15

    Changes in the mean and variance of gene expression with age have consequences for healthy aging and disease development. Age-dependent changes in phenotypic variance have been associated with a decline in regulatory functions leading to increase in disease risk. Here, we investigate age-related mean and variance changes in gene expression measured by RNA-seq of fat, skin, whole blood and derived lymphoblastoid cell lines (LCLs) expression from 855 adult female twins. We see evidence of up to 60% of age effects on transcription levels shared across tissues, and 47% of those on splicing. Using gene expression variance and discordance between genetically identical MZ twin pairs, we identify 137 genes with age-related changes in variance and 42 genes with age-related discordance between co-twins, implying the latter are driven by environmental effects. We identify four eQTLs whose effect on expression is age-dependent (FDR 5%). Combined, these results show a complicated mix of environmental and genetically driven changes in expression with age. Using the twin structure in our data, we show that additive genetic effects explain considerably more of the variance in gene expression than aging, but less than other environmental factors, potentially explaining why reliable expression-derived biomarkers for healthy-aging have proved elusive compared with those derived from methylation. © The Author(s) 2017. Published by Oxford University Press.

  14. Universal interaction-driven gap in metallic carbon nanotubes

    Science.gov (United States)

    Senger, Mitchell J.; McCulley, Daniel R.; Lotfizadeh, Neda; Deshpande, Vikram V.; Minot, Ethan D.

    2018-02-01

    Suspended metallic carbon nanotubes (m-CNTs) exhibit a remarkably large transport gap that can exceed 100 meV. Both experiment and theory suggest that strong electron-electron interactions play a crucial role in generating this electronic structure. To further understand this strongly interacting system, we have performed electronic measurements of suspended m-CNTs with known diameter and chiral angle. Spectrally resolved photocurrent microscopy was used to determine m-CNT structure. The room-temperature electrical characteristics of 18 individually contacted m-CNTs were compared to their respective diameter and chiral angle. At the charge neutrality point, we observe a peak in m-CNT resistance that scales exponentially with inverse diameter. Using a thermally activated transport model, we estimate that the transport gap is (450 meV nm)/D, where D is CNT diameter. We find no correlation between the gap and the CNT chiral angle. Our results add important constraints to theories attempting to describe the electronic structure of m-CNTs.
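
    The reported scaling can be turned into a back-of-envelope estimator. The sketch below applies the record's E_gap = (450 meV nm)/D rule and, as an illustrative assumption (not the paper's exact fit), a simple exp(E_gap / 2kT) activation factor:

```python
import math

K_B_MEV_PER_K = 0.08617  # Boltzmann constant in meV/K

def transport_gap_mev(diameter_nm):
    """Empirical scaling quoted in the abstract: E_gap = (450 meV nm) / D."""
    return 450.0 / diameter_nm

def resistance_enhancement(gap_mev, temp_k=300.0):
    """Thermally activated suppression of conduction at charge neutrality,
    ~exp(E_gap / (2 k_B T)). A textbook two-band activation form used here
    for illustration only; the paper's fitting details may differ."""
    return math.exp(gap_mev / (2.0 * K_B_MEV_PER_K * temp_k))

gap_1p5nm = transport_gap_mev(1.5)           # 300 meV for a 1.5 nm tube
boost = resistance_enhancement(gap_1p5nm)    # room-temperature resistance peak factor
```

    The exponential dependence on 1/D is what makes the resistance peak such a sensitive probe of the gap: halving the diameter doubles the gap and squares the activation factor.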

  15. Driven Quantum Dynamics: Will It Blend?

    Directory of Open Access Journals (Sweden)

    Leonardo Banchi

    2017-10-01

    Full Text Available Randomness is an essential tool in many disciplines of modern sciences, such as cryptography, black hole physics, random matrix theory, and Monte Carlo sampling. In quantum systems, random operations can be obtained via random circuits thanks to so-called q-designs and play a central role in condensed-matter physics and in the fast scrambling conjecture for black holes. Here, we consider a more physically motivated way of generating random evolutions by exploiting the many-body dynamics of a quantum system driven with stochastic external pulses. We combine techniques from quantum control, open quantum systems, and exactly solvable models (via the Bethe ansatz) to generate Haar-uniform random operations in driven many-body systems. We show that any fully controllable system converges to a unitary q-design in the long-time limit. Moreover, we study the convergence time of a driven spin chain by mapping its random evolution into a semigroup with an integrable Liouvillian and finding its gap. Remarkably, we find via Bethe-ansatz techniques that the gap is independent of q. We use mean-field techniques to argue that this property may be typical for other controllable systems, although we explicitly construct counterexamples via symmetry-breaking arguments to show that this is not always the case. Our findings open up new physical methods to transform classical randomness into quantum randomness, via a combination of quantum many-body dynamics and random driving.

  16. Convergence in Sleep Time Accomplished? Gender Gap in Sleep Time for Middle-Aged Adults in Korea.

    Science.gov (United States)

    Cha, Seung-Eun; Eun, Ki-Soo

    2018-04-19

    Although the gender gap in sleep time has narrowed significantly in the last decade, middle-aged women between ages 35 and 60 still sleep less than their male counterparts in Korea. This study examines and provides evidence for factors contributing to the gender gap in this age group. Using Korean Time Use Survey (KTUS) data from 2004, 2009 and 2014, we find that middle-aged women’s difficulty in managing work-life balance and traditional role expectations placed upon women are the main causes of the gender gap in sleep time. The decomposition analysis reveals that the improved socioeconomic status and recent changes in familial expectations for women may have helped them sleep more than in the past. However, there remain fundamental differences in attitude and time use patterns between men and women that prevent middle-aged women from getting the same amount of sleep.

  17. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    Science.gov (United States)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
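
    For squared error, the per-pixel decomposition has the familiar closed form MSE = bias² + variance (plus an irreducible noise term, omitted here). A minimal stdlib sketch, assuming an ensemble of predictions from models trained on different bootstrap samples, with a made-up two-pixel example:

```python
import statistics

def per_pixel_bvd(predictions, truth):
    """Per-pixel bias-variance decomposition over an ensemble of models.

    `predictions` is a list of pixel-value lists, one per model trained
    on a different bootstrap sample; `truth` holds reference values.
    For squared error: expected MSE = bias^2 + variance (noise omitted).
    """
    bias, var, mse = [], [], []
    for p in range(len(truth)):
        ens = [pred[p] for pred in predictions]
        mean_pred = statistics.fmean(ens)
        bias.append(mean_pred - truth[p])                       # bias map entry
        var.append(statistics.pvariance(ens, mu=mean_pred))     # variance map entry
        mse.append(statistics.fmean((v - truth[p]) ** 2 for v in ens))
    return bias, var, mse

# Toy two-pixel example: pixel 0 is purely biased, pixel 1 purely noisy
preds = [[30.0, 48.0], [30.0, 52.0], [30.0, 50.0]]
truth = [25.0, 50.0]
bias, var, mse = per_pixel_bvd(preds, truth)
```

    Pixel 0's error is all bias (a non-stationarity signal), pixel 1's is all variance (a signal that bagging or more local training samples could help), which is exactly the diagnostic distinction the study exploits.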

  18. Time-Driven Activity-Based Costing in Emergency Medicine.

    Science.gov (United States)

    Yun, Brian J; Prabhakar, Anand M; Warsh, Jonathan; Kaplan, Robert; Brennan, John; Dempsey, Kyle E; Raja, Ali S

    2016-06-01

    Value in emergency medicine is determined by both patient-important outcomes and the costs associated with achieving them. However, measuring true costs is challenging. Without an understanding of costs, emergency department (ED) leaders will be unable to determine which interventions might improve value for their patients. Although ongoing research may determine which outcomes are meaningful, an accurate costing system is also needed. This article reviews current costing mechanisms in the ED and their pitfalls. It then describes how time-driven activity-based costing may be superior to these current costing systems. Time-driven activity-based costing, in addition to being a more accurate costing system, can be used for process improvements in the ED. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  19. PhoneGap 3 beginner's guide

    CERN Document Server

    Natili, Giorgio

    2013-01-01

    Written in a friendly, example-driven Beginner's Guide format, this book provides plenty of step-by-step instructions to help you get started with PhoneGap. If you are a web developer or mobile application developer interested in an examples-based approach to learning mobile application development basics with PhoneGap, then this book is for you.

  20. Real-Time Imaging of Gap Progress during and after Composite Polymerization.

    Science.gov (United States)

    Hayashi, J; Shimada, Y; Tagami, J; Sumi, Y; Sadr, A

    2017-08-01

    The aims of this study were to observe the behavior of composite and formation of gaps during and immediately after light polymerization using swept source optical coherence tomography (OCT) and to compare the interfacial integrity of adhesives in cavities through 3-dimensional (3D) image analysis. Forty tapered cylindrical cavities (4-mm diameter, 2-mm depth) were prepared in bovine incisors and restored using Bond Force (BF), Scotchbond Universal Adhesive (SBU), OptiBond XTR (XTR), or Clearfil SE Bond 2 (SE2), followed by Estelite Flow Quick flowable composite. Real-time imaging was performed at the center of restoration by the OCT system (laser center wavelength: 1,330 nm; frequency: 30 kHz) during and up to 10 min after light curing. The 3D scanning was performed 0, 1, 3, 5, and 10 min after light curing. The percentages of sealed enamel and dentin interface area (E%, D%) were calculated using Amira software. In real-time videos, the initial gaps appeared as a bright scattered area mainly on the dentin floor and rapidly progressed along the cavity floor. The timing, rate, and extent of gap formation were different among the specimens. From 3D visualization, gap progress could be seen on both enamel and dentin even after irradiation; furthermore, typical toroidal gap patterns appeared at the dentin floor of BF and SBU. XTR and SE2 showed nearly perfect sealing performance on the dentin floor up to the 10 min that images were recorded. From quantitative analysis, SE2 and XTR showed significantly higher E% and D% than other groups. SBU showed the smallest E% and BF showed a significantly smaller D% than other groups (P ...). Real-time imaging of gap progress during composite placement and 3D quantification of interfacial gaps were implemented within the experimental limitations. Interfacial gap formation during polymerization of the composite depended on the adhesive system used. The formed gaps continued to propagate after composite light curing finished.

  1. Documentation Driven Development for Complex Real-Time Systems

    Science.gov (United States)

    2004-12-01

    This paper presents a novel approach for the development of complex real-time systems, called the documentation-driven development (DDD) approach. This... time systems. DDD will also support automated software generation based on a computational model and some relevant techniques. DDD includes two main... stakeholders to be easily involved in development processes and, therefore, significantly improve the agility of software development for complex real-time systems.

  2. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed. This fact creates difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, even though their effect is not too decisive. In this investigation the genotypic variance and the genotype-environment interaction variance, which contribute to the total or phenotypic variance, were analysed, with the purpose of giving the breeder an idea of how selection should be made. It was found that the genotype-environment interaction variance contributes more than the genotypic variance to the total variance of seed protein or yield. In the analysis of the time required to reach maturity, it was found that the genotypic variance is larger than the genotype-environment interaction variance. It is therefore clear why selection for the time required to reach maturity is much easier than selection for protein or yield. Material selected for protein in one location may therefore differ from that selected in other locations. (author)
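
    The comparison of variance components described above follows the classical random-effects two-way ANOVA. Below is a hedged stdlib sketch for a balanced genotype x environment trial with replicates (textbook expected-mean-squares identities, not the author's computation; the crossover toy data are invented):

```python
import statistics
from itertools import product

def gxe_variance_components(data):
    """Variance components from a balanced genotype x environment trial.

    data[g][e] is a list of r replicate measurements for genotype g in
    environment e. Random-effects expected mean squares:
        MS_G  = s2_err + r*s2_GE + r*n_env*s2_G
        MS_GE = s2_err + r*s2_GE
    """
    n_g, n_e, r = len(data), len(data[0]), len(data[0][0])
    grand = statistics.fmean(x for g in data for cell in g for x in cell)
    g_means = [statistics.fmean(x for cell in g for x in cell) for g in data]
    e_means = [statistics.fmean(x for g in data for x in g[j]) for j in range(n_e)]
    cell_means = [[statistics.fmean(cell) for cell in g] for g in data]

    ss_g = r * n_e * sum((m - grand) ** 2 for m in g_means)
    ss_ge = r * sum((cell_means[i][j] - g_means[i] - e_means[j] + grand) ** 2
                    for i, j in product(range(n_g), range(n_e)))
    ss_err = sum((x - cell_means[i][j]) ** 2
                 for i, j in product(range(n_g), range(n_e))
                 for x in data[i][j])

    ms_g = ss_g / (n_g - 1)
    ms_ge = ss_ge / ((n_g - 1) * (n_e - 1))
    ms_err = ss_err / (n_g * n_e * (r - 1))
    s2_ge = max(0.0, (ms_ge - ms_err) / r)          # G x E component
    s2_g = max(0.0, (ms_g - ms_ge) / (r * n_e))     # genotypic component
    return s2_g, s2_ge, ms_err

# Invented crossover data: genotype ranks flip between the two environments
data = [[[10.0, 10.0], [20.0, 20.0]],
        [[20.0, 20.0], [10.0, 10.0]]]
s2_g, s2_ge, s2_err = gxe_variance_components(data)
```

    In this perfect-crossover example the genotypic component is zero and all the variability is interaction, the situation the abstract describes for seed protein, where selections made in one location need not hold in another.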

  3. The genetic variance but not the genetic covariance of life-history traits changes towards the north in a time-constrained insect.

    Science.gov (United States)

    Sniegula, Szymon; Golab, Maria J; Drobniak, Szymon M; Johansson, Frank

    2018-03-22

    Seasonal time constraints are usually stronger at higher than lower latitudes and can exert strong selection on life-history traits and the correlations among these traits. To predict the response of life-history traits to environmental change along a latitudinal gradient, information must be obtained about genetic variance in traits and also genetic correlation between traits, that is, the genetic variance-covariance matrix, G. Here, we estimated G for key life-history traits in an obligate univoltine damselfly that faces seasonal time constraints. We exposed populations to simulated native temperatures and photoperiods and common garden environmental conditions in a laboratory set-up. Despite differences in genetic variance in these traits between populations (lower variance at northern latitudes), there was no evidence for latitude-specific covariance of the life-history traits. At simulated native conditions, all populations showed strong genetic and phenotypic correlations between traits that shaped growth and development. The variance-covariance matrix changed considerably when populations were exposed to common garden conditions compared with the simulated natural conditions, showing the importance of environmentally induced changes in multivariate genetic structure. Our results highlight the importance of estimating variance-covariance matrices in environments that mimic selection pressures, and not only trait variances or mean trait values in common garden conditions, for understanding trait evolution across populations and environments. © 2018 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2018 European Society For Evolutionary Biology.

  4. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  5. Gap junctions mediate large-scale Turing structures in a mean-field cortex driven by subcortical noise

    Science.gov (United States)

    Steyn-Ross, Moira L.; Steyn-Ross, D. A.; Wilson, M. T.; Sleigh, J. W.

    2007-07-01

    One of the grand puzzles in neuroscience is establishing the link between cognition and the disparate patterns of spontaneous and task-induced brain activity that can be measured clinically using a wide range of detection modalities such as scalp electrodes and imaging tomography. High-level brain function is not a single-neuron property, yet emerges as a cooperative phenomenon of multiply-interacting populations of neurons. Therefore a fruitful modeling approach is to picture the cerebral cortex as a continuum characterized by parameters that have been averaged over a small volume of cortical tissue. Such mean-field cortical models have been used to investigate gross patterns of brain behavior such as anesthesia, the cycles of natural sleep, memory and erasure in slow-wave sleep, and epilepsy. There is persuasive and accumulating evidence that direct gap-junction connections between inhibitory neurons promote synchronous oscillatory behavior both locally and across distances of some centimeters, but, to date, continuum models have ignored gap-junction connectivity. In this paper we employ simple mean-field arguments to derive an expression for D2, the diffusive coupling strength arising from gap-junction connections between inhibitory neurons. Using recent neurophysiological measurements reported by Fukuda [J. Neurosci. 26, 3434 (2006)], we estimate an upper limit of D2 ≈ 0.6 cm2. We apply a linear stability analysis to a standard mean-field cortical model, augmented with gap-junction diffusion, and find this value for the diffusive coupling strength to be close to the critical value required to destabilize the homogeneous steady state. Computer simulations demonstrate that larger values of D2 cause the noise-driven model cortex to spontaneously crystallize into random mazelike Turing structures: centimeter-scale spatial patterns in which regions of high-firing activity are intermixed with regions of low-firing activity. These structures are consistent with the

  6. Low Variance Couplings for Stochastic Models of Intracellular Processes with Time-Dependent Rate Functions.

    Science.gov (United States)

    Anderson, David F; Yuan, Chaojie

    2018-04-18

    A number of coupling strategies are presented for stochastically modeled biochemical processes with time-dependent parameters. In particular, the stacked coupling is introduced and is shown via a number of examples to provide an exceptionally low variance between the generated paths. This coupling will be useful in the numerical computation of parametric sensitivities and the fast estimation of expectations via multilevel Monte Carlo methods. We provide the requisite estimators in both cases.

  7. Experimental Evidence of the Knowledge Gap: Message Arousal, Motivation, and Time Delay

    Science.gov (United States)

    Grabe, Maria Elizabeth; Yegiyan, Narine; Kamhawi, Rasha

    2008-01-01

    This study experimentally tested the knowledge gap from an information processing perspective. Specifically, knowledge acquisition was investigated under conditions of medium and low news message arousal, with time delay. Results show the persistence of a knowledge gap, particularly for low arousing messages. In fact, at low levels of message…

  8. Experimental investigations of argon spark gap recovery times by developing a high voltage double pulse generator.

    Science.gov (United States)

    Reddy, C S; Patel, A S; Naresh, P; Sharma, Archana; Mittal, K C

    2014-06-01

    The voltage recovery of a spark gap under repetitive switching has long been a research interest. A two-pulse technique is used to determine the voltage recovery times of a gas spark gap switch filled with argon. The first pulse is applied to the spark gap to over-volt the gap and initiate breakdown, and the second pulse is used to determine the recovery voltage of the gap. A pulse-transformer-based double pulse generator was developed, capable of generating 40 kV peak pulses with a rise time of 300 ns, an FWHM of 1.5 μs, and an inter-pulse delay of 10 μs-1 s. A matrix transformer topology is used to obtain fast rise times by reducing the L(l)C(d) product in the circuit. Recovery experiments have been conducted for 2 mm, 3 mm, and 4 mm gap lengths at 0-2 bars of argon pressure. The spark gap chamber electrodes used in the recovery study are of Rogowski profile, made of stainless steel, and 15 mm thick. Variation in gap distance and pressure affects the recovery rate of the spark gap. An intermediate plateau is observed in the spark gap recovery curves. Recovery time decreases with increasing pressure, and shorter gaps recover faster than longer gaps.

  9. Preliminary test of 5-gap glass multi-gap resistive plate chamber for photon detection for time of flight positron emission tomography (TOF-PET) imaging

    International Nuclear Information System (INIS)

    Ganai, R.; Mondal, M.; Mehta, S.; Chattopadhyay, S.

    2016-01-01

    Multi-gap Resistive Plate Chamber (MRPC) is a type of gas detector which uses constant and uniform electric field in between several high resistive electrodes and works on the principle of gas ionisation. In MRPC a particular gas gap is divided into several parts with the help of thin high resistive electrodes. Division of the gas gap helps to improve the time resolution of the detector significantly. MRPCs with time resolution of ∼15 ps have been reported

  10. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  11. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  12. Nonequilibrium steady states and resonant tunneling in time-periodically driven systems with interactions

    Science.gov (United States)

    Qin, Tao; Hofstetter, Walter

    2018-03-01

    Time-periodically driven systems are a versatile toolbox for realizing interesting effective Hamiltonians. Heating, caused by excitations to high-energy states, is a challenge for experiments. While most setups so far address the relatively weakly interacting regime, it is of general interest to study heating in strongly correlated systems. Using Floquet dynamical mean-field theory, we study nonequilibrium steady states (NESS) in the Falicov-Kimball model, with time-periodically driven kinetic energy or interaction. We systematically investigate the nonequilibrium properties of the NESS. For a driven kinetic energy, we show that resonant tunneling, where the interaction is an integer multiple of the driving frequency, plays an important role in the heating. In the strongly correlated regime, we show that this can be well understood using Fermi's golden rule and the Schrieffer-Wolff transformation for a time-periodically driven system. We furthermore demonstrate that resonant tunneling can be used to control the population of Floquet states to achieve "photodoping." For driven interactions introduced by an oscillating magnetic field near a widely adopted Feshbach resonance, we find that the double occupancy is strongly modulated. Our calculations apply to shaken ultracold-atom systems and to solid-state systems in a spatially uniform but time-dependent electric field. They are also closely related to lattice modulation spectroscopy. Our calculations are helpful to understand the latest experiments on strongly correlated Floquet systems.

  13. Damping-free collective oscillations of a driven two-component Bose gas in optical lattices

    Science.gov (United States)

    Shchedrin, Gavriil; Jaschke, Daniel; Carr, Lincoln D.

    2018-04-01

    We explore the quantum many-body physics of a driven Bose-Einstein condensate in optical lattices. The laser field induces a gap in the generalized Bogoliubov spectrum proportional to the effective Rabi frequency. The lowest-lying modes in a driven condensate are characterized by zero group velocity and nonzero current. Thus, the laser field induces roton modes, which carry interaction in a driven condensate. We show that collective excitations below the energy of the laser-induced gap remain undamped, while above the gap they are characterized by a significantly suppressed Landau damping rate.

  14. Timing-Driven-Testable Convergent Tree Adders

    Directory of Open Access Journals (Sweden)

    Johnnie A. Huang

    2002-01-01

    Carry lookahead adders have been, over the years, implemented in complex arithmetic units due to their regular structure, which leads to efficient VLSI implementation of fast adders. In this paper, timing-driven testability synthesis is first performed on a tree adder. It is shown that the structure of the tree adder produces a high fanout with an imbalanced tree structure, which likely contributes to a racing effect and increases the delay of the circuit. Timing optimization is then realized by reducing the maximum fanout of the adder and by balancing the tree circuit. For a 56-b testable tree adder, the optimization produces a 6.37% increase in speed of the critical path while contributing only a 2.16% area overhead. Full testability of the circuit is achieved in the optimized adder design.

  15. Climate-driven seasonal geocenter motion during the GRACE period

    Science.gov (United States)

    Zhang, Hongyue; Sun, Yu

    2018-03-01

    Annual cycles in the geocenter motion time series are primarily driven by mass changes in the Earth's hydrologic system, which includes land hydrology, atmosphere, and oceans. Seasonal variations of the geocenter motion have been reliably determined according to Sun et al. (J Geophys Res Solid Earth 121(11):8352-8370, 2016) by combining the Gravity Recovery And Climate Experiment (GRACE) data with an ocean model output. In this study, we reconstructed the observed seasonal geocenter motion with geophysical model predictions of mass variations in the polar ice sheets, continental glaciers, terrestrial water storage (TWS), and atmosphere and dynamic ocean (AO). The reconstructed geocenter motion time series is shown to be in close agreement with the solution based on GRACE data supported by an ocean bottom pressure model. Over 85% of the variance of the observed geocenter motion time series can be explained by the reconstructed solution, which allows a further investigation of the driving mechanisms. We then demonstrated that the AO component accounts for 54, 62, and 25% of the observed geocenter motion variances in the X, Y, and Z directions, respectively. The TWS component alone explains 42, 32, and 39% of the observed variances. The net mass changes over oceans together with self-attraction and loading effects also contribute significantly (about 30%) to the seasonal geocenter motion in the X and Z directions. Other contributing sources, on the other hand, have marginal (less than 10%) impact on the seasonal variations but introduce a linear trend in the time series.
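
The "percentage of variance explained" figures in this record follow a standard definition that can be stated in two lines. A minimal sketch on an invented synthetic annual signal (not the GRACE data; the amplitude and noise level are assumptions of the illustration):

```python
import numpy as np

# Fraction of observed variance explained by a reconstruction:
#   explained = 1 - Var(observed - reconstructed) / Var(observed)
# The annual signal and noise level below are invented for illustration.
rng = np.random.default_rng(1)
t = np.arange(120)                                 # 10 years, monthly samples
annual = 3.0 * np.sin(2.0 * np.pi * t / 12.0)      # seasonal signal (mm)
observed = annual + rng.normal(0.0, 0.5, t.size)   # signal plus noise
reconstructed = annual                             # model prediction

explained = 1.0 - np.var(observed - reconstructed) / np.var(observed)
```

A value of `explained` above 0.85 corresponds to the "over 85% of the variance" statement in the record.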

  16. Psychosocial correlates of gap time to anabolic-androgenic steroid use.

    Science.gov (United States)

    Klimek, Patrycja; Hildebrandt, Tom

    2018-03-15

    Theoretically, legal supplement use precedes and increases the risk for illicit appearance and performance enhancing drug (APED) use-also referred to as the gateway hypothesis. Little is known about associations between the speed of progression, or gap time, from legal to illicit APED use, and psychological risk factors, such as sociocultural influence, eating disorders, body image disturbance, and impulsivity. The sample taken from two studies included 172 active steroid users (n = 143) and intense-exercising healthy controls (n = 29) between the ages of 18 and 60 (M = 34.16, SD = 10.43), the majority of whom were male (91.9%). Participants, retrospectively, reported APED use and completed measures assessing psychological and behavioral factors, including eating concern, muscle dysmorphia, and impulsivity. Participants had a gap time from initial APED use to anabolic-androgenic steroid (AAS) use that ranged from 0 to 38 years. Continuous survival analysis indicated that interactions between self- versus other sociocultural influence on APED onset and both higher eating concern and impulsivity are associated with a shorter gap time from initial legal to illicit APED use. The results indicate the potential value in developing different strategies for individuals with other sociocultural versus self-influence on illicit APED use, and among more impulsive and eating-concerned APED users. Future research is needed to assess different trajectories of APED use, such that eating-concerned and impulsive individuals who perceive less other sociocultural influence may be at greatest risk for a speedier progression to AAS use. © 2018 Wiley Periodicals, Inc.

  17. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
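
The upside/downside split used in this record rests on decomposing realized variance into semivariances computed from negative and non-negative returns. A hedged sketch on synthetic returns (the heavy-tailed return series and the cutoff at zero are assumptions of the illustration, not the paper's data or estimator):

```python
import numpy as np

# Decompose realized variance into downside and upside semivariances.
# The synthetic heavy-tailed "daily returns" are invented for illustration.
rng = np.random.default_rng(2)
returns = 0.01 * rng.standard_t(df=5, size=5000)

down = np.sum(returns[returns < 0.0] ** 2)    # downside realized semivariance
up = np.sum(returns[returns >= 0.0] ** 2)     # upside realized semivariance
realized_var = np.sum(returns ** 2)           # total realized variance

# The two semivariances sum exactly to the realized variance; their
# difference (up - down) is a realized-skewness-type quantity, mirroring
# the record's skewness risk premium as a difference of the two premia.
skew_component = up - down
```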

  18. Cusping, transport and variance of solutions to generalized Fokker-Planck equations

    Science.gov (United States)

    Carnaffan, Sean; Kawai, Reiichiro

    2017-06-01

    We study properties of solutions to generalized Fokker-Planck equations through the lens of the probability density functions of anomalous diffusion processes. In particular, we examine solutions in terms of their cusping, travelling wave behaviours, and variance, within the framework of stochastic representations of generalized Fokker-Planck equations. We give our analysis in the cases of anomalous diffusion driven by the inverses of the stable, tempered stable and gamma subordinators, demonstrating the impact of changing the distribution of waiting times in the underlying anomalous diffusion model. We also analyse the cases where the underlying anomalous diffusion contains a Lévy jump component in the parent process, and when a diffusion process is time changed by an uninverted Lévy subordinator. On the whole, we present a combination of four criteria which serve as a theoretical basis for model selection, statistical inference and predictions for physical experiments on anomalously diffusing systems. We discuss possible applications in physical experiments, including, with reference to specific examples, the potential for model misclassification and how combinations of our four criteria may be used to overcome this issue.

  19. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  20. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets ( p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  1. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under

  2. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneous estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while at the same time degrading the variance of other estimations. We propose here a method to reduce the variance for several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propose is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
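
The zero-variance idea referenced in this record, sampling histories from a biased law chosen so that every history contributes the same score, can be illustrated in miniature with importance sampling for a rare-event probability. This sketch is a generic illustration of the principle, not the authors' Markovian-reliability scheme; the Gaussian tail problem and the shifted proposal are assumptions of the example:

```python
import numpy as np

# Estimate the rare-event probability p = P(Z > a) for Z ~ N(0, 1), a = 4
# (true value ~ 3.17e-5), comparing naive Monte Carlo with importance
# sampling. The exactly zero-variance density would be the conditional law
# of Z given Z > a; the shifted normal N(a, 1) is a tractable approximation.
rng = np.random.default_rng(3)
n, a = 100_000, 4.0

# Naive Monte Carlo: almost every history scores zero.
z = rng.standard_normal(n)
naive = (z > a).astype(float)

# Importance sampling from N(a, 1), reweighting each sample by the
# likelihood ratio phi(y) / phi(y - a) = exp(-a*y + a**2 / 2).
y = rng.standard_normal(n) + a
weights = np.exp(-a * y + 0.5 * a * a)
is_est = (y > a) * weights

p_naive, p_is = naive.mean(), is_est.mean()
```

With this sample size the naive estimator sees only a handful of successes, while the importance-sampling estimator pins the probability down to a few percent of relative error; the closer the biased law is to the optimal conditional density, the smaller the spread of the weighted scores.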

  3. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  4. Comparing the accuracy of ABC and time-driven ABC in complex and dynamic environments: a simulation analysis

    OpenAIRE

    S. HOOZÉE; M. VANHOUCKE; W. BRUGGEMAN; -

    2010-01-01

    This paper compares the accuracy of traditional ABC and time-driven ABC in complex and dynamic environments through simulation analysis. First, when unit times in time-driven ABC are known or can be flawlessly estimated, time-driven ABC coincides with the benchmark system and in this case our results show that the overall accuracy of traditional ABC depends on (1) existing capacity utilization, (2) diversity in the actual mix of productive work, and (3) error in the estimated percentage mix. ...

  5. Kinetics-Driven Superconducting Gap in Underdoped Cuprate Superconductors Within the Strong-Coupling Limit

    Directory of Open Access Journals (Sweden)

    Yucel Yildirim

    2011-09-01

    A generic theory of the quasiparticle superconducting gap in underdoped cuprates is derived in the strong-coupling limit, and found to describe the experimental “second gap” in absolute scale. In drastic contrast to the standard pairing gap associated with Bogoliubov quasiparticle excitations, the quasiparticle gap is shown to originate from anomalous kinetic (scattering) processes, with a size unrelated to the pairing strength. Consequently, the k dependence of the gap deviates significantly from the pure d_{x^{2}-y^{2}} wave of the order parameter. Our study reveals a new paradigm for the nature of the superconducting gap, and is expected to reconcile numerous apparent contradictions among existing experiments and point toward a more coherent understanding of high-temperature superconductivity.

  6. Properties of a six-gap timing resistive plate chamber with strip readout

    International Nuclear Information System (INIS)

    Ammosov, V.V.; Gapienko, V.A.; Semak, A.A.; Sviridov, Yu.M.; Zaets, V.G.; Gavrishchuk, O.P.; Kuz'min, N.A.; Sychkov, S.Ya.; Usenko, E.A.; Yukaev, A.I.

    2009-01-01

    Six-gap glass timing resistive plate chamber with strip readout was tested using IHEP U-70 PS test beam. The time resolution of ∼ 45 ps at efficiency larger than 98% was achieved. Position resolution along strip was estimated to be ∼1 cm

  7. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case.

  8. A technique for filling gaps in time series with complicated power spectra

    International Nuclear Information System (INIS)

    Brown, T.M.

    1984-01-01

    Fahlman and Ulrych (1982) describe a method for estimating the power and phase spectra of gapped time series, using a maximum-entropy reconstruction of the data in the gaps. It has proved difficult to apply this technique to solar oscillations data because of the great complexity of the solar oscillations spectrum. We describe a means of avoiding this difficulty, and report the results of a series of blind tests of the modified technique. The main results of these tests are: 1. Gap-filling gives good results, provided that the signal-to-noise ratio in the original data is large enough and the gaps are short enough. For low-noise data, the duty cycle of the observations should not be less than about 50%. 2. The frequencies and widths of narrow spectrum features are well reproduced by the technique. 3. The technique systematically reduces the apparent amplitudes of small features in the spectrum relative to large ones. (orig.)

  9. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...

  10. Part-time wage-gap in Germany: Evidence across the wage distribution

    OpenAIRE

    Tõnurist, Piret; Pavlopoulos, D.

    2013-01-01

    This paper uses insights from labour-market segmentation theory to investigate the wage differences between part-time and full-time workers in Germany at different parts of the wage distribution. This is accomplished with the use of quantile regression and panel data from the SOEP (1991-2008). To gain more insight into the part-time wage gap, we apply a counterfactual wage decomposition analysis. The results show that, in the lower end of the wage distribution, part-time workers receive lower ...

  11. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented using the variance or spread of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and variances of RR intervals, we find good atrial fibrillation detection performance.
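
The detection idea above — flag atrial fibrillation when the spread of RR intervals exceeds a threshold — can be sketched minimally. The threshold value and the sample intervals below are illustrative assumptions, not values from the paper:

```python
import statistics

def detect_af(rr_intervals, threshold=0.02):
    """Flag atrial fibrillation when the variance of the RR intervals
    (in seconds) exceeds a threshold.

    The threshold here is a hypothetical placeholder; the paper derives
    its detection parameters from clinical data.
    """
    if len(rr_intervals) < 2:
        return False  # not enough beats to estimate spread
    return statistics.variance(rr_intervals) > threshold

# Irregular rhythm (AF-like) vs. regular sinus rhythm
print(detect_af([0.60, 1.10, 0.45, 0.95, 0.70]))  # True
print(detect_af([0.80, 0.82, 0.79, 0.81, 0.80]))  # False
```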

  12. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype × environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at the previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by the minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given to compare unconditional and conditional genetic variances and additive effects.

  13. Real-time QRS detection using integrated variance for ECG gated cardiac MRI

    Directory of Open Access Journals (Sweden)

    Schmidt Marcus

    2016-09-01

    Full Text Available During magnetic resonance imaging (MRI), a patient’s vital signs are required for different purposes. In cardiac MRI (CMR), an electrocardiogram (ECG) of the patient is required for triggering the image acquisition process. However, reliable QRS detection of an ECG signal acquired inside an MRI scanner is a challenging task due to the magnetohydrodynamic (MHD) effect, which interferes with the ECG. The aim of this work was to develop a reliable QRS detector usable inside the MRI scanner which also fulfills the standards for medical devices (IEC 60601-2-27). Therefore, a novel real-time QRS detector based on integrated variance measurements is presented. The algorithm was trained on ANSI/AAMI EC13 test waveforms and was then applied to two databases with 12-lead ECG signals recorded inside and outside an MRI scanner. Reliable results were achieved for the ECG signals recorded both inside (DBMRI: sensitivity Se = 99.94%, positive predictive value +P = 99.84%) and outside (DBInCarT: Se = 99.29%, +P = 99.72%) the MRI. Due to the accurate real-time R-peak detection, this approach can be used for monitoring and triggering in MRI exams.
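
A moving-window variance is the basic building block of such an integrated-variance feature. The sketch below is a generic illustration only — the window length, toy signal, and peak localization are assumptions; the actual detector, filtering stages, and IEC-compliant thresholds are specified in the paper:

```python
import numpy as np

def sliding_variance(x, w):
    """Moving-window variance of signal x with window length w,
    computed via running first and second moments.

    A sharp deflection such as a QRS complex produces a local burst
    of variance, which a downstream peak detector can pick up.
    """
    x = np.asarray(x, dtype=float)
    kernel = np.ones(w) / w
    mean = np.convolve(x, kernel, mode="same")
    mean_sq = np.convolve(x ** 2, kernel, mode="same")
    return mean_sq - mean ** 2

# Toy "ECG": flat baseline with a sharp spike standing in for a QRS complex
sig = np.zeros(500)
sig[250] = 1.0
var_trace = sliding_variance(sig, w=15)
print(int(np.argmax(var_trace)))  # peak of the variance trace near sample 250
```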

  14. Spark gap produced plasma diagnostics

    International Nuclear Information System (INIS)

    Chang, H.Y.

    1990-01-01

    A spark gap (applied voltage: 2-8 kV, capacitor: 4 µF, tube diameter: 1 inch, electrode distance: 0.3-0.5 inch) was made to generate a small-size dynamic plasma. To measure the plasma density and temperature as a function of time and position, we installed and have been installing four detection systems: a Mach-Zehnder type interferometer for the plasma refractivity, an expansion speed detector using two He-Ne laser beams, image processing using a lens and an optical-fiber array for pointwise radiation sensing, and Faraday rotation in an optical fiber to measure the azimuthal component of the B-field generated by the plasma drift. These systems were used for wire explosion diagnostics, and can also be used for laser-driven plasmas.

  15. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    Science.gov (United States)

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
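
The classical (unweighted, non-overlapped) AVAR that these modifications generalize can be sketched as follows; the white-noise example and averaging factors are illustrative:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapped Allan variance of a frequency-like series y at
    averaging factor m — a minimal sketch of the classical estimator:

        AVAR(m * tau0) = 0.5 * mean((ybar_{k+1} - ybar_k)**2)

    where ybar_k are averages over consecutive blocks of m samples.
    """
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)  # block averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

# For white noise, AVAR falls off roughly as 1/m with averaging factor m
rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)
print(allan_variance(noise, 1), allan_variance(noise, 100))
```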

  16. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), shielding experiments on type 316 stainless steel (SS316) and on the SS316/water compound system were carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and the Weight Window variance reduction method of the MCNP code proved limited and complicated to apply. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. An optimal importance variation exists: when the importance is increased at the same rate as the neutron or gamma-ray flux attenuates, optimal variance reduction is achieved. (K.I.)

  17. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    42 CFR § 456.522 — Content of request for variance (2010-10-01). Public Health, Centers for Medicare & Medicaid Services, Department of Health and Human... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  18. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_⊙ is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_⊙, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic ...
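
The linear-regime relation stated above (galaxy cosmic variance = galaxy bias × dark matter cosmic variance) reduces to a one-line computation. The combination with Poisson shot noise and all numerical inputs below are illustrative assumptions, not values from the paper's fitting function or tables:

```python
import math

def galaxy_cosmic_variance(sigma_dm, bias):
    """Linear-regime relation from the abstract: the relative cosmic
    variance of a galaxy sample is the galaxy bias times the dark
    matter cosmic variance for the same survey geometry."""
    return bias * sigma_dm

def total_count_uncertainty(sigma_dm, bias, n_gal):
    """Standard quadrature combination (an assumption here, not taken
    from the abstract) of cosmic variance with Poisson shot noise for
    a field containing n_gal galaxies."""
    sigma_cv = galaxy_cosmic_variance(sigma_dm, bias)
    return math.sqrt(sigma_cv ** 2 + 1.0 / n_gal)

# Hypothetical field: sigma_dm = 0.12, bias = 3.2, 200 galaxies counted
print(galaxy_cosmic_variance(0.12, 3.2))
print(total_count_uncertainty(0.12, 3.2, 200))
```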

  19. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified early-late loop is used for the frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without having a high influence on system performance. The functionality of the proposed algorithm was verified in a development environment using universal software radio peripheral (USRP) hardware.

  20. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  1. Dynamic response functions, helical gaps, and fractional charges in quantum wires

    Science.gov (United States)

    Meng, Tobias; Pedder, Christopher J.; Tiwari, Rakesh P.; Schmidt, Thomas L.

    We show how experimentally accessible dynamic response functions can discriminate between helical gaps due to a magnetic field and helical gaps driven by electron-electron interactions ("umklapp gaps"). The latter are interesting since they feature gapped quasiparticles of fractional charge e/2 and, when coupled to a standard superconductor, an 8π Josephson effect and topological zero-energy states bound to interfaces. National Research Fund, Luxembourg (ATTRACT 7556175), Deutsche Forschungsgemeinschaft (GRK 1621 and SFB 1143), Swiss National Science Foundation.

  2. Application of a CADIS-like variance reduction technique to electron transport

    International Nuclear Information System (INIS)

    Dionne, B.; Haghighat, A.

    2004-01-01

    This paper studies the use of approximate deterministic importance functions to calculate the lower-weight bounds of the MCNP5 weight-window variance reduction technique when applied to electron transport simulations. This approach follows the CADIS (Consistent Adjoint Driven Importance Sampling) methodology developed for neutral-particle shielding calculations. The importance functions are calculated using the one-dimensional CEPXS/ONELD code package. Considering a simple 1-D problem, this paper shows that our methodology can produce speedups of up to ∼82 using approximate electron importance function distributions computed in ∼8 seconds. (author)

  3. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and ...
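
The bias the abstract mentions, and the classical "drop most of the data" remedy, can be illustrated with simulated tick prices contaminated by bid-ask bounce. All parameters below (volatility, spread, sampling factor) are assumptions for the sketch:

```python
import numpy as np

def realized_variance(prices):
    """Sum of squared log returns — the naive RV estimator, which is
    biased upward by microstructure noise at tick frequency."""
    r = np.diff(np.log(prices))
    return np.sum(r ** 2)

def sparse_realized_variance(prices, k=10):
    """The classical practitioner remedy alluded to in the abstract:
    drop most ticks and sample only every k-th price."""
    return realized_variance(prices[::k])

# Simulated efficient log-price plus bid-ask bounce observation noise
rng = np.random.default_rng(1)
log_p_true = np.cumsum(rng.normal(0.0, 0.001, 5000))     # true IV ≈ 0.005
p_obs = np.exp(log_p_true + rng.choice([-0.0005, 0.0005], 5000))
print(realized_variance(p_obs), sparse_realized_variance(p_obs))
```

With these settings the tick-level estimate overshoots the integrated variance, while the sparsely sampled one sits closer to it, at the cost of discarding data.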

  4. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  5. Real-time acquisition and display of flow contrast using speckle variance optical coherence tomography in a graphics processing unit.

    Science.gov (United States)

    Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V

    2014-02-01

    In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
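
The svOCT contrast computation — per-pixel variance across repeated B-scans acquired at the same location — can be sketched as follows. The frame count and the toy "vessel" region are illustrative; the real pipeline adds GPU processing, acquisition, and en face projection:

```python
import numpy as np

def speckle_variance(bscans):
    """Per-pixel variance across N repeated B-scan frames acquired at
    the same location, the core svOCT motion-contrast computation.

    bscans: array of shape (N, depth, width) of OCT intensity frames.
    Returns a (depth, width) contrast image: high variance where the
    speckle pattern changes (flow), low where the tissue is static.
    """
    return np.var(np.asarray(bscans, dtype=float), axis=0)

# Static tissue (stable speckle) vs. a flow region (decorrelating speckle)
rng = np.random.default_rng(0)
frames = np.tile(rng.random((64, 64)), (8, 1, 1))   # 8 identical frames
frames[:, 20:30, 20:30] = rng.random((8, 10, 10))   # "vessel": changes per frame
sv = speckle_variance(frames)
print(sv[25, 25] > sv[5, 5])  # True: flow pixel shows higher variance
```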

  6. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  7. Leveraging Simulation Against the F-16 Flying Training Gap

    National Research Council Canada - National Science Library

    McGrath, Shaun R

    2005-01-01

    .... Therefore, this myriad of constraints and restraints further hamstrings the peacetime mission essential competencies training gap driven in large part by concerns for personnel, equipment, and environmental safety...

  8. [Analysis of cost and efficiency of a medical nursing unit using time-driven activity-based costing].

    Science.gov (United States)

    Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi

    2011-08-01

    Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify general characteristics of the unit, nursing activities and activity time, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit using theoretical resource capacity was 77%, whereas the efficiency using practical resource capacity was 96%. According to these results, the portion of non-value-added time was estimated at 23% and 4%, respectively. The total nursing activity costs were estimated at 109,860,977 won in traditional activity-based costing and 84,427,126 won in time-driven activity-based costing, a difference of 25,433,851 won between the two cost-calculating methods. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation compared to traditional activity-based costing, so time-driven activity-based costing is recommended as a performance evaluation framework for nursing departments based on cost management.

  9. A new costing model in hospital management: time-driven activity-based costing system.

    Science.gov (United States)

    Öker, Figen; Özyapıcı, Hasan

    2013-01-01

    Traditional cost systems cause cost distortions because they cannot meet the requirements of today's businesses. Therefore, a new and more effective cost system is needed. Consequently, the time-driven activity-based costing system has emerged. The unit cost of supplying capacity and the time needed to perform an activity are the only 2 factors considered by the system. Furthermore, this system determines unused capacity by considering practical capacity. The purpose of this article is to emphasize the efficiency of the time-driven activity-based costing system and to display how it can be applied in a health care institution. A case study was conducted in a private hospital in Cyprus. Interviews and direct observations were used to collect the data. The case study revealed that the cost of unused capacity is allocated to both open and laparoscopic (closed) surgeries. Thus, by using the time-driven activity-based costing system, managers should eliminate the cost of unused capacity so as to obtain better results. Based on the results of the study, hospital management is better able to understand the costs of different surgeries. In addition, managers can easily notice the cost of unused capacity and decide how many employees to dismiss or redirect to other productive areas.
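
The two-factor calculation this abstract describes — the unit cost of supplying capacity and the time each activity consumes — can be sketched with hypothetical figures (none of the numbers below are from the Cyprus case study):

```python
def tdabc_cost(capacity_cost, practical_minutes, activity_minutes):
    """Time-driven ABC in two steps, as described above:
    1) unit cost of supplying capacity = total cost / practical capacity
    2) activity cost = unit cost * time the activity consumes
    Whatever capacity is not consumed surfaces as a separate
    cost-of-unused-capacity line instead of being allocated."""
    rate = capacity_cost / practical_minutes          # cost per minute
    costs = {a: m * rate for a, m in activity_minutes.items()}
    used = sum(activity_minutes.values())
    unused = (practical_minutes - used) * rate        # cost of unused capacity
    return costs, unused

costs, unused = tdabc_cost(
    capacity_cost=100_000,        # hypothetical monthly department cost
    practical_minutes=20_000,     # practical (not theoretical) capacity
    activity_minutes={"open surgery": 9_000, "laparoscopic": 7_000},
)
print(costs, unused)  # rate = 5.0 per minute; unused capacity costs 20,000.0
```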

  10. The Structure of the Temp Wage Gap in Slack Labor Markets

    DEFF Research Database (Denmark)

    Jahn, Elke

    As a consequence of the rapid growth of temporary agency employment in Germany the debate on the remuneration of temporary agency workers has intensified recently. The study finds that the temp wage gap in Germany is indeed large. Decomposition reveals that the gap is mainly driven by difference...

  11. Improving Efficiency Using Time-Driven Activity-Based Costing Methodology.

    Science.gov (United States)

    Tibor, Laura C; Schultz, Stacy R; Menaker, Ronald; Weber, Bradley D; Ness, Jay; Smith, Paula; Young, Phillip M

    2017-03-01

    The aim of this study was to increase efficiency in MR enterography using a time-driven activity-based costing methodology. In February 2015, a multidisciplinary team was formed to identify the personnel, equipment, space, and supply costs of providing outpatient MR enterography. The team mapped the current state, completed observations, performed timings, and calculated costs associated with each element of the process. The team used Pareto charts to understand the highest cost and most time-consuming activities, brainstormed opportunities, and assessed impact. Plan-do-study-act cycles were developed to test the changes, and run charts were used to monitor progress. The process changes consisted of revising the workflow associated with the preparation and administration of glucagon, with completed implementation in November 2015. The time-driven activity-based costing methodology allowed the radiology department to develop a process to more accurately identify the costs of providing MR enterography. The primary process modification was reassigning responsibility for the administration of glucagon from nurses to technologists. After implementation, the improvements demonstrated success by reducing non-value-added steps and cost by 13%, staff time by 16%, and patient process time by 17%. The saved process time was used to augment existing examination time slots to more accurately accommodate the entire enterographic examination. Anecdotal comments were captured to validate improved staff satisfaction within the multidisciplinary team. This process provided a successful outcome to address daily workflow frustrations that could not previously be improved. A multidisciplinary team was necessary to achieve success, in addition to the use of a structured problem-solving approach. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  12. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U-shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  13. Semantic Web and Model-Driven Engineering

    CERN Document Server

    Parreiras, Fernando S

    2012-01-01

    The next enterprise computing era will rely on the synergy between both technologies: semantic web and model-driven software development (MDSD). The semantic web organizes system knowledge in conceptual domains according to its meaning. It addresses various enterprise computing needs by identifying, abstracting and rationalizing commonalities, and checking for inconsistencies across system specifications. On the other side, model-driven software development is closing the gap among business requirements, designs and executables by using domain-specific languages with custom-built syntax and semantics.

  14. Consistency properties of chaotic systems driven by time-delayed feedback

    Science.gov (United States)

    Jüngling, T.; Soriano, M. C.; Oliver, N.; Porte, X.; Fischer, I.

    2018-04-01

    Consistency refers to the property of an externally driven dynamical system to respond in similar ways to similar inputs. In a delay system, the delayed feedback can be considered as an external drive to the undelayed subsystem. We analyze the degree of consistency in a generic chaotic system with delayed feedback by means of the auxiliary system approach. In this scheme an identical copy of the nonlinear node is driven by exactly the same signal as the original, allowing us to verify complete consistency via complete synchronization. In the past, the phenomenon of synchronization in delay-coupled chaotic systems has been widely studied using correlation functions. Here, we analytically derive relationships between characteristic signatures of the correlation functions in such systems and unequivocally relate them to the degree of consistency. The analytical framework is illustrated and supported by numerical calculations of the logistic map with delayed feedback for different replica configurations. We further apply the formalism to time series from an experiment based on a semiconductor laser with a double fiber-optical feedback loop. The experiment constitutes a high-quality replica scheme for studying consistency of the delay-driven laser and confirms the general theoretical results.

  15. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  16. The effects of in-vehicle tasks and time-gap selection while reclaiming control from adaptive cruise control (ACC) with bus simulator.

    Science.gov (United States)

    Lin, Tsang-Wei; Hwang, Sheue-Ling; Su, Jau-Ming; Chen, Wan-Hui

    2008-05-01

    This research aimed to find out the effects of in-vehicle distractions and time-gap settings with a fixed-base bus driving simulator in a car-following scenario. Professional bus drivers were recruited to perform in-vehicle tasks while driving with adaptive cruise control (ACC) with changeable time-gap settings in freeway traffic. Thirty subjects were divided equally into three groups for different in-vehicle task modes (between subjects): no task distraction, hands-free, and manual modes. The time-gap settings for the experimental ACC were: shorter than 1.0 s, 1.0-1.5 s, 1.5-2.0 s, and longer than 2.0 s (within subjects). Longitudinal (mean headway, forward collision rate, and response time) and lateral control (mean lateral lane position and its standard deviation) performance was assessed. In the results, longitudinal control performance was worsened both by shorter time-gaps and by heavier in-vehicle tasks, but the interaction indicated that the harm from heavier in-vehicle distraction could be mitigated by longer time-gaps. Lateral control was only negatively affected by shorter time-gap settings. This research indicates the effects of time-gaps and in-vehicle distraction, as well as their interaction. Proper time-gap selection under different in-vehicle distractions can help avoid accidents and maintain safety.

  17. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  18. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} var D(t)/t = λ(1 − 2/π)(c_a^2 + ...

  19. Endogenous implementation of technology gap in energy optimization models-a systematic analysis within TIMES G5 model

    International Nuclear Information System (INIS)

    Rout, Ullash K.; Fahl, Ulrich; Remme, Uwe; Blesl, Markus; Voss, Alfred

    2009-01-01

    Evaluating the global diffusion potential of learning technologies and their time-specific cost development across regions is always a challenging issue for future technology policy preparation. The evaluation becomes especially interesting when energy technologies are treated endogenously, under uncertainty in learning rates, with technology gaps across regions in a global regional-cluster learning approach. This work devised, implemented, and examined new methodologies for technology gaps (a practical problem), using two broad concepts, knowledge deficit and time lag approaches in global learning, applying the floor-cost approach methodology. The study was executed in a multi-regional, technology-rich, long-horizon bottom-up linear energy system model built on The Integrated MARKAL EFOM System (TIMES) framework. Global learning selects the highest-learning technologies in the maximum-uncertainty learning rate scenario, whereas any form of technology gap retards the global learning process and discourages technology deployment. Time lag notions of technology gaps favor heavy utilization of learning technologies in developed economies for early reduction of specific cost. Technology gaps of any kind should be reduced among economies through the promotion and enactment of various policies by governments, in order to utilize technological resources by mass deployment to combat ongoing climate change.

  20. Presidential inability: Filling in the gaps.

    Science.gov (United States)

    Feerick, John D

    2014-01-01

    This article focuses on potential gaps caused by the absence from the Twenty-Fifth Amendment of provisions to deal with the disability of a Vice President and the omission from the statutory line of succession law of provisions comparable to Sections 3 and 4 of the Twenty-Fifth Amendment for when there is an able Vice President. The analysis offers a critical review of the latent ambiguities in the succession provision to the United States Constitution, noting problems that have arisen from the time of the Constitutional Convention, to John Tyler's accession to office, to numerous disability crises that presented themselves throughout the twentieth century, to the present day. As the world becomes more complex and threats to the presidency more common, continued examination of our succession structure and its adequacy for establishing clear and effective presidential succession provisions under a broad range of circumstances is of paramount concern. This article embraces this robust discussion by offering some suggestions for improving the system in a way that does not require a constitutional amendment. The first part of the analysis traces the events that have driven the development of the nation's succession procedures. The second part examines the inadequacies, or "gaps," that remain in the area of presidential inability, and the third part sets forth recommendations for resolving these gaps.

  1. Analytical Solutions for Multi-Time Scale Fractional Stochastic Differential Equations Driven by Fractional Brownian Motion and Their Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Li Ding

    2018-01-01

    Full Text Available In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We first decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations, and give their analytical solutions. Then, we use the variation-of-constants method to obtain the solutions of nonhomogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. Finally, we give three examples to demonstrate the applicability of our results.

  2. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    Science.gov (United States)

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.
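The paper's analytic variance expression is not reproduced here, but the setting it applies to can be sketched: Gaussian-restrained umbrella windows combined by self-consistent WHAM iteration, with the PMF variance estimated by bootstrap resampling of the per-window data instead of the closed-form result. All parameters below (landscape, stiffness, window grid) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, k = 1.0, 10.0                      # inverse temperature, restraint stiffness
centers = np.linspace(-2.0, 2.0, 9)      # umbrella window centres along the RC
edges = np.linspace(-3.0, 3.0, 61)
mids = 0.5 * (edges[:-1] + edges[1:])
bias = 0.5 * k * (mids[None, :] - centers[:, None]) ** 2  # w_i(x) at bin centres

def sample_window(c, n):
    # Toy PMF U(x) = 2 x^2, so each biased window density is exactly Gaussian.
    return rng.normal(k * c / (4.0 + k), np.sqrt(1.0 / (beta * (4.0 + k))), n)

def wham_pmf(samples):
    counts = np.array([np.histogram(s, edges)[0] for s in samples], float)
    n_i = counts.sum(axis=1)
    f = np.zeros(len(centers))
    for _ in range(500):                 # self-consistent WHAM equations
        denom = (n_i[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = counts.sum(axis=0) / denom
        p /= p.sum()
        f = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
    with np.errstate(divide="ignore"):
        return p, -np.log(p) / beta      # unbiased density and PMF

samples = [sample_window(c, 2000) for c in centers]
p, pmf = wham_pmf(samples)

# Bootstrap each window's data to estimate the PMF variance along the RC.
boots = []
for _ in range(20):
    resampled = [rng.choice(s, s.size, replace=True) for s in samples]
    boots.append(wham_pmf(resampled)[1])
pmf_std = np.nanstd(np.array(boots), axis=0)
```

The recovered PMF should have its minimum near x = 0 for this toy landscape, and the bootstrap standard deviation plays the role of the paper's analytic variance.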

  3. Swiss and Dutch "consumer-driven health care": ideal model or reality?

    Science.gov (United States)

    Okma, Kieke G H; Crivelli, Luca

    2013-02-01

    This article addresses three topics. First, it reports on the international interest in the health care reforms of Switzerland and The Netherlands in the 1990s and early 2000s that operate under the label "managed competition" or "consumer-driven health care." Second, the article reviews the behavioral assumptions that make the case for the model of "managed competition" plausible. Third, it analyzes the actual reform experience of Switzerland and Holland to assess to what extent it confirms the validity of those assumptions. The article concludes that there is a triple gap in understanding of those topics: first, a gap between the theoretical model of managed competition and the reforms as implemented in both Switzerland and The Netherlands; second, a gap between the expectations of policy-makers and the results of the reforms; and third, a gap between reform outcomes and the observations of external commentators who have embraced the reforms as the ultimate success of "consumer-driven health care." The article concludes with a discussion of the implications of this "triple gap." Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods

    Directory of Open Access Journals (Sweden)

    David P. Griesheimer

    2017-09-01

    Full Text Available The application of Monte Carlo (MC to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
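The CADIS machinery itself is not reproduced here, but the weight-window operation the abstract refers to (splitting above the window, Russian roulette below it) is a textbook step that can be sketched; the window bounds and survival weight below are illustrative.

```python
import random

random.seed(7)

def apply_weight_window(w, w_low, w_high, survival_weight=None):
    """Split a particle above the window, roulette it below; unbiased in expectation."""
    if w > w_high:
        # Split into m particles whose weights fall back inside the window.
        m = int(w / w_high) + 1
        return [w / m] * m
    if w < w_low:
        # Russian roulette: survive with probability w / ws at weight ws.
        ws = survival_weight if survival_weight is not None else w_low
        return [ws] if random.random() < w / ws else []
    return [w]
```

Splitting conserves weight exactly, while roulette conserves it only on average, which is the computational cost of birth splitting/rouletting that consistent source biasing tries to avoid.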

  5. Variance in population firing rate as a measure of slow time-scale correlation

    Directory of Open Access Journals (Sweden)

    Adam C. Snyder

    2013-12-01

    Full Text Available Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research.
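A toy simulation (all numbers invented) can illustrate the inverse relation the abstract describes: when z-scored single-trial rates share a common factor with pairwise correlation rho, the across-neuron variance on each trial averages 1 - rho.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 100, 5000

def mean_pop_variance(rho):
    # z-scored rates with pairwise correlation rho built from a shared factor.
    shared = rng.standard_normal((n_trials, 1))
    indep = rng.standard_normal((n_trials, n_neurons))
    z = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * indep
    # Population variance: variance across neurons, computed trial by trial.
    return z.var(axis=1, ddof=1).mean()

results = {rho: mean_pop_variance(rho) for rho in (0.0, 0.2, 0.5)}
```

Because the shared factor shifts all neurons together, it cancels in the across-neuron variance, so higher latent correlation yields lower population variance, trial by trial.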

  6. Air-Flow-Driven Triboelectric Nanogenerators for Self-Powered Real-Time Respiratory Monitoring.

    Science.gov (United States)

    Wang, Meng; Zhang, Jiahao; Tang, Yingjie; Li, Jun; Zhang, Baosen; Liang, Erjun; Mao, Yanchao; Wang, Xudong

    2018-06-04

    Respiration is one of the most important vital signs of humans, and respiratory monitoring plays an important role in physical health management. A low-cost and convenient real-time respiratory monitoring system is extremely desirable. In this work, we demonstrated an air-flow-driven triboelectric nanogenerator (TENG) for self-powered real-time respiratory monitoring by converting mechanical energy of human respiration into electric output signals. The operation of the TENG was based on the air-flow-driven vibration of a flexible nanostructured polytetrafluoroethylene (n-PTFE) thin film in an acrylic tube. This TENG can generate distinct real-time electric signals when exposed to the air flow from different breath behaviors. It was also found that the accumulative charge transferred in breath sensing corresponds well to the total volume of air exchanged during the respiration process. Based on this TENG device, an intelligent wireless respiratory monitoring and alert system was further developed, which used the TENG signal to directly trigger a wireless alarm or dial a cell phone to provide timely alerts in response to breath behavior changes. This research offers a promising solution for developing self-powered real-time respiratory monitoring devices.

  7. Using time-driven activity-based costing to identify value improvement opportunities in healthcare.

    Science.gov (United States)

    Kaplan, Robert S; Witkowski, Mary; Abbott, Megan; Guzman, Alexis Barboza; Higgins, Laurence D; Meara, John G; Padden, Erin; Shah, Apurva S; Waters, Peter; Weidemeier, Marco; Wertheimer, Sam; Feeley, Thomas W

    2014-01-01

    As healthcare providers cope with pricing pressures and increased accountability for performance, they should be rededicating themselves to improving the value they deliver to their patients: better outcomes and lower costs. Time-driven activity-based costing offers the potential for clinicians to redesign their care processes toward that end. This costing approach, however, is new to healthcare and has not yet been systematically implemented and evaluated. This article describes early time-driven activity-based costing work at several leading healthcare organizations in the United States and Europe. It identifies the opportunities they found to improve value for patients and demonstrates how this costing method can serve as the foundation for new bundled payment reimbursement approaches.

  8. Public-Elite Gap on European Integration : The Missing Link between Discourses about EU Enlargement among Citizens and Elites in Serbia

    NARCIS (Netherlands)

    Kortenska, E.G.; Sircar, I.; Steunenberg, B.

    2016-01-01

    Enlargement is often regarded as an elite-driven process, which does not or not sufficiently include the views of ordinary citizens. At the same time, support for integration has been eroding over the last decade, contributing to a public-elite gap in preferences towards the process. This paper

  9. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
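The Dk statistic defined in the abstract reduces to a one-liner on a relationship matrix; the matrices below are toy examples, not data from the paper.

```python
import numpy as np

def dk(K):
    # Dk = average self-relationship minus average (self- and across-) relationship.
    K = np.asarray(K, dtype=float)
    return np.diag(K).mean() - K.mean()

# The genetic variance referred to the reference population is then
# sigma2_ref = dk(K) * sigma2_hat, where sigma2_hat is the mixed-model estimate.
K_identity = np.eye(4)                              # unrelated, non-inbred individuals
K_kernel = np.full((4, 4), 0.5) + 0.5 * np.eye(4)   # hypothetical kernel relationships
```

For the identity matrix of size n, Dk = 1 - 1/n, close to 1 as the abstract notes; kernels with large off-diagonal entries give smaller Dk, which is what rescales their inflated variance estimates downward.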

  10. Current density waves in open mesoscopic rings driven by time-periodic magnetic fluxes

    International Nuclear Information System (INIS)

    Yan Conghua; Wei Lianfu

    2010-01-01

    Quantum coherent transport through open mesoscopic Aharonov-Bohm rings (driven by static fluxes) has been studied extensively. Here, by using quantum waveguide theory and the Floquet theorem we investigate the quantum transport of electrons along an open mesoscopic ring threaded by a time-periodic magnetic flux. We predict that current density waves could be excited along such an open ring. As a consequence, a net current could be generated along the lead with only one reservoir, if the lead additionally connects to such a normal-metal loop driven by the time-dependent flux. These phenomena could be explained by photon-assisted processes, due to the interaction between the transported electrons and the applied oscillating external fields. We also discuss how the time-average currents (along the ring and the lead) depend on the amplitude and frequency of the applied oscillating fluxes.

  11. Thermodynamics in finite time: A chemically driven engine

    International Nuclear Information System (INIS)

    Ondrechen, M.J.; Berry, R.S.; Andresen, B.

    1980-01-01

    The methods of finite time thermodynamics are applied to processes whose relaxation parameters are chemical rate coefficients within the working fluid. The direct optimization formalism used previously for heat engines with friction and finite heat transfer rates (termed the tricycle method) is extended to heat engines driven by exothermic reactions. The model is a flow reactor coupled by a heat exchanger to an engine. Conditions are established for the achievement of maximum power from such a system. Emphasis is on how the chemical kinetics control the finite-time thermodynamic extrema; first order, first order reversible, and second order reaction kinetics are analyzed. For the types of reactions considered here, there is always a finite positive flow rate in the reactor that yields maximum engine power. Maximum fuel efficiency is always attained in these systems at the uninteresting limit of zero flow rate.

  12. Millimeter-Gap Magnetically Insulated Transmission Line Power Flow Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Hutsel, Brian Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stoltzfus, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fowler, William E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LeChien, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mazarakis, Michael G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moore, James K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mulville, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Savage, Mark E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stygar, William A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); McKenney, John L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Peter A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); MacRunnels, Diego J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Long, Finis W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Porter, John L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    An experiment platform has been designed to study vacuum power flow in magnetically insulated transmission lines (MITLs). The platform was driven by the 400-GW Mykonos-V accelerator. The experiments conducted quantify the current loss in a millimeter-gap MITL with respect to vacuum conditions in the MITL for two different gap distances, 1.0 and 1.3 mm. The current loss for each gap was measured for three different vacuum pump-down times. As a ride-along experiment, multiple shots were conducted with each set of hardware to determine if there was a conditioning effect to increase current delivery on subsequent shots. The experiment results revealed large differences in performance for the 1.0 and 1.3 mm gaps. The 1.0 mm gap resulted in current loss of 40%-60% of peak current. The 1.3 mm gap resulted in current losses of less than 5% of peak current. Classical MITL models that neglect plasma expansion predict that there should be zero current loss, after magnetic insulation is established, for both of these gaps. The experiment results indicate that the vacuum pressure or pump-down time did not have a significant effect on the measured current loss at vacuum pressures between 1e-4 and 1e-5 Torr. Additionally, there was no repeatable evidence of a conditioning effect that reduced current loss for subsequent full-energy shots on a given set of hardware. It should be noted that the experiments conducted likely did not have large loss contributions due to ion emission from the anode, given the relatively small current densities (25-40 kA/cm) in the MITL that limited the anode temperature rise due to ohmic heating. The results and conclusions from these experiments may have limited applicability to MITLs of high current density (>400 kA/cm) used in the convolute and load region of the Z accelerator, which experience temperature increases of >400 °C and generate ion emission from anode surfaces.

  13. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
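The speckle-variance calculation at the heart of such systems is simply an interframe variance per pixel; a minimal CPU sketch on synthetic frames (frame count, sizes, and noise levels all invented) might look like:

```python
import numpy as np

def speckle_variance(frames):
    # SV image: intensity variance over the N gated frames at each pixel.
    return frames.var(axis=0)

rng = np.random.default_rng(2)
n_frames = 4
frames = np.full((n_frames, 64, 64), 100.0) + rng.normal(0, 1, (n_frames, 64, 64))
# Flowing blood decorrelates the speckle pattern frame to frame, so those
# pixels fluctuate far more than static tissue.
frames[:, 20:28, 20:28] += rng.normal(0, 20, (n_frames, 8, 8))
sv = speckle_variance(frames)
```

The "vessel" pixels light up in the SV image while static background stays dark; the GPU implementation in the paper parallelizes exactly this per-pixel reduction, after registration has removed bulk-motion decorrelation.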

  14. Additive genetic variance in polyandry enables its evolution, but polyandry is unlikely to evolve through sexy or good sperm processes.

    Science.gov (United States)

    Travers, L M; Simmons, L W; Garcia-Gonzalez, F

    2016-05-01

    Polyandry is widespread despite its costs. The sexually selected sperm hypotheses ('sexy' and 'good' sperm) posit that sperm competition plays a role in the evolution of polyandry. Two poorly studied assumptions of these hypotheses are the presence of additive genetic variance in polyandry and sperm competitiveness. Using a quantitative genetic breeding design in a natural population of Drosophila melanogaster, we first established the potential for polyandry to respond to selection. We then investigated whether polyandry can evolve through sexually selected sperm processes. We measured lifetime polyandry and offensive sperm competitiveness (P2) while controlling for sampling variance due to male × male × female interactions. We also measured additive genetic variance in egg-to-adult viability and controlled for its effect on P2 estimates. Female lifetime polyandry showed significant and substantial additive genetic variance and evolvability. In contrast, we found little genetic variance or evolvability in P2 or egg-to-adult viability. Additive genetic variance in polyandry highlights its potential to respond to selection. However, the low levels of genetic variance in sperm competitiveness suggest that the evolution of polyandry may not be driven by sexy sperm or good sperm processes. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  15. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.
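The paper derives exact variance sensitivities; a cruder Monte Carlo version of the same quantities can be sketched with an invented two-stage matrix model whose fecundity flips between environments, estimating the variance of the stochastic log growth rate and its sensitivity by finite differences (noisy, illustrative only).

```python
import numpy as np

rng = np.random.default_rng(3)

def log_growth_stats(fecundity, n_years=20000):
    # Hypothetical two-stage model; environment halves fecundity at random.
    A_good = np.array([[fecundity, 1.2], [0.5, 0.8]])
    A_bad = np.array([[0.5 * fecundity, 1.2], [0.5, 0.8]])
    n = np.array([1.0, 1.0])
    logs = np.empty(n_years)
    for t in range(n_years):
        A = A_good if rng.random() < 0.5 else A_bad
        n = A @ n
        s = n.sum()
        logs[t] = np.log(s)
        n /= s                 # renormalize: track one-step growth only
    return logs.mean(), logs.var(ddof=1)

mean_r, var_r = log_growth_stats(1.0)

# Finite-difference sensitivity of the *variance* to fecundity.
eps = 0.05
_, var_hi = log_growth_stats(1.0 + eps)
var_sensitivity = (var_hi - var_r) / eps
```

A management intervention that raises `fecundity` can move `mean_r` and `var_r` in the same direction, which is the counterintuitive behaviour the abstract highlights.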

  16. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
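The residual approach described above is ordinary least-squares residualization; a sketch on fabricated data (the predictors and effect sizes are invented, only the cohort size n = 244 comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 244  # cohort size reported in the study

# Hypothetical standardized predictors: demographics and brain volumes
# (e.g. age, education, hippocampal and white matter hyperintensity volumes).
X = rng.standard_normal((n, 4))
memory = X @ np.array([0.3, 0.4, 0.5, 0.2]) + rng.standard_normal(n)

# "Residual memory variance": what remains in memory performance after
# regressing out demographic factors and brain pathology.
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, memory, rcond=None)
residual = memory - X1 @ beta
```

In the longitudinal version, this residual is computed at each time point and its within-person change, rather than its level, is the predictor of incident dementia.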

  17. Data-driven modeling and real-time distributed control for energy efficient manufacturing systems

    International Nuclear Information System (INIS)

    Zou, Jing; Chang, Qing; Arinez, Jorge; Xiao, Guoxian

    2017-01-01

    As manufacturers face the challenges of increasing global competition and energy saving requirements, it is imperative to seek out opportunities to reduce energy waste and overall cost. In this paper, a novel data-driven stochastic manufacturing system modeling method is proposed to identify and predict energy saving opportunities and their impact on production. A real-time distributed feedback production control policy, which integrates the current and predicted system performance, is established to improve the overall profit and energy efficiency. A case study is presented to demonstrate the effectiveness of the proposed control policy. - Highlights: • A data-driven stochastic manufacturing system model is proposed. • Real-time system performance and energy saving opportunity identification method is developed. • Prediction method for future potential system performance and energy saving opportunity is developed. • A real-time distributed feedback control policy is established to improve energy efficiency and overall system profit.

  18. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders: ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  19. Respondent-driven sampling as Markov chain Monte Carlo.

    Science.gov (United States)

    Goel, Sharad; Salganik, Matthew J

    2009-07-30

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
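The Markov-chain view of RDS can be sketched with a single-coupon random walk on a toy network: the walk's stationary distribution is proportional to degree, so visits are reweighted by 1/degree (a Hajek-style importance-sampling estimator). The network, trait, and chain length below are all invented.

```python
import random

random.seed(4)

# Toy social network: a ring with long chords; some nodes get an extra tie,
# so degrees vary across the population.
n = 200
adj = {i: {(i - 1) % n, (i + 1) % n, (i + 37) % n, (i - 37) % n} for i in range(n)}
for i in range(0, n, 2):
    adj[i].add((i + 11) % n)
    adj[(i + 11) % n].add(i)
y = {i: 1 if i % 5 == 0 else 0 for i in range(n)}   # binary trait (e.g. infected)
true_mean = sum(y.values()) / n

# One-coupon RDS chain: each sample member recruits one neighbour at random.
node, num, den = 0, 0.0, 0.0
for _ in range(50000):
    node = random.choice(sorted(adj[node]))
    d = len(adj[node])
    num += y[node] / d      # importance weight 1/degree corrects the
    den += 1.0 / d          # degree-biased stationary distribution
rds_estimate = num / den
```

On this well-connected graph the estimator converges; inserting a bottleneck (e.g. removing the chords between two halves) would inflate its variance, which is the paper's central point.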

  20. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  1. Temporal Genetic Variance and Propagule-Driven Genetic Structure Characterize Naturalized Rainbow Trout (Oncorhynchus mykiss) from a Patagonian Lake Impacted by Trout Farming.

    Science.gov (United States)

    Benavente, Javiera N; Seeb, Lisa W; Seeb, James E; Arismendi, Ivan; Hernández, Cristián E; Gajardo, Gonzalo; Galleguillos, Ricardo; Cádiz, Maria I; Musleh, Selim S; Gomez-Uchida, Daniel

    2015-01-01

    Knowledge about the genetic underpinnings of invasions-a theme addressed by invasion genetics as a discipline-is still scarce amid well documented ecological impacts of non-native species on ecosystems of Patagonia in South America. One of the most invasive species in Patagonia's freshwater systems and elsewhere is rainbow trout (Oncorhynchus mykiss). This species was introduced to Chile during the early twentieth century for stocking and promoting recreational fishing; during the late twentieth century was reintroduced for farming purposes and is now naturalized. We used population- and individual-based inference from single nucleotide polymorphisms (SNPs) to illuminate three objectives related to the establishment and naturalization of Rainbow Trout in Lake Llanquihue. This lake has been intensively used for trout farming during the last three decades. Our results emanate from samples collected from five inlet streams over two seasons, winter and spring. First, we found that significant intra-population (temporal) genetic variance was greater than inter-population (spatial) genetic variance, downplaying the importance of spatial divergence during the process of naturalization. Allele frequency differences between cohorts, consistent with variation in fish length between spring and winter collections, might explain temporal genetic differences. Second, individual-based Bayesian clustering suggested that genetic structure within Lake Llanquihue was largely driven by putative farm propagules found at one single stream during spring, but not in winter. This suggests that farm broodstock might migrate upstream to breed during spring at that particular stream. It is unclear whether interbreeding has occurred between "pure" naturalized and farm trout in this and other streams. Third, estimates of the annual number of breeders (Nb) were below 73 in half of the collections, suggestive of genetically small and recently founded populations that might experience substantial

  2. The Timing of a Time Out: The Gap Year in Life Course Context

    Science.gov (United States)

    Vogt, Kristoffer Chelsom

    2018-01-01

    Based on biographical interviews from a three-generation study in Norway, this article examines the place of the contemporary "gap year" within life course transition trajectories and intergenerational relations embedded in wider patterns of social inequality. Under the heading of taking a gap year, young people on "academic…

  3. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...

  4. Giant modulation of the electronic band gap of carbon nanotubes by dielectric screening

    NARCIS (Netherlands)

    Aspitarte, Lee; McCulley, Daniel R.; Bertoni, Andrea; Island, J.O.; Ostermann, Marvin; Rontani, Massimo; Steele, G.A.; Minot, Ethan D.

    2017-01-01

    Carbon nanotubes (CNTs) are a promising material for high-performance electronics beyond silicon. But unlike silicon, the nature of the transport band gap in CNTs is not fully understood. The transport gap in CNTs is predicted to be strongly driven by electron-electron (e-e) interactions and

  5. Explicit formulas for the variance of discounted life-cycle cost

    International Nuclear Information System (INIS)

    Noortwijk, Jan M. van

    2003-01-01

    In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples

  6. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
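    The zero-variance principle can be illustrated with a one-dimensional toy problem rather than the transport calculation above (the integrand and sample count here are invented for the sketch): if samples are drawn from a density proportional to the integrand, every history scores the same constant, and the estimator's variance vanishes. The paper's scheme achieves the analogue by biasing the transition and collision kernels with adjoint densities.

```python
import math
import random

random.seed(0)
N = 10_000
I_EXACT = math.e - 1  # the integral of e^x over [0, 1]

# Analog sampling: x ~ U(0, 1), score f(x) = e^x.
analog = [math.exp(random.random()) for _ in range(N)]

# Zero-variance biasing: sample from q(x) = e^x / (e - 1), the density
# proportional to the integrand, via its inverse CDF.  Every sample then
# scores f(x) / q(x) = e - 1 exactly.
biased = []
for _ in range(N):
    x = math.log(1 + random.random() * (math.e - 1))
    weight = math.exp(x) / (math.exp(x) / I_EXACT)  # f(x) / q(x)
    biased.append(weight)

mean = lambda s: sum(s) / len(s)
var = lambda s: sum((v - mean(s)) ** 2 for v in s) / len(s)
print(mean(analog), var(analog))   # unbiased but noisy
print(mean(biased), var(biased))   # same mean, (essentially) zero variance
```

    In practice the integrand (here, the adjoint function) is only known approximately from a deterministic calculation, so the variance is reduced rather than eliminated, as the figure-of-merit results above indicate.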

  7. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  8. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.
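    The partitioning idea can be sketched with a simple balanced method-of-moments decomposition on invented log-catches (this is not the authors' negative binomial mixed model, which handles overdispersed counts and unbalanced sampling): for a site-by-year table, the total variance splits exactly into a spatial (between-site) component, a coherent temporal (year-to-year) component, and a residual.

```python
import statistics as st

# Hypothetical log-catches: 4 sites (rows) over 6 years (columns).
catches = [
    [2.1, 2.9, 2.4, 3.1, 1.8, 2.6],
    [2.4, 3.2, 2.7, 3.3, 2.0, 2.9],
    [1.9, 2.8, 2.2, 3.0, 1.7, 2.5],
    [2.6, 3.4, 2.8, 3.6, 2.2, 3.1],
]
n_sites, n_years = len(catches), len(catches[0])
cells = [v for row in catches for v in row]
grand = st.mean(cells)

site_means = [st.mean(row) for row in catches]
year_means = [st.mean(col) for col in zip(*catches)]
resid = [catches[i][j] - site_means[i] - year_means[j] + grand
         for i in range(n_sites) for j in range(n_years)]

var_total = st.pvariance(cells)
var_spatial = st.pvariance(site_means)    # between-site component
var_temporal = st.pvariance(year_means)   # coherent year-to-year component
var_resid = st.pvariance(resid)

# Balanced orthogonal decomposition: the components sum to the total.
print(var_spatial, var_temporal, var_resid, var_total)
```

    Comparing how the individual components shift before and after a perturbation, rather than only the total, is the diagnostic the study proposes.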

  9. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed in an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multi period mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  10. Can the Introduction of a Minimum Wage in FYR Macedonia Decrease the Gender Wage Gap?

    OpenAIRE

    F. Angel-Urdinola, Diego

    2008-01-01

    This paper relies on a simple framework to understand the gender wage gap in Macedonia, and simulates how the gender wage gap would behave after the introduction of a minimum wage. First, it presents a new - albeit simple - decomposition of the wage gap into three factors: (i) a wage level factor, which measures the extent to which the gender gap is driven by differences in wage levels amo...

  11. Analytical Solutions for Multi-Time Scale Fractional Stochastic Differential Equations Driven by Fractional Brownian Motion and Their Applications

    OpenAIRE

    Xiao-Li Ding; Juan J. Nieto

    2018-01-01

    In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We firstly decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations, and give their analytical solutions. Then, we use the variation of constant parameters to obtain the solutions of nonhomogeneous multi-time scale fractional stochast...

  12. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point.

  13. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  14. A novel Ka-band coaxial transit-time oscillator with a four-gap buncher

    Energy Technology Data Exchange (ETDEWEB)

    Song, Lili; He, Juntao; Ling, Junpu [College of Optoelectronic Science and Engineering, National University of Defense Technology, Changsha 410073 (China)

    2015-05-15

    A novel Ka-band coaxial transit-time oscillator (TTO) with a four-gap buncher is proposed and investigated. Simulation results show that an output power of 1.27 GW and a frequency of 26.18 GHz can be achieved with a diode voltage of 447 kV and a beam current of 7.4 kA. The corresponding power efficiency is 38.5%, and the guiding magnetic field is 0.6 T. Studies and analysis indicate that a buncher with four gaps can modulate the electron beam better than the three-gap buncher in such a Ka-band TTO. Moreover, power efficiency increases with the coupling coefficient between the buncher and the extractor. Further simulation demonstrates that power efficiency can reach higher than 30% with a guiding magnetic field of above 0.5 T. Besides, the power efficiency exceeds 30% in a relatively large range of diode voltage from 375 kV to 495 kV.

  15. Bobtail: A Proof-of-Work Target that Minimizes Blockchain Mining Variance (Draft)

    OpenAIRE

    Bissias, George; Levine, Brian Neil

    2017-01-01

    Blockchain systems are designed to produce blocks at a constant average rate. The most popular systems currently employ a Proof of Work (PoW) algorithm as a means of creating these blocks. Bitcoin produces, on average, one block every 10 minutes. An unfortunate limitation of all deployed PoW blockchain systems is that the time between blocks has high variance. For example, 5% of the time, Bitcoin's inter-block time is at least 40 minutes. This variance impedes the consistent flow of validated...
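    The variance problem, and the kind of remedy a multi-proof target aims at, can be sketched numerically (parameters invented; this is not Bobtail's actual construction): with a memoryless PoW, inter-block times are exponential, so the standard deviation equals the mean, while spreading the work over k sub-proofs yields an Erlang-k waiting time with the same mean but a coefficient of variation smaller by a factor of sqrt(k).

```python
import random
import statistics as st

random.seed(42)
MEAN = 10.0   # target minutes per block
N = 20_000

# Memoryless PoW: exponential inter-block times (std dev == mean).
single = [random.expovariate(1 / MEAN) for _ in range(N)]

# Hypothetical k-sub-proof block: sum of k independent exponentials with
# mean MEAN/k (Erlang-k) -- same 10-minute mean, much lower variance.
K = 16
multi = [sum(random.expovariate(K / MEAN) for _ in range(K))
         for _ in range(N // K)]

cv = lambda s: st.pstdev(s) / st.mean(s)   # coefficient of variation
print(round(cv(single), 2), round(cv(multi), 2))
```

    The simulated coefficient of variation drops from about 1 to about 1/sqrt(16) = 0.25, illustrating why reducing block-time variance requires changing the proof target rather than just the difficulty.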

  16. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real-time systems need a time-predictable computing platform to enable static worst-case execution time (WCET) analysis. All performance-enhancing features need to be WCET analyzable. However, standard data caches containing heap-allocated data are very hard to analyze statically. In this paper we explore a new object cache design, which is driven by the capabilities of static WCET analysis. Simulations of standard benchmarks estimating the expected average-case performance usually drive computer architecture design. The design decisions derived from this methodology do not necessarily result in a WCET analysis-friendly design. Aiming for a time-predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...

  17. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
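    The core benefit of shrinking per-variable variances can be shown in a minimal sketch (invented data; a plain Stein-type shrink toward the pooled average, not the MVR package's similarity-statistic clustering): with few degrees of freedom, per-variable sample variances are noisy, and pooling information across variables reduces mean-squared error.

```python
import random
import statistics as st

random.seed(1)
n_samples, n_vars, true_var = 4, 200, 1.0   # true variance is 1 by construction

raw = []
for _ in range(n_vars):
    xs = [random.gauss(0, true_var ** 0.5) for _ in range(n_samples)]
    raw.append(st.variance(xs))          # unbiased, but only 3 df -> very noisy

pooled = st.mean(raw)                    # information shared across variables
lam = 0.5                                # hypothetical shrinkage weight
shrunk = [lam * pooled + (1 - lam) * v for v in raw]

mse = lambda est: st.mean([(e - true_var) ** 2 for e in est])
print(mse(raw), mse(shrunk))
```

    Even this crude common-value shrinkage roughly quarters the MSE here; the paper's local-pooled estimators refine the idea by shrinking within clusters of similar variables and jointly over mean and variance.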

  18. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.
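    Why the location estimate matters can be seen in a minimal two-asset sketch (invented returns and a simplified risk-adjusted score, not the paper's Bursa Malaysia optimization): with skewed returns, the median is robust to a single crash that drags down the mean, so the two criteria can rank the same assets differently.

```python
import statistics as st

asset_a = [0.03, 0.03, 0.04, 0.03, -0.30]   # one crash skews the mean
asset_b = [0.01, 0.01, 0.02, 0.01, 0.01]    # small, steady returns

def score(returns, center):
    """Toy risk-adjusted score: location estimate minus return variance."""
    return center(returns) - st.pvariance(returns)

for name, center in [("mean", st.mean), ("median", st.median)]:
    a, b = score(asset_a, center), score(asset_b, center)
    print(f"{name}-variance criterion prefers asset {'A' if a > b else 'B'}")
```

    Here the mean-variance score prefers asset B while the median-variance score prefers asset A: the single outlier dominates the mean but leaves the median of asset A's typical returns intact.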

  19. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
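    The basic noise-inflation problem that motivates kernel-based alternatives can be sketched with simulated prices (iid noise and invented parameters; the paper's point is precisely that real noise is more general than this): RV computed tick-by-tick from noisy prices is dominated by the noise, while sparser sampling tempers the bias at the cost of discarding data.

```python
import random

random.seed(7)
n, sigma, noise_sd = 10_000, 0.01, 0.02

# Efficient price: Gaussian random walk; observed price adds iid noise.
eff = [0.0]
for _ in range(n):
    eff.append(eff[-1] + random.gauss(0, sigma))
obs = [p + random.gauss(0, noise_sd) for p in eff]

def realized_variance(prices, step):
    """Sum of squared returns sampled every `step` ticks."""
    rets = [prices[i + step] - prices[i]
            for i in range(0, len(prices) - step, step)]
    return sum(r * r for r in rets)

iv = n * sigma ** 2                     # nominal integrated variance
rv_dense = realized_variance(obs, 1)    # inflated by roughly 2*n*noise_sd**2
rv_sparse = realized_variance(obs, 50)  # sparse sampling tempers the bias
print(iv, round(rv_dense, 2), round(rv_sparse, 2))
```

    Kernel-based estimators of the kind studied in the paper aim to use all the data while correcting for the noise-induced autocovariance, rather than throwing ticks away.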

  20. Interest Rate Risk Management using Duration Gap Methodology

    Directory of Open Access Journals (Sweden)

    Dan Armeanu

    2008-01-01

    Full Text Available The world of financial institutions has changed during the last 20 years, becoming riskier and more competition-driven. After the deregulation of the financial market, banks had to take on extensive risk in order to earn sufficient returns. Interest rate volatility has increased dramatically over the past twenty-five years, and for that reason efficient management of interest rate risk is strongly required. In recent years banks have developed a variety of methods for measuring and managing interest rate risk. Of these, the most frequently used in real banking life and recommended by the Basel Committee are based on the Repricing (Funding Gap) Model, the Maturity Gap Model, the Duration Gap Model, and Static and Dynamic Simulation. The purpose of this article is to give a good understanding of the duration gap model used for managing interest rate risk. The article starts with an overview of interest rate risk and explains how this type of risk should be measured and managed within asset-liability management. It then takes a short look at methods for measuring interest rate risk and, after that, explains and demonstrates how the Duration Gap Model can be used for managing interest rate risk in banks.
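    The duration gap calculation the article describes can be sketched with illustrative balance-sheet numbers (invented, not taken from the article): the gap is the asset duration minus the leverage-weighted liability duration, and it scales the approximate change in the economic value of equity for a parallel rate shock.

```python
# Hypothetical balance sheet (millions) and Macaulay durations (years).
A, L = 1000.0, 900.0
D_A, D_L = 4.5, 2.5
y = 0.05     # current interest-rate level
dy = 0.01    # parallel rate shock: +100 basis points

# Duration gap: asset duration minus leverage-weighted liability duration.
DGAP = D_A - (L / A) * D_L

# First-order change in the economic value of equity for the rate shock.
dE = -DGAP * A * dy / (1 + y)
print(DGAP, round(dE, 2))
```

    With a positive gap of 2.25 years, the 100 bp rise costs about 21.43 million of equity value; immunization means driving DGAP toward zero.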

  1. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    International audience; We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allow us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  2. Using Time-Driven Activity-Based Costing to Implement Change.

    Science.gov (United States)

    Sayed, Ellen N; Laws, Sa'ad; Uthman, Basim

    2017-01-01

    Academic medical libraries have responded to changes in technology, evolving professional roles, reduced budgets, and declining traditional services. Libraries that have taken a proactive role to change have seen their librarians emerge as collaborators and partners with faculty and researchers, while para-professional staff is increasingly overseeing traditional services. This article addresses shifting staff and schedules at a single-service-point information desk by using time-driven activity-based costing to determine the utilization of resources available to provide traditional library services. Opening hours and schedules were changed, allowing librarians to focus on patrons' information needs in their own environment.
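    The mechanics of time-driven activity-based costing are simple enough to show with invented desk numbers (these figures are hypothetical, not the library's actual data): a single capacity cost rate is computed from the cost and practical capacity of the resource, each transaction type is costed by its unit time, and the unused capacity falls out explicitly.

```python
# Hypothetical TDABC sketch for a single-service-point information desk.
monthly_cost = 12_000.0        # cost of staffing the desk ($ per month)
practical_minutes = 8_000.0    # usable staffed minutes per month
rate = monthly_cost / practical_minutes   # capacity cost rate, $/minute

# Unit times per transaction type (minutes) and monthly volumes.
activities = {
    "directional question": (1.5, 900),
    "technology help":      (6.0, 400),
    "reference question":  (12.0, 150),
}

used = 0.0
for name, (minutes, volume) in activities.items():
    used += minutes * volume
    print(f"{name}: ${rate * minutes:.2f} per transaction")

# Unused capacity is the signal used to justify shifting staff and hours.
print(f"capacity utilisation: {used / practical_minutes:.0%}")
```

    It is the explicit utilisation figure, here about 69%, that makes the case for reassigning librarian time away from the desk.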

  3. Time for Men to Catch up on Women? A Study of the Swedish Gender Wage Gap 1973-2012

    OpenAIRE

    Löfström, Åsa

    2014-01-01

    The Swedish gender wage gap decreased substantially from the 1960s until the beginning of the 1980s. At the same time, women had been narrowing the gap with men in employment experience and education. While women continued to catch up with men, the average wage gap remained almost the same as in the 1980s. The catch-up hypothesis was obviously not the sole explanation for the wage gap. The purpose here was to discuss other factors of relevance for the evolution of the average pay gap. Data for the period 1972-2012 ...

  4. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
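    The windowed construction described above can be sketched on a synthetic two-level "channel" trace (values and window width invented): sliding a short window over the record yields mean-variance pairs, and the low-variance pairs cluster at the defined current levels.

```python
import statistics as st

# Synthetic single-channel record: a closed level near 0 pA and an
# open level near 5 pA, with small measurement noise.
trace = [0.0, 0.1, -0.1, 0.0, 0.1,        # closed
         5.0, 5.1, 4.9, 5.0, 5.1, 4.9,    # open
         0.1, 0.0, -0.1, 0.0]             # closed again

N = 3  # window width: N consecutive samples
pairs = []
for i in range(len(trace) - N + 1):
    window = trace[i:i + N]
    pairs.append((st.mean(window), st.pvariance(window)))

# Low-variance pairs mark defined current levels; high-variance pairs
# come from windows straddling a transition between levels.
levels = [m for m, v in pairs if v < 0.05]
print(sorted(round(m, 1) for m in levels))
```

    In the full method, the count of events in each low-variance region is then tracked as the window width N grows, recovering the dwell-time constants.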

  5. Gap probability - Measurements and models of a pecan orchard

    Science.gov (United States)

    Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

    1992-01-01

    Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle, made under the canopy looking upward at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels, crown and leaf. Crown-level parameters include the shape of the crown envelope and the spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.

  6. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.

  7. Mean-variance portfolio optimization with state-dependent risk aversion

    DEFF Research Database (Denmark)

    Bjoerk, Tomas; Murgoci, Agatha; Zhou, Xun Yu

    2014-01-01

    The objective of this paper is to study the mean-variance portfolio optimization in continuous time. Since this problem is time inconsistent we attack it by placing the problem within a game theoretic framework and look for subgame perfect Nash equilibrium strategies. This particular problem has...

  8. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
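    The kind of error at stake can be shown for the simplest case, a two-input AND gate whose top-event probability is the product of independent input probabilities (the means and variances below are illustrative, not from the paper): the first-order (delta-method) propagation formula omits the product-of-variances term that the exact result contains.

```python
# Hypothetical input failure probabilities: means and variances.
mu_x, var_x = 0.1, 0.004
mu_y, var_y = 0.2, 0.009

# First-order (delta-method) approximation of Var(X * Y).
approx = mu_y ** 2 * var_x + mu_x ** 2 * var_y

# Exact variance of a product of independent random variables:
# Var(XY) = (var_x + mu_x^2)(var_y + mu_y^2) - mu_x^2 * mu_y^2.
exact = (var_x + mu_x ** 2) * (var_y + mu_y ** 2) - mu_x ** 2 * mu_y ** 2

print(approx, exact, exact - approx)  # the difference is var_x * var_y
```

    The approximation always understates the variance of an AND gate by exactly var_x * var_y, and such errors compound as they propagate up a large tree.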

  9. Optical isolation based on space-time engineered asymmetric photonic band gaps

    Science.gov (United States)

    Chamanara, Nima; Taravati, Sajjad; Deck-Léger, Zoé-Lise; Caloz, Christophe

    2017-10-01

    Nonreciprocal electromagnetic devices play a crucial role in modern microwave and optical technologies. Conventional methods for realizing such systems are incompatible with integrated circuits. With recent advances in integrated photonics, the need for efficient on-chip magnetless nonreciprocal devices has become more pressing than ever. This paper leverages space-time engineered asymmetric photonic band gaps to generate optical isolation. It shows that a properly designed space-time modulated slab is highly reflective/transparent for opposite directions of propagation. The corresponding design is magnetless, accommodates low modulation frequencies, and can achieve very high isolation levels. An experimental proof of concept at microwave frequencies is provided.

  10. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.

  11. Electronic band-gap modified passive silicon optical modulator at telecommunications wavelengths.

    Science.gov (United States)

    Zhang, Rui; Yu, Haohai; Zhang, Huaijin; Liu, Xiangdong; Lu, Qingming; Wang, Jiyang

    2015-11-13

    The silicon optical modulator is considered to be the workhorse of a revolution in communications. In recent years, the capabilities of externally driven active silicon optical modulators have dramatically improved. Self-driven passive modulators, especially passive silicon modulators, possess advantages in compactness, integration, low cost, etc. Constrained by a large indirect band-gap and sensitivity-related loss, passive silicon optical modulators are scarce and have seen little progress, especially at telecommunications wavelengths. Here, a passive silicon optical modulator is fabricated by introducing an impurity band in the electronic band-gap, and its nonlinear optics and applications in telecommunications-wavelength lasers are investigated. The saturable absorption properties at the wavelength of 1.55 μm were measured and indicate that the sample is quite sensitive to light intensity and has negligible absorption loss. With a passive silicon modulator, pulsed lasers were constructed at wavelengths of 1.34 and 1.42 μm. It is concluded that the sensitive self-driven passive silicon optical modulator is a viable candidate for photonics applications out to 2.5 μm.

  12. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance (approaching white noise over short periods) as a result of current production objectives.
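The spectral decomposition of hydrograph variance used above rests on Parseval's relation: the variance of a discrete series equals the sum of its one-sided periodogram, so the outflow variance can be partitioned scale by scale. A minimal self-contained check on a synthetic hydrograph (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
# Hypothetical daily effluence hydrograph: seasonal cycle plus noise.
t = np.arange(n)
q = 100 + 20 * np.sin(2 * np.pi * t / 365) + 5 * rng.standard_normal(n)

# One-sided periodogram of the mean-removed series.
x = q - q.mean()
X = np.fft.rfft(x)
psd = np.abs(X) ** 2 / n**2
psd[1:-1] *= 2      # fold negative frequencies (n even: DC and Nyquist not doubled)

# Parseval: total spectral power equals the time-domain variance.
print(np.var(x), psd.sum())
```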

  13. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
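An elevation-variance cell can be maintained online as new stereo range points fall into it. The sketch below uses Welford's running-variance algorithm as a simplified stand-in for a map cell; it is illustrative, not the Gamma-SLAM code:

```python
class VarianceGridCell:
    """Online mean/variance of elevation samples falling in one map cell
    (Welford's algorithm). A simplified stand-in, not the Gamma-SLAM code."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def add(self, z):
        self.n += 1
        delta = z - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (z - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0

# Hypothetical elevation samples (meters) landing in one grid cell.
cell = VarianceGridCell()
for z in [1.0, 1.2, 0.8, 1.1]:
    cell.add(z)
print(cell.mean, cell.variance)
```

Rough terrain then shows up directly as cells with high elevation variance, which is the quantity the particle filter's map representation stores.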

  14. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period of time (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially in medium- and long-term investment horizons.
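For reference, the unconstrained global minimum variance portfolio of Modern Portfolio Theory has a closed form, w = C⁻¹1 / (1ᵀC⁻¹1). A sketch with synthetic returns (the study's 36-stock BM&FBOVESPA data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical weekly returns for 5 assets (rows = weeks); the study used
# rolling windows over 36 BM&FBOVESPA securities.
returns = 0.001 + 0.02 * rng.standard_normal((104, 5))

cov = np.cov(returns, rowvar=False)
ones = np.ones(cov.shape[0])

# Global minimum variance weights: w = C^{-1} 1 / (1' C^{-1} 1);
# shorting is allowed in this sketch.
w = np.linalg.solve(cov, ones)
w /= w.sum()

port_var = w @ cov @ w
print(w.round(3), port_var)
```

By construction the resulting variance is no larger than that of any single asset, which is the property the rolling-window strategy above exploits.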

  15. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in the degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the most extreme low families. For cross-sectional data, a DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and from a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  16. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
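The proposed plot reduces to regressing log(variance) on log(mean) across the environmental gradient: a slope of 2 corresponds to a constant coefficient of variation (pure scaling of variance with the mean), and departures from it trace the phenotypic variance gradient. A toy example with invented trait data:

```python
import numpy as np

# Invented trait means and variances measured at 5 points along an
# environmental gradient (e.g. rearing temperatures).
means = np.array([10.0, 12.0, 15.0, 18.0, 22.0])
variances = np.array([1.1, 1.5, 2.4, 3.3, 5.0])

# Slope of log(variance) on log(mean). Slope 2 is the reference line for
# pure scaling (constant coefficient of variation); deviations from it
# describe the phenotypic variance gradient.
slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
print(slope)
```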

  17. Noise Reduction and Gap Filling of fAPAR Time Series Using an Adapted Local Regression Filter

    Directory of Open Access Journals (Sweden)

    Álvaro Moreno

    2014-08-01

    Time series of remotely sensed data are an important source of information for understanding land cover dynamics. In particular, the fraction of absorbed photosynthetically active radiation (fAPAR) is a key variable in the assessment of vegetation primary production over time. However, fAPAR series derived from polar-orbiting satellites are not continuous and consistent in space and time. Filtering methods are thus required to fill in gaps and produce high-quality time series. This study proposes an adapted (iteratively reweighted) local regression filter (LOESS) and performs a benchmarking intercomparison with four popular and generally applicable smoothing methods: Double Logistic (DLOG), smoothing spline (SSP), Interpolation for Data Reconstruction (IDR) and adaptive Savitzky-Golay (ASG). This paper evaluates the main advantages and drawbacks of the considered techniques. The results show that ASG and the adapted LOESS perform better in recovering fAPAR time series over multiple controlled noisy scenarios. Both methods can robustly reconstruct the fAPAR trajectories, reducing the noise by up to 80% in the worst simulation scenario, which might be attributed to the quality control (QC) MODIS information incorporated into these filtering algorithms, their flexibility, and their adaptation to the upper envelope. The adapted LOESS is particularly resistant to outliers. This method clearly outperforms the other considered methods in dealing with the high presence of gaps and noise in satellite data records. The low RMSE and biases obtained with the LOESS method (|rMBE| < 8%; rRMSE < 20%) reveal an optimal reconstruction even in the most extreme situations with long seasonal gaps. An example of application of the LOESS method to fill in invalid values in real MODIS images presenting persistent cloud and snow coverage is also shown. The LOESS approach is recommended in most remote sensing applications, such as gap-filling, cloud-replacement, and observing temporal
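An iteratively reweighted local regression filter of the general kind benchmarked here can be written compactly: tricube distance weights define each local linear fit, and bisquare weights on the residuals downweight outliers on subsequent passes. The sketch below (synthetic data, simplified nearest-neighbor bandwidth rule) is illustrative, not the paper's exact filter:

```python
import numpy as np

def robust_loess(t, y, span=0.2, iters=3):
    """Iteratively reweighted local linear regression (a simplified LOESS).
    NaNs in y are treated as gaps; the curve is evaluated at every t, which
    fills them. A sketch of the general technique, not the paper's filter."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    valid = ~np.isnan(y)
    tv, yv = t[valid], y[valid]
    robust_w = np.ones(tv.size)              # bisquare robustness weights
    fit = np.empty(t.size)
    k = max(int(span * tv.size), 2)          # neighbors per local fit
    for _ in range(iters):
        for i, ti in enumerate(t):
            d = np.abs(tv - ti)
            h = np.sort(d)[k - 1]            # local bandwidth
            w = np.clip(1 - (d / h) ** 3, 0, None) ** 3 * robust_w  # tricube
            sw = np.sqrt(w)
            A = np.vstack([np.ones_like(tv), tv - ti]).T
            beta, *_ = np.linalg.lstsq(A * sw[:, None], yv * sw, rcond=None)
            fit[i] = beta[0]                 # local fit evaluated at ti
        resid = yv - fit[valid]
        s = np.median(np.abs(resid)) + 1e-12
        robust_w = np.clip(1 - (resid / (6 * s)) ** 2, 0, None) ** 2
    return fit

# Demo: noisy seasonal signal with a long gap and one outlier.
t = np.arange(100, dtype=float)
truth = np.sin(2 * np.pi * t / 50)
y = truth + 0.1 * np.random.default_rng(3).standard_normal(t.size)
y[40:55] = np.nan                            # persistent gap
y[10] += 3.0                                 # outlier
filled = robust_loess(t, y)
print(float(np.mean(np.abs(filled - truth))))
```

The robustness reweighting is what gives this family of filters the outlier resistance noted in the abstract; the gap is filled because each local fit borrows strength from valid neighbors on both sides.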

  18. Daily Thermal Predictions of the AGR-1 Experiment with Gas Gaps Varying with Time

    Energy Technology Data Exchange (ETDEWEB)

    Grant Hawkes; James Sterbentz; John Maki; Binh Pham

    2012-06-01

    A new daily as-run thermal analysis was performed at the Idaho National Laboratory on the Advanced Gas Reactor (AGR) test experiment number one at the Advanced Test Reactor (ATR). This thermal analysis incorporates gas gaps changing with time during the irradiation experiment. The purpose of this analysis was to calculate the daily average temperatures of each compact to compare with experimental results. Post-irradiation examination (PIE) measurements of the graphite holder and fuel compacts showed the gas gaps varying from the beginning of life. The control temperature gas gap and the fuel compact - graphite holder gas gaps were linearly changed from the original fabrication dimensions to the end-of-irradiation measurements. A steady-state thermal analysis was performed for each daily calculation. These new thermal predictions more closely match the experimental data taken during the experiment than previous analyses. Results are presented comparing normalized compact average temperatures to normalized log(R/B) Kr-85m. The R/B term is the measured release rate divided by the predicted birth rate for the isotope Kr-85m. Correlations between these two normalized values are presented.
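The time-varying gap treatment amounts to a daily linear interpolation between the fabrication dimension and the PIE measurement. A sketch with purely illustrative numbers (not AGR-1 data):

```python
# Hypothetical gap dimensions (mm): fabrication value and the (smaller)
# end-of-irradiation value measured at PIE, linearly interpolated over the
# irradiation, as in the daily as-run analysis. Values are illustrative.
gap_bol = 0.100        # beginning of life (fabrication)
gap_eol = 0.060        # end of irradiation (PIE measurement)
total_days = 620       # assumed irradiation length

def gap_on_day(day):
    frac = day / total_days
    return gap_bol + (gap_eol - gap_bol) * frac

for day in (0, 310, 620):
    print(day, round(gap_on_day(day), 4))
```

Each day's interpolated gap then feeds that day's steady-state thermal calculation.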

  19. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika Fleischhauer

    2013-09-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense, defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive of method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  20. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  1. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two components: the residual error variance and the treatment variance. While a confidence interval can be constructed for the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  2. Real-time power angle determination of salient-pole synchronous machine based on air gap measurements

    Energy Technology Data Exchange (ETDEWEB)

    Despalatovic, Marin; Jadric, Martin; Terzic, Bozo [FESB University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, R. Boskovica bb, 21000 Split (Croatia)

    2008-11-15

    This paper presents a new method for real-time power angle determination of salient-pole synchronous machines. The method is based on terminal voltage and air gap measurements, which are common features of hydroturbine generator monitoring systems. The raw signal of the air gap sensor is used to detect the rotor displacement with reference to the fundamental component of the terminal voltage. First, the algorithm developed for real-time power angle determination is tested using synthetic data obtained by a standard machine model simulation. Thereafter, an experimental investigation is carried out on a 26 MVA utility generator. The validity of the method is verified by comparison with another method, which is based on a tooth gear mounted on the rotor shaft. The proposed real-time algorithm has adequate accuracy and needs a very short processing time. For applications that do not require real-time processing, such as the estimation of synchronous machine parameters, the accuracy is additionally increased by applying an off-line data-processing algorithm. (author)

  3. Shell gap reduction in neutron-rich N=17 nuclei

    International Nuclear Information System (INIS)

    Obertelli, A.; Gillibert, A.; Alamanos, N.; Alvarez, M.; Auger, F.; Dayras, R.; Drouart, A.; France, G. de; Jurado, B.; Keeley, N.; Lapoux, V.; Mittig, W.; Mougeot, X.; Nalpas, L.; Pakou, A.; Patronis, N.; Pollacco, E.C.; Rejmund, F.; Rejmund, M.; Roussel-Chomaz, P.; Savajols, H.; Skaza, F.; Theisen, Ch.

    2006-01-01

    The spectroscopy of 27Ne has been investigated through the one-neutron transfer reaction 26Ne(d,p)27Ne in inverse kinematics at 9.7 MeV/nucleon. The results strongly support the existence of a low-lying negative-parity state in 27Ne, which is a signature of a reduced sd-fp shell gap in the N=16 neutron-rich region, at variance with stable nuclei.

  4. Extinction Time of a Metapopulation Driven by Colored Correlated Noises

    International Nuclear Information System (INIS)

    Li Jiangcheng

    2010-01-01

    The simplified incidence function model which is driven by the colored correlated noises is employed to investigate the extinction time of a metapopulation perturbed by environments. The approximate Fokker-Planck equation and the mean first passage time, which denotes the extinction time (T_ex), are obtained by virtue of the Novikov theorem and the Fox approach. After introducing a noise intensity ratio and a dimensionless parameter R = D/α (D and α are the multiplicative and additive colored noise intensities, respectively), and then performing numerical computations, the results indicate that: (i) the absolute value of the correlation strength Λ and its correlation time τ_3 play opposite roles on T_ex; (ii) for the case of 0 < Λ < 1, D and τ_2 play opposite roles on T_ex, for which R > 1 is the best condition, and there is a one-peak structure on the T_ex - D plot; (iii) for the case of -1 < Λ < 0, D and τ_2 play opposite roles on T_ex, for which R < 1 is the best condition, and there is a one-peak structure on the T_ex - τ_2 plot. (general)

  5. Applying cost accounting to operating room staffing in otolaryngology: time-driven activity-based costing and outpatient adenotonsillectomy.

    Science.gov (United States)

    Balakrishnan, Karthik; Goico, Brian; Arjmand, Ellis M

    2015-04-01

    (1) To describe the application of a detailed cost-accounting method (time-driven activity-based costing) to operating room personnel costs, avoiding the proxy use of hospital and provider charges. (2) To model potential cost efficiencies using different staffing models with the case study of outpatient adenotonsillectomy. Prospective cost analysis case study. Tertiary pediatric hospital. All otolaryngology providers and otolaryngology operating room staff at our institution. Time-driven activity-based costing demonstrated precise per-case and per-minute calculation of personnel costs. We identified several areas of unused personnel capacity in a basic staffing model. Per-case personnel costs decreased by 23.2% by allowing a surgeon to run 2 operating rooms, despite doubling all other staff. Further cost reductions up to a total of 26.4% were predicted with additional staffing rearrangements. Time-driven activity-based costing allows detailed understanding of not only personnel costs but also how personnel time is used. This in turn allows testing of alternative staffing models to decrease unused personnel capacity and increase efficiency. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
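Time-driven ABC reduces to multiplying each resource's capacity cost rate (cost per minute) by the minutes that resource spends per case; staffing models are then compared by re-allocating minutes. The figures below are invented for illustration and are not the study's:

```python
# Time-driven ABC: cost per case = sum over resources of
# (capacity cost rate per minute) x (minutes used per case).
# All rates and times below are hypothetical.
staff = {
    # role: (cost per minute, minutes per case)
    "surgeon": (10.00, 30),
    "anesthesiologist": (7.00, 35),
    "or_nurse": (1.20, 45),
    "scrub_tech": (0.90, 45),
}

per_case_cost = sum(rate * minutes for rate, minutes in staff.values())

# Hypothetical two-room model: the surgeon's time is amortized over two
# parallel rooms (other staff unchanged), halving surgeon cost per case.
rate, minutes = staff["surgeon"]
two_room_cost = per_case_cost - rate * minutes / 2

print(per_case_cost, two_room_cost)
```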

  6. A Gap Analysis Needs Assessment Tool to Drive a Care Delivery and Research Agenda for Integration of Care and Sharing of Best Practices Across a Health System.

    Science.gov (United States)

    Golden, Sherita Hill; Hager, Daniel; Gould, Lois J; Mathioudakis, Nestoras; Pronovost, Peter J

    2017-01-01

    In a complex health system, it is important to establish a systematic and data-driven approach to identifying needs. The Diabetes Clinical Community (DCC) of Johns Hopkins Medicine's Armstrong Institute for Patient Safety and Quality developed a gap analysis tool and process to establish the system's current state of inpatient diabetes care. The collectively developed tool assessed the following areas: program infrastructure; protocols, policies, and order sets; patient and health care professional education; and automated data access. For the purposes of this analysis, gaps were defined as those instances in which local resources, infrastructure, or processes demonstrated a variance against the current national evidence base or institutionally defined best practices. Following the gap analysis, members of the DCC, in collaboration with health system leadership, met to identify priority areas in order to integrate and synergize diabetes care resources and efforts to enhance quality and reduce disparities in care across the system. Key gaps in care identified included lack of standardized glucose management policies, lack of standardized training of health care professionals in inpatient diabetes management, and lack of access to automated data collection and analysis. These results were used to gain resources to support collaborative diabetes health system initiatives and to successfully obtain federal research funding to develop and pilot a pragmatic diabetes educational intervention. At a health system level, the summary format of this gap analysis tool is an effective method to clearly identify disparities in care to focus efforts and resources to improve care delivery. Copyright © 2016 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  7. Unlocking the Potential of Time-Driven Activity-Based Costing for Small Logistics Companies

    NARCIS (Netherlands)

    Somapa, S.; Cools, M.; Dullaert, W.E.H.

    2012-01-01

    This paper reports on the development of a time-driven activity-based costing (TDABC) model in a small-sized road transport and logistics company. Activity-based costing (ABC) leads to increased accuracy benefiting decision-making, but the costs of implementation can be high. TDABC tries to overcome

  8. Time-Dependent Thermally-Driven Interfacial Flows in Multilayered Fluid Structures

    Science.gov (United States)

    Haj-Hariri, Hossein; Borhan, A.

    1996-01-01

    A computational study of thermally-driven convection in multilayered fluid structures will be performed to examine the effect of interactions among deformable fluid-fluid interfaces on the structure of time-dependent flow in these systems. Multilayered fluid structures in two model configurations will be considered: the differentially heated rectangular cavity with a free surface, and the encapsulated cylindrical liquid bridge. An extension of a numerical method developed as part of our recent NASA Fluid Physics grant will be used to account for finite deformations of fluid-fluid interfaces.

  9. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
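With shorting allowed, the mean-variance problem of minimizing wᵀCw subject to wᵀ1 = 1 and wᵀμ = r has a closed-form Lagrangian solution. A sketch with synthetic weekly returns (not the FBMKLCI data used in the study):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical weekly returns for 6 stocks (the study used 20 FBMKLCI
# component stocks).
R = 0.002 + 0.03 * rng.standard_normal((156, 6))
mu = R.mean(axis=0)
cov = np.cov(R, rowvar=False)

target = mu.mean()          # assumed target weekly portfolio return

# Markowitz mean-variance weights: minimize w' C w subject to w' 1 = 1
# and w' mu = target. The optimum lies in span{C^-1 1, C^-1 mu}; the two
# multipliers follow from a 2x2 linear system.
inv = np.linalg.inv(cov)
ones = np.ones_like(mu)
A = ones @ inv @ ones
B = ones @ inv @ mu
C = mu @ inv @ mu
lam, gam = np.linalg.solve(np.array([[A, B], [B, C]]), np.array([1.0, target]))
w = inv @ (lam * ones + gam * mu)

print(w.round(3), w @ mu, w @ cov @ w)
```

Sweeping `target` over a range of returns traces out the efficient frontier from which such optimal portfolios are drawn.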

  10. Particle-in-cell modeling of the nanosecond field emission driven discharge in pressurized hydrogen

    Science.gov (United States)

    Levko, Dmitry; Yatom, Shurik; Krasik, Yakov E.

    2018-02-01

    The high-voltage field-emission driven nanosecond discharge in pressurized hydrogen is studied using a one-dimensional Particle-in-Cell Monte Carlo collision model. It is found that the main part of the field-emitted electrons becomes runaway in the thin cathode sheath. These runaway electrons propagate across the entire cathode-anode gap, creating rather dense (~10¹² cm⁻³) seeding plasma electrons. In addition, these electrons initiate a streamer propagating through this background plasma with a speed of ~30% of the speed of light. Such a high streamer speed allows the self-acceleration mechanism of runaway electrons present between the streamer head and the anode to be realized. As a consequence, the energy of the runaway electrons exceeds the cathode-anode gap voltage. In addition, the influence of the field-emission switching-off time is analyzed. It is found that this time significantly influences the discharge dynamics.

  11. Stability of driven systems with growing gaps, quantum rings, and Wannier ladders

    Czech Academy of Sciences Publication Activity Database

    Asch, J.; Duclos, P.; Exner, Pavel

    1998-01-01

    Roč. 92, 5/6 (1998), s. 1053-1070 ISSN 0022-4715 R&D Projects: GA AV ČR(CZ) IAA148409 Keywords : quantum stability * energy expectations * driven rings Subject RIV: BE - Theoretical Physics Impact factor: 1.469, year: 1998

  12. Dependence of beam emittance on plasma electrode temperature and rf-power, and filter-field tuning with center-gapped rod-filter magnets in J-PARC rf-driven H− ion source

    International Nuclear Information System (INIS)

    Ueno, A.; Koizumi, I.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2014-01-01

    The prototype rf-driven H− ion source with a nickel-plated oxygen-free-copper (OFC) plasma chamber, which satisfies the Japan Proton Accelerator Research Complex (J-PARC) 2nd stage requirements of an H− ion beam current of 60 mA within normalized emittances of 1.5 π mm mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a life-time of more than 50 days, was reported at the 3rd international symposium on negative ions, beams, and sources (NIBS2012). The experimental results of the J-PARC ion source with a plasma chamber made of stainless steel, instead of the nickel-plated OFC used in the prototype source, are presented in this paper. By comparing these two sources, the following two important results were acquired. One was that an approximately 20% lower emittance was produced at the rather low plasma electrode (PE) temperature (T_PE) of about 120 °C, compared with the typically used T_PE of about 200 °C that maximizes the beam current for the plasma with abundant cesium (Cs). The other was that by using rod-filter magnets with a gap at each center and tuning the gap lengths, the filter field was optimized and the rf power necessary to produce the J-PARC-required H− ion beam current was reduced by typically 18%. The lower rf power also decreases the emittances.

  13. Fundamentals for a terahertz-driven electron gun

    DEFF Research Database (Denmark)

    Lange, Simon Lehnskov; Olsen, Filip D.; Iwaszczuk, Krzysztof

    2017-01-01

    dipoles placed with a small gap in between. We conclude that it is possible to make ultra-bright electron bunches shorter than 1 ps and accelerate them to the low keV range over 15 μm using only a single THz transient. Our results are fundamental to understanding and building a THz-driven electron gun....

  14. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  15. Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.

    Science.gov (United States)

    Durandau, Guillaume; Farina, Dario; Sartori, Massimo

    2018-03-01

    Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.

  16. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  17. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  18. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.

  19. Increasing value in plagiocephaly care: a time-driven activity-based costing pilot study.

    Science.gov (United States)

    Inverso, Gino; Lappi, Michael D; Flath-Sporn, Susan J; Heald, Ronald; Kim, David C; Meara, John G

    2015-06-01

    Process management within a health care setting is poorly understood and often leads to an incomplete understanding of the true costs of patient care. Using time-driven activity-based costing methods, we evaluated the high-volume, low-complexity diagnosis of plagiocephaly to increase value within our clinic. A total of 59 plagiocephaly patients were evaluated in phase 1 (n = 31) and phase 2 (n = 28) of this study. During phase 1, a process map was created, encompassing each of the 5 clinicians and administrative personnel delivering 23 unique activities. After analysis of the phase 1 process maps, average times as well as costs of these activities were evaluated for potential modifications in workflow. These modifications were implemented in phase 2 to determine overall impact on visit-time and costs of care. Improvements in patient education, workflow coordination, and examination room allocation were implemented during phase 2, resulting in a reduced patient visit-time of 13:25 (19.9% improvement) and an increased cost of $8.22 per patient (7.7% increase) due to changes in physician process times. However, this increased cost was directly offset by the availability of 2 additional appointments per day, potentially generating $7904 of additional annual revenue. Quantifying the impact of a 19.9% reduction in patient visit-time at an increased cost of 7.7% resulted in an increased value ratio of 1.113. This pilot study effectively demonstrates the novel use of time-driven activity-based costing in combination with the value equation as a metric for continuous process improvement programs within the health care setting.
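    The reported value ratio can be reproduced with simple arithmetic. Treating value as outcomes over costs, with both expressed relative to phase 1 (an assumption consistent with the reported figure of 1.113):

```python
# Phase 2 improved visit time by 19.9% while per-patient cost rose 7.7%.
outcome_improvement = 0.199   # 19.9% reduction in visit time
cost_increase = 0.077         # 7.7% increase in cost per patient
value_ratio = (1 + outcome_improvement) / (1 + cost_increase)
print(round(value_ratio, 3))  # -> 1.113
```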

  20. Robust gap repair in the contractile ring ensures timely completion of cytokinesis.

    Science.gov (United States)

    Silva, Ana M; Osório, Daniel S; Pereira, Antonio J; Maiato, Helder; Pinto, Inês Mendes; Rubinstein, Boris; Gassmann, Reto; Telley, Ivo Andreas; Carvalho, Ana Xavier

    2016-12-19

    Cytokinesis in animal cells requires the constriction of an actomyosin contractile ring, whose architecture and mechanism remain poorly understood. We use laser microsurgery to explore the biophysical properties of constricting rings in Caenorhabditis elegans embryos. Laser cutting causes rings to snap open. However, instead of disintegrating, ring topology recovers and constriction proceeds. In response to severing, a finite gap forms and is repaired by recruitment of new material in an actin polymerization-dependent manner. An open ring is able to constrict, and rings repair from successive cuts. After gap repair, an increase in constriction velocity allows cytokinesis to complete at the same time as controls. Our analysis demonstrates that tension in the ring increases while net cortical tension at the site of ingression decreases throughout constriction and suggests that cytokinesis is accomplished by contractile modules that assemble and contract autonomously, enabling local repair of the actomyosin network. Consequently, cytokinesis is a highly robust process impervious to discontinuities in contractile ring structure. © 2016 Silva et al.

  1. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH...... models. An application to exchange rate returns is included....
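    As a hedged illustration of the null hypothesis involved (not the misspecification or specification tests of the paper), a stationary GARCH(1,1) has constant unconditional variance ω/(1−α−β), which a simulation recovers; all parameter values below are arbitrary:

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, rng):
    """r_t = sqrt(h_t) * z_t,  h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    r = np.empty(n)
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = rng.standard_normal() * np.sqrt(h)
        h = omega + alpha * r[t] ** 2 + beta * h
    return r

rng = np.random.default_rng(42)
omega, alpha, beta = 0.2, 0.1, 0.6
r = simulate_garch11(50_000, omega, alpha, beta, rng)

uncond = omega / (1 - alpha - beta)    # = 2/3 under the constancy null
first, second = r[:25_000].var(), r[25_000:].var()
print(round(uncond, 3), round(float(first / second), 2))
```

    Under the paper's alternative, the unconditional variance drifts deterministically over time, so the variance ratio between subsamples would depart systematically from one.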

  2. Problems in Analyzing Time Series with Gaps and Their Solution with the WinABD Software Package

    Science.gov (United States)

    Desherevskii, A. V.; Zhuravlev, V. I.; Nikolsky, A. N.; Sidorin, A. Ya.

    2017-12-01

    Technologies for the analysis of time series with gaps are considered. Some algorithms of signal extraction (purification) and evaluation of its characteristics, such as rhythmic components, are discussed for series with gaps. Examples are given for the analysis of data obtained during long-term observations at the Garm geophysical test site and in other regions. The technical solutions used in the WinABD software are considered to most efficiently arrange the operation of relevant algorithms in the presence of observational defects.
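    The WinABD algorithms themselves are not reproduced here, but a generic way to extract a rhythmic component from a series with gaps is ordinary least squares restricted to the observed samples, which tolerates missing data naturally; the period, amplitude, and noise level below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000, dtype=float)              # daily sampling over ~3 years
period = 365.0                                # annual rhythm (assumed known)
y = 2.0 * np.sin(2 * np.pi * t / period + 0.7) + 0.3 * rng.standard_normal(t.size)
y[200:320] = np.nan                           # two observational gaps
y[600:650] = np.nan

mask = ~np.isnan(y)                           # fit only the observed samples
A = np.column_stack([np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
amplitude = float(np.hypot(coef[0], coef[1]))
print(round(amplitude, 2))                    # close to the true amplitude of 2.0
```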

  3. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
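    A sketch of the volume-variance calculation the article advocates, with entirely hypothetical figures: overhead is applied at a standard rate set from budgeted volume, and unrecovered overhead surfaces as an unfavorable volume variance when actual volume falls short:

```python
budgeted_overhead = 120_000.0   # $ fixed overhead for the period
budgeted_volume = 4_000         # patient days planned
actual_volume = 3_600           # patient days actually billed
actual_overhead = 123_500.0     # $ actually incurred

rate = budgeted_overhead / budgeted_volume        # standard rate: $30 per patient day
applied = rate * actual_volume                    # overhead recovered through billing
volume_variance = budgeted_overhead - applied     # unfavorable (unrecovered) if > 0
spending_variance = actual_overhead - budgeted_overhead

print(rate, volume_variance, spending_variance)   # -> 30.0 12000.0 3500.0
```

    Routinely reporting the $12,000 volume variance alongside the $3,500 spending variance is what alerts management to unrecovered overhead in time for corrective action.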

  4. Reduction of treatment delivery variances with a computer-controlled treatment delivery system

    International Nuclear Information System (INIS)

    Fraass, B.A.; Lash, K.L.; Matrone, G.M.; Lichter, A.S.

    1997-01-01

    Purpose: To analyze treatment delivery variances for 3-D conformal therapy performed at various levels of treatment delivery automation, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system. Materials and Methods: All external beam treatments performed in our department during six months of 1996 were analyzed to study treatment delivery variances versus treatment complexity. Treatments for 505 patients (40,641 individual treatment ports) on four treatment machines were studied. All treatment variances noted by treatment therapists or quality assurance reviews (39 in all) were analyzed. Machines 'M1' (Clinac 6/100) and 'M2' (Clinac 1800) were operated in a standard manual setup mode, with no record-and-verify (R/V) system. Machines 'M3' (Clinac 2100CD/MLC) and 'M4' (MM50 racetrack microtron system with MLC) treated patients under the control of a computer-controlled conformal radiotherapy system (CCRS) which 1) downloads the treatment delivery plan from the planning system, 2) performs some (or all) of the machine setup and treatment delivery for each field, 3) monitors treatment delivery, 4) records all treatment parameters, and 5) notes exceptions to the electronically prescribed plan. Complete external computer control is not available on M3, so it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity-modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments (ports), non-axial and non-coplanar plans, multi-segment intensity modulation, and pseudo-isocentric treatments (and other plans with computer-controlled table motions). Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs.
Treatment therapists rotate among the machines, so this analysis

  5. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution for the mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
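    The mean has a compact form: by linearity of expectation, E[PD] sums each branch length times the probability that the branch is represented in a rarefied sample of n individuals. A sketch on a toy community (the tree topology and abundances are made up) verifies it against brute-force enumeration:

```python
from itertools import combinations
from math import comb

individuals = ["a", "a", "a", "b", "b", "c"]   # individuals labeled by species
# Toy phylogeny as (branch length, set of species the branch subtends).
branches = [(1.0, {"a"}), (1.0, {"b"}), (1.5, {"c"}),
            (0.5, {"a", "b"}), (2.0, {"a", "b", "c"})]
n = 3                                          # rarefaction depth
N = len(individuals)

def expected_pd(branches, individuals, n):
    """E[PD] = sum over branches of length * P(branch represented in the subsample)."""
    N = len(individuals)
    total = 0.0
    for length, clade in branches:
        n_b = sum(ind in clade for ind in individuals)
        total += length * (1 - comb(N - n_b, n) / comb(N, n))
    return total

def pd_of(sample, branches):
    return sum(l for l, clade in branches if any(ind in clade for ind in sample))

# Brute force: average PD over all C(6, 3) = 20 subsamples of individuals.
brute = sum(pd_of(s, branches) for s in combinations(individuals, n)) / comb(N, n)
print(round(expected_pd(branches, individuals, n), 6), round(brute, 6))  # -> 5.0 5.0
```

    The variance requires joint survival probabilities for branch pairs and is longer; the mean case above is the richness formula of Hurlbert generalized to branch lengths.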

  6. Uncertainty relations and topological-band insulator transitions in 2D gapped Dirac materials

    International Nuclear Information System (INIS)

    Romera, E; Calixto, M

    2015-01-01

    Uncertainty relations are studied for a characterization of topological-band insulator transitions in 2D gapped Dirac materials isostructural with graphene. We show that the relative or Kullback–Leibler entropy in position and momentum spaces, and the standard variance-based uncertainty relation give sharp signatures of topological phase transitions in these systems. (paper)

  7. Could Trends in Time Children Spend with Parents Help Explain the Black-White Gap in Human Capital? Evidence from the American Time Use Survey

    Science.gov (United States)

    Patterson, Richard W.

    2017-01-01

    It is widely believed that the time children spend with parents significantly impacts human capital formation. If time varies significantly between black and white children, this may help explain the large racial gap in test scores and wages. In this study, I use data from the American Time Use Survey to examine the patterns in the time black and…

  8. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    International Nuclear Information System (INIS)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS)
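    CADIS itself requires a deterministic adjoint solve, but the underlying idea can be illustrated with a toy deep-penetration problem: biasing the sampling toward the rare tally region and weighting each score by the likelihood ratio preserves the mean while cutting variance. Everything below (the exponential path model, the biasing distribution) is illustrative, not the MAVRIC implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
depth = 10.0                    # shield thickness in mean free paths
exact = np.exp(-depth)          # uncollided penetration probability ~ 4.54e-05
n = 20_000

# Analog sampling: free path x ~ Exp(1); with n * p < 1 expected hits, almost
# no histories score, so the analog estimate is worthless at this sample size.
x = rng.exponential(1.0, n)
analog = (x > depth).mean()

# Biased sampling toward deep penetration: x ~ Exp(mean 10), with
# weight = analog pdf / biased pdf to keep the estimator unbiased.
xb = rng.exponential(10.0, n)
w = np.exp(-xb) / (np.exp(-xb / 10.0) / 10.0)
scores = np.where(xb > depth, w, 0.0)
biased = scores.mean()

print(f"exact={exact:.3e}  analog={analog:.3e}  biased={float(biased):.3e}")
```

    FW-CADIS extends this by weighting the adjoint source with a forward flux estimate, so the variance reduction is balanced over a whole mesh tally rather than optimized for a single detector.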

  9. Structural Dynamics of Tropical Moist Forest Gaps

    Science.gov (United States)

    Hunter, Maria O.; Keller, Michael; Morton, Douglas; Cook, Bruce; Lefsky, Michael; Ducey, Mark; Saleska, Scott; de Oliveira, Raimundo Cosme; Schietti, Juliana

    2015-01-01

    Gap phase dynamics are the dominant mode of forest turnover in tropical forests. However, gap processes are infrequently studied at the landscape scale. Airborne lidar data offer detailed information on three-dimensional forest structure, providing a means to characterize fine-scale (1 m) processes in tropical forests over large areas. Lidar-based estimates of forest structure (top down) differ from traditional field measurements (bottom up), and necessitate clear-cut definitions unencumbered by the wisdom of a field observer. We offer a new definition of a forest gap that is driven by forest dynamics and consistent with precise ranging measurements from airborne lidar data and tall, multi-layered tropical forest structure. We used 1000 ha of multi-temporal lidar data (2008, 2012) at two sites, the Tapajos National Forest and Ducke Reserve, to study gap dynamics in the Brazilian Amazon. Here, we identified dynamic gaps as contiguous areas of significant growth that correspond to areas > 10 m2 with canopy height below a fixed threshold. The fraction of forest area in gaps was greater at Tapajos National Forest (4.8 %) than at Ducke Reserve (2.0 %). On average, gaps were smaller at Ducke Reserve and closed slightly more rapidly, with estimated height gains of 1.2 m y-1 versus 1.1 m y-1 at Tapajos. At the Tapajos site, height growth in gap centers was greater than the average height gain in gaps (1.3 m y-1 versus 1.1 m y-1). Rates of height growth between lidar acquisitions reflect the interplay between gap edge mortality, horizontal ingrowth and gap size at the two sites. We estimated that approximately 10 % of gap area closed via horizontal ingrowth at Ducke Reserve as opposed to 6 % at Tapajos National Forest. Height loss (interpreted as repeat damage and/or mortality) and horizontal ingrowth accounted for similar proportions of gap area at Ducke Reserve (13 % and 10 %, respectively). At Tapajos, height loss had a much stronger signal (23 % versus 6 %) within gaps. Both sites demonstrate limited gap contagiousness defined by an

  10. Concentration variance decay during magma mixing: a volcanic chronometer.

    Science.gov (United States)

    Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B

    2015-09-21

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
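    The CVD-R clock amounts to inverting an exponential decay law. A sketch with a made-up decay rate (the published calibration is specific to the experimental magma compositions) shows the arithmetic:

```python
import math

# Concentration variance decays as s2(t) = s2_0 * exp(-k * t).
k = 0.12            # decay rate per minute -- illustrative, not the published value
s2_initial = 1.0    # normalized variance of the unmixed end-member compositions
s2_measured = 0.05  # normalized variance measured in the erupted product

# Invert the decay law for the mixing-to-eruption time:
t = math.log(s2_initial / s2_measured) / k
print(round(t, 1))  # -> 25.0 minutes, i.e. "tens of minutes" as in the abstract
```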

  11. Stability limits for gap solitons in a Bose-Einstein condensate trapped in a time-modulated optical lattice

    International Nuclear Information System (INIS)

    Mayteevarunyoo, Thawatchai; Malomed, Boris A.

    2006-01-01

    We investigate stability of gap solitons (GSs) in the first two band gaps in the framework of the one-dimensional Gross-Pitaevskii equation, combining the repulsive nonlinearity and a moderately strong optical lattice (OL), which is subjected to "management", in the form of time-periodic modulation of its depth. The analysis is performed for parameters relevant to the experiment, characteristic values of the modulation frequency being ω ∼ 2π × 20 Hz. First, we present several GS species in the two band gaps in the absence of the management. These include fundamental solitons and their bound states, as well as a subfundamental soliton in the second gap, featuring two peaks of opposite signs in a single well of the periodic potential. This soliton is always unstable, and quickly transforms into a fundamental GS, losing a considerable part of its norm. In the first band gap, (stable) bound states of two fundamental GSs are possible solely with opposite signs, if they are separated by an empty site. Under the periodic modulation of the OL depth, we identify stability regions for various GS species, in terms of ω and modulation amplitude, at fixed values of the soliton's norm, N. In either band gap, the GS species with smallest N has the largest stability area; in the first and second gaps, they are, respectively, the fundamental GS proper, or the one spontaneously generated from the subfundamental soliton. However, with the increase of N, the stability region of every species expands in the first gap, and shrinks in the second one. The outcome of the instability development is also different in the two band gaps: it is destruction of the GS in the first gap, and generation of extra side lobes by unstable GSs in the second one

  12. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  13. A comparison of the wide gap and narrow gap resistive plate chamber

    International Nuclear Information System (INIS)

    Cerron Zeballos, E.; Crotty, I.; Hatzifotiadou, D.; Valverde, J.L.; Neupane, S.; Peskov, V.; Singh, S.; Williams, M.C.S.; Zichichi, A.

    1996-01-01

    In this paper we study the performance of a wide gap RPC and compare it with that of a narrow gap RPC, both operated in avalanche mode. We have studied the total charge produced in the avalanche. We have measured the dependence of the performance with rate. In addition we have considered the effect of the tolerance of gas gap and calculated the power dissipated in these two types of RPC. We find that the narrow gap RPC has better timing ability; however the wide gap has superior rate capability, lower power dissipation in the gas volume and can be constructed with less stringent mechanical tolerances. (orig.)

  14. A comparison of the wide gap and narrow gap resistive plate chamber

    CERN Document Server

    Cerron-Zeballos, E; Hatzifotiadou, D; Lamas-Valverde, J; Neupane, S; Peskov, Vladimir; Singh, S; Williams, M C S; Zichichi, Antonino

    1996-01-01

    In this paper we study the performance of a wide gap RPC and compare it with that of a narrow gap RPC, both operated in avalanche mode. We have studied the total charge produced in the avalanche. We have measured the dependence of the performance with rate. In addition we have considered the effect of the tolerance of gas gap and calculated the power dissipated in these two types of RPC. We find that the narrow gap RPC has better timing ability; however the wide gap has superior rate capability, lower power dissipation in the gas volume and can be constructed with less stringent mechanical tolerances.

  15. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  16. Assessment of Stand-Alone Displays for Time Management in a Creativity-Driven Learning Environment

    DEFF Research Database (Denmark)

    Frimodt-Møller, Søren

    2017-01-01

    This paper considers the pros and cons of stand-alone displays, analog (e.g. billboards, blackboards, whiteboards, large pieces of paper etc.) as well as digital (e.g. large shared screens, digital whiteboards or similar), as tools for time management processes in a creativity-driven learning...

  17. Optimal control of LQG problem with an explicit trade-off between mean and variance

    Science.gov (United States)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of the performance index. The nonlinear utility function is first converted into an auxiliary parameter optimisation problem involving the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the effectiveness of the algorithm obtained in this article.

  18. Gap enhancement in phonon-irradiated superconducting tin films

    International Nuclear Information System (INIS)

    Miller, N.D.; Rutledge, J.E.

    1982-01-01

    We have measured the current-voltage (I-V) characteristics of tin-tin tunnel junctions driven out of equilibrium by a flux of near-thermal phonons from a heater. The reduced ambient temperature was T/Tc = 0.41. The nonequilibrium I-V curves are compared to equilibrium thermal I-V curves at an elevated temperature chosen to match the total number of quasiparticles. The nonequilibrium curves show a smaller current near zero bias and a larger gap than the thermal curves. This is the first experimental evidence of phonon-induced gap enhancement far below Tc. The results are discussed in terms of the coupled kinetic equations of Chang and Scalapino

  19. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...
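    One standard way to compute a fair variance strike (not necessarily among the models compared in the thesis) is static replication from a strip of out-of-the-money options, as in the VIX methodology; under Black-Scholes with constant volatility the procedure should recover σ²:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(S, K, T, r, sigma, call=True):
    """Black-Scholes price of a European call or put."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

S, T, r, sigma = 100.0, 0.5, 0.01, 0.25
F = S * math.exp(r * T)                          # forward price
strikes = [20.0 + 0.5 * i for i in range(961)]   # dense strike grid 20..500
K0 = max(k for k in strikes if k <= F)           # first strike below the forward

total = 0.0
for i, K in enumerate(strikes):
    if i == 0:
        dK = strikes[1] - strikes[0]
    elif i == len(strikes) - 1:
        dK = strikes[-1] - strikes[-2]
    else:
        dK = (strikes[i + 1] - strikes[i - 1]) / 2.0
    q = bs_price(S, K, T, r, sigma, call=(K > K0))   # OTM option at each strike
    if K == K0:  # average call and put at the boundary strike
        q = 0.5 * (bs_price(S, K, T, r, sigma, True) + bs_price(S, K, T, r, sigma, False))
    total += dK / K**2 * math.exp(r * T) * q

fair_var = (2.0 / T) * total - (1.0 / T) * (F / K0 - 1.0) ** 2
print(round(fair_var, 4), sigma**2)   # both should be close to 0.0625
```

    A volatility swap strike is then approximately the square root of the variance strike less a convexity adjustment, which is where model choice starts to matter.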

  20. Bridging the Gap: Linking Simulation and Testing

    Energy Technology Data Exchange (ETDEWEB)

    Krajewski, Paul E.; Carsley, John; Stoudt, Mark R.; Hovanski, Yuri

    2012-09-01

    The Materials Genome Initiative (MGI), a key enabler for the Advanced Manufacturing Partnership announced in 2011 by U.S. President Barack Obama, was established to accelerate the development and deployment of advanced materials. The MGI is driven by the need to "bridge the gap" between (I) experimental results and computational analysis, to enable the rapid development and validation of new materials, and (II) the processes required to convert these materials into usable goods.

  1. Measuring health inequalities over time

    Directory of Open Access Journals (Sweden)

    Gustavo Bergonzoli

    2007-11-01

    Full Text Available Background: several methodologies have been used to measure health inequalities. Most of them do so in a cross-sectional fashion, causing significant loss of information. None of them measure health inequalities in social territories over time. Methods: this article presents two approaches to measure health inequalities. One approach is a refinement of the cross-sectional study, using the analysis of variance (ANOVA) procedure to explore whether the gap between social territories is real or due to chance. Several adjustments were made to limit the errors inevitably found in multiple comparisons. Polynomial procedures were then applied to identify and evaluate any trends. The second approach measures the health gap between social territories or strata (as defined in this study) over time, using Poisson regression. These approaches were applied using life expectancy and maternal mortality data from Venezuela. Results: a positive relationship between social territories and life expectancy was found, with a significant linear trend. The relation between maternal mortality and social territories was quadratic. The measurement of the gap between the least developed social territory and the most developed one showed a gap reduction from the first to the second decade, mainly because of an increase of maternal mortality in the more developed area, rather than a real improvement in the least developed. Conclusions: the study helps to clarify the impact that public policies and interventions have in reducing the health gap. Knowledge that a health gap between social territories can decrease without showing improvement in the least developed sector is an important finding for monitoring and evaluating health interventions aimed at improving living and health conditions in the population.

  2. System-level power optimization for real-time distributed embedded systems

    Science.gov (United States)

    Luo, Jiong

    Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path-driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient-driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as

  3. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
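The bootstrapping of local (within-site) variances described above can be sketched as follows. The replicate concentrations are hypothetical, not the study's data; the point is only the mechanics of resampling repeated measurements from one site to put an interval around the local variance.

```python
import random

def bootstrap_variance_ci(values, n_boot=2000, alpha=0.05, seed=42):
    """Bootstrap percentile confidence interval for the local variance of
    repeated measurements taken at a single sampling site."""
    rng = random.Random(seed)
    n = len(values)
    def var(xs):  # sample variance, n-1 denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    boots = sorted(var([values[rng.randrange(n)] for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return var(values), (lo, hi)

# e.g. ten hypothetical replicate element concentrations from one bark site
site = [31.0, 29.5, 33.2, 30.8, 28.9, 32.1, 30.0, 31.7, 29.9, 30.4]
v, (lo, hi) = bootstrap_variance_ci(site)
```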

  4. Quantum transitions driven by one-bond defects in quantum Ising rings.

    Science.gov (United States)

    Campostrini, Massimo; Pelissetto, Andrea; Vicari, Ettore

    2015-04-01

    We investigate quantum scaling phenomena driven by lower-dimensional defects in quantum Ising-like models. We consider quantum Ising rings in the presence of a bond defect. In the ordered phase, the system undergoes a quantum transition driven by the bond defect between a magnet phase, in which the gap decreases exponentially with increasing size, and a kink phase, in which the gap decreases instead with a power of the size. Close to the transition, the system shows a universal scaling behavior, which we characterize by computing, either analytically or numerically, scaling functions for the low-level energy differences and the two-point correlation function. We discuss the implications of these results for the nonequilibrium dynamics in the presence of a slowly varying parallel magnetic field h, when going across the first-order quantum transition at h=0.

  5. The gender gap in mobility: A global cross-sectional study

    Directory of Open Access Journals (Sweden)

    Mechakra-Tahiri Samia

    2012-08-01

    Full Text Available Abstract Background Several studies have demonstrated that women have greater mobility disability than men. The goals of this research were: 1 to assess the gender gap in mobility difficulty in 70 countries; 2 to determine whether the gender gap is explained by sociodemographic and health factors; 3 to determine whether the gender gap differs across 6 regions of the world with different degrees of gender equality according to United Nations data. Methods Population-based data were used from the World Health Survey (WHS conducted in 70 countries throughout the world. 276,647 adults aged 18 years and over were recruited from 6 world regions. Mobility was measured by asking the level of difficulty people had moving around in the last 30 days and then creating a dichotomous measure (no difficulty, difficulty. The human development index and the gender-related development index for each country were obtained from the United Nations Development Program website. Poisson regression with Taylor series linearized variance estimation was used. Results Women were more likely than men to report mobility difficulty (38% versus 27%, P  Conclusions These are the first world-wide data to examine the gender gap in mobility. Differences in chronic diseases are the main reasons for this gender gap. The gender gap seems to be greater in regions with the largest loss of human development due to gender inequality.

  6. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    Science.gov (United States)

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
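The time-driven ABC calculation described above needs only two parameters per activity: the minutes of labor it consumes and the cost per minute of labor capacity. A minimal sketch, using the capacity figures quoted in the abstract (10,645 min/wk supplied vs. 8,400 min/wk practical capacity) but hypothetical dollar amounts and activity names:

```python
# Time-driven ABC: cost of a service = sum over activities of
# (minutes of labor required) x (capacity cost rate, $ per minute).
def capacity_cost_rate(weekly_labor_cost, practical_minutes_per_week):
    return weekly_labor_cost / practical_minutes_per_week

def service_cost(activity_minutes, rate):
    """activity_minutes: {activity name: minutes of labor required}"""
    return sum(m * rate for m in activity_minutes.values())

# Hypothetical lab: $8,400/week of technician cost, 8,400 practical min/week
rate = capacity_cost_rate(8400.0, 8400)          # $ per minute of labor
embryo_transfer = {"setup": 30, "surgery": 45, "recovery checks": 15}
cost = service_cost(embryo_transfer, rate)

# Capacity check, with the supplied/practical minutes from the abstract:
supplied, practical = 10645, 8400                # min/week
utilization = supplied / practical               # > 1 means the team is overloaded
```

A utilization above 1.0 reproduces the abstract's conclusion that the labor supplied exceeded practical capacity.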

  7. Radio frequency identification and time-driven activity based costing: RFID-TDABC application in warehousing

    OpenAIRE

    Bahr, Witold; Price, Brian J

    2016-01-01

    Purpose: This paper extends the use of Radio Frequency Identification (RFID) data for accounting of warehouse costs and services. Time Driven Activity Based Costing (TDABC) methodology is enhanced with the real-time collected RFID data about duration of warehouse activities. This allows warehouse managers to have accurate and instant calculations of costs. The RFID enhanced TDABC (RFID-TDABC) is proposed as a novel application of the RFID technology. Research Approach: Application of RFID-TDA...

  8. A damped oscillator imposes temporal order on posterior gap gene expression in Drosophila

    Science.gov (United States)

    Verd, Berta; Clark, Erik; Wotton, Karl R.; Janssens, Hilde; Jiménez-Guri, Eva; Crombach, Anton

    2018-01-01

    Insects determine their body segments in two different ways. Short-germband insects, such as the flour beetle Tribolium castaneum, use a molecular clock to establish segments sequentially. In contrast, long-germband insects, such as the vinegar fly Drosophila melanogaster, determine all segments simultaneously through a hierarchical cascade of gene regulation. Gap genes constitute the first layer of the Drosophila segmentation gene hierarchy, downstream of maternal gradients such as that of Caudal (Cad). We use data-driven mathematical modelling and phase space analysis to show that shifting gap domains in the posterior half of the Drosophila embryo are an emergent property of a robust damped oscillator mechanism, suggesting that the regulatory dynamics underlying long- and short-germband segmentation are much more similar than previously thought. In Tribolium, Cad has been proposed to modulate the frequency of the segmentation oscillator. Surprisingly, our simulations and experiments show that the shift rate of posterior gap domains is independent of maternal Cad levels in Drosophila. Our results suggest a novel evolutionary scenario for the short- to long-germband transition and help explain why this transition occurred convergently multiple times during the radiation of the holometabolan insects. PMID:29451884

  9. The gender gap in mobility: a global cross-sectional study.

    Science.gov (United States)

    Mechakra-Tahiri, Samia Djemâa; Freeman, Ellen E; Haddad, Slim; Samson, Elodie; Zunzunegui, Maria Victoria

    2012-08-02

    Several studies have demonstrated that women have greater mobility disability than men. The goals of this research were: 1) to assess the gender gap in mobility difficulty in 70 countries; 2) to determine whether the gender gap is explained by sociodemographic and health factors; 3) to determine whether the gender gap differs across 6 regions of the world with different degrees of gender equality according to United Nations data. Population-based data were used from the World Health Survey (WHS) conducted in 70 countries throughout the world. 276,647 adults aged 18 years and over were recruited from 6 world regions. Mobility was measured by asking the level of difficulty people had moving around in the last 30 days and then creating a dichotomous measure (no difficulty, difficulty). The human development index and the gender-related development index for each country were obtained from the United Nations Development Program website. Poisson regression with Taylor series linearized variance estimation was used. Women were more likely than men to report mobility difficulty (38% versus 27%, P gap in mobility difficulty, while the Western Pacific region, with the smallest loss of human development due to gender inequality, had the smallest gender gap in mobility difficulty. These are the first world-wide data to examine the gender gap in mobility. Differences in chronic diseases are the main reasons for this gender gap. The gender gap seems to be greater in regions with the largest loss of human development due to gender inequality.

  10. CHARACTERIZATION AND EVALUATION OF TIME-DRIVEN ACTIVITY BASED COSTING BASED ON ABC’S DEVELOPMENT

    DEFF Research Database (Denmark)

    Israelsen, Poul; Kristensen, Thomas Borup

    2014-01-01

    The paper provides a description of the development of Activity Based Costing (ABC) in four variants. This is used to characterize and evaluate the changes made in Time-Driven ABC (TDABC). It is found that TDABC in some cases reaches back to cost calculations prior to ABC (e.g. homogenous...

  11. Comparison of GAP-3 and GAP-4 experiments with conduction freezing calculations

    International Nuclear Information System (INIS)

    Sienicki, J.J.; Spencer, B.W.

    1983-01-01

    Experiments GAP-3 and GAP-4 were performed at ANL to investigate the ability of molten fuel to penetrate downward through the narrow channels separating adjacent subassemblies during an LMFBR hypothetical core disruptive accident. Molten fuel-metal mixtures (81% UO2, 19% Mo) at an initial temperature of 3470 K generated by a thermite reaction were injected downward into 1 m long rectangular test sections (gap thickness = 0.43 cm, channel width = 20.3 cm) initially at 1170 K, simulating the nominal Clinch River Breeder Reactor intersubassembly gap. In the GAP-3 test, a prolonged reaction time of approx. 15 s resulted in segregation of the metallic Mo and oxidic UO2 constituents within the reaction vessel prior to injection. Consequently, Mo entered the test section first and froze, forming a complete plug at a penetration distance of 0.18 m. In GAP-4, the reaction time was reduced to approx. 3 s and the constituents remained well mixed upon injection, with the result that the leading edge penetration distance increased to 0.35 m. Posttest examination of the cut-open test sections has revealed the existence of stable insulating crusts upon the underlying steel walls, with melting and ablation of the walls only very localized.

  12. The role of respondents’ comfort for variance in stated choice surveys

    DEFF Research Database (Denmark)

    Emang, Diana; Lundhede, Thomas; Thorsen, Bo Jellesmark

    2017-01-01

    Preference elicitation among outdoor recreational users is subject to measurement errors that depend, in part, on survey planning. This study uses data from a choice experiment survey on recreational SCUBA diving to investigate whether self-reported information on respondents’ comfort when they complete surveys correlates with the error variance in stated choice models of their responses. Comfort-related variables are included in the scale functions of the scaled multinomial logit models. The hypothesis was that higher comfort reduces error variance in answers, as revealed by a higher scale parameter, and vice versa. Information on, e.g., sleep and time since eating (higher comfort) correlated with scale heterogeneity, and produced lower error variance when controlled for in the model. That respondents’ comfort may influence choice behavior suggests that knowledge of the respondents’ activity...

  13. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    Science.gov (United States)

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. In understanding the extent to which the association applies for the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance, in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  14. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  15. Using specific heat to scan gaps and anisotropy of MgB2

    International Nuclear Information System (INIS)

    Bouquet, F.; Wang, Y.; Toulemonde, P.; Guritanu, V.; Junod, A.; Eisterer, M.; Weber, H.W.; Lee, S.; Tajima, S.

    2004-01-01

    We performed specific heat measurements to study the superconducting properties of the ∼40 K superconductor MgB2, up to 16 T, using polycrystal and single crystal samples. Our results establish the validity of the two-gap model. We tested the effect of disorder by irradiating our sample. This procedure decreased Tc down to ∼26 K, but did not suppress completely the smaller gap, at variance with theoretical expectations. A positive effect of the irradiation was the increase of Hc2 up to almost 30 T. Our results on the single crystal allow the anisotropy of each band to be determined independently, and show the existence of a cross-over field well below Hc2 characterizing the physics of the small-gapped band. We also present preliminary results on Nb3Sn, showing similar, but weaker, effects.

  16. A Business Ecosystem Driven Market Analysis

    DEFF Research Database (Denmark)

    Ma, Zheng; Billanes, Joy Dalmacio; Jørgensen, Bo Nørregaard

    2017-01-01

    Due to the huge globally emerging market of bright green buildings, this paper aims to develop a business-ecosystem-driven market analysis approach for the investigation of the bright green building market. The paper develops a five-step business-ecosystem-driven market analysis (definition of the business domain, stakeholder listing, integration of the value chain, relationship mapping, and ego innovation ecosystem mapping). It finds that global-local matters influence the market structure: technologies for building energy are developed and employed globally, while market demand is comparatively localized. Market players can be both local and international stakeholders who engage and collaborate in building projects. The paper also finds that building extensibility should be considered in building design, due to the gap between current market...

  17. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
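The variance decomposition underlying the study can be illustrated for a single grouping factor with a simple method-of-moments split of total variance into between-group and within-group parts. This is a deliberate simplification of the multi-factor variance components technique used on the Compustat data; the growth rates and country labels below are hypothetical.

```python
import statistics

def variance_share(values, groups):
    """Fraction of total variance attributable to one grouping factor
    (e.g. country), via a between-group / within-group split."""
    grand = statistics.fmean(values)
    by_group = {}
    for v, g in zip(values, groups):
        by_group.setdefault(g, []).append(v)
    n = len(values)
    between = sum(len(xs) * (statistics.fmean(xs) - grand) ** 2
                  for xs in by_group.values()) / n
    within = sum(sum((x - statistics.fmean(xs)) ** 2 for x in xs)
                 for xs in by_group.values()) / n
    return between / (between + within)

# hypothetical growth rates for firms in two countries
growth = [0.10, 0.12, 0.08, 0.30, 0.28, 0.32]
country = ["A", "A", "A", "B", "B", "B"]
share = variance_share(growth, country)  # fraction of variance between countries
```

In this toy data almost all variation is between countries; in the study, by contrast, the firm-level (within-group) component dominated.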

  18. Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.

    Science.gov (United States)

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2017-06-01

    Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41min, -$23) and pre-operative floor (-57min, -$18). While post-anesthesia care unit duration and costs increased (+224min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984min to 966min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. Level of evidence: II; type of study: economic analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
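The headline reductions quoted in the abstract can be reproduced arithmetically, which also shows how TDABC deltas aggregate across phases of care:

```python
# Reproduce the reported reductions from the abstract's figures.
pre_cost, post_cost = 2753.39, 2447.68      # total direct cost per episode, $
pre_min, post_min = 1984, 966               # duration of hospitalization, min

cost_reduction = (pre_cost - post_cost) / pre_cost      # fraction saved
time_reduction = (pre_min - post_min) / pre_min

# Per-phase (minutes, $) deltas quoted in the abstract; the phase labels are
# abbreviations, and these four deltas alone happen to sum to the net saving.
deltas = {"ED": (-41, -23), "pre-op floor": (-57, -18),
          "PACU": (+224, +41), "post-op floor": (0, -306)}
net_cost_delta = sum(d for _, d in deltas.values())
```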

  19. Trends in the earnings gender gap among dentists, physicians, and lawyers.

    Science.gov (United States)

    Nguyen Le, Thanh An; Lo Sasso, Anthony T; Vujicic, Marko

    2017-04-01

    The authors examined the factors associated with sex differences in earnings for 3 professional occupations. The authors used a multivariate Blinder-Oaxaca method to decompose the differences in mean earnings across sex. Although mean differences in earnings between men and women narrowed over time, there remained large, unaccountable earnings differences between men and women among all professions after multivariate adjustments. For dentists, the unexplained difference in earnings for women was approximately constant at 62% to 66%. For physicians, the unexplained difference in earnings for women ranged from 52% to 57%. For lawyers, the unexplained difference in earnings for women was the smallest of the 3 professions but also exhibited the most growth, increasing from 34% in 1990 to 45% in 2010. The reduction in the earnings gap is driven largely by a general convergence between men and women in some, but not all, observable characteristics over time. Nevertheless, large unexplained gender gaps in earnings remain for all 3 professions. Policy makers must use care in efforts to alleviate earnings differences for men and women because measures could make matters worse without a clear understanding of the nature of the factors driving the differences. Copyright © 2017 American Dental Association. Published by Elsevier Inc. All rights reserved.
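The Blinder-Oaxaca method used above splits a mean gap into a part explained by differences in observable characteristics and an unexplained part due to differences in returns to those characteristics. A minimal two-fold sketch on synthetic data (the variables, coefficients, and group labels are invented for illustration; the study's actual specification is richer):

```python
import numpy as np

def oaxaca(y_a, X_a, y_b, X_b):
    """Two-fold Blinder-Oaxaca decomposition of mean(y_a) - mean(y_b),
    using group A's coefficients as the reference wage structure.
    X_* must include an intercept column."""
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_a      # differences in characteristics
    unexplained = xbar_b @ (beta_a - beta_b)    # differences in returns
    return explained, unexplained

rng = np.random.default_rng(0)
n = 500
# synthetic earnings: group A works more hours AND gets a higher return to hours
hours_a = rng.normal(45, 5, n); hours_b = rng.normal(40, 5, n)
X_a = np.column_stack([np.ones(n), hours_a])
X_b = np.column_stack([np.ones(n), hours_b])
y_a = 20 + 1.5 * hours_a + rng.normal(0, 2, n)
y_b = 20 + 1.0 * hours_b + rng.normal(0, 2, n)
explained, unexplained = oaxaca(y_a, X_a, y_b, X_b)
gap = y_a.mean() - y_b.mean()
```

Because OLS with an intercept fits group means exactly, the explained and unexplained parts sum identically to the raw gap.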

  20. Right ventrolateral prefrontal cortex mediates individual differences in conflict-driven cognitive control

    Science.gov (United States)

    Egner, Tobias

    2013-01-01

    Conflict adaptation – a conflict-triggered improvement in the resolution of conflicting stimulus or response representations – has become a widely used probe of cognitive control processes in both healthy and clinical populations. Previous functional magnetic resonance imaging (fMRI) studies have localized activation foci associated with conflict resolution to dorsolateral prefrontal cortex (dlPFC). The traditional group-analysis approach employed in these studies highlights regions that are, on average, activated during conflict resolution, but does not necessarily reveal areas mediating individual differences in conflict resolution, because between-subject variance is treated as noise. Here, we employed a complementary approach in order to elucidate the neural bases of variability in the proficiency of conflict-driven cognitive control. We analyzed two independent fMRI data sets of face-word Stroop tasks by using individual variability in the behavioral expression of conflict adaptation as the metric against which brain activation was regressed, while controlling for individual differences in mean reaction time and Stroop interference. Across the two experiments, a replicable neural substrate of individual variation in conflict adaptation was found in ventrolateral prefrontal cortex (vlPFC), specifically, in the right inferior frontal gyrus, pars orbitalis (BA 47). Unbiased regression estimates showed that variability in activity in this region accounted for ~40% of the variance in behavioral expression of conflict adaptation across subjects, thus documenting a heretofore unsuspected key role for vlPFC in mediating conflict-driven adjustments in cognitive control. We speculate that vlPFC plays a primary role in conflict control that is supplemented by dlPFC recruitment under conditions of suboptimal performance. PMID:21568631

  1. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

    Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...

  2. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
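One design-based estimator in the spirit of those compared in the paper weights each transect's deviation from the overall encounter rate by the square of its line length. Treat this as an illustrative sketch of that form (the exact estimator definitions and their labels are in Fewster et al. 2009); the survey data are hypothetical.

```python
def encounter_rate_var(counts, lengths):
    """Design-based estimator of var(n/L) for line transect sampling,
    weighting each transect by the square of its line length."""
    K = len(counts)
    n, L = sum(counts), sum(lengths)
    er = n / L  # overall encounter rate
    return (K / (L ** 2 * (K - 1))) * sum(
        l ** 2 * (c / l - er) ** 2 for c, l in zip(counts, lengths))

# hypothetical survey: 5 transects, detection counts and line lengths (km)
counts = [4, 7, 2, 9, 3]
lengths = [2.0, 2.5, 1.5, 3.0, 2.0]
v = encounter_rate_var(counts, lengths)
```

When every transect has the same encounter rate the estimator is exactly zero, as it should be.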

  3. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    Science.gov (United States)

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high accuracy of fault detection. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
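The just-in-time learning idea can be sketched with a nearest-neighbor local model: for each query, fit a linear model only on the k most similar historical samples and flag a fault when the output residual is large. This is a generic JITL sketch on a made-up nonlinear plant, not the paper's JITL-DD method or its disturbance handling.

```python
import numpy as np

def jitl_residual(x_query, y_query, X_train, Y_train, k=20):
    """Just-in-time learning sketch: fit a local linear model on the k nearest
    historical samples and return the output residual, which can then be
    compared with a detection threshold."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbors
    Xk = np.column_stack([np.ones(k), X_train[idx]])
    beta, *_ = np.linalg.lstsq(Xk, Y_train[idx], rcond=None)
    y_hat = np.concatenate([[1.0], x_query]) @ beta
    return y_query - y_hat

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 2))
Y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.01, 500)  # toy plant
x0 = np.array([0.2, -0.3])
y_true = np.sin(0.6) + 0.09
r_normal = jitl_residual(x0, y_true, X, Y)        # fault-free measurement
r_faulty = jitl_residual(x0, y_true + 0.5, X, Y)  # +0.5 sensor bias fault
```

The local model keeps the fault-free residual small despite the global nonlinearity, so a simple threshold separates the two cases.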

  4. Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.

    Science.gov (United States)

    Pressley, Joanna; Troyer, Todd W

    2009-07-01

    In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.
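A minimal deterministic PIF simulation makes the model concrete (this sketch has constant drive and no noise, so it does not reproduce the paper's mean/variance filter analysis, where the two linear filters sum exactly to one):

```python
def pif_spike_times(mu, threshold=1.0, t_end=10.0, dt=1e-3):
    """Perfect integrate-and-fire: the voltage integrates the input current
    and is reset at threshold. With constant current mu and no noise the
    firing rate is exactly mu / threshold."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += mu * dt
        t += dt
        if v >= threshold:
            spikes.append(t)
            v -= threshold  # subtractive reset: carry over the overshoot
    return spikes

spikes = pif_spike_times(mu=5.0, t_end=10.0)  # rate mu/threshold = 5 spikes/s
```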

  5. Stakeholder-Driven Quality Improvement: A Compelling Force for Clinical Practice Guidelines.

    Science.gov (United States)

    Rosenfeld, Richard M; Wyer, Peter C

    2018-01-01

    Clinical practice guideline development should be driven by rigorous methodology, but what is less clear is where quality improvement enters the process: should it be a priority-guiding force, or should it enter only after recommendations are formulated? We argue for a stakeholder-driven approach to guideline development, with an overriding goal of quality improvement based on stakeholder perceptions of needs, uncertainties, and knowledge gaps. In contrast, the widely used topic-driven approach, which often makes recommendations based only on randomized controlled trials, is driven by epidemiologic purity and evidence rigor, with quality improvement a downstream consideration. The advantages of a stakeholder-driven versus a topic-driven approach are highlighted by comparisons of guidelines for otitis media with effusion, thyroid nodules, sepsis, and acute bacterial rhinosinusitis. These comparisons show that stakeholder-driven guidelines are more likely to address the quality improvement needs and pressing concerns of clinicians and patients, including understudied populations and patients with multiple chronic conditions. Conversely, a topic-driven approach often addresses "typical" patients, based on research that may not reflect the needs of high-risk groups excluded from studies because of ethical issues or a desire for purity of research design.

  6. The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination

    Directory of Open Access Journals (Sweden)

    Liangping Wu

    2014-08-01

    Full Text Available Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign to each individual forecasting model weights that cannot change over time. In this study, we introduce the IOWGA operator combination method, which overcomes this defect of the previous three combination methods, into tourism forecasting. Moreover, we further investigate the performance of the four combination methods through a theoretical evaluation and a forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method achieves extremely good performance and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method performs well, and almost identically to the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, while the variance-covariance combination method mainly reflects the minimization of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing of weight updates in combined forecasts.
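The variance-covariance combination method referenced above can be sketched for two forecasts: the weight on model 1 that minimizes the variance of the combined error is (v2 - c)/(v1 + v2 - 2c), computed from historical forecast errors. The IOWGA operator itself is not implemented here, and the error series are hypothetical.

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Sample covariance (n-1 denominator); cov(xs, xs) is the variance."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def min_variance_weights(err1, err2):
    """Variance-covariance combination: the weight on forecast 1 that
    minimizes the variance of the combined forecast error."""
    v1, v2, c = cov(err1, err1), cov(err2, err2), cov(err1, err2)
    w1 = (v2 - c) / (v1 + v2 - 2 * c)
    return w1, 1 - w1

# hypothetical forecast errors of two individual tourism-demand models
err_a = [3.0, -2.0, 1.5, -1.0, 2.5, -2.0]
err_b = [0.5, 1.0, -0.8, 0.6, -0.9, 1.1]
w_a, w_b = min_variance_weights(err_a, err_b)
combined = [w_a * a + w_b * b for a, b in zip(err_a, err_b)]
```

By construction the combined error variance is never larger than that of either individual model, which is the "decrease of the forecast error" property the abstract attributes to this method.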

  7. The interaction between stimulus-driven and goal-driven orienting as revealed by eye movements

    NARCIS (Netherlands)

    Schreij, D.B.B.; Los, S.A.; Theeuwes, J.; Enns, J.T.; Olivers, C.N.L.

    2014-01-01

    It is generally agreed that attention can be captured in a stimulus-driven or in a goal-driven fashion. In studies that investigated both types of capture, the effects on mean manual response time (reaction time [RT]) are generally additive, suggesting two independent underlying processes. However,

  8. Should Students Have a Gap Year? Motivation and Performance Factors Relevant to Time Out after Completing School

    Science.gov (United States)

    Martin, Andrew J.

    2010-01-01

    Increasingly, school leavers are taking time out from study or formal work after completing high school--often referred to as a "gap year" (involving structured activities such as "volunteer tourism" and unstructured activities such as leisure). Although much opinion exists about the merits--or otherwise--of taking time out after completing…

  9. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
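The downward bias of the sample minimum-variance frontier is easy to reproduce by simulation. A sketch under assumed parameters (10 assets, 60 observations, a diagonal population covariance), not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, reps = 10, 60, 200
Sigma = np.diag(np.linspace(1.0, 4.0, N))     # assumed true covariance
ones = np.ones(N)
w_true = np.linalg.solve(Sigma, ones)
true_min_var = 1.0 / (ones @ w_true)          # 1 / (1' Sigma^{-1} 1)

in_sample = []
for _ in range(reps):
    R = rng.multivariate_normal(np.zeros(N), Sigma, size=T)
    S = np.cov(R, rowvar=False)
    w = np.linalg.solve(S, ones)
    w /= ones @ w                             # sample min-variance weights
    in_sample.append(w @ S @ w)               # in-sample minimum variance

# The average in-sample minimum variance sits well below the truth,
# illustrating the bias of the sample frontier.
print(np.mean(in_sample), true_min_var)
```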

  10. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  11. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that, therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and the variance in lifetime fecundity, in recently obtained wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We found that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  12. Cost Analysis by Applying Time-Driven Activity Based Costing Method in Container Terminals

    OpenAIRE

    Yaşar, R. Şebnem

    2017-01-01

    Container transportation, which can also be called as “industrialization of maritime transportation”, gained significant ground in the world trade by offering numerous technical and economic advantages, and accordingly the container terminals have grown up in importance. Increased competition between container terminals puts pressure on the ports to reduce costs and increase operational productivity. To have the right cost information constitutes a prerequisite for cost reduction. Time-Driven...

  13. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  14. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    Science.gov (United States)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used to analyze the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. Through BLSA, the adaptive gain is adjusted during adaptation to meet certain phase-margin requirements. The approach is evaluated for a linear damaged twin-engine generic transport aircraft model. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics and time delay.

  15. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    Science.gov (United States)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology-fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season quantify shifts in the timing of seasonal phenology and inter-annual variations in magnitude relative to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Performance was then analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by high levels of noise and frequent missing observations. The resulting smoothed time series capture the vegetation dynamics well and show no gaps, as compared with the 50-60% of data still missing after AG or SG reconstruction. Results of the simulation experiments, as well as the comparison with actual AVHRR time series, indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
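The core CACAO idea, scaling and shifting a climatological curve to fit gappy observations, can be sketched as a grid search over the temporal shift with a closed-form scale at each candidate shift. This is an illustrative simplification on synthetic data, not the authors' implementation:

```python
import numpy as np

def cacao_fit(t, y, clim, shifts):
    """Fit y(t) ~ a * clim(t - d): grid-search the shift d, solving the
    scale a in closed form by least squares over the non-missing
    observations. Returns (a, d) and a gap-free reconstruction."""
    obs = ~np.isnan(y)
    best = (np.inf, None, None)
    for d in shifts:
        c = np.interp(t[obs] - d, t, clim)    # shifted climatology
        a = (c @ y[obs]) / (c @ c)            # closed-form scale
        sse = np.sum((y[obs] - a * c) ** 2)
        if sse < best[0]:
            best = (sse, a, d)
    _, a, d = best
    return a, d, a * np.interp(t - d, t, clim)

# Synthetic season: the "actual" year is the climatology scaled by 1.3
# and delayed by 8 days, with half of the daily observations missing.
t = np.arange(0.0, 365.0)
clim = np.exp(-0.5 * ((t - 180) / 40) ** 2)
truth = 1.3 * np.exp(-0.5 * ((t - 188) / 40) ** 2)
y = truth.copy()
y[::2] = np.nan                               # 50% missing observations
a, d, recon = cacao_fit(t, y, clim, shifts=np.arange(-15, 16))
print(a, d)  # recovers scale ~1.3 and shift 8 days
```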

  16. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    … the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  17. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    Science.gov (United States)

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to estimate the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections should be considered significant can be addressed in a rather straightforward manner by statistical thresholding based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity, and deciding which brain connections are to be considered relevant in the context of dynamical changes in connectivity offers further options. Since, in the dynamical case, we are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that, by studying the relation between them, two conceptually different strategies for analyzing dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with their variance. This finding suggests that magnitude-based and variance-based thresholding strategies will yield different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state-network (RSN) connectivity than to between-RSN connectivity, whereas the opposite holds for the variance-based strategy. The implications of our findings for studies of dynamic functional brain connectivity are discussed.
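The mean-versus-variance distinction can be illustrated with a toy sliding-window correlation analysis. The signals, window length, and selection criteria below are assumptions for illustration, not the authors' fMRI pipeline:

```python
import numpy as np

def sliding_corr(x, y, win):
    """Sliding-window Pearson correlation time-series of two signals."""
    n = len(x) - win + 1
    r = np.empty(n)
    for i in range(n):
        r[i] = np.corrcoef(x[i:i+win], y[i:i+win])[0, 1]
    return r

rng = np.random.default_rng(1)
T, win = 600, 60
shared = rng.normal(size=T)                   # common driver -> high mean r
x = shared + 0.5 * rng.normal(size=T)
y = shared + 0.5 * rng.normal(size=T)
u = rng.normal(size=T)                        # unrelated pair -> low mean r
v = rng.normal(size=T)

r_within = sliding_corr(x, y, win)
r_between = sliding_corr(u, v, win)
# Magnitude-based selection keys on the mean of the correlation series;
# variance-based selection keys on its fluctuations. Note the negative
# mean-variance scaling: the strongly coupled pair has the higher mean
# but the lower variance.
print(r_within.mean(), r_between.mean())
print(r_within.var(), r_between.var())
```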

  18. Planet-driven Spiral Arms in Protoplanetary Disks. II. Implications

    Science.gov (United States)

    Bae, Jaehan; Zhu, Zhaohuan

    2018-06-01

    We examine whether various characteristics of planet-driven spiral arms can be used to constrain the masses of unseen planets and their positions within their disks. By carrying out two-dimensional hydrodynamic simulations varying planet mass and disk gas temperature, we find that a larger number of spiral arms form with a smaller planet mass and a lower disk temperature. A planet excites two or more spiral arms interior to its orbit for a range of disk temperatures characterized by the disk aspect ratio 0.04 ≤ (h/r)_p ≤ 0.15, whereas exterior to a planet's orbit multiple spiral arms can form only in cold disks with (h/r)_p ≲ 0.06. Constraining the planet mass with the pitch angle of spiral arms requires accurate disk temperature measurements that might be challenging even with ALMA. However, the property that the pitch angle of planet-driven spiral arms decreases away from the planet can be a powerful diagnostic to determine whether the planet is located interior or exterior to the observed spirals. The arm-to-arm separations increase as a function of planet mass, consistent with previous studies; however, the exact slope depends on disk temperature as well as the radial location where the arm-to-arm separations are measured. We apply these diagnostics to the spiral arms seen in MWC 758 and Elias 2–27. As shown in Bae et al., planet-driven spiral arms can create concentric rings and gaps, which can produce a more dominant observable signature than spiral arms under certain circumstances. We discuss the observability of planet-driven spiral arms versus rings and gaps.

  19. Calculation of the neutron importance and weighted neutron generation time using MCNIC method in accelerator driven subcritical reactors

    Energy Technology Data Exchange (ETDEWEB)

    Hassanzadeh, M. [Nuclear Science and Technology Research Institute, AEOI, Tehran, Islamic Republic of Iran (Iran, Islamic Republic of); Feghhi, S.A.H., E-mail: a_feghhi@sbu.ac.ir [Department of Radiation Application, Shahid Beheshti University, G.C., Tehran, Islamic Republic of Iran (Iran, Islamic Republic of); Khalafi, H. [Nuclear Science and Technology Research Institute, AEOI, Tehran, Islamic Republic of Iran (Iran, Islamic Republic of)

    2013-09-15

    Highlights: • All reactor kinetic parameters are importance-weighted quantities. • The MCNIC method has been developed for calculating neutron importance in ADSRs. • The mean generation time has been calculated in spallation-driven systems. -- Abstract: The difference between the non-weighted neutron generation time (Λ) and the weighted one (Λ†) can be quite significant, depending on the type of the system. In the present work, we focus on developing the MCNIC method for calculation of the neutron importance (Φ†) and the importance-weighted neutron generation time (Λ†) in accelerator driven systems (ADS). Two hypothetical spallation-source-driven systems, one bare and one graphite-reflected, have been considered as illustrative examples. The results of this method have been compared with those obtained by the MCNPX code. According to the results, the relative difference between Λ and Λ† is within 36% in the bare example and 24,840% in the reflected example. The difference is quite significant in reflected systems and increases with reflector thickness. In conclusion, because it uses the neutron importance function, this method may give better estimates of kinetic parameters than the MCNPX code.

  20. Calculation of the neutron importance and weighted neutron generation time using MCNIC method in accelerator driven subcritical reactors

    International Nuclear Information System (INIS)

    Hassanzadeh, M.; Feghhi, S.A.H.; Khalafi, H.

    2013-01-01

    Highlights: • All reactor kinetic parameters are importance-weighted quantities. • The MCNIC method has been developed for calculating neutron importance in ADSRs. • The mean generation time has been calculated in spallation-driven systems. -- Abstract: The difference between the non-weighted neutron generation time (Λ) and the weighted one (Λ†) can be quite significant, depending on the type of the system. In the present work, we focus on developing the MCNIC method for calculation of the neutron importance (Φ†) and the importance-weighted neutron generation time (Λ†) in accelerator driven systems (ADS). Two hypothetical spallation-source-driven systems, one bare and one graphite-reflected, have been considered as illustrative examples. The results of this method have been compared with those obtained by the MCNPX code. According to the results, the relative difference between Λ and Λ† is within 36% in the bare example and 24,840% in the reflected example. The difference is quite significant in reflected systems and increases with reflector thickness. In conclusion, because it uses the neutron importance function, this method may give better estimates of kinetic parameters than the MCNPX code.

  1. Time-dependent quantum chemistry of laser driven many-electron molecules

    International Nuclear Information System (INIS)

    Nguyen-Dang, Thanh-Tung; Couture-Bienvenue, Étienne; Viau-Trudel, Jérémy; Sainjon, Amaury

    2014-01-01

    A Time-Dependent Configuration Interaction approach using multiple Feshbach partitionings, corresponding to multiple ionization stages of a laser-driven molecule, has recently been proposed [T.-T. Nguyen-Dang and J. Viau-Trudel, J. Chem. Phys. 139, 244102 (2013)]. To complete this development toward a fully ab-initio method for the calculation of time-dependent electronic wavefunctions of an N-electron molecule, we describe how tools of multiconfiguration quantum chemistry such as the management of the configuration expansion space using Graphical Unitary Group Approach concepts can be profitably adapted to the new context, that of time-resolved electronic dynamics, as opposed to stationary electronic structure. The method is applied to calculate the detailed, sub-cycle electronic dynamics of BeH₂, treated in a 3-21G bound-orbital basis augmented by a set of orthogonalized plane-waves representing continuum-type orbitals, including its ionization under an intense λ = 800 nm or λ = 80 nm continuous-wave laser field. The dynamics is strongly non-linear at the field intensity considered (I ≃ 10¹⁵ W/cm²), featuring important ionization of an inner-shell electron and strong post-ionization bound-electron dynamics.

  2. Stochastic Dynamics of a Time-Delayed Ecosystem Driven by Poisson White Noise Excitation

    Directory of Open Access Journals (Sweden)

    Wantao Jia

    2018-02-01

    We investigate the stochastic dynamics of a prey-predator type ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter and the effect of environmental randomness is modeled as Poisson white noise. The stochastic averaging method and the perturbation method are applied to calculate the approximate stationary probability density functions for both predator and prey populations. The influences of the system parameters and the Poisson white noise are investigated in detail based on these approximate stationary probability density functions. It is found that increasing the time delay parameter, as well as the mean arrival rate and the amplitude variance of the Poisson white noise, enhances the fluctuations of the prey and predator populations, while a larger self-competition parameter reduces the fluctuations of the system. Furthermore, results from Monte Carlo simulation are obtained to show the effectiveness of the results from the averaging method.

  3. Variance stabilization for computing and comparing grand mean waveforms in MEG and EEG.

    Science.gov (United States)

    Matysiak, Artur; Kordecki, Wojciech; Sielużycki, Cezary; Zacharias, Norman; Heil, Peter; König, Reinhard

    2013-07-01

    Grand means of time-varying signals (waveforms) across subjects in magnetoencephalography (MEG) and electroencephalography (EEG) are commonly computed as arithmetic averages and compared between conditions, for example, by subtraction. However, the prerequisite for these operations, homogeneity of the variance of the waveforms in time, and for most common parametric statistical tests also between conditions, is rarely met. We suggest that the heteroscedasticity observed instead results because waveforms may differ by factors and additive terms and follow a mixed model. We propose to apply the asinh-transformation to stabilize the variance in such cases. We demonstrate the homogeneous variance and the normal distributions of data achieved by this transformation using simulated waveforms, and we apply it to real MEG data and show its benefits. The asinh-transformation is thus an essential and useful processing step prior to computing and comparing grand mean waveforms in MEG and EEG. Copyright © 2013 Society for Psychophysiological Research.
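The effect of the asinh-transformation can be sketched with simulated waveforms following the mixed model described in this abstract (each subject's waveform differs from a template by a multiplicative factor and an additive term); all parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated "waveforms": per-subject multiplicative factor f and additive
# term a on a common template, so the raw across-subject variance grows
# with signal amplitude (heteroscedasticity in time).
t = np.linspace(0, 1, 200)
template = 1 + 4 * np.sin(2 * np.pi * 3 * t) ** 2
subjects = np.array([f * template + a + 0.05 * rng.normal(size=t.size)
                     for f, a in zip(rng.uniform(0.5, 2.0, 30),
                                     rng.normal(0, 0.2, 30))])

raw_var = subjects.var(axis=0)                # variance per time point
stab_var = np.arcsinh(subjects).var(axis=0)   # after asinh-transformation

# The spread of per-time-point variances, relative to their mean,
# shrinks markedly after the transformation.
print(raw_var.std() / raw_var.mean(), stab_var.std() / stab_var.mean())
```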

  4. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper- and lower-bounded interval in time O(n log n + z), where z is the number of reported pairs. If the upper bound is removed, the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well-known methods for finding tandem repeats.
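For a fixed repeat length k (a simplification; the paper handles all maximal pairs and reaches O(n log n + z) with more sophisticated machinery), a naive quadratic sketch of maximal pairs with a bounded gap:

```python
from collections import defaultdict

def maximal_pairs(s, k, lo, hi):
    """All maximal pairs (i, j) of k-length repeats with gap in [lo, hi].
    A pair is maximal when the characters just before and just after the
    two occurrences differ (or an occurrence touches a string boundary),
    so the repeated substring cannot be extended on either side."""
    occ = defaultdict(list)
    for i in range(len(s) - k + 1):
        occ[s[i:i+k]].append(i)
    out = []
    for positions in occ.values():
        for a in range(len(positions)):
            for b in range(a + 1, len(positions)):
                i, j = positions[a], positions[b]
                gap = j - (i + k)              # characters between copies
                if not (lo <= gap <= hi):
                    continue
                left_max = i == 0 or s[i-1] != s[j-1]
                right_max = (i + k == len(s) or j + k == len(s)
                             or s[i+k] != s[j+k])
                if left_max and right_max:
                    out.append((i, j))
    return out

print(maximal_pairs("abcXabcYabc", 3, 0, 10))  # -> [(0, 4), (0, 8), (4, 8)]
```

With the gap capped at 2, the pair (0, 8), whose gap is 5, is filtered out while (0, 4) and (4, 8) remain.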

  5. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  6. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  7. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  8. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  9. Electron Elevator: Excitations across the Band Gap via a Dynamical Gap State.

    Science.gov (United States)

    Lim, A; Foulkes, W M C; Horsfield, A P; Mason, D R; Schleife, A; Draeger, E W; Correa, A A

    2016-01-29

    We use time-dependent density functional theory to study self-irradiated Si. We calculate the electronic stopping power of Si in Si by evaluating the energy transferred to the electrons per unit path length by an ion of kinetic energy from 1 eV to 100 keV moving through the host. Electronic stopping is found to be significant below the threshold velocity normally identified with transitions across the band gap. A structured crossover at low velocity exists in place of a hard threshold. An analysis of the time dependence of the transition rates using coupled linear rate equations enables one of the excitation mechanisms to be clearly identified: a defect state induced in the gap by the moving ion acts like an elevator and carries electrons across the band gap.

  10. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  11. Triple photonic band-gap structure dynamically induced in the presence of spontaneously generated coherence

    International Nuclear Information System (INIS)

    Gao Jinwei; Bao Qianqian; Wan Rengang; Cui Cuili; Wu Jinhui

    2011-01-01

    We study a cold atomic sample coherently driven into the five-level triple-Λ configuration for attaining a dynamically controlled triple photonic band-gap structure. Our numerical calculations show that three photonic band gaps with homogeneous reflectivities up to 92% can be induced on demand around the probe resonance by a standing-wave driving field in the presence of spontaneously generated coherence. All these photonic band gaps are severely malformed with probe reflectivities declining rapidly to very low values when spontaneously generated coherence is gradually weakened. The triple photonic band-gap structure can also be attained in a five-level chain-Λ system of cold atoms in the absence of spontaneously generated coherence, which however requires two additional traveling-wave fields to couple relevant levels.

  12. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
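Given any covariance estimate, the unconstrained global minimum-variance portfolio underlying such comparisons has a closed form. A sketch with a made-up three-asset covariance matrix, not Brazilian market data:

```python
import numpy as np

def gmv_weights(Sigma):
    """Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

# Illustrative 3-asset covariance matrix (annualized variances 4%, 9%, 16%).
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w = gmv_weights(Sigma)
print(w, w @ Sigma @ w)  # weights and the resulting portfolio variance
```

By construction the portfolio variance is no larger than that of any single asset, and most weight lands on the low-variance, low-covariance asset.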

  13. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim is to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from both M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing this against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
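The multiplier estimator N = M / P and a simple delta-method confidence interval can be sketched as follows; the numbers and the design-effect adjustment are illustrative assumptions, not the Harare study's values:

```python
import math

def multiplier_estimate(M, p_hat, n_eff):
    """Service-multiplier population size estimate N = M / P with an
    approximate 95% CI. n_eff is the effective sample size of the RDS
    survey (raw n divided by an assumed design effect)."""
    N = M / p_hat
    se_p = math.sqrt(p_hat * (1 - p_hat) / n_eff)
    # Var(M/P) ~ (M / p^2)^2 * Var(p) by the delta method (M treated as fixed)
    se_N = (M / p_hat**2) * se_p
    return N, (N - 1.96 * se_N, N + 1.96 * se_N)

# 1,000 unique objects distributed; 25% of a 400-person survey (assumed
# design effect 2, so effective n = 200) report having received one.
N, (lo_ci, hi_ci) = multiplier_estimate(M=1000, p_hat=0.25, n_eff=200)
print(round(N), round(lo_ci), round(hi_ci))
```

Rerunning with a smaller p_hat widens the interval relative to N, which is the abstract's motivation for boosting P via longer reference periods or more distributed objects.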

  14. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
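The note's central argument can be checked numerically: for a prospect paying a fixed gain with some probability and nothing otherwise (zero probability of loss), a mean-variance score can fall as the win probability rises, violating dominance. The tradeoff coefficient below is an arbitrary illustrative choice:

```python
def mv_score(p, gain, lam):
    """Mean-variance score of a prospect paying `gain` with probability p,
    else 0: mean - lam * variance."""
    mean = p * gain
    var = p * (1 - p) * gain ** 2
    return mean - lam * var

# Same fixed gain of 100 and zero chance of loss: raising the win
# probability from 0.4 to 0.5 should always be preferred, yet the
# mean-variance score falls.
s_low = mv_score(0.4, 100, lam=0.2)    # 40 - 0.2*2400 = -440
s_high = mv_score(0.5, 100, lam=0.2)   # 50 - 0.2*2500 = -450
print(s_low, s_high)                   # the worse lottery scores higher
```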

  15. Hollow-core photonic band gap fibers for particle acceleration

    Directory of Open Access Journals (Sweden)

    Robert J. Noble

    2011-12-01

    Full Text Available Photonic band gap (PBG) dielectric fibers with hollow cores are being studied both theoretically and experimentally for use as laser driven accelerator structures. The hollow core functions as both a longitudinal waveguide for the transverse-magnetic (TM) accelerating fields and a channel for the charged particles. The dielectric surrounding the core is permeated by a periodic array of smaller holes to confine the mode, forming a photonic crystal fiber in which modes exist in frequency passbands, separated by band gaps. The hollow core acts as a defect which breaks the crystal symmetry, and so-called defect, or trapped modes having frequencies in the band gap will only propagate near the defect. We describe the design of 2D hollow-core PBG fibers to support TM defect modes with high longitudinal fields and high characteristic impedance. Using as-built dimensions of industrially made fibers, we perform a simulation analysis of prototype PBG fibers with dimensions appropriate for speed-of-light TM modes.

  16. Recent advances towards azobenzene-based light-driven real-time information-transmitting materials

    Directory of Open Access Journals (Sweden)

    Jaume García-Amorós

    2012-07-01

    Full Text Available Photochromic switches that are able to transmit information quickly have attracted growing interest within materials science during the last few decades. Although very fast photochromic switching materials working within hundreds of nanoseconds have been successfully achieved with other chromophores, such as spiropyranes, reaching such fast relaxation times for azobenzene-based photochromic molecular switches is still a challenge. This review focuses on the most recent achievements in azobenzene-based light-driven real-time information-transmitting systems. In addition, the main relationships between the structural features of the azo-chromophore and the kinetics and mechanism of the thermal cis-to-trans isomerisation are discussed as a key point for obtaining azoderivatives endowed with fast thermal back-isomerisation kinetics.

  17. Modeling imperfectly repaired system data via grey differential equations with unequal-gapped times

    International Nuclear Information System (INIS)

    Guo Renkuan

    2007-01-01

    In this paper, we argue that grey differential equation models are useful in repairable system modeling. The argument starts with a review of the GM(1,1) model with equal- and unequal-spaced stopping time sequences. In terms of two-stage GM(1,1) filtering, system stopping time can be partitioned into a system intrinsic function and a repair effect. Furthermore, we propose an approach that uses a grey differential equation to specify a semi-statistical membership function for system intrinsic function times. We also use the GM(1,N) model to model system stopping times and the associated operating covariates, and propose an unequal-gapped GM(1,N) model for such analysis. Finally, we investigate the GM(1,1)-embedded systematic grey equation system modeling of imperfectly repaired system operating data. Practical examples are given in a step-by-step manner to illustrate the grey differential equation modeling of repairable system data.

  18. Demand Side Management for the European Supergrid: Occupancy variances of European single-person households

    International Nuclear Information System (INIS)

    Torriti, Jacopo

    2012-01-01

    The prospect of a European Supergrid calls for research on aggregate electricity peak demand and Europe-wide Demand Side Management. No attempt has been made as yet to represent a time-related demand curve of residential electricity consumption at the European level. This article assesses how active occupancy levels of single-person households vary across 15 European countries. It makes use of occupancy time-series data from the Harmonised European Time Use Survey database to build European occupancy curves; identify peak occupancy periods; construct time-related electricity demand curves for TV and video watching activities and assess occupancy variances of single-person households. - Highlights: ► Morning peak occupancies of European single-person households take place between 7h30 and 7h40. ► Evening peaks take place between 20h10 and 20h20. ► TV and video activities during evening peaks make up about 3.1 GWh of European peak electricity load. ► Baseline and peak occupancy variances vary across countries. ► Baseline and peak occupancy variances can be used as input for Demand Side Management choices.

  19. Variance risk premia in CO_2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO_2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors’ confidence in the market are reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.

  20. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias that arise when Gelbard's batch method is applied are derived analytically. The real variance estimated from this bias is then compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, the covariance terms between tallies within a batch are eliminated from the bias. With the 2 by 2 fission matrix problem, we could calculate the real variance regardless of whether or not the batch method was applied. However, as the batch size gets larger, the standard deviation of the real variance increases. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is called the Gelbard batch method. It has been verified that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the MC field, but so far no analytical interpretation of it has been given.
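
The intuition that batching reduces the downward bias of the sample variance for correlated tallies can be sketched outside any transport code. The toy example below is not the paper's derivation: it uses an AR(1) series as a stand-in for correlated generation-to-generation tallies and compares the naive (batch size 1) estimate of the variance of the mean against a batched estimate.

```python
import random

def batch_variance_of_mean(x, batch_size):
    """Variance of the overall mean estimated from batch means. Batching
    keeps within-batch covariances inside each batch mean, so the i.i.d.
    formula applied to batch means is less biased than when applied to
    the raw correlated tallies (batch_size=1)."""
    means = [sum(x[i:i + batch_size]) / batch_size
             for i in range(0, len(x) - batch_size + 1, batch_size)]
    n = len(means)
    mu = sum(means) / n
    return sum((m - mu) ** 2 for m in means) / (n * (n - 1))

# Positively autocorrelated synthetic "tallies" (AR(1) with rho = 0.9)
random.seed(1)
x, prev = [], 0.0
for _ in range(20000):
    prev = 0.9 * prev + random.gauss(0, 1)
    x.append(prev)

naive = batch_variance_of_mean(x, 1)       # underestimates: ignores covariances
batched = batch_variance_of_mean(x, 200)   # larger, closer to the real variance
```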

  1. The mean–variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI

    Science.gov (United States)

    Thompson, William H.; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity, and the problem of deciding which brain connections are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed. PMID:26236216
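
The two thresholding strategies can be illustrated with a minimal sliding-window correlation sketch on synthetic signals. This is a hedged illustration only: the window length, noise levels, and signal construction are arbitrary choices, not the study's parameters.

```python
import math, random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def sliding_window_stats(x, y, win):
    """Mean and variance of the sliding-window correlation time series:
    a magnitude-based strategy thresholds on the mean, a variance-based
    strategy on the variance."""
    r = [corr(x[i:i + win], y[i:i + win]) for i in range(len(x) - win + 1)]
    mu = sum(r) / len(r)
    return mu, sum((v - mu) ** 2 for v in r) / len(r)

random.seed(0)
shared = [random.gauss(0, 1) for _ in range(300)]      # common driving signal
x = [s + 0.3 * random.gauss(0, 1) for s in shared]
y = [s + 0.3 * random.gauss(0, 1) for s in shared]
m_strong, v_strong = sliding_window_stats(x, y, 30)    # high mean, low variance

noise1 = [random.gauss(0, 1) for _ in range(300)]
noise2 = [random.gauss(0, 1) for _ in range(300)]
m_weak, v_weak = sliding_window_stats(noise1, noise2, 30)  # the reverse
```

The strongly coupled pair shows a high mean and low variance of windowed correlation, and the uncoupled pair the reverse, consistent with the negative mean-variance scaling the abstract reports.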

  2. Time-Driven Activity-Based Costing for Inter-Library Services: A Case Study in a University

    Science.gov (United States)

    Pernot, Eli; Roodhooft, Filip; Van den Abbeele, Alexandra

    2007-01-01

    Although the true costs of inter-library loans (ILL) are unknown, universities increasingly rely on them to provide better library services at lower costs. Through a case study, we show how to perform a time-driven activity-based costing analysis of ILL and provide evidence of the benefits of such an analysis.

  3. Looking for the GAP effect in manual responses and the role of contextual influences in reaction time experiments

    Directory of Open Access Journals (Sweden)

    Faria Jr. A.J.P.

    2004-01-01

    Full Text Available When the offset of a visual stimulus (GAP condition) precedes the onset of a target, saccadic reaction times are reduced in relation to the condition with no offset (overlap condition) - the GAP effect. However, the existence of the GAP effect for manual responses is still controversial. In two experiments using both simple (Experiment 1, N = 18) and choice key-press procedures (Experiment 2, N = 12), we looked for the GAP effect in manual responses and investigated possible contextual influences on it. Participants were asked to respond to the imperative stimulus that would occur under different experimental contexts, created by varying the array of warning-stimulus intervals (0, 300 and 1000 ms) and conditions (GAP and overlap): (i) intervals and conditions were randomized throughout the experiment; (ii) conditions were run in different blocks and intervals were randomized; (iii) intervals were run in different blocks and conditions were randomized. Our data showed that no GAP effect was obtained for any manipulation. The predictability of stimulus occurrence produced the strongest influence on response latencies. In Experiment 1, simple manual responses were shorter when the intervals were blocked (247 ms, P < 0.001) in relation to the other two contexts (274 and 279 ms). Despite the use of choice key-press procedures, Experiment 2 produced a similar pattern of results. A discussion addressing the critical conditions to obtain the GAP effect for distinct motor responses is presented. In short, our data stress the relevance of the temporal allocation of attention for behavioral performance.

  4. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.

  5. Resonances in a periodically driven bosonic system

    Science.gov (United States)

    Quelle, Anton; Smith, Cristiane Morais

    2017-11-01

    Periodically driven systems are a common topic in modern physics. In optical lattices specifically, driving is at the origin of many interesting phenomena. However, energy is not conserved in driven systems, and under periodic driving, heating of a system is a real concern. In an effort to better understand this phenomenon, the heating of single-band systems has been studied, with a focus on disorder- and interaction-induced effects, such as many-body localization. Nevertheless, driven systems occur in a much wider context than this, leaving room for further research. Here, we fill this gap by studying a noninteracting model, characterized by discrete, periodically spaced energy levels that are unbounded from above. We couple these energy levels resonantly through a periodic drive, and discuss the heating dynamics of this system as a function of the driving protocol. In this way, we show that a combination of stimulated emission and absorption causes the presence of resonant stable states. This will serve to elucidate the conditions under which resonant driving causes heating in quantum systems.

  7. A Timing-Driven Partitioning System for Multiple FPGAs

    Directory of Open Access Journals (Sweden)

    Kalapi Roy

    1996-01-01

    Full Text Available Field-programmable systems with multiple FPGAs on a PCB or an MCM are being used by system designers when a single FPGA is not sufficient. We address the problem of partitioning a large technology mapped FPGA circuit onto multiple FPGA devices of a specific target technology. The physical characteristics of the multiple FPGA system (MFS pose additional constraints to the circuit partitioning algorithms: the capacity of each FPGA, the timing constraints, the number of I/Os per FPGA, and the pre-designed interconnection patterns of each FPGA and the package. Existing partitioning techniques which minimize just the cut sizes of partitions fail to satisfy the above challenges. We therefore present a timing driven N-way partitioning algorithm based on simulated annealing for technology-mapped FPGA circuits. The signal path delays are estimated during partitioning using a timing model specific to a multiple FPGA architecture. The model combines all possible delay factors in a system with multiple FPGA chips of a target technology. Furthermore, we have incorporated a new dynamic net-weighting scheme to minimize the number of pin-outs for each chip. Finally, we have developed a graph-based global router for pin assignment which can handle the pre-routed connections of our MFS structure. In order to reduce the time spent in the simulated annealing phase of the partitioner, clusters of circuit components are identified by a new linear-time bottom-up clustering algorithm. The annealing-based N-way partitioner executes four times faster using the clusters as opposed to a flat netlist with improved partitioning results. For several industrial circuits, our approach outperforms the recursive min-cut bi-partitioning algorithm by 35% in terms of nets cut. Our approach also outperforms an industrial FPGA partitioner by 73% on average in terms of unroutable nets. 
Using the performance optimization capabilities in our approach we have successfully partitioned the

  8. Time-driven activity-based costing to identify opportunities for cost reduction in pediatric appendectomy.

    Science.gov (United States)

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2016-12-01

    As reimbursement programs shift to value-based payment models emphasizing quality and efficient healthcare delivery, there exists a need to better understand process management to unearth true costs of patient care. We sought to identify cost-reduction opportunities in simple appendicitis management by applying a time-driven activity-based costing (TDABC) methodology to this high-volume surgical condition. Process maps were created using medical record time stamps. Labor capacity cost rates were calculated using national median physician salaries, weighted nurse-patient ratios, and hospital cost data. Consumable costs for supplies, pharmacy, laboratory, and food were derived from the hospital general ledger. Time-driven activity-based costing resulted in precise per-minute calculation of personnel costs. Highest costs were in the operating room ($747.07), hospital floor ($388.20), and emergency department ($296.21). Major contributors to length of stay were emergency department evaluation (270min), operating room availability (395min), and post-operative monitoring (1128min). The TDABC model led to $1712.16 in personnel costs and $1041.23 in consumable costs for a total appendicitis cost of $2753.39. Inefficiencies in healthcare delivery can be identified through TDABC. Triage-based standing delegation orders, advanced practice providers, and same day discharge protocols are proposed cost-reducing interventions to optimize value-based care for simple appendicitis. II. Copyright © 2016 Elsevier Inc. All rights reserved.
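
The core TDABC arithmetic, minutes consumed multiplied by a per-minute capacity cost rate for each resource, plus consumable costs, can be sketched as follows. The rates and the simplified process map below are hypothetical placeholders, not the study's figures; only the consumable total is taken from the abstract.

```python
def tdabc_cost(process_map, rates_per_min, consumables=0.0):
    """Time-driven activity-based costing sketch: each process step is
    costed as (minutes consumed) x (capacity cost rate per minute of the
    resource used), then consumable costs are added. All rates and step
    durations here are hypothetical."""
    personnel = sum(minutes * rates_per_min[resource]
                    for resource, minutes in process_map)
    return personnel, personnel + consumables

# Hypothetical per-minute capacity cost rates and a simplified process map
rates = {"ED": 1.10, "OR": 8.30, "floor": 0.34}
steps = [("ED", 270), ("OR", 90), ("floor", 1128)]
personnel, total = tdabc_cost(steps, rates, consumables=1041.23)
```

Because every step carries an explicit minutes-times-rate term, the model makes it obvious which stage (here, any long floor stay or OR wait) dominates cost, which is what drives the proposed interventions.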

  9. Phase locking of an S-band wide-gap klystron amplifier with high power injection driven by a relativistic backward wave oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Bai Xianchen; Zhang Jiande; Yang Jianhua; Jin Zhenxing [College of Optoelectronic Science and Engineering, National University of Defense Technology, Changsha 410073 (China)

    2012-12-15

    Theoretical analyses and preliminary experiments on the phase-locking characteristics of an inductively loaded 2-cavity wide-gap klystron amplifier (WKA) with high power injection driven by a GW-class relativistic backward wave oscillator (RBWO) are presented. Electric power of the amplifier and oscillator is supplied by a single accelerator being capable of producing dual electron beams. The well phase-locking effect of the RBWO-WKA system requires the oscillator have good frequency reproducibility and stability from pulse to pulse. Thus, the main switch of the accelerator is externally triggered to stabilize the diode voltage and then the working frequency. In the experiment, frequency of the WKA is linearly locked by the RBWO. With a diode voltage of 530 kV and an input power of ~22 MW, an output power of ~230 MW with the power gain of ~10.2 dB is obtained from the WKA. As the main switch is triggered, the relative phase difference between the RBWO and the WKA is less than ±15° in a single shot, and phase jitter of ±11° is obtained within a series of shots with duration of about 40 ns.

  11. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index that studies how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and measures the contribution of different distribution ranges of each input to the variance of the model output. In this paper, the revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that compared with the classical variance ratio function, the revised one is more suitable to the evaluation of model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which requires only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
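
A single-sample Monte Carlo sketch of the idea, using the Ishigami function mentioned above: one set of samples is drawn once, and for a reduced range of one input the conditional mean and variance of the output are divided by their unconditional counterparts. This illustrative form is an assumption on our part, not necessarily the authors' exact estimator.

```python
import math, random

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    """Standard Ishigami test function with inputs uniform on [-pi, pi]."""
    return math.sin(x1) + a * math.sin(x2) ** 2 + b * x3 ** 4 * math.sin(x1)

def _moments(y):
    n = len(y)
    mu = sum(y) / n
    return mu, sum((t - mu) ** 2 for t in y) / n

def revised_ratios(X, Y, idx, lo, hi):
    """Illustrative revised mean and variance ratio functions: conditional
    mean/variance of Y given X[idx] in [lo, hi], divided by the
    unconditional mean/variance, reusing the same sample set."""
    sub = [y for x, y in zip(X, Y) if lo <= x[idx] <= hi]
    m, v = _moments(Y)
    ms, vs = _moments(sub)
    return ms / m, vs / v

# One set of samples serves for every reduced range (the single-loop idea).
random.seed(42)
X = [[random.uniform(-math.pi, math.pi) for _ in range(3)]
     for _ in range(50000)]
Y = [ishigami(*x) for x in X]

# Restricting x2 to [-1, 1] removes much of the a*sin^2(x2) contribution,
# so both the conditional mean and variance drop below the unconditional ones.
mean_ratio, var_ratio = revised_ratios(X, Y, idx=1, lo=-1.0, hi=1.0)
```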

  12. Using specific heat to scan gaps and anisotropy of MgB_2

    Energy Technology Data Exchange (ETDEWEB)

    Bouquet, F.; Wang, Y.; Toulemonde, P.; Guritanu, V.; Junod, A.; Eisterer, M.; Weber, H.W.; Lee, S.; Tajima, S

    2004-08-01

    We performed specific heat measurements to study the superconducting properties of the ~40 K superconductor MgB_2, up to 16 T, using polycrystal and single crystal samples. Our results establish the validity of the two-gap model. We tested the effect of disorder by irradiating our sample. This procedure decreased T_c down to ~26 K, but did not suppress completely the smaller gap, at variance with theoretical expectations. A positive effect of the irradiation was the increase of H_c2 up to almost 30 T. Our results on the single crystal allow the anisotropy of each band to be determined independently, and show the existence of a cross-over field well below H_c2 characterizing the physics of the small-gapped band. We also present preliminary results on Nb_3Sn, showing similar, but weaker effects.

  13. High pressure gas-filled cermet spark gaps

    International Nuclear Information System (INIS)

    Avilov, Eh.A.; Yur'ev, A.L.

    2000-01-01

    The results of modernization of the R-48 and R-49 spark gaps, making it possible to improve their electrical characteristics, are presented. The design is described and the characteristics of gas-filled cermet spark gaps are given. At a voltage rise time of 5-6 μs in the Marx generator scheme they provide a pulse breakdown voltage of 120 and 150 kV. At a voltage rise time of 0.5-1 μs the breakdown voltage of these spark gaps may be increased up to 130 and 220 kV. The proper commutation time is ≤ 0.5 ns. Practical recommendations on designing cermet spark gaps are given [ru]

  14. Multilevel fluidic flow control in a rotationally-driven polyester film microdevice created using laser print, cut and laminate.

    Science.gov (United States)

    Ouyang, Yiwen; Li, Jingyi; Phaneuf, Christopher; Riehl, Paul S; Forest, Craig; Begley, Matthew; Haverstick, Doris M; Landers, James P

    2016-01-21

    This paper presents a simple and cost-effective polyester toner microchip fabricated with laser print and cut lithography (PCL) to use with a battery-powered centrifugal platform for fluid handling. The combination of the PCL microfluidic disc and centrifugal platform: (1) allows parallel aliquoting of two different reagents of four different volumes ranging from nL to μL with an accuracy comparable to a piston-driven air pipette; (2) incorporates a reciprocating mixing unit driven by a surface-tension pump for further dilution of reagents, and (3) is amenable to larger scale integration of assay multiplexing (including all valves and mixers) without substantially increasing fabrication cost and time. As a proof of principle, a 10 min colorimetric assay for the quantitation of the protein level in human blood plasma samples is demonstrated on chip with a limit of detection of ∼5 mg mL(-1) and a coefficient of variation of ∼7%.

  15. Sampling returns for realized variance calculations: tick time or transaction time?

    NARCIS (Netherlands)

    Griffin, J.E.; Oomen, R.C.A.

    2008-01-01

    This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are

  16. Gap-filling of dry weather flow rate and water quality measurements in urban catchments by a time series modelling approach

    DEFF Research Database (Denmark)

    Sandoval, Santiago; Vezzaro, Luca; Bertrand-Krajewski, Jean-Luc

    2016-01-01

    Flow rate and water quality dry weather time series in combined sewer systems might contain an important amount of missing data due to several reasons, such as failures related to the operation of the sensor or additional contributions during rainfall events. Therefore, the approach hereby proposed seeks to evaluate the potential of the Singular Spectrum Analysis (SSA), a time-series modelling/gap-filling method, to complete dry weather time series. The SSA method is tested by reconstructing 1000 artificial discontinuous time series, randomly generated from real flow rate and total suspended solids (TSS) online measurements (year 2007, 2 minutes time-step, combined system, Ecully, Lyon, France). Results show the potential of the method to fill gaps longer than 0.5 days, especially between 0.5 days and 1 day (mean NSE > 0.6) in the flow rate time series. TSS results still perform very
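
A common iterative SSA gap-filling scheme can be sketched with NumPy: embed the series in a Hankel trajectory matrix, truncate its SVD, reconstruct by diagonal averaging, and repeatedly overwrite only the missing samples. This is a generic sketch of the technique, not necessarily the exact procedure of the paper; the window length, rank, and synthetic daily cycle are arbitrary choices.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Reconstruct a series from its leading SSA components: embed into a
    Hankel trajectory matrix, truncate the SVD, then average the
    anti-diagonals back into a series."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):                      # diagonal averaging
        rec[j:j + window] += approx[:, j]
        cnt[j:j + window] += 1
    return rec / cnt

def ssa_fill_gaps(x, window=48, rank=4, iters=100):
    """Iterative SSA gap filling (generic sketch): seed missing points with
    the series mean, then repeatedly reconstruct with a truncated SSA and
    overwrite only the missing entries until they settle."""
    x = np.asarray(x, dtype=float)
    miss = np.isnan(x)
    filled = np.where(miss, np.nanmean(x), x)
    for _ in range(iters):
        filled[miss] = ssa_reconstruct(filled, window, rank)[miss]
    return filled

# Synthetic "dry weather flow" with a daily cycle (48 steps per day here)
t = np.arange(500)
truth = 10 + 3 * np.sin(2 * np.pi * t / 48)
obs = truth.copy()
obs[200:230] = np.nan                       # artificial gap of ~0.6 "days"
est = ssa_fill_gaps(obs)
```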

  17. Innovation diffusion on time-varying activity driven networks

    Science.gov (United States)

    Rizzo, Alessandro; Porfiri, Maurizio

    2016-01-01

    Since its introduction in the 1960s, the theory of innovation diffusion has contributed to the advancement of several research fields, such as marketing management and consumer behavior. The 1969 seminal paper by Bass [F.M. Bass, Manag. Sci. 15, 215 (1969)] introduced a model of product growth for consumer durables, which has been extensively used to predict innovation diffusion across a range of applications. Here, we propose a novel approach to study innovation diffusion, where interactions among individuals are mediated by the dynamics of a time-varying network. Our approach is based on the Bass' model, and overcomes key limitations of previous studies, which assumed timescale separation between the individual dynamics and the evolution of the connectivity patterns. Thus, we do not hypothesize homogeneous mixing among individuals or the existence of a fixed interaction network. We formulate our approach in the framework of activity driven networks to enable the analysis of the concurrent evolution of the interaction and individual dynamics. Numerical simulations offer a systematic analysis of the model behavior and highlight the role of individual activity on market penetration when targeted advertisement campaigns are designed, or a competition between two different products takes place.
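
The individual-level Bass dynamics that the network study builds on, dF/dt = (p + qF)(1 - F) for the adopted fraction F, can be sketched with a simple Euler integration. The network coupling is omitted here, and the parameter values are typical textbook choices, not the paper's.

```python
def bass_adoption(p, q, steps=100, dt=1.0):
    """Euler integration of the classical Bass model
    dF/dt = (p + q*F) * (1 - F), where F is the adopted fraction,
    p the innovation coefficient and q the imitation coefficient.
    Individual-level dynamics only; network coupling omitted."""
    F, path = 0.0, [0.0]
    for _ in range(steps):
        F += dt * (p + q * F) * (1 - F)
        path.append(F)
    return path

# p drives early adoption from zero; q produces the familiar S-curve.
path = bass_adoption(p=0.03, q=0.38)
```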

  18. Observation of gap inhomogeneity in superconducting aluminum tunnel junctions

    International Nuclear Information System (INIS)

    Gilmartin, H.R.

    1982-01-01

    Experiments using a novel technique to investigate spatial variations in the superconducting gap parameter of aluminum films driven out of equilibrium by intense tunnel injection are described. The technique features fine spatial and energy resolution of the gap parameter. The experiments employed a finely focused laser spot scanned across the surface of a double tunnel junction sandwich to produce a very weak electrical signal that was analyzed to determine the gap parameter as a function of position in the plane of the device. Technical aspects of the problem are emphasized, since a new technique is presented. An elaborate explanation of the origin and analysis of the laser induced signal is given, as well as a detailed description of the experimental apparatus. Very briefly, the principle of operation is that a large flux of quasiparticles is injected through the lower junction of the sandwich into the middle aluminum film, and the upper junction serves to detect the effects of that injection. The middle film takes on two or more values of the gap parameter under injection, presumably indicating spatial variation. The presence of a small laser spot on a given point on the device perturbs the potential on the detector junction very slightly. That perturbation is measured as a function of bias current to determine the gap parameter of the middle film at that point. The spot is scanned in a raster pattern to produce a picture of the space dependence of the gap parameter

  19. Bridging the gaps: An overview of wood across time and space in diverse rivers

    Science.gov (United States)

    Wohl, Ellen

    2017-02-01

    fluctuations in LW load over time intervals greater than a few years. Other knowledge gaps relate to physical and ecological effects of wood, including the magnitude of flow resistance caused by LW; patterns of wood-related sediment storage for diverse river sizes and channel geometry; quantification of channel-floodplain-LW interactions; and potential threshold effects of LW in relation to physical processes and biotic communities. Finally, knowledge gaps are related to management of large wood and river corridors, including understanding the consequences of enormous historical reductions in LW load in rivers through the forested portions of the temperate zone; and how to effectively reintroduce and manage existing LW in river corridors, which includes enhancing public understanding of the importance of LW. Addressing these knowledge gaps requires more case studies from diverse rivers, as well as more syntheses and metadata analyses.

  20. Variance-Constrained Robust Estimation for Discrete-Time Systems with Communication Constraints

    Directory of Open Access Journals (Sweden)

    Baofeng Wang

    2014-01-01

    Full Text Available This paper is concerned with a new filtering problem in networked control systems (NCSs) subject to limited communication capacity, which includes measurement quantization, random transmission delay, and packet loss. The measurements are first quantized via a logarithmic quantizer and then transmitted through a digital communication network with random delay and packet loss. These three communication-constraint phenomena, which can be seen as a class of uncertainties, are formulated as a stochastic parameter uncertainty system. The purpose of the paper is to design a linear filter such that, for all the communication constraints, the error state of the filtering process is mean-square bounded and the steady-state variance of the estimation error for each state is no more than the individual prescribed upper bound. It is shown that the desired filtering problem can effectively be solved if there are positive definite solutions to a couple of algebraic Riccati-like inequalities or linear matrix inequalities. Finally, an illustrative numerical example is presented to demonstrate the effectiveness and flexibility of the proposed design approach.

  1. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  2. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance-a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
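
The estimator described above replaces each squared return with a normalized squared high-low range; for Brownian motion the normalizer is λ₂ = 4 ln 2, the second moment of the range over a unit interval. A simulation sketch (illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# One "day" of a driftless log-price with constant volatility sigma.
sigma = 0.2
n_intervals, steps_per = 78, 100          # e.g. 5-minute bins, fine inner grid
dt = 1.0 / (n_intervals * steps_per)
increments = sigma * np.sqrt(dt) * rng.standard_normal(n_intervals * steps_per)
logp = np.concatenate(([0.0], np.cumsum(increments)))

LAMBDA2 = 4.0 * np.log(2.0)               # E[(range of std BM on [0,1])^2]

rrv = 0.0                                 # realized range-based variance
rv = 0.0                                  # ordinary realized variance
for i in range(n_intervals):
    seg = logp[i * steps_per:(i + 1) * steps_per + 1]
    rrv += (seg.max() - seg.min()) ** 2   # squared high-low range in bin i
    rv += (seg[-1] - seg[0]) ** 2         # squared return in bin i
rrv /= LAMBDA2

# Both estimate the integrated variance sigma**2 = 0.04; the range-based
# version is markedly more efficient, though a finite inner grid makes it
# understate the true range slightly.
```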

  3. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  4. Ontology-driven data integration and visualization for exploring regional geologic time and paleontological information

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo

    2018-06-01

    Initiatives of open data promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from the big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information of regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)

  5. Innovation and Market-Driven Management in Fast Fashion Companies

    OpenAIRE

    Elisa Arrigo

    2010-01-01

    In hyper-competitive markets, innovation is critical for the growth of market-driven companies. An examination of case studies of highly competitive global companies in the fast fashion sector reveals that detailed understanding of the market, deriving from direct management of their stores, enables Zara, Gap and H&M to develop an innovation management capability. This is a fundamental competitive driver for the company's success.

  6. A quantitative method to track protein translocation between intracellular compartments in real-time in live cells using weighted local variance image analysis.

    Directory of Open Access Journals (Sweden)

    Guillaume Calmettes

    Full Text Available The genetic expression of cloned fluorescent proteins coupled to time-lapse fluorescence microscopy has opened the door to the direct visualization of a wide range of molecular interactions in living cells. In particular, the dynamic translocation of proteins can now be explored in real time at the single-cell level. Here we propose a reliable, easy-to-implement, quantitative image processing method to assess protein translocation in living cells based on the computation of spatial variance maps of time-lapse images. The method is first illustrated and validated on simulated images of a fluorescently-labeled protein translocating from mitochondria to cytoplasm, and then applied to experimental data obtained with fluorescently-labeled hexokinase 2 in different cell types imaged by regular or confocal microscopy. The method was found to be robust with respect to cell morphology changes and mitochondrial dynamics (fusion, fission, movement) during the time-lapse imaging. Its ease of implementation should facilitate its application to a broad spectrum of time-lapse imaging studies.
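
The core quantity in such a method is a spatial variance map. A numpy-only sketch (a plain, unweighted local variance; the weighting and time-lapse machinery of the paper are omitted) using integral images:

```python
import numpy as np

def local_variance_map(img, win=5):
    """Variance of each pixel's win x win neighbourhood via Var = E[x^2] - E[x]^2.

    Box means are computed with integral images (cumulative sums), so no
    external image-processing library is needed.
    """
    def box_mean(a, w):
        pad = w // 2
        ap = np.pad(np.asarray(a, dtype=float), pad, mode="edge")
        c = ap.cumsum(axis=0).cumsum(axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))            # leading zero row/column
        s = c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]
        return s / (w * w)

    m = box_mean(img, win)
    m2 = box_mean(np.asarray(img, dtype=float) ** 2, win)
    return np.maximum(m2 - m * m, 0.0)

rng = np.random.default_rng(1)
img = np.zeros((40, 40))
img[:, 20:] = rng.standard_normal((40, 20))        # flat left, textured right
vmap = local_variance_map(img, win=5)
# vmap is ~0 over the flat half and close to 1 over the noisy half.
```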

  7. Finite-Time Thermoeconomic Optimization of a Solar-Driven Heat Engine Model

    Directory of Open Access Journals (Sweden)

    Fernando Angulo-Brown

    2011-01-01

    Full Text Available In the present paper, the thermoeconomic optimization of an irreversible solar-driven heat engine model has been carried out by using finite-time/finite-size thermodynamic theory. In our study we take into account losses due to heat transfer across finite-time temperature differences, heat leakage between thermal reservoirs, and internal irreversibilities in terms of a parameter which comes from the Clausius inequality. In the considered heat engine model, the heat transfer from the hot reservoir to the working fluid is assumed to be of the Dulong-Petit type and the heat transfer to the cold reservoir is assumed to be of the Newtonian type. In this work, the optimum performance and two design parameters have been investigated under two objective functions: the power output per unit total cost and the ecological function per unit total cost. The effects of the technical and economic parameters on the thermoeconomic performance have also been discussed under the aforementioned two criteria of performance.

  8. The Genotype and Phenotype (GaP) registry: a living biobank for the analysis of quantitative traits.

    Science.gov (United States)

    Gregersen, Peter K; Klein, Gila; Keogh, Mary; Kern, Marlena; DeFranco, Margaret; Simpfendorfer, Kim R; Kim, Sun Jung; Diamond, Betty

    2015-12-01

    We describe the development of the Genotype and Phenotype (GaP) Registry, a living biobank of normal volunteers who are genotyped for genetic markers related to human disease. Participants in the GaP can be recalled for hypothesis-driven study of disease-associated genetic variants. The GaP has facilitated functional studies of several autoimmune-disease-associated loci, including Csk, Blk, PRDM1 (Blimp-1) and PTPN22. It is likely that the expansion of such living biobank registries will play an important role in studying and understanding the function of disease-associated alleles in complex disease.

  9. Complexity of possibly gapped histogram and analysis of histogram

    Science.gov (United States)

    Fushing, Hsieh; Roy, Tania

    2018-02-01

    We demonstrate that gaps and distributional patterns embedded within real-valued measurements are inseparable biological and mechanistic information contents of the system. Such patterns are discovered through a data-driven, possibly gapped histogram, which further leads to the geometry-based analysis of histogram (ANOHT). Constructing a possibly gapped histogram is a complex problem of statistical mechanics, because the ensemble of candidate histograms is captured by a two-layer Ising model. It is also a distinctive problem of information theory from the perspective of data compression via uniformity. By defining a Hamiltonian (or energy) as the sum of the total coding lengths of boundaries and the total decoding errors within bins, the computation of minimum-energy macroscopic states is surprisingly resolved by applying the hierarchical clustering algorithm. Thus, a possibly gapped histogram corresponds to a macro-state. The first phase of ANOHT is developed for simultaneous comparison of multiple treatments, while the second phase is developed, based on classical empirical process theory, for a tree-geometry that can check the authenticity of branches of the treatment tree. The well-known Iris data are used to illustrate our technical developments. A large baseball pitching dataset and a heavily right-censored divorce dataset are also analysed to showcase the existential gaps and the utilities of ANOHT.
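
In one dimension the hierarchical-clustering step has a transparent caricature: sorting the data and cutting at the largest spacings (a single-linkage cut) yields candidate gap locations around which a gapped histogram can be built. A simplified sketch, not the authors' algorithm:

```python
import numpy as np

def gapped_blocks(x, n_gaps=1):
    """Split sorted 1-D data at its n_gaps largest spacings."""
    xs = np.sort(np.asarray(x, dtype=float))
    cuts = np.sort(np.argsort(np.diff(xs))[-n_gaps:])   # largest-gap indices
    return np.split(xs, cuts + 1)

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0.0, 1.0, 200),       # mode near 0
                       rng.normal(8.0, 1.0, 200)])      # mode near 8
blocks = gapped_blocks(data, n_gaps=1)
# Two blocks, one per mode, split at the large empirical gap between them;
# ordinary bins can then be laid down inside each block.
```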

  11. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  12. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
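
The shuffling diagnostic described above can be reproduced on synthetic data: volatility clustering inflates the variance of the means of segments of the absolute returns, and shuffling destroys this structure. A toy sketch in which regime-switching volatility stands in for real index data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy return series with volatility clustering: the scale alternates
# between a calm and a turbulent regime every 200 observations.
n, seg_len = 4000, 50
vol = np.tile(np.repeat([0.5, 2.0], 200), n // 400)
returns = vol * rng.standard_normal(n)

def segment_mean_variance(x, seg_len):
    """Variance of the means of consecutive equal-length segments of |x|."""
    m = np.abs(x[: len(x) // seg_len * seg_len]).reshape(-1, seg_len).mean(axis=1)
    return float(m.var())

original = segment_mean_variance(returns, seg_len)
shuffled = segment_mean_variance(rng.permutation(returns), seg_len)
# Clustering makes `original` several times larger than `shuffled`, whose
# value matches what a memoryless (i.i.d.) series would produce.
```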

  13. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  14. ABORT GAP CLEANING IN RHIC

    International Nuclear Information System (INIS)

    DREES, A.; AHRENS, L.; FLILLER, R. III; GASSNER, D.; MCINTYRE, G.T.; MICHNOFF, R.; TRBOJEVIC, D.

    2002-01-01

    During the RHIC Au-run in 2001 the 200 MHz storage cavity system was used for the first time. The rebucketing procedure caused significant beam debunching in addition to amplifying debunching due to other mechanisms. At the end of a four hour store, debunched beam could account for approximately 30%-40% of the total beam intensity. Some of it will be in the abort gap. In order to minimize the risk of magnet quenching due to uncontrolled beam losses at the time of a beam dump, a combination of a fast transverse kicker and copper collimators were used to clean the abort gap. This report gives an overview of the gap cleaning procedure and the achieved performance

  15. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  16. The relative importance of pollinator abundance and species richness for the temporal variance of pollination services.

    Science.gov (United States)

    Genung, Mark A; Fox, Jeremy; Williams, Neal M; Kremen, Claire; Ascher, John; Gibbs, Jason; Winfree, Rachael

    2017-07-01

    The relationship between biodiversity and the stability of ecosystem function is a fundamental question in community ecology, and hundreds of experiments have shown a positive relationship between species richness and the stability of ecosystem function. However, these experiments have rarely accounted for common ecological patterns, most notably skewed species abundance distributions and non-random extinction risks, making it difficult to know whether experimental results can be scaled up to larger, less manipulated systems. In contrast with the prolific body of experimental research, few studies have examined how species richness affects the stability of ecosystem services at more realistic, landscape scales. The paucity of these studies is due in part to a lack of analytical methods that are suitable for the correlative structure of ecological data. A recently developed method, based on the Price equation from evolutionary biology, helps resolve this knowledge gap by partitioning the effect of biodiversity into three components: richness, composition, and abundance. Here, we build on previous work and present the first derivation of the Price equation suitable for analyzing temporal variance of ecosystem services. We applied our new derivation to understand the temporal variance of crop pollination services in two study systems (watermelon and blueberry) in the mid-Atlantic United States. In both systems, but especially in the watermelon system, the stronger driver of temporal variance of ecosystem services was fluctuations in the abundance of common bee species, which were present at nearly all sites regardless of species richness. In contrast, temporal variance of ecosystem services was less affected by differences in species richness, because lost and gained species were rare. Thus, the findings from our more realistic landscapes differ qualitatively from the findings of biodiversity-stability experiments. © 2017 by the Ecological Society of America.

  17. Transient Properties of a Bistable System with Delay Time Driven by Non-Gaussian and Gaussian Noises: Mean First-Passage Time

    International Nuclear Information System (INIS)

    Li Dongxi; Xu Wei; Guo Yongfeng; Li Gaojie

    2008-01-01

    The mean first-passage time (MFPT) of a bistable system with time-delayed feedback driven by multiplicative non-Gaussian noise and additive Gaussian white noise is investigated. First, the non-Markov process is reduced to a Markov process through a path-integral approach; second, the approximate Fokker-Planck equation is obtained by applying the unified coloured-noise approximation, the small-time-delay approximation and the Novikov theorem. Functional analysis and simplification are employed to obtain approximate expressions for the MFPT. The effects on the MFPT of the non-Gaussian parameter r (which measures deviation from Gaussian character), the delay time τ, the noise correlation time τ0, and the noise intensities D and α are discussed. It is found that the escape time can be reduced by increasing the delay time τ or the noise correlation time τ0, or by reducing the intensities D and α. As far as we know, this is the first time the effect of delay time on the mean first-passage time in a stochastic dynamical system has been considered.

  18. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; for each patient, three different allowable variances (20%, 60%, and 100%) were set before acquisition, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)

  19. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we present how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this regard, while the maximum likelihood estimator attains the minimum variance bound and is therefore an attractive choice.
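
The comparison can be checked numerically. For the Maxwell distribution with scale a, the maximum likelihood estimator is â = (Σxᵢ²/3n)^(1/2), the moment estimator follows from E[X] = 2a√(2/π), and the minimum variance (Cramér-Rao) bound is a²/(6n). A simulation sketch (the negative integer moment estimator is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)
a, n, reps = 1.0, 50, 40_000

# Maxwell(a) variates generated as speeds of an isotropic 3-D Gaussian.
x = np.linalg.norm(rng.normal(0.0, a, size=(reps, n, 3)), axis=2)

a_ml = np.sqrt((x ** 2).mean(axis=1) / 3.0)      # maximum likelihood
a_mom = x.mean(axis=1) * np.sqrt(np.pi / 8.0)    # method of moments

mvb = a ** 2 / (6 * n)                           # minimum variance bound
var_ml, var_mom = float(a_ml.var()), float(a_mom.var())
# var_ml sits essentially on the bound; var_mom is slightly larger,
# mirroring the deviation discussed in the paper.
```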

  20. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and obtain analytical solutions. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  1. Profit-driven and demand-driven investment growth and fluctuations in different accumulation regimes

    OpenAIRE

    Giovanni Dosi; Mauro Sodini; Maria Enrica Virgillito

    2013-01-01

    The main task of this work is to develop a model able to encompass, at the same time, Keynesian demand-driven and Marxian profit-driven determinants of fluctuations. Our starting point is Goodwin's model (1967), rephrased in discrete time and extended by means of a coupled-dynamics structure. The model entails the combined interaction of a demand effect, which resembles a rudimentary first approximation to an accelerator, and of a hysteresis effect in wage formation, in turn affecting ...

  2. Time-driven activity-based costing in an outpatient clinic environment: development, relevance and managerial impact.

    Science.gov (United States)

    Demeere, Nathalie; Stouthuysen, Kristof; Roodhooft, Filip

    2009-10-01

    Healthcare managers are continuously urged to provide better patient services at a lower cost. To cope with these cost pressures, healthcare management needs to improve its understanding of the relevant cost drivers. Through a case study, we show how to perform a time-driven activity-based costing analysis of five outpatient clinic departments and provide evidence of the benefits of such an analysis.
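
Mechanically, time-driven ABC needs only two estimates per department: the cost of capacity supplied and the practical capacity in time units; their ratio (the capacity cost rate) then prices each activity by its duration and exposes the cost of unused capacity. All figures below are hypothetical, not taken from the case study:

```python
# Capacity cost rate: cost of resources supplied / practical capacity.
cost_of_capacity = 560_000.0              # e.g. quarterly departmental cost
practical_minutes = 700_000.0             # practical capacity, in minutes
rate = cost_of_capacity / practical_minutes   # cost per minute of capacity

# Unit-time estimates per activity (minutes per patient visit, assumed).
activities = {"registration": 5, "consultation": 20, "follow-up admin": 8}
visit_cost = sum(t * rate for t in activities.values())

# Capacity actually used by, say, 300 visits, and the unused remainder.
used_minutes = 300 * sum(activities.values())
unused_capacity_cost = (practical_minutes - used_minutes) * rate
```

Because unused capacity is costed explicitly rather than spread over patients, managers can see idle capacity directly, which is the managerial impact the study emphasizes.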

  3. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  4. Conceptual design of a commercial accelerator driven thorium reactor

    International Nuclear Information System (INIS)

    Fuller, C. G.; Ashworth, R. W.

    2010-01-01

    This paper describes the substantial work done in underpinning and developing the concept design for a commercial 600 MWe, accelerator-driven, thorium-fuelled, lead-cooled, power-producing fast reactor. The Accelerator Driven Thorium Reactor (ADTR™) has been derived from original work by Carlo Rubbia. Over the period 2007 to 2009, Aker Solutions commissioned this concept design work and, in close collaboration with Rubbia, developed the physics, engineering and business model. Much has been published about the Energy Amplifier concept and accelerator-driven systems. This paper concentrates on the unique physics developed during the concept study of the ADTR™ power station and the progress made in the engineering and design of the system. Particular attention is paid to where the concept design has moved significantly beyond published material. The challenges presented for the engineering and safety of a commercial system, and how they will be addressed, are also described. This covers the defining system parameters, accelerator sizing, core and fuel design issues and, perhaps most importantly, reactivity control. The paper concludes that the work undertaken supports the technical viability of the ADTR™ power station. Several unique features of the reactor mean that it can be deployed in countries with aspirations to gain benefit from nuclear power and, at 600 MWe, it fits a size gap for less mature grid systems. It can provide a useful complement to Generation III, III+ and IV systems through its ability to consume actinides while at the same time providing useful power. (authors)

  5. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  6. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
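
Of the techniques surveyed, antithetic variates are the simplest to sketch: each random configuration is paired with its mirror image, leaving the Monte Carlo mean unbiased while cancelling much of the fluctuation across configurations. A generic illustration in which a scalar function of a Gaussian stands in for the costly corrector-problem output:

```python
import numpy as np

rng = np.random.default_rng(3)

f = lambda g: np.exp(0.5 * g)        # monotone stand-in for the quantity of interest

def estimator_variances(n_pairs=500, reps=200):
    """Empirical variance of the plain vs antithetic mean estimators,
    each using the same budget of 2 * n_pairs evaluations of f."""
    plain, anti = [], []
    for _ in range(reps):
        plain.append(f(rng.standard_normal(2 * n_pairs)).mean())
        g = rng.standard_normal(n_pairs)
        anti.append(0.5 * (f(g) + f(-g)).mean())
    return float(np.var(plain)), float(np.var(anti))

v_plain, v_anti = estimator_variances()
# For a monotone f, Cov(f(G), f(-G)) < 0, so the antithetic estimator's
# variance is well below the plain one's at equal cost.
```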

  7. Variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  8. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  9. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing under discrete monitoring of realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case of FX options traded in BM&F. Thus portfolios were assembled containing variance swaps and their replicating portfolios, using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
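The replication of Demeterfi et al. (1999) referenced above weights the vanilla option strip by 1/K². A minimal Python sketch of the discrete strike weights (the strike grid and maturity below are invented for illustration; no BM&F market data or the authors' exact discretization is used):

```python
def strip_weights(strikes, maturity):
    """Discrete 1/K^2 weights for the option strip replicating the
    log-contract payoff behind a variance swap (cf. Demeterfi et al., 1999)."""
    weights = {}
    for i, k in enumerate(strikes):
        # central strike spacing on a possibly irregular grid
        lo = strikes[i - 1] if i > 0 else k
        hi = strikes[i + 1] if i < len(strikes) - 1 else k
        dk = (hi - lo) / 2.0
        weights[k] = (2.0 / maturity) * dk / k ** 2
    return weights

# hypothetical strike grid and a 6-month maturity
w = strip_weights([80.0, 90.0, 100.0, 110.0, 120.0], maturity=0.5)
```

The 1/K² decay means low strikes carry the largest weights, which is exactly why sparse strike availability, as for the BM&F FX options discussed above, degrades the hedge.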

  10. Asymmetries in conditional mean variance: modelling stock returns by asMA-asQGARCH

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.

    2004-01-01

    We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad

  11. Studies on a laser driven photoemissive high-brightness electron source and novel photocathodes

    International Nuclear Information System (INIS)

    Geng Rongli; Song Jinhu; Yu Jin

    1997-01-01

    A laser driven photoemissive high-brightness electron source at Beijing University is reported. Through a DC accelerating gap of 100 kV voltage, the device is capable of delivering high-brightness electron beam of 35-100 ps pulse duration when irradiated with a mode-locked YAG laser. The geometry of the gun is optimized with the aid of simulation codes EGUN and POISSON. The results of experimental studies on ion implanted photocathode and cesium telluride photocathode are given. The proposed laser driven superconducting RF gun is also discussed

  12. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did existent mean heterogeneity test (i.e., the Welch t test (WT), the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
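The IMVT itself is not spelled out in this summary; given the null independence of mean and variance heterogeneity tests proved above, one generic way to combine a mean-test p-value with a variance-test p-value is Fisher's method, sketched here (the input p-values are hypothetical):

```python
import math

def fisher_combine(p_mean, p_var):
    """Combine two independent p-values by Fisher's method.  Under the joint
    null, -2 * (ln p1 + ln p2) is chi-square with 4 df, whose survival
    function has the closed form exp(-x/2) * (1 + x/2)."""
    x = -2.0 * (math.log(p_mean) + math.log(p_var))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

# hypothetical p-values from a mean test (e.g. Welch t) and a variance test
p_combined = fisher_combine(0.04, 0.03)
```

This is a stand-in for the paper's integrative test, not its actual statistic; it merely shows how null independence licenses a valid combination.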

  13. Motor equivalence and structure of variance: multi-muscle postural synergies in Parkinson's disease.

    Science.gov (United States)

    Falaki, Ali; Huang, Xuemei; Lewis, Mechelle M; Latash, Mark L

    2017-07-01

    We explored posture-stabilizing multi-muscle synergies with two methods of analysis of multi-element, abundant systems: (1) Analysis of inter-cycle variance; and (2) Analysis of motor equivalence, both quantified within the framework of the uncontrolled manifold (UCM) hypothesis. Data collected in two earlier studies of patients with Parkinson's disease (PD) were re-analyzed. One study compared synergies in the space of muscle modes (muscle groups with parallel scaling of activation) during tasks performed by early-stage PD patients and controls. The other study explored the effects of dopaminergic medication on multi-muscle-mode synergies. Inter-cycle variance and absolute magnitude of the center of pressure displacement across consecutive cycles were quantified during voluntary whole-body sway within the UCM and orthogonal to the UCM space. The patients showed smaller indices of variance within the UCM and motor equivalence compared to controls. The indices were also smaller in the off-drug compared to on-drug condition. There were strong across-subject correlations between the inter-cycle variance within/orthogonal to the UCM and motor equivalent/non-motor equivalent displacements. This study has shown that, at least for cyclical tasks, analysis of variance and analysis of motor equivalence lead to metrics of stability that correlate with each other and show similar effects of disease and medication. These results show, for the first time, intimate links between indices of variance and motor equivalence. They suggest that analysis of motor equivalence, which requires only a handful of trials, could be used broadly in the field of motor disorders to analyze problems with action stability.

  14. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
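The zero-variance principle described above can be demonstrated on a one-step discrete toy problem: sampling proportional to the true probability times the expected score, with likelihood-ratio weighting, makes every history score exactly the mean. A minimal Python sketch (the probabilities and scores are invented):

```python
import random

p = [0.5, 0.3, 0.2]        # true sampling probabilities (invented)
score = [1.0, 4.0, 10.0]   # expected score ("importance") of each outcome
mean = sum(pi * si for pi, si in zip(p, score))   # the tally we want

# Zero-variance biasing: sample outcome i with q_i proportional to
# p_i * score_i, and weight each tally by the likelihood ratio p_i / q_i.
q = [pi * si / mean for pi, si in zip(p, score)]

random.seed(1)
tallies = [(p[i] / q[i]) * score[i]
           for i in random.choices(range(3), weights=q, k=5)]
# each tally is p_i / q_i * score_i = mean, regardless of which i was drawn
```

Because every weighted tally equals the mean, the sample variance is identically zero, which is what makes a single importance function specific to a single tally.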

  15. Recent evolutions in costing systems: A literature review of Time-Driven Activity-Based Costing

    OpenAIRE

    Siguenza Guzman, Lorena; Van den Abbeele, Alexandra; Vandewalle, Joos; Verhaaren, Henry; Cattrysse, Dirk

    2013-01-01

    This article provides a comprehensive literature review of Time-Driven Activity Based Costing (TDABC), a relatively new tool to improve the cost allocation to products and services. After a brief overview of traditional costing and activity based costing systems (ABC), a detailed description of the TDABC model is given and a comparison made between this methodology and its predecessor ABC. Thirty-six empirical contributions using TDABC over the period 2004-2012 were reviewed. The results and ...

  16. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
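The errors-in-variables bias described above can be seen in a small simulation of the constant coefficient of variation model: plugging the few-replicate sample mean into the naive CV estimate gives a systematically biased answer (all parameter values below are invented for illustration):

```python
import random
import statistics

random.seed(0)
c = 0.2          # true coefficient of variation: sd = c * mean (invented)
naive_cv = []
for _ in range(2000):                       # 2000 simulated "genes"
    mu = random.uniform(5.0, 50.0)          # unknown true mean expression
    xs = [random.gauss(mu, c * mu) for _ in range(3)]   # only 3 replicates
    # naive plug-in: the noisy sample mean stands in for the true mean
    naive_cv.append(statistics.stdev(xs) / statistics.mean(xs))
est = statistics.mean(naive_cv)             # biased away from c = 0.2
```

With only three replicates the estimate settles noticeably below the true value of 0.2, which is the kind of bias the simulation-extrapolation and semiparametric estimators in the paper are designed to remove.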

  17. Time-resolved lateral spin-caloric transport of optically generated spin packets in n-GaAs

    Science.gov (United States)

    Göbbels, Stefan; Güntherodt, Gernot; Beschoten, Bernd

    2018-05-01

    We report on lateral spin-caloric transport (LSCT) of electron spin packets which are optically generated by ps laser pulses in the non-magnetic semiconductor n-GaAs at K. LSCT is driven by a local temperature gradient induced by an additional cw heating laser. The spatio-temporal evolution of the spin packets is probed using time-resolved Faraday rotation. We demonstrate that the local temperature-gradient induced spin diffusion is solely driven by a non-equilibrium hot spin distribution, i.e. without involvement of phonon drag effects. Additional electric field-driven spin drift experiments are used to verify directly the validity of the non-classical Einstein relation for moderately doped semiconductors at low temperatures for near band-gap excitation.

  18. Fluctuations of charge variance and interaction time for dissipative processes in 27 Al + 27 Al collision

    International Nuclear Information System (INIS)

    Berceanu, I.; Andronic, A.; Duma, M.

    1999-01-01

    The systematic studies of dissipative processes in light systems were completed with experiments dedicated to measuring the excitation functions in the 19 F + 27 Al and 27 Al + 27 Al systems, in order to obtain deeper insight into the DNS configuration and its time evolution. The excitation function for the 19 F + 27 Al system evidenced fluctuations larger than the statistical errors. Large Z and angular cross-correlation coefficients supported their non-statistical nature. The energy dependence of second-order observables, namely the second moment of the charge distribution and the product ω·τ (ω - the angular velocity of the DNS and τ - its mean lifetime) extracted from the angular distributions, was studied for the 19 F + 27 Al case. In this contribution we report the preliminary results of similar studies performed for the 27 Al + 27 Al case. The variance of the charge distribution was obtained by fitting the experimental charge distribution with a Gaussian centered on Z = 13, and the product ω·τ was extracted from the angular distributions. The results for the 19 F + 27 Al case are confirmed by a preliminary analysis of the data for the 27 Al + 27 Al system. The charge variance and ω·τ excitation functions for the Z = 11 fragment are represented together with the excitation function of the cross section. One has to mention that the data for the 27 Al + 27 Al system were not corrected for particle evaporation processes. The effect of the evaporation corrections on the excitation function was studied using a Monte Carlo simulation. The α particle evaporation was also included, and the particle separation energies were evaluated using the experimental masses of the fragments. The excitation functions for the 27 Al + 27 Al system for primary and secondary fragments were simulated. No structure due to particle evaporation was observed. The correlated fluctuations in the σ Z and ω·τ excitation functions support a stochastic exchange of nucleons as the main mechanism for

  19. The effect of repetitive baseball pitching on medial elbow joint space gapping associated with 2 elbow valgus stressors in high school baseball players.

    Science.gov (United States)

    Hattori, Hiroshi; Akasaka, Kiyokazu; Otsudo, Takahiro; Hall, Toby; Amemiya, Katsuya; Mori, Yoshihisa

    2018-04-01

    To prevent elbow injury in baseball players, various methods have been used to measure medial elbow joint stability under valgus stress. However, no studies have investigated higher levels of elbow valgus stress. This study investigated medial elbow joint space gapping, measured ultrasonically, resulting from a 30 N valgus stress vs. gravitational valgus stress after a repetitive throwing task. The study included 25 high school baseball players. Each subject pitched 100 times. The ulnohumeral joint space was measured ultrasonographically, before pitching and after each successive block of 20 pitches, with gravity stress or 30 N valgus stress. Two-way repeated measures analysis of variance and Pearson correlation coefficient analysis were used. The 30 N valgus stress produced significantly greater ulnohumeral joint space gapping than gravity stress before pitching and at each successive 20-pitch block. Joint space gapping increased significantly from baseline after 60 pitches, and gapping under the two stress conditions was strongly correlated (r = 0.727-0.859). Joint space gapping was present before pitching; however, 30 N valgus stress appears to induce greater mechanical stress, which may be preferable when assessing joint instability but also has the potential to be more aggressive. The present results may indicate that constraining factors to medial elbow joint valgus stress matched typical viscoelastic properties of cyclic creep. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  20. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  1. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  2. Time-driven activity-based costing to estimate cost of care at multidisciplinary aerodigestive centers.

    Science.gov (United States)

    Garcia, Jordan A; Mistry, Bipin; Hardy, Stephen; Fracchia, Mary Shannon; Hersh, Cheryl; Wentland, Carissa; Vadakekalam, Joseph; Kaplan, Robert; Hartnick, Christopher J

    2017-09-01

    Providing high-value healthcare to patients is increasingly becoming an objective for providers, including those at multidisciplinary aerodigestive centers. Measuring value has two components: 1) identify relevant health outcomes and 2) determine relevant treatment costs. Via their inherent structure, multidisciplinary care units consolidate care for complex patients. However, their potential impact on decreasing healthcare costs is less clear. The goal of this study was to estimate the potential cost savings of treating patients with laryngeal clefts at multidisciplinary aerodigestive centers. Retrospective chart review. Time-driven activity-based costing was used to estimate the cost of care for patients with laryngeal cleft seen between 2008 and 2013 at the Massachusetts Eye and Ear Infirmary Pediatric Aerodigestive Center. Retrospective chart review was performed to identify clinic utilization by patients as well as patient diet outcomes after treatment. Patients were stratified into neurologically complex and neurologically noncomplex groups. The cost of care for patients requiring surgical intervention was five and three times the cost of care for patients not requiring surgery, for neurologically noncomplex and complex patients, respectively. Following treatment, 50% and 55% of complex and noncomplex patients returned to normal diet, whereas 83% and 87% of patients experienced improved diets, respectively. Additionally, multidisciplinary team-based care for children with laryngeal clefts potentially achieves 20% to 40% cost savings. These findings demonstrate how time-driven activity-based costing can be used to estimate and compare patient costs in multidisciplinary aerodigestive centers. 2c. Laryngoscope, 127:2152-2158, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Vertical hydraulic generators experience with dynamic air gap monitoring

    International Nuclear Information System (INIS)

    Pollock, G.B.; Lyles, J.F.

    1992-01-01

    Until recently, dynamic monitoring of the rotor-to-stator air gap of hydraulic generators was not practical. Cost-effective and reliable dynamic air gap monitoring equipment has been developed in recent years. Dynamic air gap monitoring was originally justified by the owner's desire to minimize the effects of catastrophic air gap failure. However, monitoring air gaps over time has been shown to be beneficial by assisting in the assessment of hydraulic generator condition. The air gap monitor provides useful information on rotor and stator condition and generator vibration. The data generated by air gap monitors will assist managers in deciding the timing and extent of required maintenance for a particular generating unit.

  4. Instantaneous band gap collapse in VO{sub 2} caused by photocarrier doping

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Marc; Wegkamp, Daniel; Wolf, Martin; Staehler, Julia [Fritz-Haber-Institut der MPG, Berlin (Germany); Xian, Lede; Cudazzo, Pierluigi [Univ. del Pais Vasco, San Sebastian (Spain); European Theoretical Spectroscopy Facility (ETSF) (France); Gatti, Matteo [European Theoretical Spectroscopy Facility (ETSF) (France); Ecole Polytechnique, Palaiseau (France); McGahan, Christina L.; Marvel, Robert E.; Haglund, Richard F. [Vanderbilt Univ., Nashville, Tennessee (United States); Rubio, Angel [Fritz-Haber-Institut der MPG, Berlin (Germany); Univ. del Pais Vasco, San Sebastian (Spain); European Theoretical Spectroscopy Facility (ETSF) (France); MPI for the Structure and Dynamics of Matter, Hamburg (Germany)

    2015-07-01

    We have investigated the controversially discussed mechanism of the insulator-to-metal transition (IMT) in VO{sub 2} by means of femtosecond time-resolved photoelectron spectroscopy (trPES). Our data show that photoexcitation transforms insulating monoclinic VO{sub 2} quasi-instantaneously into a metal without an 80 fs structural bottleneck for the photoinduced electronic phase transition. First-principles many-body perturbation theory calculations reveal an ultrahigh sensitivity of the VO{sub 2} band gap to variations of the dynamically screened Coulomb interaction thus supporting the fully electronically driven isostructural IMT indicated by our trPES results. We conclude that the ultrafast band structure renormalization is caused by photoexcitation of carriers from localized V 3d valence states, strongly changing the screening before significant hot-carrier relaxation or ionic motion has occurred.

  5. Spatial distribution and size of small canopy gaps created by Japanese black bears: estimating gap size using dropped branch measurements.

    Science.gov (United States)

    Takahashi, Kazuaki; Takahashi, Kaori

    2013-06-10

    Japanese black bears, a large-bodied omnivore, frequently create small gaps in the tree crown during fruit foraging. However, there are no previous reports of black bear-created canopy gaps. To characterize physical canopy disturbance by black bears, we examined a number of parameters, including the species of trees in which canopy gaps were created, gap size, the horizontal and vertical distribution of gaps, and the size of branches broken to create gaps. The size of black bear-created canopy gaps was estimated using data from branches that had been broken and dropped on the ground. The disturbance regime was characterized by a highly biased distribution of small canopy gaps on ridges, a large total overall gap area, a wide range in gap height relative to canopy height, and diversity in gap size. Surprisingly, the annual rate of bear-created canopy gap formation reached 141.3 m2 ha-1 yr-1 on ridges, which were hot spots in terms of black bear activity. This rate was approximately 6.6 times that of tree-fall gap formation on ridges at this study site. Furthermore, this rate was approximately two to three times that of common tree-fall gap formation in Japanese forests, as reported in other studies. Our findings suggest that the ecological interaction between black bears and fruit-bearing trees may create a unique light regime, distinct from that created by tree falls, which increases the availability of light resources to plants below the canopy.

  6. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  7. Minding the Achievement Gap One Classroom at a Time

    Science.gov (United States)

    Pollock, Jane E.; Ford, Sharon; Black, Margaret M.

    2012-01-01

    Do teachers have the power to close achievement gaps? Here's a book that boldly claims they do and lays out a blueprint for how to do something now to help students who are falling short of standards. Regardless of the student population you need to address--English language learners, special education, or just the unmotivated and hard to…

  8. Enhancement of VUV emission from a coaxial xenon excimer ultraviolet lamp driven by distorted bipolar square voltages

    Energy Technology Data Exchange (ETDEWEB)

    Jou, S.Y.; Hung, C.T.; Chiu, Y.M.; Wu, J.S. [Department of Mechanical Engineering, National Chiao Tung University, Hsinchu (China); Wei, B.Y. [High-Efficiency Gas Discharge Lamps Group, Material and Chemical Research Laboratories, Hsinchu (China)

    2011-12-15

    Enhancement of vacuum UV emission (172 nm VUV) from a coaxial xenon excimer UV lamp (EUV) driven by distorted 50 kHz bipolar square voltages, as compared to that by sinusoidal voltages, is investigated numerically in this paper. A self-consistent radial one-dimensional fluid model, taking into consideration non-local electron energy balance, is employed to simulate the discharge physics and chemistry. The discharge cycle is divided into three periods: the pre-discharge period, the discharge period (in which the 172 nm VUV emission is most intense) and the post-discharge period. The results show that the efficiency of VUV emission using the distorted bipolar square voltages is much greater than when using sinusoidal voltages; this is attributed to two major mechanisms. The first is the much larger rate of change of the voltage in bipolar square voltages, in which only the electrons can efficiently absorb the power in a very short period of time. Energetic electrons then generate a higher concentration of metastable (and also excited dimer) xenon that is distributed more uniformly across the gap, for a longer period of time during the discharge process. The second is the comparably smaller amount of "wasted" power deposition by Xe{sup +}{sub 2} in the post-discharge period, as driven by distorted bipolar square voltages, because of the nearly vanishing gap voltage caused by the shielding effect resulting from accumulated charges on both dielectric surfaces (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  9. Electrophysiological and psychophysical asymmetries in sensitivity to interaural correlation gaps and implications for binaural integration time.

    Science.gov (United States)

    Lüddemann, Helge; Kollmeier, Birger; Riedel, Helmut

    2016-02-01

    Brief deviations of interaural correlation (IAC) can provide valuable cues for detection, segregation and localization of acoustic signals. This study investigated the processing of such "binaural gaps" in continuously running noise (100-2000 Hz), in comparison to silent "monaural gaps", by measuring late auditory evoked potentials (LAEPs) and perceptual thresholds with novel, iteratively optimized stimuli. Mean perceptual binaural gap duration thresholds exhibited a major asymmetry: they were substantially shorter for uncorrelated gaps in correlated and anticorrelated reference noise (1.75 ms and 4.1 ms) than for correlated and anticorrelated gaps in uncorrelated reference noise (26.5 ms and 39.0 ms). The thresholds also showed a minor asymmetry: they were shorter in the positive than in the negative IAC range. The mean behavioral threshold for monaural gaps was 5.5 ms. For all five gap types, the amplitude of LAEP components N1 and P2 increased linearly with the logarithm of gap duration. While perceptual and electrophysiological thresholds matched for monaural gaps, LAEP thresholds were about twice as long as perceptual thresholds for uncorrelated gaps, but half as long for correlated and anticorrelated gaps. Nevertheless, LAEP thresholds showed the same asymmetries as perceptual thresholds. For gap durations below 30 ms, LAEPs were dominated by the processing of the leading edge of a gap. For longer gap durations, in contrast, both the leading and the lagging edge of a gap contributed to the evoked response. Formulae for the equivalent rectangular duration (ERD) of the binaural system's temporal window were derived for three common window shapes. The psychophysical ERD was 68 ms for diotic and about 40 ms for anti- and uncorrelated noise. After a nonlinear Z-transform of the stimulus IAC prior to temporal integration, ERDs were about 10 ms for reference correlations of ±1 and 80 ms for uncorrelated reference. Hence, a physiologically motivated

  10. Knowledge Gaps

    DEFF Research Database (Denmark)

    Lyles, Marjorie; Pedersen, Torben; Petersen, Bent

    2003-01-01

    The study explores what factors influence the reduction of managers' perceived knowledge gaps in the context of the environments of foreign markets. Potential determinants are derived from traditional internationalization theory as well as organizational learning theory, including the concept of absorptive capacity. Building on these literature streams a conceptual model is developed and tested on a set of primary data of Danish firms and their foreign market operations. The empirical study suggests that the factors that pertain to the absorptive capacity concept - capabilities of recognizing, assimilating, and utilizing knowledge - are crucial determinants of knowledge gap elimination. In contrast, the two factors deemed essential in traditional internationalization process theory - elapsed time of operations and experiential learning - are found to have no or limited effect. Key words...

  11. The performance of GPS time and frequency transfer: comment on ‘A detailed comparison of two continuous GPS carrier-phase time transfer techniques’

    Science.gov (United States)

    Petit, Gérard; Defraigne, Pascale

    2016-06-01

    The paper ‘A detailed comparison of two continuous GPS carrier-phase time transfer techniques’ (Yao et al 2015 Metrologia 52 666) presents the revised RINEX-shift (RRS) method, a technique using ‘classical precise point positioning (PPP)’ solutions on sliding batches and aiming at providing continuous time links. The authors claim the superiority of the RRS technique with respect to ‘classical PPP’ in terms of frequency stability and solving for discontinuities due to data gaps. It is shown here that these conclusions do not rely on physical principles, and are erroneous as they are driven by misinterpreted or corrupted PPP solutions. Using state-of-the-art PPP computation on the same data sets used in Yao et al’s paper (2015 Metrologia 52 666), we show that the stability of RRS is at best similar to that of ‘classical PPP’ (within statistical uncertainties). Furthermore, the RRS method of removing discontinuities in case of data gaps by interpolating the phase data should not be applied systematically as it can cause erroneous clock solutions when the data gaps are associated with a true phase discontinuity.

  12. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    International Nuclear Information System (INIS)

    Song Ningfang; Yuan Rui; Jin Jing

    2011-01-01

    Satellite motion included in gyro output disturbs the estimation of Allan variance coefficients of a fiber optic gyro on board. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites increasingly require autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet satellite autonomy requirements, we present a new autonomous method for estimating Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter for quantities previously obtained from offline techniques such as the Allan variance method. Simulations show the method correctly estimates the Allan variance coefficients, R = 2.7965x10^-4 °/h^2, K = 1.1714x10^-3 °/h^1.5, B = 1.3185x10^-3 °/h, N = 5.982x10^-4 °/h^0.5 and Q = 5.197x10^-7 °, in real time, and tracks the degradation of gyro performance from initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085x10^-5 °, to final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113x10^-4 °, due to gamma radiation in space. The technique proposed here effectively isolates satellite motion, and requires no data storage or any support from the ground.
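For reference, the offline Allan variance computation that the proposed online filter replaces can be sketched as follows (non-overlapping estimator; the white-noise test signal below is synthetic, not gyro data):

```python
import random

def allan_variance(rates, m):
    """Non-overlapping Allan variance of a rate series, with cluster size m
    (averaging time tau = m * tau0 for sample period tau0)."""
    n = len(rates) // m
    bins = [sum(rates[i * m:(i + 1) * m]) / m for i in range(n)]
    return sum((bins[k + 1] - bins[k]) ** 2
               for k in range(n - 1)) / (2 * (n - 1))

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(100000)]  # pure white rate noise
av1 = allan_variance(white, 1)      # close to sigma^2 = 1.0
av100 = allan_variance(white, 100)  # close to sigma^2 / m = 0.01, the -1/2
                                    # log-log slope signature of angular random walk
```

The need to hold the whole `rates` array and rescan it for every cluster size `m` is exactly the storage and computation burden the abstract's recursive state-space formulation avoids.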

  13. Nonequilibrium lattice-driven dynamics of stripes in nickelates using time-resolved x-ray scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lee, W.S.; Kung, Y.F.; Moritz, B.; Coslovich, G.; Kaindl, R.A.; Chuang, Y.D.; Moore, R.G.; Lu, D.H.; Kirchmann, P.S.; Robinson, J.S.; Minitti, M.P.; Dakovski, G.; Schlotter, W.F.; Turner, J.J.; Gerber, S.; Sasagawa, T.; Hussain, Z.; Shen, Z.X.; Devereaux, T.P.

    2017-03-13

    We investigate the lattice coupling to the spin and charge orders in the striped nickelate, La1.75Sr0.25NiO4, using time-resolved resonant x-ray scattering. Lattice-driven dynamics of both spin and charge orders are observed when the pump photon energy is tuned to that of an Eu bond-stretching phonon. We present a likely scenario for the behavior of the spin and charge order parameters and its implications using a Ginzburg-Landau theory.

  14. Detecting nonlinearity in time series driven by non-Gaussian noise: the case of river flows

    Directory of Open Access Journals (Sweden)

    F. Laio

    2004-01-01

    Several methods exist for the detection of nonlinearity in univariate time series. In the present work we consider river flow time series to infer the dynamical characteristics of the rainfall-runoff transformation. It is shown that the non-Gaussian nature of the driving force (rainfall) can distort the results of such methods, in particular when surrogate data techniques are used. Deterministic versus stochastic (DVS) plots, conditionally applied to the decay phases of the time series, are instead proved to be a suitable tool to detect nonlinearity in processes driven by non-Gaussian (Poissonian) noise. An application to daily discharges from three Italian rivers provides important clues to the presence of nonlinearity in the rainfall-runoff transformation.

  15. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
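The discrete-approximation idea can be sketched as follows, under our own illustrative choices (a standard normal density on a finite grid): the integrals for the mean and variance become weighted sums that need no calculus.

```python
import numpy as np

# Discretize a continuous density on a fine grid and replace integrals with
# weighted sums. Grid extent and resolution are illustrative.
x = np.linspace(-8.0, 8.0, 20_001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
w = pdf / pdf.sum()                           # normalized discrete weights

mean = np.sum(w * x)                          # discrete analog of E[X]
var = np.sum(w * (x - mean) ** 2)             # discrete analog of Var[X]
# For the standard normal, mean ≈ 0 and var ≈ 1.
```

The approximation error shrinks with the grid spacing, so students can check convergence empirically by refining the grid instead of evaluating the integrals.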

  16. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Louis A; Mason, John J.

    2018-04-01

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint with a specified noise level, just as the TOA measurements are handled with their respective noise levels. With four or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the four unknowns: the three-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm; it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.
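The TAQMV algorithm itself is direct; as a hedged illustration of the iterative baseline it is compared against, a generic Gauss-Newton least-squares TOA solver for position and emission time might look like the sketch below. The 2-D station layout, emitter location and unit weights are invented, and the altitude soft constraint is omitted:

```python
import numpy as np

# Hedged baseline, not the TAQMV algorithm: iterative (Gauss-Newton)
# least-squares TOA solver for 2-D position and emission time.
c = 299_792_458.0                                   # speed of light, m/s
stations = np.array([[0.0, 0.0], [10_000.0, 0.0],
                     [0.0, 10_000.0], [10_000.0, 10_000.0]])
truth = np.array([3_000.0, 4_000.0])                # true emitter position (m)
t0 = 1.0e-3                                         # true emission time (s)
toas = t0 + np.linalg.norm(stations - truth, axis=1) / c  # noise-free TOAs

x = np.array([5_000.0, 5_000.0, 0.0])               # guess: position and time
for _ in range(20):
    diff = x[:2] - stations                         # station -> guess vectors
    d = np.linalg.norm(diff, axis=1)
    resid = toas - (x[2] + d / c)                   # measured minus predicted
    J = np.column_stack([diff / (c * d[:, None]),   # d(pred)/d(position)
                         np.ones(len(stations))])   # d(pred)/d(time)
    step, *_ = np.linalg.lstsq(J, resid, rcond=None)
    x += step                                       # Gauss-Newton update
```

With noise-free, consistent measurements this converges to the true position and time; the abstract's point is that such iteration can fail to find all low-residual solutions in lightly overdetermined geometries, which the algebraic quartic approach avoids.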

  17. On Business-Driven IT Security Management and Mismatches between Security Requirements in Firms, Industry Standards and Research Work

    Science.gov (United States)

    Frühwirth, Christian

    Industry managers have long recognized the vital importance of information security for their businesses, but at the same time they have perceived security as a technology-driven rather than a business-driven field. Today, this notion is changing and security management is shifting from technology- to business-oriented approaches. Whereas there is evidence of this shift in the literature, this paper argues that security standards and academic work have not yet taken it fully into account. We examine whether this disconnect has led to a misalignment of IT security requirements in businesses versus industry standards and academic research. We conducted 13 interviews with practitioners from 9 different firms to investigate this question. The results present evidence for a significant gap between security requirements in industry standards and actually reported security vulnerabilities. We further find mismatches between the prioritization of security factors in businesses, standards and real-world threats. We conclude that security in companies serves the business need of protecting information availability to keep the business running at all times.

  18. Model-driven design using IEC 61499 a synchronous approach for embedded and automation systems

    CERN Document Server

    Yoong, Li Hsien; Bhatti, Zeeshan E; Kuo, Matthew M Y

    2015-01-01

    This book describes a novel approach for the design of embedded systems and industrial automation systems, using a unified model-driven approach that is applicable in both domains.  The authors illustrate their methodology, using the IEC 61499 standard as the main vehicle for specification, verification, static timing analysis and automated code synthesis.  The well-known synchronous approach is used as the main vehicle for defining an unambiguous semantics that ensures determinism and deadlock freedom. The proposed approach also ensures very efficient implementations either on small-scale embedded devices or on industry-scale programmable automation controllers (PACs). It can be used for both centralized and distributed implementations. Significantly, the proposed approach can be used without the need for any run-time support. This approach, for the first time, blurs the gap between embedded systems and automation systems and can be applied in wide-ranging applications in automotive, robotics, and industri...

  19. Stand dynamics following gap-scale exogenous disturbance in a single cohort mixed species stand in Morgan County, Tennessee

    Science.gov (United States)

    Brian S. Hughett; Wayne K. Clatterbuck

    2014-01-01

    Differences in composition, structure, and growth under canopy gaps created by the mortality of a single stem were analyzed using analysis of variance under two scenarios, with stem removed or with stem left as a standing snag. There were no significant differences in composition and structure of large diameter residual stems within upper canopy strata. Some...

  20. Comparing Novel Multi-Gap Resistive Plate Chamber Models

    Science.gov (United States)

    Stien, Haley; EIC PID Consortium Collaboration

    2016-09-01

    Investigating nuclear structure has led to the fundamental theory of Quantum Chromodynamics. An Electron Ion Collider (EIC) is a proposed accelerator that would further these investigations. In order to prepare for the EIC, there is an active detector research and development effort. One specific goal is to achieve better particle identification via improved Time of Flight (TOF) detectors. A promising option is the Multi-Gap Resistive Plate Chamber (mRPC). These detectors are similar to the more traditional RPCs, but their active gas gaps have dividers to form several thinner gas gaps. These very thin and accurately defined gas gaps improve the timing resolution of the chamber, so the goal is to build an mRPC with the thinnest gaps to achieve the best possible timing resolution. Two different construction techniques have been employed to make two mRPCs. The first technique is to physically separate the gas gaps with sheets of glass that are 0.2 mm thick. The second technique is to 3D print the layered gas gaps. A comparison of these mRPCs and their performances will be discussed and the latest data presented. This research was supported by US DOE MENP Grant DE-FG02-03ER41243.

  1. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…
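The note's closing sentence is truncated above, so the following is our own hedged illustration of one standard differentiation-based route: the mean and variance follow from derivatives of the moment generating function, evaluated here numerically for the exponential distribution so that no integration by parts is needed.

```python
# Mean and variance from derivatives of the moment generating function M(t).
# Example (illustrative): exponential distribution with rate lam,
# M(t) = lam / (lam - t) for t < lam.
lam = 2.0
M = lambda t: lam / (lam - t)

h = 1e-4                                    # step for central differences
m1 = (M(h) - M(-h)) / (2 * h)               # M'(0)  = E[X]   = 1/lam
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2     # M''(0) = E[X^2] = 2/lam**2
mean = m1                                   # -> 1/lam = 0.5
var = m2 - m1**2                            # -> 1/lam**2 = 0.25
```

The same two-derivative recipe works for any distribution whose MGF is known in closed form, which is the sense in which the mean and variance are obtained "without integration".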

  2. Test of freonless operation of resistive plate chambers with glass electrodes--1 mm gas gap vs 2 mm gas gap

    CERN Document Server

    Sakaue, H; Takahashi, T; Teramoto, Y

    2002-01-01

    Non-freon gas mixtures (Ar/iso-C4H10) were tested as the chamber gas for 1 and 2 mm gas gap Resistive Plate Chambers (RPCs) with float glass as the resistive electrodes, operated in the streamer mode. With the narrower (1 mm) gas gap, the streamer charge is reduced (to approximately 1/3), which reduces the dead time (and dead area) associated with each streamer, improving the detection efficiency. The best performance was obtained for two cases: Ar/iso-C4H10 = 50/50 and 60/40. For the 50/50 mixture, a detection efficiency of better than 98% was obtained for the 1 mm gap RPC, while the efficiency was 95% for the 2 mm gap RPC, each operated as a double-gap RPC. The measured time resolution (rms) was 1.45±0.05 (2.52±0.09) ns for the 1 (2) mm gap RPC for the 50/50 mixture.

  3. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  4. Mean-Variance Portfolio Selection with a Fixed Flow of Investment in ...

    African Journals Online (AJOL)

    We consider a mean-variance portfolio selection problem for a fixed flow of investment in a continuous time framework. We consider a market structure that is characterized by a cash account, an indexed bond and a stock. We obtain the expected optimal terminal wealth for the investor. We also obtain a closed-form ...

  5. Note on Hartman effect in gapped graphene

    International Nuclear Information System (INIS)

    Jahani, D.

    2013-01-01

    In this manuscript the effect of an opening gap on the dwell time corresponding to electronic tunneling in graphene is explored. It is shown that the tunneling time of quasiparticles passing through junctions of gapped graphene, as well as pure flakes, is not independent of the barrier thickness, and therefore the Hartman effect is not observed for tunneling of relativistic electrons with a finite effective mass in graphene. The numerical results also reveal that the traversal time in gapped graphene is equal to the traversal time in the absence of the barrier for a broad range of incident energies. It is also found that the origin of the problem of the Hartman effect could be explained in terms of an average-constant behavior of the probability density of the electronic wave under the barrier.

  6. Magnetic force driven magnetoelectric effect in bi-cantilever composites

    Science.gov (United States)

    Zhang, Ru; Wu, Gaojian; Zhang, Ning

    2017-12-01

    The magnetic force driven magnetoelectric (ME) effect in bi-cantilever Mn-Zn-ferrite/PZT composites is presented. Compared with a single cantilever, the ME voltage coefficient in the bi-cantilever composite is slightly lower and the resonance frequency is higher, but the bi-cantilever structure is advantageous for integration. When the magnetic gap is 3 mm, the ME voltage coefficient reaches 6.2 V cm^-1 Oe^-1 at resonance under the optimum bias field Hm = 1030 Oe; when the magnetic gap is 1.5 mm, it reaches 4.4 V cm^-1 Oe^-1 under a much lower bias field H = 340 Oe. The stable ME effect in bi-cantilever composites has important potential applications in the design of new types of ME devices.

  7. Mixture regression models for the gap time distributions and illness-death processes.

    Science.gov (United States)

    Huang, Chia-Hui

    2018-01-27

    The aim of this study is to provide an analysis of gap event times under the illness-death model, where some subjects experience "illness" before "death" and others experience only "death." Which event is more likely to occur first and how the duration of the "illness" influences the "death" event are of interest. Because the occurrence of the second event is subject to dependent censoring, it can lead to bias in the estimation of model parameters. In this work, we generalize the semiparametric mixture models for competing risks data to accommodate the subsequent event and use a copula function to model the dependent structure between the successive events. Under the proposed method, the survival function of the censoring time does not need to be estimated when developing the inference procedure. We incorporate the cause-specific hazard functions with the counting process approach and derive a consistent estimation using the nonparametric maximum likelihood method. Simulations are conducted to demonstrate the performance of the proposed analysis, and its application in a clinical study on chronic myeloid leukemia is reported to illustrate its utility.

  8. Interprofessional, practice-driven research: reflections of one "community of inquiry" based in acute stroke.

    Science.gov (United States)

    Hubbard, I J; Vyslysel, G; Parsons, M W

    2009-01-01

    Research is often scholarship driven and the findings are then channelled into the practice community on the assumption that it is utilising an evidence-based approach in its service delivery. Because of persisting difficulties in bridging the practice-evidence gap in health care, there has been a call for more active links between researchers and practitioners. The authors were part of an interprofessional research initiative which originated from within an acute stroke clinical community. This research initiative aimed to encourage active participation of health professionals employed in the clinical setting and active collaboration across departments and institutions. On reflection, it appeared that in setting up an interprofessional, practice-driven research collaborative, achievements included the instigation of a community of inquiry and the affording of opportunities for allied health professionals to be actively involved in research projects directly related to their clinical setting. Strategies were put in place to overcome the challenges faced which included managing a demanding and frequently changing workplace, and overcoming differences in professional knowledge, skills and expertise. From our experience, we found that interprofessional, practice-driven research can encourage allied health professionals to bridge the practice-evidence gap, and is a worthwhile experience which we would encourage others to consider.

  9. Ion temperature measurement of indirectly-driven implosions using a geometry-compensated neutron time-of-flight detector

    International Nuclear Information System (INIS)

    Murphy, T.J.; Lerche, R.A.; Bennett, C.; Howe, G.

    1994-05-01

    A geometry-compensated neutron time-of-flight detector has been constructed and used on Nova to measure ion temperatures from indirectly-driven implosions with yields between 2.5 and 5×10^9 DD neutrons. The detector, which has an estimated response time of 250 ps, was located 150 cm from the targets. Due to the long decay time of the scintillator, the time-of-flight signal must be unfolded from the measured detector signal. Several methods for determining the width of the neutron energy spectrum from the data have been developed and give similar results. Scattered x rays continue to be a problem for low-yield shots, but should be brought under control with adequate shielding

  10. Ion-temperature measurement of indirectly driven implosions using a geometry-compensated neutron time-of-flight detector

    International Nuclear Information System (INIS)

    Murphy, T.J.; Lerche, R.A.; Bennett, C.; Howe, G.

    1995-01-01

    A geometry-compensated neutron time-of-flight detector has been constructed and used on Nova to measure ion temperatures from indirectly driven implosions with yields between 2.5 and 5×10^9 DD neutrons. The detector, which has an estimated response time of 250 ps, was located 150 cm from the targets. Due to the long decay time of the scintillator, the time-of-flight signal must be unfolded from the measured detector signal. Several methods for determining the width of the neutron energy spectrum from the data have been developed and give similar results. Scattered x rays continue to be a problem for low-yield shots, but should be brought under control with adequate shielding

  11. Subjective neighborhood assessment and physical inactivity: An examination of neighborhood-level variance.

    Science.gov (United States)

    Prochaska, John D; Buschmann, Robert N; Jupiter, Daniel; Mutambudzi, Miriam; Peek, M Kristen

    2018-06-01

    Research suggests a linkage between perceptions of neighborhood quality and the likelihood of engaging in leisure-time physical activity. Often in these studies, intra-neighborhood variance is viewed as something to be controlled for statistically. However, we hypothesized that intra-neighborhood variance in perceptions of neighborhood quality may be contextually relevant. We examined the relationship between intra-neighborhood variance of subjective neighborhood quality and neighborhood-level reported physical inactivity across 48 neighborhoods within a medium-sized city, Texas City, Texas using survey data from 2706 residents collected between 2004 and 2006. Neighborhoods where the aggregated perception of neighborhood quality was poor also had a larger proportion of residents reporting being physically inactive. However, higher degrees of disagreement among residents within neighborhoods about their neighborhood quality was significantly associated with a lower proportion of residents reporting being physically inactive (p=0.001). Our results suggest that intra-neighborhood variability may be contextually relevant in studies seeking to better understand the relationship between neighborhood quality and behaviors sensitive to neighborhood environments, like physical activity. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  13. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  14. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  15. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  16. Lens intracellular hydrostatic pressure is generated by the circulation of sodium and modulated by gap junction coupling

    Science.gov (United States)

    Gao, Junyuan; Sun, Xiurong; Moore, Leon C.; White, Thomas W.; Brink, Peter R.

    2011-01-01

    We recently modeled fluid flow through gap junction channels coupling the pigmented and nonpigmented layers of the ciliary body. The model suggested the channels could transport the secretion of aqueous humor, but flow would be driven by hydrostatic pressure rather than osmosis. The pressure required to drive fluid through a single layer of gap junctions might be just a few mmHg and difficult to measure. In the lens, however, there is a circulation of Na+ that may be coupled to intracellular fluid flow. Based on this hypothesis, the fluid would cross hundreds of layers of gap junctions, and this might require a large hydrostatic gradient. Therefore, we measured hydrostatic pressure as a function of distance from the center of the lens using an intracellular microelectrode-based pressure-sensing system. In wild-type mouse lenses, intracellular pressure varied from ∼330 mmHg at the center to zero at the surface. We have several knockout/knock-in mouse models with differing levels of expression of gap junction channels coupling lens fiber cells. Intracellular hydrostatic pressure in lenses from these mouse models varied inversely with the number of channels. When the lens’ circulation of Na+ was either blocked or reduced, intracellular hydrostatic pressure in central fiber cells was either eliminated or reduced proportionally. These data are consistent with our hypotheses: fluid circulates through the lens; the intracellular leg of fluid circulation is through gap junction channels and is driven by hydrostatic pressure; and the fluid flow is generated by membrane transport of sodium. PMID:21624945

  17. The early career gender wage gap

    OpenAIRE

    Sami Napari

    2006-01-01

    In Finland the gender wage gap increases significantly during the first 10 years after labor market entry, accounting for most of the life-time increase in the gender wage gap. This paper focuses on early-career gender wage differences among university graduates and considers several explanations for the gender wage gap based on human capital theory, job mobility and labor market segregation. Gender differences in the accumulation of experience and in the type of education explain about 16...

  18. Time-driven Activity-based Cost of Fast-Track Total Hip and Knee Arthroplasty

    DEFF Research Database (Denmark)

    Andreasen, Signe E; Holm, Henriette B; Jørgensen, Mira

    2017-01-01

    this between 2 departments with different logistical set-ups. METHODS: Prospective data collection was analyzed using the time-driven activity-based costing method (TDABC) on time consumed by different staff members involved in patient treatment in the perioperative period of fast-track THA and TKA in 2 Danish...... orthopedic departments with standardized fast-track settings, but different logistical set-ups. RESULTS: Length of stay was median 2 days in both departments. TDABC revealed minor differences in the perioperative settings between departments, but the total cost excluding the prosthesis was similar at USD......-track methodology, the result could be a more cost-effective pathway altogether. As THA and TKA are potentially costly procedures and the numbers are increasing in an economical limited environment, the aim of this study is to present baseline detailed economical calculations of fast-track THA and TKA and compare...

  19. Use of narrow gap welding in nuclear power engineering and development of welding equipment at Vitkovice Iron Works (VZSKG), Ostrava

    International Nuclear Information System (INIS)

    Lehar, F.; Sevcik, P.

    1988-01-01

    Problems related to automatic submerged arc welding into narrow gaps are briefly discussed. The method was tested for the first time at the Vitkovice Iron Works (VZSKG) for peripheral welds on pressurizers for WWER-440 reactors. The demands placed on the welding workplace, which must be met in order to use this technology, are summarized. They mainly concern the positioning of the welding nozzle with respect to the weld gap, so as to exclude the influence of the welder as far as possible. An automatic device for mounting the welding nozzle on the automatic welding machine manufactured by ESAB was designed and built at the VZSKG plant; it operates on the principle of flexible compression of the nozzle against the wall of the weld gap. In its bottom part the welding nozzle carries a pulley which rolls during welding, thereby maintaining a constant distance between the welding wire and the wall of the weld gap. The diameter of the pulley is determined by the diameter of the welding wire. Provided the clamping part is appropriately adjusted, the equipment may be used with any type of automatic welding machine with motor-driven supports. (Z.M.). 8 figs., 5 tabs., 9 refs

  20. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, the Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs.
For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used for increasing the relative
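The second VRT described above (following only a fraction of the generated optical photons, with reweighting to preserve the signal mean) can be sketched with a toy Bernoulli detection model; the detection probability and tracked fraction below are illustrative, not the paper's detector models:

```python
import numpy as np

# Hedged sketch of fractional photon tracking with statistical reweighting.
rng = np.random.default_rng(42)
n = 200_000
p_detect = 0.3
detected = rng.random(n) < p_detect          # analog transport outcome

analog_mean = detected.mean()                # weight-1 estimate of the signal

f = 0.1                                      # fraction of photons followed
followed = rng.random(n) < f
weights = np.where(followed & detected, 1.0 / f, 0.0)
vrt_mean = weights.mean()                    # unbiased estimate of same mean
```

The reweighted estimate has larger per-history variance, but since only about a tenth of the photons are actually transported, the cost per history drops, which is the trade-off behind the relative-efficiency gains reported in the abstract.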

  1. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
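For the independent-input baseline that this work extends, a first-order variance-based index can be estimated with the standard pick-freeze construction; the linear test model below is our own illustration (its analytic first-order index for the first input is 2²/(2² + 1²) = 0.8):

```python
import numpy as np

# Hedged sketch: pick-freeze Monte Carlo estimate of a first-order
# variance-based sensitivity index S1, independent inputs.
rng = np.random.default_rng(1)
n = 100_000

def model(x1, x2):
    return 2.0 * x1 + x2          # illustrative model

x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
x2_new = rng.standard_normal(n)   # resample X2, freeze X1

y = model(x1, x2)
y_frozen = model(x1, x2_new)      # same X1, fresh X2

# S1 = Cov(Y, Y_frozen) / Var(Y): variance shared by the two runs can only
# have come through the frozen input X1.
s1 = (np.mean(y * y_frozen) - y.mean() * y_frozen.mean()) / y.var()
```

With dependent inputs this simple construction no longer isolates one input's contribution, which is exactly the gap the orthogonalisation-based indices in the abstract are designed to close.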

  2. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems
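The idea of a zero-variance scheme for a single tally can be illustrated on a toy one-dimensional integral: sampling from a density proportional to the integrand makes every score identical, so the sample variance vanishes. This sketch is our own, not the paper's transport formulation:

```python
import numpy as np

# Hedged illustration: estimate I = integral of x over [0, 1] (= 1/2).
rng = np.random.default_rng(7)
n = 10_000

crude = rng.random(n).mean()          # crude MC: score x, nonzero variance

u = 1.0 - rng.random(n)               # uniform in (0, 1], keeps x > 0 below
x = np.sqrt(u)                        # inverse-CDF sampling from p(x) = 2x
scores = x / (2.0 * x)                # score f(x)/p(x) = 1/2 for every sample
zv = scores.mean()                    # zero-variance estimate of I
```

The paper's contribution is the multi-tally analog of this: when two tallies need similar random walks, the biasing can be shared within one run, with particles staying correlated for as long as their tallies remain similar.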

  3. Two-rate periodic protocol with dynamics driven through many cycles

    Science.gov (United States)

    Kar, Satyaki

    2017-02-01

    We study the long-time dynamics in closed quantum systems periodically driven via time-dependent parameters with two frequencies ω1 and ω2=r ω1. Tuning the ratio r can unleash plenty of dynamical phenomena. Our study includes integrable models like the Ising and XY models in d=1 and the Kitaev model in d=1 and 2, and can also be extended to Dirac fermions in graphene. We witness the wave-function overlap or dynamic freezing that occurs within some small/intermediate frequency regimes in the (ω1,r) plane (with r≠0) when the ground state is evolved through a single cycle of driving. However, evolved states soon become steady under long driving, and the freezing scenario gets rarer. We extend the formalism of the adiabatic-impulse approximation to many-cycle driving within our two-rate protocol and show near-exact comparisons at small frequencies. An extension of the rotating wave approximation is also developed to provide an analytical framework for the dynamics at high frequencies. Finally, we compute the entanglement entropy in the stroboscopically evolved states within the gapped phases of the system and observe how it is tuned by the ratio r in our protocol. The minimally entangled states are found to fall within the regime of dynamical freezing. In general, the results indicate that the entanglement entropy in our driven short-ranged integrable systems follows a genuine non-area law of scaling and converges (at an r-dependent pace) towards volume-law scaling as the driving is continued for a long time.

  4. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....
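
    For readers unfamiliar with the instrument, a variance swap pays the difference between annualized realized variance and a fixed variance strike, scaled by a variance notional. A hedged sketch with simulated returns (the 20% volatility, strike, and notional are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# One year of hypothetical daily log-returns at 20% annualized volatility.
n_days, ann_vol = 252, 0.20
r = rng.normal(0.0, ann_vol / np.sqrt(n_days), n_days)

realized_var = 252.0 / n_days * np.sum(r**2)  # annualized realized variance
strike = 0.04                                  # variance strike = (20% vol)^2
notional = 1_000_000                           # variance notional per point
payoff = notional * (realized_var - strike)    # long-swap payoff
```

Because realized variance scatters widely around its expectation even with a year of daily data, swap payoffs are highly volatile, which motivates the signal-extraction approach of the paper.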

  5. Implementing NASA's Capability-Driven Approach: Insight into NASA's Processes for Maturing Exploration Systems

    Science.gov (United States)

    Williams-Byrd, Julie; Arney, Dale; Rodgers, Erica; Antol, Jeff; Simon, Matthew; Hay, Jason; Larman, Kevin

    2015-01-01

    NASA is engaged in transforming human spaceflight. The Agency is shifting from an exploration-based program with human activities focused on low Earth orbit (LEO) and targeted robotic missions in deep space to a more sustainable and integrated pioneering approach. Through pioneering, NASA seeks to address national goals to develop the capacity for people to work, learn, operate, live, and thrive safely beyond the Earth for extended periods of time. However, pioneering space involves more than the daunting technical challenges of transportation, maintaining health, and enabling crew productivity for long durations in remote, hostile, and alien environments. This shift also requires a change in operating processes for NASA. The Agency can no longer afford to engineer systems for specific missions and destinations and must instead focus on common capabilities that enable a range of destinations and missions. NASA has codified a capability-driven approach, which provides flexible guidance for the development and maturation of common capabilities necessary for human pioneers beyond LEO. This approach has been included in NASA policy and is captured in the Agency's strategic goals. It is currently being implemented across NASA's centers and programs. Throughout 2014, NASA engaged in an Agency-wide process to define and refine exploration-related capabilities and associated gaps, focusing only on those that are critical for human exploration beyond LEO. NASA identified 12 common capabilities, ranging from Environmental Control and Life Support Systems to Robotics, and established Agency-wide teams or working groups composed of subject matter experts that are responsible for the maturation of these exploration capabilities. These teams, called System Maturation Teams (SMTs), help formulate, guide, and resolve performance gaps associated with the identified exploration capabilities. The SMTs are defining performance parameters and goals for each of the 12 capabilities.

  6. Extending i-line capabilities through variance characterization and tool enhancement

    Science.gov (United States)

    Miller, Dan; Salinas, Adrian; Peterson, Joel; Vickers, David; Williams, Dan

    2006-03-01

    Continuous economic pressures have moved a large percentage of integrated device manufacturing (IDM) operations either overseas or to foundries over the last 10 years. These pressures have left IDM fabs in the U.S. needing cost-of-ownership (COO) improvements in order to maintain operations domestically. While the assets of many of these factories are at a very favorable point in the depreciation life cycle, the equipment and processes are constrained by the quality of the equipment in its original state and its degradation over its installed life. With the objective of enhancing output and improving process performance, this factory and its primary lithography process tool supplier have been able to extend the usable life of the existing process tools, increase the output of the tool base, and improve the distribution of the CDs on the product produced. Texas Instruments Incorporated led an investigation with the POLARIS® Systems & Services business of FSI International to determine the sources of variance in the i-line processing of a wide array of IC device types. Sources of variance such as PEB temperature, PEB delay time, develop recipe, develop time, and develop programming were investigated. While PEB processes are a primary driver for acid-catalyzed resists, the develop mode is shown in this work to have an overwhelming impact on the wafer-to-wafer and across-wafer CD performance of these i-line processes. These changes have improved the wafer-to-wafer CD distribution by more than 80% and the within-wafer CD distribution by more than 50%, while enabling a greater than 50% increase in lithography cluster throughput. The paper discusses the contribution from each of the sources of variance and their importance in overall system performance.
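
    The "sources of variance" analysis described above can be sketched with a one-way random-effects ANOVA that splits CD variance into wafer-to-wafer and within-wafer components. All numbers below (wafer counts, CD target, component magnitudes) are hypothetical, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical CD data: 20 wafers x 15 sites, wafer-to-wafer sigma = 3 nm,
# within-wafer sigma = 2 nm around a 350 nm target (illustrative only).
wafers, sites = 20, 15
cd = (350.0
      + rng.normal(0.0, 3.0, (wafers, 1))       # wafer-to-wafer component
      + rng.normal(0.0, 2.0, (wafers, sites)))  # within-wafer component

# One-way random-effects ANOVA: mean squares between and within wafers.
ms_between = sites * np.sum((cd.mean(axis=1) - cd.mean()) ** 2) / (wafers - 1)
ms_within = np.sum((cd - cd.mean(axis=1, keepdims=True)) ** 2) / (wafers * (sites - 1))
# Method-of-moments estimate of the wafer-to-wafer variance (true value 9 nm^2).
var_wafer = max((ms_between - ms_within) / sites, 0.0)
```

Attributing each component to a process knob (PEB temperature, develop mode, etc.) is then a matter of nesting such decompositions over the factors varied in the experiment.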

  7. The prototype GAPS (pGAPS) experiment

    International Nuclear Information System (INIS)

    Mognet, S.A.I.; Aramaki, T.; Bando, N.; Boggs, S.E.; Doetinchem, P. von; Fuke, H.; Gahbauer, F.H.; Hailey, C.J.; Koglin, J.E.; Madden, N.; Mori, K.; Okazaki, S.; Ong, R.A.; Perez, K.M.; Tajiri, G.; Yoshida, T.; Zweerink, J.

    2014-01-01

    The General Antiparticle Spectrometer (GAPS) experiment is a novel approach for the detection of cosmic ray antiparticles. A prototype GAPS (pGAPS) experiment was successfully flown on a high-altitude balloon in June of 2012. The goals of the pGAPS experiment were: to test the operation of lithium drifted silicon (Si(Li)) detectors at balloon altitudes, to validate the thermal model and cooling concept needed for engineering of a full-size GAPS instrument, and to characterize cosmic ray and X-ray backgrounds. The instrument was launched from the Japan Aerospace Exploration Agency's (JAXA) Taiki Aerospace Research Field in Hokkaido, Japan. The flight lasted a total of 6 h, with over 3 h at float altitude (∼33 km). Over one million cosmic ray triggers were recorded and all flight goals were met or exceeded.

  8. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we...... have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd....
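
    The record's point that realized variance (RV) can be a noisy estimator of integrated variance even for moderate M is easy to reproduce: for a constant-volatility diffusion the integrated variance is known exactly, and RV computed from coarser sampling scatters more widely around it. A sketch under those assumptions (the volatility and grid sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Driftless diffusion with constant spot volatility: integrated variance
# over the interval is exactly sigma**2 (illustrative assumption).
sigma, n_fine = 0.01, 23_400
r_fine = rng.normal(0.0, sigma / np.sqrt(n_fine), n_fine)

def realized_variance(returns, m):
    """RV from m evenly spaced coarse returns (sum of squared returns)."""
    coarse = returns.reshape(m, -1).sum(axis=1)  # aggregate consecutive returns
    return np.sum(coarse**2)

rv_coarse = realized_variance(r_fine, 13)     # ~30-minute sampling: noisier
rv_dense = realized_variance(r_fine, 2_340)   # ~10-second sampling: tighter
```

Both estimators are unbiased for the integrated variance here, but the sampling error of RV shrinks only at rate 1/sqrt(M), which is the convergence result the paper formalizes for general semimartingales.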

  9. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...... events and only marginally by the premium associated with normal price fluctuations....

  10. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, to its implications for modelling the firing patterns of single neurons.

  11. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, to its implications for modelling the firing patterns of single neurons.

  12. The mean and variance of phylogenetic diversity under rarefaction

    OpenAIRE

    Nipperess, David A.; Matsen, Frederick A.

    2013-01-01

    Phylogenetic diversity (PD) depends on sampling intensity, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for t...
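
    For species richness, the exact rarefaction mean the record refers to is Hurlbert's formula, E[S_n] = Σ_i [1 − C(N−N_i, n)/C(N, n)]; the PD analogue derived in the paper is not reproduced here. A direct implementation:

```python
from math import comb

def expected_richness(counts, n):
    """Exact expected species richness in a random subsample of size n
    drawn without replacement (Hurlbert's rarefaction formula).
    counts: abundance N_i of each species; N = sum(counts)."""
    N = sum(counts)
    return sum(1.0 - comb(N - Ni, n) / comb(N, n) for Ni in counts)
```

Sanity checks: with abundances [5, 3, 2], a subsample of size 1 yields exactly one species in expectation, and a subsample of size N recovers the observed richness of 3.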

  13. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ...
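
    The constrained problem itself is a small quadratic program; a numerical sketch using a generic solver (not the paper's replica-method analysis), with simulated data and an assumed equal-weight baseline for comparison:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical covariance estimated from T observations of N assets (T > N).
N, T = 5, 60
X = rng.normal(size=(T, N))
cov = np.cov(X, rowvar=False)

# Global minimum-variance weights with a no-short-selling ban:
#   minimize w' cov w   subject to   sum(w) = 1,  w >= 0.
res = minimize(
    lambda w: w @ cov @ w,
    x0=np.full(N, 1.0 / N),
    bounds=[(0.0, None)] * N,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
w = res.x
```

As the paper emphasizes, when r = N/T approaches 1 the sample covariance becomes ill-conditioned, and the no-short constraint acts as a regularizer by forcing many weights to zero.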

  14. Spatially variable stage-driven groundwater-surface water interaction inferred from time-frequency analysis of distributed temperature sensing data

    Science.gov (United States)

    Mwakanyamale, Kisa; Slater, Lee; Day-Lewis, Frederick D.; Elwaseif, Mehrez; Johnson, Carole D.

    2012-01-01

    Characterization of groundwater-surface water exchange is essential for improving understanding of contaminant transport between aquifers and rivers. Fiber-optic distributed temperature sensing (FODTS) provides rich spatiotemporal datasets for quantitative and qualitative analysis of groundwater-surface water exchange. We demonstrate how time-frequency analysis of FODTS and synchronous river stage time series from the Columbia River adjacent to the Hanford 300-Area, Richland, Washington, provides spatial information on the strength of stage-driven exchange of uranium-contaminated groundwater in response to subsurface heterogeneity. Although used in previous studies, the stage-temperature correlation coefficient proved an unreliable indicator of stage-driven forcing on groundwater discharge in the presence of other factors influencing river water temperature. In contrast, S-transform analysis of the stage and FODTS data definitively identifies the spatial distribution of discharge zones and provides information on the dominant forcing periods (≥2 d) of the complex dam operations driving stage fluctuations and hence groundwater-surface water exchange at the 300-Area.

  15. Uncertainties in carbon residence time and NPP-driven carbon uptake in terrestrial ecosystems of the conterminous USA: a Bayesian approach

    Directory of Open Access Journals (Sweden)

    Xuhui Zhou

    2012-10-01

    Full Text Available. Carbon (C) residence time is one of the key factors that determine the capacity of ecosystem C storage. However, its uncertainties have not been well quantified, especially at regional scales. Assessing the uncertainties of C residence time is thus crucial for an improved understanding of terrestrial C sequestration. In this study, Bayesian inversion and the Markov Chain Monte Carlo (MCMC) technique were applied to a regional terrestrial ecosystem (TECO-R) model to quantify C residence times and net primary productivity (NPP)-driven ecosystem C uptake and to assess their uncertainties in the conterminous USA. Uncertainty was represented by the coefficient of variation (CV). Thirteen spatially distributed data sets of C pools and fluxes were used to constrain the TECO-R model for each of eight biomes. Our results showed that estimated ecosystem C residence times ranged from 16.6±1.8 (cropland) to 85.9±15.3 yr (evergreen needleleaf forest), with an average of 56.8±8.8 yr in the conterminous USA. The ecosystem C residence times and their CV were spatially heterogeneous and varied with vegetation types and climate conditions. Large uncertainties appeared in the southern and eastern USA. Driven by NPP changes from 1982 to 1998, terrestrial ecosystems in the conterminous USA would absorb 0.20±0.06 Pg C yr−1. The spatial pattern of this uptake was closely related to the summer greenness map, with larger uptake in the central and southeastern regions. The lack of data and the timescale mismatch between the available data and the estimated parameters led to uncertainties in the estimated C residence times, which together with initial NPP resulted in uncertainties in the estimated NPP-driven C uptake. The Bayesian approach with MCMC inversion provides an effective tool to estimate spatially distributed C residence times and to assess their uncertainties in the conterminous USA.
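
    The Bayesian MCMC machinery can be sketched on a deliberately simplified one-pool model in which the steady-state C pool equals NPP × residence time. The model, prior range, noise level, and sampler settings below are illustrative assumptions, not TECO-R:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy one-pool model (hypothetical): steady-state pool = NPP * tau.
npp, tau_true, sigma_obs = 0.5, 50.0, 2.0
obs = npp * tau_true + rng.normal(0.0, sigma_obs, 20)  # noisy pool observations

def log_post(tau):
    if not 1.0 <= tau <= 200.0:  # uniform prior on [1, 200] yr
        return -np.inf
    return -0.5 * np.sum((obs - npp * tau) ** 2) / sigma_obs**2

# Random-walk Metropolis sampler for the residence time tau.
tau, lp, chain = 100.0, log_post(100.0), []
for _ in range(20_000):
    prop = tau + rng.normal(0.0, 5.0)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob. exp(lp_prop - lp)
        tau, lp = prop, lp_prop
    chain.append(tau)
samples = np.array(chain[5_000:])  # discard burn-in
```

The posterior mean and spread of `samples` play the role of the estimated residence time and its CV; the full study repeats this kind of inversion per biome against 13 data sets.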

  16. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  17. The prototype GAPS (pGAPS) experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mognet, S.A.I., E-mail: mognet@astro.ucla.edu [University of California, Los Angeles, CA 90095 (United States); Aramaki, T. [Columbia University, New York, NY 10027 (United States); Bando, N. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Boggs, S.E.; Doetinchem, P. von [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Fuke, H. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Gahbauer, F.H.; Hailey, C.J.; Koglin, J.E.; Madden, N. [Columbia University, New York, NY 10027 (United States); Mori, K.; Okazaki, S. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Ong, R.A. [University of California, Los Angeles, CA 90095 (United States); Perez, K.M.; Tajiri, G. [Columbia University, New York, NY 10027 (United States); Yoshida, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Zweerink, J. [University of California, Los Angeles, CA 90095 (United States)

    2014-01-21

    The General Antiparticle Spectrometer (GAPS) experiment is a novel approach for the detection of cosmic ray antiparticles. A prototype GAPS (pGAPS) experiment was successfully flown on a high-altitude balloon in June of 2012. The goals of the pGAPS experiment were: to test the operation of lithium drifted silicon (Si(Li)) detectors at balloon altitudes, to validate the thermal model and cooling concept needed for engineering of a full-size GAPS instrument, and to characterize cosmic ray and X-ray backgrounds. The instrument was launched from the Japan Aerospace Exploration Agency's (JAXA) Taiki Aerospace Research Field in Hokkaido, Japan. The flight lasted a total of 6 h, with over 3 h at float altitude (∼33 km). Over one million cosmic ray triggers were recorded and all flight goals were met or exceeded.

  18. The Los Alamos Gap Stick Test

    Science.gov (United States)

    Preston, Daniel; Hill, Larry; Johnson, Carl

    2015-06-01

    In this paper we describe a novel shock sensitivity test, the Gap Stick Test, which is a generalized variant of the ubiquitous Gap Test. Despite the popularity of the Gap Test, it has some disadvantages: multiple tests must be fired to obtain a single metric, and many tests must be fired to obtain its value to high precision and confidence. Our solution is a test wherein multiple gap tests are joined in series to form a rate stick. The complex re-initiation character of the traditional gap test is thereby retained, but the propagation speed is steady when measured at periodic intervals, and initiation delay in individual segments acts to decrement the average speed. We measure the shock arrival time before and after each inert gap, and compute the average detonation speed through the HE alone (discounting the gap thicknesses). We perform tests for a range of gap thicknesses. We then plot the aforementioned propagation speed as a function of gap thickness. The resulting curve has the same basic structure as a Diameter Effect (DE) curve, and (like the DE curve) terminates at a failure point. Comparison between experiment and hydrocode calculations using ALE3D and the Ignition and Growth reactive burn model calibrated for short duration shock inputs in PBX 9501 is discussed.

  19. Reduced α-stable dynamics for multiple time scale systems forced with correlated additive and multiplicative Gaussian white noise

    Science.gov (United States)

    Thompson, William F.; Kuske, Rachel A.; Monahan, Adam H.

    2017-11-01

    Stochastic averaging problems with Gaussian forcing have been the subject of numerous studies, but far less attention has been paid to problems with infinite-variance stochastic forcing, such as an α-stable noise process. It has been shown that simple linear systems driven by correlated additive and multiplicative (CAM) Gaussian noise, which emerge in the context of reduced atmosphere and ocean dynamics, have infinite variance in certain parameter regimes. In this study, we consider the stochastic averaging of systems where a linear CAM noise process in the infinite variance parameter regime drives a comparatively slow process. We use (semi)-analytical approximations combined with numerical illustrations to compare the averaged process to one that is forced by a white α-stable process, demonstrating consistent properties in the case of large time-scale separation. We identify the conditions required for the fast linear CAM process to have such an influence in driving a slower process and then derive an (effectively) equivalent fast, infinite-variance process for which an existing stochastic averaging approximation is readily applied. The results are illustrated using numerical simulations of a set of example systems.
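
    The qualitative difference between Gaussian and infinite-variance α-stable forcing is easy to see in simulation. The sketch below draws symmetric α-stable variates with the Chambers-Mallows-Stuck transform (the α value and sample size are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

def symmetric_alpha_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for standard symmetric alpha-stable
    variates (skewness beta = 0, unit scale)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos(U - alpha * U) / W) ** ((1.0 - alpha) / alpha))

x_stable = symmetric_alpha_stable(1.5, 100_000, rng)  # infinite variance
x_gauss = rng.normal(0.0, 1.0, 100_000)               # finite variance
```

Extreme draws dominate the α-stable sample: its largest excursions exceed the Gaussian maximum by orders of magnitude, which is why the sample variance of CAM-type processes in the heavy-tailed regime never settles down.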

  20. Current quantization and fractal hierarchy in a driven repulsive lattice gas.

    Science.gov (United States)

    Rotondo, Pietro; Sellerio, Alessandro Luigi; Glorioso, Pietro; Caracciolo, Sergio; Cosentino Lagomarsino, Marco; Gherardi, Marco

    2017-11-01

    Driven lattice gases are widely regarded as the paradigm of collective phenomena out of equilibrium. While such models are usually studied with nearest-neighbor interactions, many empirical driven systems are dominated by slowly decaying interactions such as dipole-dipole and Van der Waals forces. Motivated by this gap, we study the nonequilibrium stationary state of a driven lattice gas with slowly decaying repulsive interactions at zero temperature. By numerical and analytical calculations of the particle current as a function of the density and of the driving field, we identify (i) an abrupt breakdown transition between insulating and conducting states, (ii) current quantization into discrete phases where a finite current flows with infinite differential resistivity, and (iii) a fractal hierarchy of excitations, related to the Farey sequences of number theory. We argue that the origin of these effects is the competition between scales, which also causes the counterintuitive phenomenon that crystalline states can melt by increasing the density.


  2. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  3. A virtual power plant model for time-driven power flow calculations

    Directory of Open Access Journals (Sweden)

    Gerardo Guerra

    2017-11-01

    Full Text Available. This paper presents the implementation of a custom-made virtual power plant model in OpenDSS. The goal is to develop a model adequate for time-driven power flow calculations in distribution systems. The virtual power plant is modeled as the aggregation of renewable generation and energy storage connected to the distribution system through an inverter. The implemented operation mode allows the virtual power plant to act as a single dispatchable generation unit. The case studies presented in the paper demonstrate that the model behaves according to the specified control algorithm and show how it can be incorporated into the solution scheme of a general parallel genetic algorithm in order to obtain the optimal day-ahead dispatch. Simulation results exhibit a clear benefit from the deployment of a virtual power plant when compared to distributed generation based only on renewable intermittent generation.

  4. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    Energy Technology Data Exchange (ETDEWEB)

    Song Ningfang; Yuan Rui; Jin Jing, E-mail: rayleing@139.com [School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191 (China)

    2011-09-15

    Satellite motion included in gyro output disturbs the on-board estimation of the Allan variance coefficients of a fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites require more autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet satellite autonomy requirements, we present a new autonomous method for estimating Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates the Allan variance coefficients, R = 2.7965×10^-4 °/h^2, K = 1.1714×10^-3 °/h^1.5, B = 1.3185×10^-3 °/h, N = 5.982×10^-4 °/h^0.5 and Q = 5.197×10^-7 °, in real time, and tracks the degradation of gyro performance from initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10^-5 °, to final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10^-4 °, due to gamma radiation in space. The technique proposed here effectively isolates satellite motion and requires no data storage and no support from the ground.
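
    The Allan variance itself is simple to compute offline; the sketch below implements the non-overlapping cluster estimator for a rate signal and checks the characteristic 1/m decay for white (angle-random-walk-type) noise. It is the textbook construction, not the adaptive-filter method of the paper, and all signal parameters are illustrative:

```python
import numpy as np

def allan_variance(rate, m):
    """Non-overlapping Allan variance at cluster size m (tau = m * tau0)."""
    k = len(rate) // m                                # number of clusters
    means = rate[: k * m].reshape(k, m).mean(axis=1)  # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(7)
white = rng.normal(0.0, 1.0, 1_000_000)  # unit-variance white rate noise

# For white rate noise, AVAR(m) ~ sigma^2 / m: the familiar -1 slope of
# the Allan variance on a log-log plot.
av10, av100 = allan_variance(white, 10), allan_variance(white, 100)
```

Fitting the slopes of such a curve over many cluster sizes is what yields the R, K, B, N and Q coefficients; the memory cost of doing this over a long record is the motivation for the paper's recursive filter.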

  5. Multi-peak pattern in Multi-gap RPC time-over-threshold distributions and an offline calibration method

    International Nuclear Information System (INIS)

    Yang, R.X.; Li, C.; Sun, Y.J.; Liu, Z.; Wang, X.Z.; Heng, Y.K.; Sun, S.S.; Dai, H.L.; Wu, Z.; An, F.F.

    2017-01-01

    The Beijing Spectrometer (BESIII) has just updated its end-cap Time-of-Flight (ETOF) system, using Multi-gap Resistive Plate Chambers (MRPCs) to replace the previous scintillator detectors. These MRPCs show multi-peak phenomena in their time-over-threshold (TOT) distributions, which were also observed in the long-strip MRPCs built for the RHIC-STAR Muon Telescope Detector (MTD). After carefully investigating the correlation between the multi-peak distribution and incident hit positions along the strips, we find that it can be semi-quantitatively explained by signal reflections at the ends of the readout strips. A new offline calibration method was therefore implemented on the MRPC ETOF data in BESIII, significantly improving the T-TOT correlation used to evaluate the time resolution.

  6. "Mind the gap"--the impact of variations in the duration of the treatment gap and overall treatment time in the first UK Anal Cancer Trial (ACT I).

    Science.gov (United States)

    Glynne-Jones, Rob; Sebag-Montefiore, David; Adams, Richard; McDonald, Alec; Gollins, Simon; James, Roger; Northover, John M A; Meadows, Helen M; Jitlal, Mark

    2011-12-01

    The United Kingdom Coordinating Committee on Cancer Research anal cancer trial demonstrated the benefit of combined modality treatment (CMT) using radiotherapy (RT), infusional 5-fluorouracil, and mitomycin C over RT alone. The present study retrospectively examines the impact of the recommended 6-week treatment gap and local RT boost on long-term outcome. A total of 577 patients were randomly assigned to RT alone or CMT. After a 6-week gap, responders received a boost using either additional external beam radiotherapy (EBRT) (15 Gy) or an iridium-192 implant (25 Gy). The effect of the boost, the gap between initial treatment (RT alone or CMT) and boost (Tgap), and the overall treatment time (OTT) were examined for their impact on outcome. Among the 490 good responders, 436 (89%) patients received a boost after initial treatment. For boosted patients, the risk of anal cancer death decreased by 38% (hazard ratio [HR]: 0.62, 99% CI 0.35-1.12; p=0.04), but there was no evidence this was mediated via a reduction in locoregional failure (LRF) (HR: 0.90, 99% CI 0.48-1.68; p=0.66). Tgap was only 1.4 days longer for the EBRT boost compared with the implant (p=0.51); OTT was longer by 6.1 days for EBRT (p=0.006). Neither Tgap nor OTT was associated with LRF. Radionecrosis was reported in 8% of boosted patients, compared with 0% of unboosted patients (p=0.03). These results question the benefit of a radiotherapy boost after a 6-week gap. The higher doses of a boost may contribute more to an increased risk of late morbidity than to local control. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Enhancement of absorption of lower hybrid wave by filling the spectral gap

    International Nuclear Information System (INIS)

    Ide, S.; Naito, O.; Kondoh, T.; Ikeda, Y.; Ushigusa, K.

    1994-01-01

    The interaction between a lower hybrid wave (LHW) and electrons in a plasma has been investigated. An LHW of low phase velocity was injected into a plasma in addition to a high phase velocity LHW so as to fill the spectral gap which lies between the phase velocity of the faster wave and the thermal velocity of the electrons. It was found that the absorption of the faster wave was enhanced in the plasma outer region by injecting these waves simultaneously. As a result, the LH-driven current in the inner region of the plasma was reduced by the power absorbed in the outer region. The increase of the power absorption is attributed to the filling of the spectral gap by the slower wave.

  8. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  9. Effects of Calcination Holding Time on Properties of Wide Band Gap Willemite Semiconductor Nanoparticles by the Polymer Thermal Treatment Method

    Directory of Open Access Journals (Sweden)

    Ibrahim Mustapha Alibe

    2018-04-01

    Full Text Available Willemite is a wide band gap semiconductor used in modern-day technology for optoelectronic applications. In this study, a new simple technique with less energy consumption is proposed. Willemite nanoparticles (NPs) were produced via a water-based solution consisting of a metallic precursor and polyvinylpyrrolidone (PVP), and underwent a calcination process at 900 °C for several holding times between 1 and 4 h. The FT-IR and Raman spectra indicated the presence of metal oxide bands as well as the effective removal of PVP. The degree of crystallization and the formation of the NPs were determined by XRD. The mean crystallite size of the NPs was between 18.23 and 27.40 nm. The morphology, particle shape, and size distribution were examined with HR-TEM and FESEM analysis. The willemite NPs aggregated from smaller into larger particles as the calcination holding time increased from 1 to 4 h, with sizes ranging between 19.74 and 29.71 nm. The experimental band gap decreased with increasing holding time, from 5.39 eV at 1 h to 5.27 eV at 4 h. These values match well with the band gap obtained from the Mott and Davis model for direct transition. The findings in this study are very promising and justify the use of these novel materials as potential candidates for green luminescent optoelectronic applications.

  10. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.
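The variance-heterogeneity scan described above can be illustrated with a Brown-Forsythe-style test of equal variances between genotype classes at a single locus. This is a minimal sketch, not the authors' VGWAS pipeline; the data, group sizes, and effect size below are invented for illustration.

```python
import numpy as np

def brown_forsythe(groups):
    """Brown-Forsythe statistic for equal variances across groups.

    Transforms each observation to its absolute deviation from the group
    median, then computes a one-way ANOVA F statistic on those deviations.
    """
    z = [np.abs(g - np.median(g)) for g in groups]
    k = len(z)
    n = sum(len(zi) for zi in z)
    grand = np.mean(np.concatenate(z))
    ss_between = sum(len(zi) * (zi.mean() - grand) ** 2 for zi in z)
    ss_within = sum(((zi - zi.mean()) ** 2).sum() for zi in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
# Hypothetical expression values for two genotype classes at one locus:
# same mean, different variance -> a variance-heterogeneity signal.
aa = rng.normal(0.0, 1.0, 300)
bb = rng.normal(0.0, 2.0, 300)
f_var = brown_forsythe([aa, bb])
print(f_var)
```

A genome-wide scan would repeat this test at every locus and correct for multiple testing; only the per-locus statistic is sketched here.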

  11. On a Hele-Shaw flow with a time-dependent gap in the presence of surface tension

    International Nuclear Information System (INIS)

    Savina, T V; Nepomnyashchy, A A

    2015-01-01

    The introduction of surface tension into a Hele-Shaw problem makes it more realistic from the physical viewpoint, but more difficult from the mathematical viewpoint. In this paper we discuss a Hele-Shaw flow with a time-dependent gap taking into account the surface tension of the free boundary. We use the Schwarz function method to find asymptotic solutions for the interior problem in the case when the initial shape of the droplet is a weakly distorted circle. (paper)

  12. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of the split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve this program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  13. The Gender Wage Gap: Does a Gender Gap in Reservation Wages Play a Part?

    OpenAIRE

    Caliendo, Marco; Lee, Wang-Sheng; Mahlstedt, Robert

    2014-01-01

    This paper focuses on re-examining the gender wage gap and the potential role that reservation wages play. Based on two waves of rich data from the IZA Evaluation Dataset Survey we examine the importance of gender differences in reservation wages to explain the gender gap in realized wages for a sample of newly unemployed individuals actively searching for a full-time job in Germany. The dataset includes measures for education, socio-demographics, labor market history, psychological factors a...

  14. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  15. Mind the gap: implementation challenges break the link between HIV/AIDS research and practice

    Directory of Open Access Journals (Sweden)

    Sarah MacCarthy

    Full Text Available Abstract: Sampling strategies such as respondent-driven sampling (RDS and time-location sampling (TLS offer unique opportunities to access key populations such as men who have sex with men (MSM and transgender women. Limited work has assessed implementation challenges of these methods. Overcoming implementation challenges can improve research quality and increase uptake of HIV services among key populations. Drawing from studies using RDS in Brazil and TLS in Peru, we summarize challenges encountered in the field and potential strategies to address them. In Brazil, study site selection, cash incentives, and seed selection challenged RDS implementation with MSM. In Peru, expansive geography, safety concerns, and time required for study participation complicated TLS implementation with MSM and transgender women. Formative research, meaningful participation of key populations across stages of research, and transparency in study design are needed to link HIV/AIDS research and practice. Addressing implementation challenges can close gaps in accessing services among those most burdened by the epidemic.

  16. Mind the gap: implementation challenges break the link between HIV/AIDS research and practice.

    Science.gov (United States)

    MacCarthy, Sarah; Reisner, Sari; Hoffmann, Michael; Perez-Brumer, Amaya; Silva-Santisteban, Alfonso; Nunn, Amy; Bastos, Leonardo; Vasconcellos, Mauricio Teixeira Leite de; Kerr, Ligia; Bastos, Francisco Inácio; Dourado, Inês

    2016-11-03

    Sampling strategies such as respondent-driven sampling (RDS) and time-location sampling (TLS) offer unique opportunities to access key populations such as men who have sex with men (MSM) and transgender women. Limited work has assessed implementation challenges of these methods. Overcoming implementation challenges can improve research quality and increase uptake of HIV services among key populations. Drawing from studies using RDS in Brazil and TLS in Peru, we summarize challenges encountered in the field and potential strategies to address them. In Brazil, study site selection, cash incentives, and seed selection challenged RDS implementation with MSM. In Peru, expansive geography, safety concerns, and time required for study participation complicated TLS implementation with MSM and transgender women. Formative research, meaningful participation of key populations across stages of research, and transparency in study design are needed to link HIV/AIDS research and practice. Addressing implementation challenges can close gaps in accessing services among those most burdened by the epidemic.

  17. Mind the gap: implementation challenges break the link between HIV/AIDS research and practice

    Science.gov (United States)

    MacCarthy, Sarah; Reisner, Sari; Hoffmann, Michael; Perez-Brumer, Amaya; Silva-Santisteban, Alfonso; Nunn, Amy; Bastos, Leonardo; de Vasconcellos, Mauricio Teixeira Leite; Kerr, Ligia; Bastos, Francisco Inácio; Dourado, Inês

    2018-01-01

    Sampling strategies such as respondent-driven sampling (RDS) and time-location sampling (TLS) offer unique opportunities to access key populations such as men who have sex with men (MSM) and transgender women. Limited work has assessed implementation challenges of these methods. Overcoming implementation challenges can improve research quality and increase uptake of HIV services among key populations. Drawing from studies using RDS in Brazil and TLS in Peru, we summarize challenges encountered in the field and potential strategies to address them. In Brazil, study site selection, cash incentives, and seed selection challenged RDS implementation with MSM. In Peru, expansive geography, safety concerns, and time required for study participation complicated TLS implementation with MSM and transgender women. Formative research, meaningful participation of key populations across stages of research, and transparency in study design are needed to link HIV/AIDS research and practice. Addressing implementation challenges can close gaps in accessing services among those most burdened by the epidemic. PMID:27828609

  18. The longevity gender gap

    DEFF Research Database (Denmark)

    Aviv, Abraham; Shay, Jerry; Christensen, Kaare

    2005-01-01

    In this Perspective, we focus on the greater longevity of women as compared with men. We propose that, like aging itself, the longevity gender gap is exceedingly complex and argue that it may arise from sex-related hormonal differences and from somatic cell selection that favors cells more...... resistant to the ravages of time. We discuss the interplay of these factors with telomere biology and oxidative stress and suggest that an explanation for the longevity gender gap may arise from a better understanding of the differences in telomere dynamics between men and women....

  19. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  20. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    Science.gov (United States)

    Yan, Wei

    2012-01-01

    An investment problem is considered with dynamic mean-variance(M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes according to the actual prices of stocks and the normality and stability of the financial market. The short-selling of stocks is prohibited in this mathematical model. Then, the corresponding stochastic Hamilton-Jacobi-Bellman(HJB) equation of the problem is presented and the solution of the stochastic HJB equation based on the theory of stochastic LQ control and viscosity solution is obtained. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. And then, the effects on efficient frontier under the value-at-risk constraint are illustrated. Finally, an example illustrating the discontinuous prices based on M-V portfolio selection is presented.

  1. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  2. Model-driven discovery of underground metabolic functions in Escherichia coli

    DEFF Research Database (Denmark)

    Guzmán, Gabriela I.; Utrilla, José; Nurk, Sergey

    2015-01-01

    -scale models, which have been widely used for predicting growth phenotypes in various environments or following a genetic perturbation; however, these predictions occasionally fail. Failed predictions of gene essentiality offer an opportunity for targeting biological discovery, suggesting the presence......E, and gltA and prpC. This study demonstrates how a targeted model-driven approach to discovery can systematically fill knowledge gaps, characterize underground metabolism, and elucidate regulatory mechanisms of adaptation in response to gene KO perturbations....

  3. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
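As a generic illustration of variance-based sensitivity indices (not the paper's Poisson-process reformulation of reaction channels), first-order Sobol indices can be estimated with the standard pick-freeze scheme. The test function, dimensions, and sample size below are arbitrary choices for the sketch.

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """Pick-freeze (Saltelli) estimator of first-order Sobol indices.

    f maps an (n, d) array of independent U(0,1) inputs to n outputs.
    """
    a = rng.random((n, d))
    b = rng.random((n, d))
    fa, fb = f(a), f(b)
    var = np.concatenate([fa, fb]).var()
    s = np.empty(d)
    for i in range(d):
        ab = a.copy()
        ab[:, i] = b[:, i]  # freeze input i from the second sample matrix
        s[i] = np.mean(fb * (f(ab) - fa)) / var
    return s

# Additive test function: the true first-order variance shares are 1/5 and 4/5.
f = lambda x: x[:, 0] + 2.0 * x[:, 1]
s = sobol_first_order(f, 2, 200_000, np.random.default_rng(1))
print(s)  # close to [0.2, 0.8]
```

For an additive model the first-order indices sum to one; interaction effects would show up as a shortfall in that sum.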

  4. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  5. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  6. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
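The core idea (use the variance among single-frequency model updates to flag likely cycle-skipped regions) can be sketched as follows. The update stack is synthetic, and the 90th-percentile threshold is an arbitrary illustrative choice, not the paper's criterion.

```python
import numpy as np

# Hypothetical stack of velocity-model updates, one per inversion frequency,
# each of shape (nz, nx). High variance across frequencies is taken as a
# proxy for cycle-skipping-prone regions.
rng = np.random.default_rng(2)
nz, nx, nfreq = 50, 80, 6
updates = rng.normal(0.0, 1.0, (nfreq, nz, nx))
updates[:, 20:30, 30:50] += rng.normal(0.0, 5.0, (nfreq, 10, 20))  # unstable zone

var_map = updates.var(axis=0)               # variance among frequency updates
mask = var_map > np.quantile(var_map, 0.9)  # flag the most inconsistent 10%
print(mask.sum())
```

In the paper's workflow the flagged region would then be filled by interpolating maximum velocities before restarting full-waveform inversion; that step is omitted here.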

  7. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Active epilepsy prevalence, the treatment gap, and treatment gap risk profile in eastern China: A population-based study.

    Science.gov (United States)

    Ding, Xiaoyan; Zheng, Yang; Guo, Yi; Shen, Chunhong; Wang, Shan; Chen, Feng; Yan, Shengqiang; Ding, Meiping

    2018-01-01

    We measured the prevalence of active epilepsy and investigated the treatment gap and treatment gap risk profile in eastern China. This was a cross-sectional population-based survey conducted in Zhejiang, China, from October 2013 to March 2014. A total of 54,976 people were selected using multi-stage cluster sampling. A two-stage questionnaire-based process was used to identify patients with active epilepsy and to record their demographic, socioeconomic, and epilepsy-related features. Logistic regression analysis was used to analyze risk factors for the treatment gap in eastern China, adjusted for age and sex. We interviewed 50,035 people; 118 had active epilepsy (2.4‰), among whom the treatment gap was 58.5%. In multivariate analysis, failure to receive appropriate antiepileptic treatment was associated with a higher seizure frequency of 12-23 times per year (adjusted odds ratio=6.874; 95% confidence interval [CI]=2.372-19.918) or >24 times per year (adjusted odds ratio=19.623; 95% CI=4.999-77.024), and with a lack of health insurance (adjusted odds ratio=7.284; 95% CI=1.321-40.154). Eastern China has a relatively low prevalence of active epilepsy and a smaller treatment gap. Interventions aimed at reducing seizure frequency and improving the health insurance system should be investigated as potential targets to further bridge the treatment gap. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Time-driven activity-based costing of multivessel coronary artery bypass grafting across national boundaries to identify improvement opportunities: study protocol.

    Science.gov (United States)

    Erhun, F; Mistry, B; Platchek, T; Milstein, A; Narayanan, V G; Kaplan, R S

    2015-08-25

    Coronary artery bypass graft (CABG) surgery is a well-established, commonly performed treatment for coronary artery disease--a disease that affects over 10% of US adults and is a major cause of morbidity and mortality. In 2005, the mean cost for a CABG procedure among Medicare beneficiaries in the USA was $32,201 ± $23,059. The same operation reportedly costs less than $2000 to produce in India. The goals of the proposed study are to (1) identify the difference in the costs incurred to perform CABG surgery by three Joint Commission accredited hospitals with reputations for high quality and efficiency and (2) characterise the opportunity to reduce the cost of performing CABG surgery. We use time-driven activity-based costing (TDABC) to quantify the hospitals' costs of producing elective, multivessel CABG. TDABC estimates the costs of a given clinical service by combining information about the process of patient care delivery (specifically, the time and quantity of labour and non-labour resources utilised to perform each activity) with the unit cost of each resource used to provide the care. Resource utilisation was estimated by constructing CABG process maps for each site based on observation of care and staff interviews. Unit costs were calculated as a capacity cost rate, measured as a $/min, for each resource consumed in CABG production. Multiplying together the unit costs and resource quantities and summing across all resources used will produce the average cost of CABG production at each site. We will conclude by conducting a variance analysis of labour costs to reveal opportunities to bend the cost curve for CABG production in the USA. All our methods were exempted from review by the Stanford Institutional Review Board. Results will be published in peer-reviewed journals and presented at scientific meetings. Published by the BMJ Publishing Group Limited.
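The TDABC arithmetic described in the protocol (multiply each resource's capacity cost rate, in $/min, by the minutes of that resource consumed, then sum over all resources used) can be sketched as below. All resources, rates, and times are invented for illustration and are not data from the study.

```python
# Hypothetical TDABC sketch: episode cost = sum over process-map entries of
# (capacity cost rate in $/min) x (minutes of that resource used).
capacity_cost_rate = {      # $/min, illustrative values only
    "surgeon": 8.00,
    "or_nurse": 1.50,
    "operating_room": 12.00,
    "icu_bed": 2.00,
}
process_map = [             # (resource, minutes) per hypothetical CABG episode
    ("surgeon", 240),
    ("or_nurse", 300),
    ("operating_room", 300),
    ("icu_bed", 1440),
]
total = sum(capacity_cost_rate[res] * minutes for res, minutes in process_map)
print(total)  # 8850.0
```

In the actual study, the capacity cost rates and process maps would come from site observation and staff interviews, and a variance analysis would then compare labour costs across sites.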

  10. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    Science.gov (United States)

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between the model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  11. Synchrotron-driven spallation sources

    CERN Document Server

    Bryant, P J

    1996-01-01

    The use of synchrotrons for pulsed neutron spallation sources is an example of scientific and technological spin-off from the accelerator development for particle physics. Accelerator-driven sources provide an alternative to the continuous-flux, nuclear reactors that currently furnish the majority of neutrons for research and development. Although the present demand for neutrons can be adequately met by the existing reactors, this situation is unlikely to continue due to the increasing severity of safety regulations and the declared policies of many countries to close down their reactors within the next decade or so. Since the demand for neutrons as a research tool is, in any case,expected to grow, there has been a corresponding interest in sources that are synchrotron-driven or linac-driven with a pulse compression ring and currently several design studies are being made. These accelerator-driven sources also have the advantage of a time structure with a high peak neutron flux. The basic requirement is for a...

  12. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling, calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de

  13. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potentially highly infectious hosts.
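
    The distributional contrast described above can be sketched numerically (all parameter values are hypothetical, chosen only to illustrate the statistics): a Poisson load has variance equal to its mean, while a negative binomial load with the same mean is overdispersed, so a few hosts carry extreme loads.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parasite loads per host (arbitrary units): under low food the
# load is Poisson (variance == mean); under high food it is negative binomial
# (overdispersed: variance = mean + mean**2 / k), so the right tail is heavy.
mean_load = 20.0
k = 1.5                                 # small dispersion parameter -> strong overdispersion

low_food = rng.poisson(mean_load, size=50_000)

# NumPy's negative_binomial(n, p) has mean n(1-p)/p; choosing n=k and
# p = k/(k + mean) gives the desired mean with variance mean + mean**2/k.
p = k / (k + mean_load)
high_food = rng.negative_binomial(k, p, size=50_000)

for name, loads in [("low food", low_food), ("high food", high_food)]:
    print(f"{name}: mean={loads.mean():.1f}  var={loads.var():.1f}  "
          f"max={loads.max()}")
```

    The variance-to-mean ratio, not the mean alone, is what separates the two feeding regimes, mirroring the paper's point about hidden transmission risk.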

  14. Nonlinear optical effects of opening a gap in graphene

    Science.gov (United States)

    Carvalho, David N.; Biancalana, Fabio; Marini, Andrea

    2018-05-01

    Graphene possesses remarkable electronic, optical, and mechanical properties that have taken the research of two-dimensional relativistic condensed matter systems to prolific levels. However, the understanding of how its nonlinear optical properties are affected by relativisticlike effects has been broadly uncharted. It has been recently shown that highly nontrivial currents can be generated in free-standing samples, notably leading to the generation of even harmonics. Since graphene monolayers are centrosymmetric media, for which such harmonic generation at normal incidence is deemed inaccessible, this light-driven phenomenon is both startling and promising. More realistically, graphene samples are often deposited on a dielectric substrate, leading to additional intricate interactions. Here, we present a treatment to study this instance by gapping the spectrum and we show this leads to the appearance of a Berry phase in the carrier dynamics. We analyze the role of such a phase in the generated nonlinear current and conclude that it suppresses odd-harmonic generation. The pump energy can be tuned to the energy gap to yield interference among odd harmonics mediated by interband transitions, allowing even harmonics to be generated. Our results and general methodology pave the way for understanding the role of gap opening in the nonlinear optics of two-dimensional lattices.

  15. Pollinator-driven ecological speciation in plants: new evidence and future perspectives.

    Science.gov (United States)

    Van der Niet, Timotheüs; Peakall, Rod; Johnson, Steven D

    2014-01-01

    The hypothesis that pollinators have been important drivers of angiosperm diversity dates back to Darwin, and remains an important research topic today. Mounting evidence indicates that pollinators have the potential to drive diversification at several different stages of the evolutionary process. Microevolutionary studies have provided evidence for pollinator-mediated floral adaptation, while macroevolutionary evidence supports a general pattern of pollinator-driven diversification of angiosperms. However, the overarching issue of whether, and how, shifts in pollination system drive plant speciation represents a critical gap in knowledge. Bridging this gap is crucial to fully understand whether pollinator-driven microevolution accounts for the observed macroevolutionary patterns. Testable predictions about pollinator-driven speciation can be derived from the theory of ecological speciation, according to which adaptation (microevolution) and speciation (macroevolution) are directly linked. This theory is a particularly suitable framework for evaluating evidence for the processes underlying shifts in pollination systems and their potential consequences for the evolution of reproductive isolation and speciation. This Viewpoint paper focuses on evidence for the four components of ecological speciation in the context of plant-pollinator interactions, namely (1) the role of pollinators as selective agents, (2) floral trait divergence, including the evolution of 'pollination ecotypes', (3) the geographical context of selection on floral traits, and (4) the role of pollinators in the evolution of reproductive isolation. This Viewpoint also serves as the introduction to a Special Issue on Pollinator-Driven Speciation in Plants. The 13 papers in this Special Issue range from microevolutionary studies of ecotypes to macroevolutionary studies of historical ecological shifts, and span a wide range of geographical areas and plant families. These studies further illustrate

  16. Pollinator-driven ecological speciation in plants: new evidence and future perspectives

    Science.gov (United States)

    Van der Niet, Timotheüs; Peakall, Rod; Johnson, Steven D.

    2014-01-01

    Background The hypothesis that pollinators have been important drivers of angiosperm diversity dates back to Darwin, and remains an important research topic today. Mounting evidence indicates that pollinators have the potential to drive diversification at several different stages of the evolutionary process. Microevolutionary studies have provided evidence for pollinator-mediated floral adaptation, while macroevolutionary evidence supports a general pattern of pollinator-driven diversification of angiosperms. However, the overarching issue of whether, and how, shifts in pollination system drive plant speciation represents a critical gap in knowledge. Bridging this gap is crucial to fully understand whether pollinator-driven microevolution accounts for the observed macroevolutionary patterns. Testable predictions about pollinator-driven speciation can be derived from the theory of ecological speciation, according to which adaptation (microevolution) and speciation (macroevolution) are directly linked. This theory is a particularly suitable framework for evaluating evidence for the processes underlying shifts in pollination systems and their potential consequences for the evolution of reproductive isolation and speciation. Scope This Viewpoint paper focuses on evidence for the four components of ecological speciation in the context of plant-pollinator interactions, namely (1) the role of pollinators as selective agents, (2) floral trait divergence, including the evolution of ‘pollination ecotypes’, (3) the geographical context of selection on floral traits, and (4) the role of pollinators in the evolution of reproductive isolation. This Viewpoint also serves as the introduction to a Special Issue on Pollinator-Driven Speciation in Plants. The 13 papers in this Special Issue range from microevolutionary studies of ecotypes to macroevolutionary studies of historical ecological shifts, and span a wide range of geographical areas and plant families. These studies

  17. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption plays an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes, as well as to estimate the impact of household features and building properties in this respect, using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.
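
    A minimal sketch of the kind of analysis described, with entirely fabricated data (the study's actual data and groupings are not reproduced here): a one-way analysis of variance testing whether annual electricity use differs across heating systems.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual electricity use (kWh) for single-family homes grouped by
# heating system; the group means differ, mimicking variance attributable to a
# building property.
groups = {
    "direct electric": rng.normal(18_000, 3_000, 40),
    "heat pump":       rng.normal(11_000, 3_000, 40),
    "district heat":   rng.normal(6_000,  3_000, 40),
}

samples = list(groups.values())
grand = np.concatenate(samples)
k, n = len(samples), grand.size

# One-way ANOVA: partition the total sum of squares into between-group and
# within-group components, then form the F statistic.
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in samples)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in samples)
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {F:.1f}")   # a large F: the grouping explains variance
```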

  18. Robust Markowitz mean-variance portfolio selection under ambiguous covariance matrix *

    OpenAIRE

    Ismail, Amine; Pham, Huyên

    2016-01-01

    This paper studies a robust continuous-time Markowitz portfolio selection problem where the model uncertainty carries on the covariance matrix of multiple risky assets. This problem is formulated into a min-max mean-variance problem over a set of non-dominated probability measures that is solved by a McKean-Vlasov dynamic programming approach, which allows us to characterize the solution in terms of a Bellman-Isaacs equation in the Wasserstein space of probability measures. We provide expli...

  19. GapBlaster-A Graphical Gap Filler for Prokaryote Genomes.

    Directory of Open Access Journals (Sweden)

    Pablo H C G de Sá

    Full Text Available The advent of NGS (Next Generation Sequencing) technologies has resulted in an exponential increase in the number of complete genomes available in biological databases. This advance has allowed the development of several computational tools enabling analyses of large amounts of data in each of the various steps, from processing and quality filtering to gap filling and manual curation. The tools developed for gap closure are very useful as they result in more complete genomes, which will influence downstream analyses of genomic plasticity and comparative genomics. However, the gap filling step remains a challenge for genome assembly, often requiring manual intervention. Here, we present GapBlaster, a graphical application to evaluate and close gaps. GapBlaster was developed in the Java programming language. The software uses contigs obtained in the assembly of the genome to perform an alignment against a draft of the genome/scaffold, using BLAST or MUMmer, to close gaps. Then, all identified alignments of contigs that extend through the gaps in the draft sequence are presented to the user for further evaluation via the GapBlaster graphical interface. GapBlaster presents significant results compared to other similar software and has the advantage of offering a graphical interface for manual curation of the gaps. The GapBlaster program, user guide and test datasets are freely available at https://sourceforge.net/projects/gapblaster2015/. It requires Sun JDK 8 and BLAST or MUMmer.

  20. Genomic selection of crossing partners on basis of the expected mean and variance of their derived lines.

    Science.gov (United States)

    Osthushenrich, Tanja; Frisch, Matthias; Herzog, Eva

    2017-01-01

    In a line or a hybrid breeding program superior lines are selected from a breeding pool as parental lines for the next breeding cycle. From a cross of two parental lines, new lines are derived by single-seed descent (SSD) or doubled haploid (DH) technology. However, not all possible crosses between the parental lines can be carried out due to limited resources. Our objectives were to present formulas to characterize a cross by the mean and variance of the genotypic values of the lines derived from the cross, and to apply the formulas to predict means and variances of flowering time traits in recombinant inbred line families of a publicly available data set in maize. We derived formulas which are based on the expected linkage disequilibrium (LD) between two loci and which can be used for arbitrary mating systems. Results were worked out for SSD and DH lines derived from a cross after an arbitrary number of intermating generations. The means and variances were highly correlated with results obtained by the simulation software PopVar. Compared with these simulations, computation time for our closed formulas was about ten times faster. The means and variances for flowering time traits observed in the recombinant inbred line families of the investigated data set showed correlations of around 0.9 for the means and of 0.46 and 0.65 for the standard deviations with the estimated values. We conclude that our results provide a framework that can be exploited to increase the efficiency of hybrid and line breeding programs by extending genomic selection approaches to the selection of crossing partners.

  1. Genomic selection of crossing partners on basis of the expected mean and variance of their derived lines

    Science.gov (United States)

    Osthushenrich, Tanja; Frisch, Matthias

    2017-01-01

    In a line or a hybrid breeding program superior lines are selected from a breeding pool as parental lines for the next breeding cycle. From a cross of two parental lines, new lines are derived by single-seed descent (SSD) or doubled haploid (DH) technology. However, not all possible crosses between the parental lines can be carried out due to limited resources. Our objectives were to present formulas to characterize a cross by the mean and variance of the genotypic values of the lines derived from the cross, and to apply the formulas to predict means and variances of flowering time traits in recombinant inbred line families of a publicly available data set in maize. We derived formulas which are based on the expected linkage disequilibrium (LD) between two loci and which can be used for arbitrary mating systems. Results were worked out for SSD and DH lines derived from a cross after an arbitrary number of intermating generations. The means and variances were highly correlated with results obtained by the simulation software PopVar. Compared with these simulations, computation time for our closed formulas was about ten times faster. The means and variances for flowering time traits observed in the recombinant inbred line families of the investigated data set showed correlations of around 0.9 for the means and of 0.46 and 0.65 for the standard deviations with the estimated values. We conclude that our results provide a framework that can be exploited to increase the efficiency of hybrid and line breeding programs by extending genomic selection approaches to the selection of crossing partners. PMID:29200436
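
    The mean-and-variance characterization of a cross can be sketched as follows. This is an illustrative simplification, not the paper's formulas: loci are assumed unlinked, so the expected-LD terms vanish and the variance of doubled-haploid (DH) lines reduces to the sum of squared additive effects; all effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical additive marker effects at loci where the two parents differ.
n_loci = 50
a = rng.normal(0, 0.3, n_loci)
mid_parent = 100.0

# Analytic prediction for DH lines from the F1 under free recombination: each
# locus independently fixes the allele of parent 1 or parent 2 with probability
# 1/2 (coded +1/-1), so the genotypic value is mid_parent + sum(a_i * x_i).
pred_mean = mid_parent
pred_var = np.sum(a ** 2)              # Var(sum a_i x_i) with x_i = +-1 iid

# Simulate 20,000 DH lines and compare with the prediction.
x = rng.choice([-1.0, 1.0], size=(20_000, n_loci))
g = mid_parent + x @ a
print(f"predicted mean/var: {pred_mean:.2f} / {pred_var:.3f}")
print(f"simulated mean/var: {g.mean():.2f} / {g.var():.3f}")
```

    With linked loci the prediction would gain covariance terms proportional to the expected LD between locus pairs, which is the part the paper's closed formulas handle for arbitrary mating systems.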

  2. Stochastic resonance driven by time-modulated correlated coloured noise sources in a single-mode laser

    International Nuclear Information System (INIS)

    De-Yi, Chen; Li, Zhang

    2009-01-01

    This paper investigates the phenomenon of stochastic resonance in a single-mode laser driven by time-modulated correlated coloured noise sources. The power spectrum and signal-to-noise ratio R of the laser intensity are calculated by the linear approximation. The effects of the noise self-correlation times τ1, τ2 and the cross-correlation time τ3 on stochastic resonance are analysed in two ways: τ1, τ2 and τ3 are taken to be the independent variables and the parameters, respectively. The effects of the gain coefficient Γ and loss coefficient K on the stochastic resonance are also discussed. It is found that besides the presence of the standard form and the broad sense of stochastic resonance, the number of extrema in the curve of R versus K is reduced with the increase of the gain coefficient Γ.

  3. Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products.

    Science.gov (United States)

    Shorten, P R; Membré, J-M; Pleasants, A B; Kubaczka, M; Soboleva, T K

    2004-06-01

    The objective of this paper was to estimate and partition the variability in the microbial growth model parameters describing the growth of Erwinia carotovora on pasteurised and non-pasteurised vegetable juice from laboratory experiments performed under different temperature-varying conditions. We partitioned the model parameter variance and covariance components into effects due to temperature profile and replicate using a maximum likelihood technique. Temperature profile and replicate were treated as random effects and the food substrate was treated as a fixed effect. The replicate variance component was small indicating a high level of control in this experiment. Our analysis of the combined E. carotovora growth data sets used the Baranyi primary microbial growth model along with the Ratkowsky secondary growth model. The variability in the microbial growth parameters estimated from these microbial growth experiments is essential for predicting the mean and variance through time of the E. carotovora population size in a product supply chain and is the basis for microbiological risk assessment and food product shelf-life estimation. The variance partitioning made here also assists in the management of optimal product distribution networks by identifying elements of the supply chain contributing most to product variability. Copyright 2003 Elsevier B.V.
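
    The variance-partitioning step can be illustrated with a method-of-moments (ANOVA) estimator for a one-way random-effects layout. All numbers here are hypothetical, and the estimator is a simplified stand-in for the paper's maximum likelihood analysis with temperature profile and replicate as random effects.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical layout: a growth parameter measured under several temperature
# profiles (random effect), with replicates nested within each profile.
n_profiles, n_reps = 40, 6
sigma2_profile, sigma2_rep = 0.20, 0.02    # "true" variance components

profile_effects = rng.normal(0, np.sqrt(sigma2_profile), n_profiles)
y = (1.0 + profile_effects[:, None]
     + rng.normal(0, np.sqrt(sigma2_rep), (n_profiles, n_reps)))

# ANOVA (method-of-moments) estimates: the within-profile mean square estimates
# the replicate component; the between-profile mean square exceeds it by
# n_reps times the profile component.
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_profiles * (n_reps - 1))
msb = n_reps * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (n_profiles - 1)
var_rep = msw                               # replicate (within-profile) component
var_profile = max((msb - msw) / n_reps, 0.0)

print(f"profile component ~ {var_profile:.3f}, replicate component ~ {var_rep:.3f}")
```

    A small replicate component relative to the profile component corresponds to the "high level of control" the authors report for their experiment.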

  4. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  5. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  6. Transient analysis of a grid connected wind driven induction generator using a real-time simulation platform

    Energy Technology Data Exchange (ETDEWEB)

    Ouhrouche, Mohand [Department of Applied Sciences, University of Quebec at Chicoutimi, Quebec, G7H2B1 (Canada)

    2009-03-15

    Due to its simple construction, ruggedness and low cost, the induction generator driven by a wind turbine and feeding power to the grid appears to be an attractive solution to the problem of growing energy demand in the context of environmental issues. This paper investigates the integration of such a system into the main utility using the RT-Lab (a trademark of Opal-RT Technologies) software package running on a simple off-the-shelf PC. This real-time simulation platform is now adopted by many high-tech industries as a real-time laboratory package for rapid control prototyping and for Hardware-in-the-Loop applications. Real-time digital simulation results obtained during contingencies, such as islanding and unbalanced faults, are presented and analysed. (author)

  7. The effect of solvent relaxation time constants on free energy gap law for ultrafast charge recombination following photoinduced charge separation.

    Science.gov (United States)

    Mikhailova, Valentina A; Malykhin, Roman E; Ivanov, Anatoly I

    2018-05-16

    To elucidate the regularities inherent in the kinetics of ultrafast charge recombination following photoinduced charge separation in donor-acceptor dyads in solutions, simulations of the kinetics have been performed within the stochastic multichannel point-transition model. Increasing the solvent relaxation time scales has been shown to strongly alter the dependence of the charge recombination rate constant on the free energy gap. In slowly relaxing solvents the non-equilibrium charge recombination occurring in parallel with solvent relaxation is very effective, so that the charge recombination terminates at the non-equilibrium stage. This results in a crucial difference between the free energy gap laws for ultrafast charge recombination and thermal charge transfer. For thermal reactions the well-known Marcus bell-shaped dependence of the rate constant on the free energy gap is realized, while for ultrafast charge recombination only a descending branch is predicted in the whole area of the free energy gap exceeding 0.2 eV. From the available experimental data on the population kinetics of the second and first excited states for a series of Zn-porphyrin-imide dyads in toluene and tetrahydrofuran solutions, an effective rate constant of the charge recombination into the first excited state has been calculated. The obtained rate constant, although very high, is nearly invariable in the area of the charge recombination free energy gap from 0.2 to 0.6 eV, which supports the theoretical prediction.

  8. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with scaphoid fracture. All fractures with displacement of less than 1 mm treated conservatively were included, and ulnar variance was measured on standard posteroanterior wrist radiographs in all 95 patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to -0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between the groups with ulnar variance less than -1 mm and greater than -1 mm. Patients with ulnar variance less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89) with p<.007. Copyright © 2014 SECOT. Published by Elsevier España. All rights reserved.

  9. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
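
    The decomposition with respect to a single dyadic character is the law of total variance, Var(Y) = Var(E[Y|X]) + E[Var(Y|X)]; iterating it over the characters in a chosen order yields the orthogonal components described above. A sketch with simulated data (the variables are hypothetical; population-style variances are used so the identity holds exactly):

```python
import numpy as np

rng = np.random.default_rng(5)

# Dyadic character X (e.g. correct/incorrect answer to one question) and a
# numerical variable Y (e.g. total score) that partly depends on it.
n = 10_000
x = rng.integers(0, 2, n)
y = 5.0 * x + rng.normal(0, 2.0, n)

var_total = y.var()
cond_means = np.array([y[x == v].mean() for v in (0, 1)])
cond_vars = np.array([y[x == v].var() for v in (0, 1)])
weights = np.array([(x == v).mean() for v in (0, 1)])

# "Between" component: variance of the conditional means (differences of
# conditional means, squared and weighted). "Within": mean conditional variance.
explained = np.sum(weights * (cond_means - y.mean()) ** 2)
residual = np.sum(weights * cond_vars)
print(f"Var(Y) = {var_total:.3f} = {explained:.3f} (between) + {residual:.3f} (within)")
```

    Conditioning the residual term on the next character, and so on, produces the ordered orthogonal decomposition the paper proposes.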

  10. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
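
    The endogeneity point is easiest to see by computing the frontier explicitly: in the classic closed form for the unconstrained minimum-variance frontier, the weights depend on μ and Σ, so any change in those parameters moves the whole frontier. A sketch with hypothetical inputs (three assets, made-up moments):

```python
import numpy as np

# Hypothetical expected returns and covariance matrix of three risky assets.
mu = np.array([0.08, 0.12, 0.15])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

inv = np.linalg.inv(Sigma)
ones = np.ones(3)
A = ones @ inv @ ones
B = ones @ inv @ mu
C = mu @ inv @ mu
D = A * C - B ** 2

def frontier_weights(r):
    """Weights of the minimum-variance portfolio with expected return r.

    Classic two-fund form w = g + h*r; both g and h are built from mu and
    Sigma, which is exactly the endogeneity discussed above.
    """
    g = (C * (inv @ ones) - B * (inv @ mu)) / D
    h = (A * (inv @ mu) - B * (inv @ ones)) / D
    return g + h * r

w = frontier_weights(0.10)
print("weights:", np.round(w, 3), " sum:", round(w.sum(), 6), " return:", round(w @ mu, 6))
```

    Perturbing any entry of mu or Sigma and recomputing shows the frontier (and hence every beta derived from it) shifting, which is the effect the paper argues is obscured in textbooks.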

  11. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    Science.gov (United States)

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview on their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.

  12. Forensics of subhalo-stream encounters: the three phases of gap growth

    Science.gov (United States)

    Erkal, Denis; Belokurov, Vasily

    2015-06-01

    There is hope to discover dark matter subhaloes free of stars (predicted by the current theory of structure formation) by observing gaps they produce in tidal streams. In fact, this is the most promising technique for dark substructure detection and characterization, as such gaps grow with time, magnifying small perturbations into clear signatures observable by ongoing and planned Galaxy surveys. To facilitate such future inference, we develop a comprehensive framework for studies of the growth of the stream density perturbations. Starting with simple assumptions and restricting to streams on circular orbits, we derive analytic formulae that describe the evolution of all gap properties (size, density contrast, etc.) at all times. We uncover complex, previously unnoticed behaviour, with the stream initially forming a density enhancement near the subhalo impact point. Shortly after, a gap forms due to the relative change in period induced by the subhalo's passage. There is an intermediate regime where the gap grows linearly in time. At late times, the particles in the stream overtake each other, forming caustics, and the gap grows like √t. In addition to the secular growth, we find that the gap oscillates as it grows due to epicyclic motion. We compare this analytic model to N-body simulations and find an impressive level of agreement. Importantly, when analysing the observation of a single gap we find a large degeneracy between the subhalo mass, the impact geometry and kinematics, the host potential, and the time since flyby.

  13. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  14. Time-resolved PIV measurements of the atmospheric boundary layer over wind-driven surface waves

    Science.gov (United States)

    Markfort, Corey; Stegmeir, Matt

    2017-11-01

    Complex interactions at the air-water interface result in two-way coupling between wind-driven surface waves and the atmospheric boundary layer (ABL). Turbulence generated at the surface plays an important role in aquatic ecology and biogeochemistry, exchange of gases such as oxygen and carbon dioxide, and it is important for the transfer of energy and controlling evaporation. Energy transferred from the ABL promotes the generation and maintenance of waves. A fraction of the energy is transferred to the surface mixed layer through the generation of turbulence. Energy is also transferred back to the ABL by waves. There is a need to quantify the details of the coupled boundary layers of the air-water system to better understand how turbulence plays a role in the interactions. We employ time-resolved PIV to measure the detailed structure of the air and water boundary layers under varying wind and wave conditions in the newly developed IIHR Boundary-Layer Wind-Wave Tunnel. The facility combines a 30-m long recirculating water channel with an open-return boundary layer wind tunnel. A thick turbulent boundary layer is developed in the 1 m high air channel, over the water surface, allowing for the study of boundary layer turbulence interacting with a wind-driven wave field.

  15. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of two statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity for selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting ...
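
    The two-step approach compared in the study can be caricatured in a few lines: fit the mean model first, then treat a log-squared-residual summary per sire as the response for the dispersion analysis. The simulation below is only a schematic stand-in (no pedigree, fixed effects, or DHGLM; all parameter values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 200 sires x 100 offspring, with sire-specific residual
# standard deviation, i.e. genetic heterogeneity of residual variance.
n_sires, n_off = 200, 100
sire_mean = rng.normal(0.0, 5.0, n_sires)            # sire effect on the mean
sire_logsd = rng.normal(np.log(10.0), 0.1, n_sires)  # sire effect on dispersion
y = sire_mean[:, None] + np.exp(sire_logsd)[:, None] * rng.normal(size=(n_sires, n_off))

# Step 1: fit the mean model (here simply the within-sire mean) -> residuals.
resid = y - y.mean(axis=1, keepdims=True)

# Step 2: a dispersion score per sire from the log mean squared residual.
log_msr = np.log((resid ** 2).mean(axis=1))

# Sires with genuinely larger residual variance get larger dispersion scores.
corr = np.corrcoef(log_msr, 2.0 * sire_logsd)[0, 1]
```

    The correlation between the dispersion score and the true log residual variance is well below one, illustrating why large sire families are needed for accurate dispersion EBVs.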

  16. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
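
    The two-step recipe in the abstract — propagate instrument errors into a variance relation, then substitute measured values — can be sketched for the simplest case of independent terms (hypothetical numbers; MAVARIC itself also handles correlated transfer terms and covariances):

```python
import math

def term_variance(conc, mass, rsd_conc, rsd_mass):
    """Variance of one balance term T = concentration * bulk mass, assuming
    independent relative standard deviations on the two measurements."""
    t = conc * mass
    return t * t * (rsd_conc ** 2 + rsd_mass ** 2)

def balance_variance(terms):
    """Variance of the materials balance as a sum of independent terms
    (beginning inventory + receipts - shipments - ending inventory)."""
    return sum(term_variance(*t) for t in terms)

# Invented example: SNM concentration (kg/kg), bulk mass (kg), 1%/0.5% RSDs.
terms = [(0.05, 1000.0, 0.01, 0.005),
         (0.05,  400.0, 0.01, 0.005),
         (0.05,  350.0, 0.01, 0.005),
         (0.05, 1050.0, 0.01, 0.005)]
sigma_mb = math.sqrt(balance_variance(terms))   # sets the detection sensitivity scale
```

    The detection sensitivity then scales with this standard deviation (e.g. a loss of several sigma is detectable at a given false-alarm rate).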

  17. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
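
    Without covariates, the joint test LRT(MV) can be sketched as a likelihood-ratio comparison of a pooled normal fit against group-specific normal fits (a schematic stand-in for the authors' full model, which also handles covariates and couples with the parametric bootstrap):

```python
import numpy as np
from scipy import stats

def lrt_mean_variance(y, group):
    """Joint likelihood-ratio test for mean and variance heterogeneity across
    groups, normal model with MLE variances (each group needs >= 2 values)."""
    y = np.asarray(y, float)
    group = np.asarray(group)
    # Null: one normal for everyone.
    ll0 = stats.norm.logpdf(y, y.mean(), y.std()).sum()
    # Alternative: group-specific mean and variance.
    groups = np.unique(group)
    ll1 = 0.0
    for g in groups:
        yg = y[group == g]
        ll1 += stats.norm.logpdf(yg, yg.mean(), yg.std()).sum()
    lrt = 2.0 * (ll1 - ll0)
    df = 2 * (len(groups) - 1)   # one extra mean + one extra variance per group
    return lrt, stats.chi2.sf(lrt, df)
```

    Dropping the group-specific means (or variances) from the alternative yields the variance-only (or mean-only) variants, with correspondingly fewer degrees of freedom.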

  18. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation; the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined

  19. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    Science.gov (United States)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Stock investors also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio; a portfolio composed of several stocks is intended to achieve an optimal investment composition. This paper discusses mean-variance portfolio optimization for stocks with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is modeled with Autoregressive Moving Average (ARMA) models, while the non-constant volatility is modeled with Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several Islamic stocks in Indonesia, yielding the investment proportion for each stock analysed.
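
    The Lagrangian-multiplier step has a closed form once the (here non-constant, ARMA/GARCH-estimated) mean vector and covariance matrix are in hand. A sketch with invented inputs:

```python
import numpy as np

def mv_weights(mu, sigma, risk_aversion=3.0):
    """Closed-form mean-variance weights: maximize w'mu - (a/2) w'Sigma w
    subject to sum(w) = 1, via a Lagrange multiplier."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    w_unc = inv @ mu / risk_aversion                      # unconstrained optimum
    gamma = (1.0 - ones @ w_unc) / (ones @ inv @ ones)    # multiplier / a
    return w_unc + gamma * (inv @ ones)

# Illustrative inputs; in the paper mu would come from ARMA forecasts and
# sigma from GARCH conditional variances.
mu = np.array([0.10, 0.08, 0.12])
sigma = np.array([[0.040, 0.010, 0.000],
                  [0.010, 0.030, 0.010],
                  [0.000, 0.010, 0.050]])
w = mv_weights(mu, sigma)
```

    The first-order condition mu - a·Sigma·w = gamma·1 holds at the optimum, which is a convenient correctness check.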

  20. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
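
    The VRP definition in the abstract — risk-neutral minus statistical expectation of return variation — reduces to a one-line computation once an implied-volatility proxy and a realized-variance estimate are chosen (a simplified sketch; the paper's construction is more careful):

```python
import numpy as np

def realized_variance(returns, periods_per_year=252):
    """Annualized realized variance from a series of log returns."""
    r = np.asarray(returns, float)
    return float(np.sum(r ** 2)) * periods_per_year / r.size

def variance_risk_premium(implied_vol, returns):
    """VRP = risk-neutral expected variance (squared annualized implied vol,
    e.g. a VIX-style index) minus the statistical (realized) variance."""
    return implied_vol ** 2 - realized_variance(returns)
```

    A positive VRP is the typical empirical finding: option-implied variance exceeds subsequently realized variance.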

  1. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    Science.gov (United States)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly referred to as a potential integration tool, given the fact that standardized metadata objects can provide useful information about the specifics of unknown information systems with which one is interested in communicating, using an approach commonly called "model-based integration". This paper presents the result of research regarding possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of a real-time ETL support.

  2. Adjoint-based global variance reduction approach for reactor analysis problems

    International Nuclear Information System (INIS)

    Zhang, Qiong; Abdel-Khalik, Hany S.

    2011-01-01

    A new variant of a hybrid Monte Carlo-Deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted by the Subspace approach, optimizes the selection of the weight windows for reactor analysis problems where detailed properties of all fuel assemblies are required everywhere in the reactor core. Like the FW-CADIS approach, the Subspace approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. In contrast to FW-CADIS, the Subspace approach identifies the correlations between weight window maps to minimize the computational time required for global variance reduction, i.e., when the solution is required everywhere in the phase space. The correlations are employed to reduce the number of maps required to achieve the same level of variance reduction that would be obtained with single-response maps. Numerical experiments, serving as proof of principle, are presented to compare the Subspace and FW-CADIS approaches in terms of the global reduction in standard deviation. (author)

  3. Variance components for body weight in Japanese quails (Coturnix japonica

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH and at 7 (BW07, 14 (BW14, 21 (BW21 and 28 days of age (BW28 of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model. Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
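
    As a quick consistency check, the quoted heritabilities follow from the posterior means of the variance components as the additive fraction of phenotypic variance (values transcribed from the abstract; sub-0.01 discrepancies are rounding):

```python
# h^2 = Va / (Va + Vm + Ve) at hatch and at 7, 14, 21, 28 days of age.
va = [0.15, 4.18, 14.62, 27.18, 32.68]   # additive genetic variance
vm = [0.23, 1.29, 2.76, 4.12, 5.16]      # maternal environment variance
ve = [0.084, 6.43, 22.66, 31.21, 30.85]  # residual variance

h2 = [a / (a + m + e) for a, m, e in zip(va, vm, ve)]
m2 = [m / (a + m + e) for a, m, e in zip(va, vm, ve)]  # maternal proportion
```

    The computed h² values track the quoted 0.33-0.47 to within rounding, and the maternal proportion at hatch comes out near the quoted 0.50 before dropping sharply.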

  4. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation of the data collected. Almost without exception the current-layer geometry is inferred by supposing that the current-carrying layer is locally planar. Only after this geometry is ``determined'' can the various quantities predicted by theory be calculated, the precision of ``measured'' reconnection rates assessed, and the quantitative support for or against component reconnection evaluated. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline, various variance techniques are used to infer current-sheet normals based on the data observed along that worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, and errors of 40-90 degrees are also implied, especially for worldlines that make increasingly oblique angles to the true current-sheet normal. These mistaken ``inferences'' are traceable to the degree to which the collected data sample 2-D variations within these layers. While it is not surprising that these variance techniques fail in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae previously used to estimate the uncertainty of normal choices via error cones. Frequently the absolute error, which depends on the worldline path, can be 10 times the random error that the formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline.
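
    The minimum-variance technique being stress-tested here is, at its core, an eigen-decomposition of the field covariance matrix along the worldline. A minimal sketch (illustrative only; it embodies exactly the planarity assumption the paper criticizes):

```python
import numpy as np

def minimum_variance_normal(b):
    """Minimum-variance estimate of a (planar) layer normal from an N x 3
    time series of field vectors: the eigenvector of the covariance matrix
    belonging to the smallest eigenvalue."""
    cov = np.cov(np.asarray(b, float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    return eigvecs[:, 0]                     # direction of minimum variance
```

    The traditional error cone is built from the eigenvalue ratios of this same covariance matrix, which is precisely why, as the abstract notes, it cannot signal errors caused by genuine 2-D structure.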

  5. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  6. Influence of Interface Gap on the Stress Behaviour of Smart Single Lap Joints Under Time Harmonic Load

    Directory of Open Access Journals (Sweden)

    Ivanova Jordanka

    2017-06-01

    Full Text Available Adhesive joints are frequently used in different composite structures due to their improved mechanical performance and a better understanding of their failure mechanics. Such structures are applied in aerospace and high-technology components. The authors developed and applied a modified shear lag analysis to investigate the hygrothermal-piezoelectric response of a smart single lap joint at environmental conditions (with/without an interface gap along the overlap zone) and under dynamic time-harmonic mechanical and electric loads. The key question is the appearance of possible delamination along the interface. As illustrative examples, the analytical closed-form solution for the shear and axial stress response of the structure, as well as the interface debond length, including the influence of mechanical, piezoelectric and thermal characteristics and frequencies, is presented and discussed. All results are presented in figures. The comparison of the shear stress and electric fields for both cases of the overlap zone (continuous or with a gap) is also shown in figures and discussed.

  7. Data-Driven Cyber-Physical Systems via Real-Time Stream Analytics and Machine Learning

    OpenAIRE

    Akkaya, Ilge

    2016-01-01

    Emerging distributed cyber-physical systems (CPSs) integrate a wide range of heterogeneous components that need to be orchestrated in a dynamic environment. While model-based techniques are commonly used in CPS design, they become inadequate in capturing the complexity as systems become larger and extremely dynamic. The adaptive nature of the systems makes data-driven approaches highly desirable, if not necessary. Traditionally, data-driven systems utilize large volumes of static data sets t...

  8. Coherent states of the driven Rydberg atom: Quantum-classical correspondence of periodically driven systems

    International Nuclear Information System (INIS)

    Vela-Arevalo, Luz V.; Fox, Ronald F.

    2005-01-01

    A methodology to calculate generalized coherent states for a periodically driven system is presented. We study wave packets constructed as a linear combination of suitable Floquet states of the three-dimensional Rydberg atom in a microwave field. The driven coherent states show classical space localization, spreading, and revivals and remain localized along the classical trajectory. The microwave strength and frequency have a great effect in the localization of Floquet states, since quasienergy avoided crossings produce delocalization of the Floquet states, showing that tuning of the parameters is very important. Using wavelet-based time-frequency analysis, the classical phase-space structure is determined, which allows us to show that the driven coherent state is located in a large regular region in which the z coordinate is in resonance with the external field. The expectation values of the wave packet show that the driven coherent state evolves along the classical trajectory

  9. Where do immigrants fare worse? Modeling workplace wage gap variation with longitudinal employer-employee data.

    Science.gov (United States)

    Tomaskovic-Devey, Donald; Hällsten, Martin; Avent-Holt, Dustin

    2015-01-01

    The authors propose a strategy for observing and explaining workplace variance in categorically linked inequalities. Using Swedish economy-wide linked employer-employee panel data, the authors examine variation in workplace wage inequalities between native Swedes and non-Western immigrants. Consistent with relational inequality theory, the authors' findings are that immigrant-native wage gaps vary dramatically across workplaces, even net of strong human capital controls. The authors also find that, net of observed and fixed-effect controls for individual traits, workplace immigrant-native wage gaps decline with increased workplace immigrant employment and managerial representation and increase when job segregation rises. These results are stronger in high-inequality workplaces and for white-collar employees: contexts in which one expects status-based claims on organizational resources, the central causal mechanism identified by relational inequality theory, to be stronger. The authors conclude that workplace variation in the non-Western immigrant-native wage gaps is contingent on organizational variation in the relative power of groups and the institutional context in which that power is exercised.

  10. Bridging the Gap: Ideas for water sustainability in the western United States

    Science.gov (United States)

    Tidwell, V. C.; Passell, H. D.; Roach, J. D.

    2012-12-01

    Incremental improvements in water sustainability in the western U.S. may not be able to close the growing gap between increasing freshwater demand, climate driven variability in freshwater supply, and growing environmental consciousness. Incremental improvements include municipal conservation, improvements to irrigation technologies, desalination, water leasing, and others. These measures, as manifest today in the western U.S., are successful in themselves but limited in their ability to solve long term water scarcity issues. Examples are plainly evident and range from the steady and long term decline of important aquifers and their projected inability to provide water for future agricultural irrigation, projected declines in states' abilities to meet legal water delivery obligations between states, projected shortages of water for energy production, and others. In many cases, measures that can close the water scarcity gap have been identified, but often these solutions simply shift the gap from water to some other sector, e.g., economics. Saline, brackish or produced water purification, for example, could help solve western water shortages in some areas, but will be extremely expensive, and so shift the gap from water to economics. Transfers of water out of agriculture could help close the water scarcity gap in other areas; however, loss of agriculture will shift the gap to regional food security. All these gaps, whether in water, economics, food security, or other sectors, will have a negative impact on the western states. Narrowing these future gaps requires both technical and policy solutions as well as tools to understand the tradeoffs. Here we discuss several examples from across the western U.S. that span differing scales and decision spaces. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear

  11. Shutdown dose rate analysis with CAD geometry, Cartesian/tetrahedral mesh, and advanced variance reduction

    International Nuclear Information System (INIS)

    Biondo, Elliott D.; Davis, Andrew; Wilson, Paul P.H.

    2016-01-01

    Highlights: • A CAD-based shutdown dose rate analysis workflow has been implemented. • Cartesian and superimposed tetrahedral mesh are fully supported. • Biased and unbiased photon source sampling options are available. • Hybrid Monte Carlo/deterministic techniques accelerate photon transport. • The workflow has been validated with the FNG-ITER benchmark problem. - Abstract: In fusion energy systems (FES) high-energy neutrons born from burning plasma activate system components to form radionuclides. The biological dose rate that results from photons emitted by these radionuclides after shutdown—the shutdown dose rate (SDR)—must be quantified for maintenance planning. This can be done using the Rigorous Two-Step (R2S) method, which involves separate neutron and photon transport calculations, coupled by a nuclear inventory analysis code. The geometric complexity and highly attenuating configuration of FES motivates the use of CAD geometry and advanced variance reduction for this analysis. An R2S workflow has been created with the new capability of performing SDR analysis directly from CAD geometry with Cartesian or tetrahedral meshes and with biased photon source sampling, enabling the use of the Consistent Adjoint Driven Importance Sampling (CADIS) variance reduction technique. This workflow has been validated with the Frascati Neutron Generator (FNG)-ITER SDR benchmark using both Cartesian and tetrahedral meshes and both unbiased and biased photon source sampling. All results are within 20.4% of experimental values, which constitutes satisfactory agreement. Photon transport using CADIS is demonstrated to yield speedups as high as 8.5·10⁵ for problems using the FNG geometry.

  12. Real-time diagnostics of fast light ion beams accelerated by a sub-nanosecond laser

    Czech Academy of Sciences Publication Activity Database

    Margarone, Daniele; Krása, Josef; Picciotto, A.; Prokůpek, Jan

    2011-01-01

    Roč. 56, č. 2 (2011), s. 137-141 ISSN 0029-5922 R&D Projects: GA ČR(CZ) GAP205/11/1165 EU Projects: European Commission(XE) 212105 - ELI-PP Institutional research plan: CEZ:AV0Z10100523 Keywords : laser-driven acceleration * ion beams * real-time diagnostics Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.389, year: 2011 http://www.nukleonika.pl/www/back/full/vol56_2011/v56n2p137f.pdf

  13. Data Driven Economic Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Masoud Kheradmandi

    2018-04-01

    Full Text Available This manuscript addresses the problem of data-driven, model-based economic model predictive control (MPC) design. To this end, first, a data-driven Lyapunov-based MPC is designed, and shown to be capable of stabilizing a system at an unstable equilibrium point. The data-driven Lyapunov-based MPC utilizes a linear time invariant (LTI) model, cognizant of the fact that the training data, owing to the unstable nature of the equilibrium point, has to be obtained from closed-loop operation or experiments. Simulation results are first presented demonstrating closed-loop stability under the proposed data-driven Lyapunov-based MPC. The underlying data-driven model is then utilized as the basis to design an economic MPC. The economic improvements yielded by the proposed method are illustrated through simulations on a nonlinear chemical process system example.
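
    The data-driven LTI model at the heart of the proposed MPC can be identified from closed-loop trajectories by ordinary least squares. A sketch under the assumption of noise-free, persistently excited data (function name invented):

```python
import numpy as np

def fit_lti(x, u):
    """Least-squares identification of x_{k+1} = A x_k + B u_k.
    x: (N+1, nx) state trajectory, u: (N, nu) input trajectory."""
    nx = x.shape[1]
    phi = np.hstack([x[:-1], u])                       # regressors [x_k, u_k]
    theta, *_ = np.linalg.lstsq(phi, x[1:], rcond=None)
    return theta[:nx].T, theta[nx:].T                  # A, B
```

    With noisy data one would regularize or use instrumental variables, since closed-loop correlation between input and noise biases plain least squares.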

  14. Data-driven strategies for robust forecast of continuous glucose monitoring time-series.

    Science.gov (United States)

    Fiorini, Samuele; Martini, Chiara; Malpassi, Davide; Cordera, Renzo; Maggi, Davide; Verri, Alessandro; Barla, Annalisa

    2017-07-01

    Over the past decade, continuous glucose monitoring (CGM) has proven to be a very resourceful tool for diabetes management. To date, CGM devices are employed for both retrospective and online applications. Their use allows better description of the patients' pathology as well as better control of the patients' level of glycemia. The analysis of CGM sensor data makes it possible to observe a wide range of metrics, such as the glycemic variability during the day or the amount of time spent below or above certain glycemic thresholds. However, due to the high variability of the glycemic signals among sensors and individuals, CGM data analysis is a non-trivial task. Standard signal filtering solutions fall short when an appropriate model personalization is not applied. State-of-the-art data-driven strategies for online CGM forecasting rely upon the use of recursive filters. Each time a new sample is collected, such models need to adjust their parameters in order to predict the next glycemic level. In this paper we aim at demonstrating that the problem of online CGM forecasting can be successfully tackled by personalized machine learning models that do not need to recursively update their parameters.
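
    A personalized model in the sense above is fit once per patient and then applied without per-sample parameter updates. A minimal sketch using a direct autoregressive least-squares predictor (function names, the window length, and the 6-step horizon are illustrative, not the authors' models):

```python
import numpy as np

def fit_ar_forecaster(series, order=6, horizon=6):
    """Personalized direct AR predictor: the last `order` CGM samples predict
    the sample `horizon` steps after the most recent one. Fit once per
    patient by least squares; no recursive updating afterwards."""
    y = np.asarray(series, float)
    n = len(y) - order - horizon + 1
    X = np.array([y[i:i + order] for i in range(n)])
    target = y[order + horizon - 1:]
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w

def forecast(w, recent_samples):
    """One-shot forecast from the most recent `order` samples."""
    return float(np.dot(w, recent_samples[-len(w):]))
```

    With 5-minute CGM sampling, a 6-step horizon corresponds to a 30-minute-ahead forecast, a common choice in the CGM literature.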

  15. Optimal design and control of solar driven air gap membrane distillation desalination systems

    International Nuclear Information System (INIS)

    Chen, Yih-Hang; Li, Yu-Wei; Chang, Hsuan

    2012-01-01

    Highlights: ► Air gap membrane distillation unit was used in the desalination plants. ► Aspen Custom Molder was used to simulate each unit of desalination plants. ► Design parameters were investigated to obtain the minimum total annual cost. ► The control structure was proposed to operate desalination plants all day long. -- Abstract: A solar heated membrane distillation desalination system is constructed of solar collectors and membrane distillation devices for increasing pure water productivity. This technically and economically feasible system is designed to use indirect solar heat to drive membrane distillation processes to overcome the unstable supply of solar radiation from sunrise to sunset. The solar heated membrane distillation desalination system in the present study consisted of hot water storage devices, heat exchangers, air gap membrane distillation units, and solar collectors. Aspen Custom Molder (ACM) software was used to model and simulate each unit and establish the cost function of a desalination plant. From design degree-of-freedom (DOF) analysis, ten design parameters were investigated to obtain the minimum total annual cost (TAC) with fixed pure water production rate. For a given solar energy density profile of typical summer weather, the minimal TAC per 1 m³ pure water production can be found at 500 W/m² by varying the solar energy intensity. Therefore, we proposed two modes for controlling the optimal design condition of the desalination plant: day and night. In order to widen the operability range of the plant, the sensitivity analysis was used to retrofit the original design point to lower the effluent temperature from the solar collector by increasing the hot water recycled stream. The simulation results show that the pure water production can be maintained at a very stable level whether in sunny or cloudy weather.

  16. Reactivity determination in accelerator driven nuclear reactors by statistics from neutron detectors (Feynman-Alpha Method)

    International Nuclear Information System (INIS)

    Ceder, M.

    2002-03-01

    The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on the measurement of the mean number and the variance of detector counts for different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator based source. Such a source has statistical properties that are different from those of a steady radioactive source. As one example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of Feynman-alpha method needs to be extended to such nonstationary sources. There are two ways of performing and evaluating such pulsed source experiments. One is to synchronise the detector time gate start with the beginning of an incoming pulse. The Feynman-alpha method has been elaborated for such a case recently. The other method can be called stochastic pulsing. It means that there is no synchronisation between the detector time gate start and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution to the Feynman-alpha formula from this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods. One is completely based on the use of the symbolic algebra code Mathematica, whereas the other is based on complex function techniques. Closed form solutions could be obtained by both methods
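
    For the stationary-source case the report starts from, the variance-to-mean ("Feynman-alpha") curve and the fitting step can be sketched as follows (synthetic data and invented parameter values; the stochastic-pulsing formulas derived in the report are more involved):

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_y(T, y_inf, alpha):
    """Feynman-alpha curve for a stationary source: variance-to-mean ratio
    minus one as a function of gate length T."""
    aT = alpha * T
    return y_inf * (1.0 - (1.0 - np.exp(-aT)) / aT)

# Synthetic variance-to-mean measurements for known parameters, fitted back.
T = np.linspace(1e-4, 0.1, 50)             # gate lengths (s)
rng = np.random.default_rng(4)
data = feynman_y(T, 2.5, 150.0) * (1.0 + 0.01 * rng.normal(size=T.size))
popt, _ = curve_fit(feynman_y, T, data, p0=[1.0, 100.0])
y_inf_hat, alpha_hat = popt
```

    The fitted alpha is the prompt-neutron decay constant, from which the subcritical reactivity is then inferred.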

  17. Reactivity determination in accelerator driven nuclear reactors by statistics from neutron detectors (Feynman-Alpha Method)

    Energy Technology Data Exchange (ETDEWEB)

    Ceder, M

    2002-03-01

    The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on measuring the mean number and the variance of detector counts over different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator-based source. Such a source has statistical properties different from those of a steady radioactive source. For example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of the Feynman-alpha method needs to be extended to such non-stationary sources. There are two ways of performing and evaluating such pulsed-source experiments. One is to synchronise the start of the detector time gate with the beginning of an incoming pulse; the Feynman-alpha method has recently been elaborated for this case. The other method can be called stochastic pulsing: there is no synchronisation between the start of the detector time gate and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution of the Feynman-alpha formula for this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods. One is based entirely on the symbolic algebra code Mathematica, whereas the other is based on complex function techniques. Closed-form solutions could be obtained by both methods.
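For the stationary-source case the abstract starts from, the variance-to-mean fit can be sketched as below. The curve Y(T) = Y_inf·(1 − (1 − e^{−αT})/(αT)) is the classic stationary Feynman-Y formula, not the pulsed-source extension derived in the report, and all numbers (gate lengths, α, Y_inf, noise level) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_y(gate_length, y_inf, alpha):
    """Stationary Feynman-Y curve: Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T))."""
    at = alpha * gate_length
    return y_inf * (1.0 - (1.0 - np.exp(-at)) / at)

def variance_to_mean(counts):
    """Variance-to-mean ratio minus one ('excess variance') of detector counts."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

# Synthetic demonstration: generate Y values from a known (Y_inf, alpha),
# add measurement noise, and recover alpha by least-squares curve fitting.
rng = np.random.default_rng(0)
gates = np.linspace(1e-4, 0.05, 40)        # gate lengths in seconds (hypothetical)
true_y_inf, true_alpha = 2.5, 250.0        # alpha in 1/s (hypothetical)
y_obs = feynman_y(gates, true_y_inf, true_alpha)
y_obs += rng.normal(scale=0.01, size=gates.size)

(fit_y_inf, fit_alpha), _ = curve_fit(feynman_y, gates, y_obs, p0=(1.0, 100.0))
```

The fitted `alpha` is the prompt-neutron decay constant from which the reactivity is inferred; for a Poisson (uncorrelated) count sequence, `variance_to_mean` is close to zero.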

  18. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data-generating process, i.e. one in the stable distribution class with finite mean but infinite variance (tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
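As a rough illustration of the setting the abstract describes, the sketch below simulates Pareto data with tail index α = 1.5 (finite mean, infinite variance; true Gini 1/(2α − 1) = 0.5) and examines the small-sample behaviour of the standard nonparametric plug-in Gini estimator. This is an assumption-laden toy experiment, not the authors' exact estimator or setup.

```python
import numpy as np

def sample_gini(x):
    """Plug-in Gini estimator from sorted values:
    G = 2*sum(i * x_(i)) / (n * sum(x)) - (n + 1)/n, with ranks i = 1..n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1.0) / n

# Pareto with x_m = 1 and tail index a in (1, 2): finite mean, infinite variance.
a = 1.5
true_g = 1.0 / (2.0 * a - 1.0)   # theoretical Gini = 0.5 for a = 1.5

rng = np.random.default_rng(42)
# np.random's pareto() draws Lomax samples; adding 1 gives classical Pareto(x_m=1).
estimates = [sample_gini(rng.pareto(a, 100) + 1.0) for _ in range(2000)]
mean_g = float(np.mean(estimates))
# Under infinite variance the plug-in estimator is, on average, biased downward.
```

Averaged over many replications of size n = 100, `mean_g` falls below the true value 0.5, illustrating the small-sample downward bias under fat tails.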

  19. Comparison of Post Stack Time Migration Using the Finite Difference and Kirchhoff Methods with Deconvolution Gap Parameters for 2D Land Seismic Data, Line “SRDA”

    OpenAIRE

    Dynza Anggary, Sheyza Rery; Danusaputro, Hernowo; Harmoko, Udi

    2015-01-01

    Post Stack Time Migration (Post-STM) with the finite difference method and the Kirchhoff method, with varying gap parameters for deconvolution after stack, was applied to 2D land seismic data at line “SRDA”. The purpose of this research was to process the seismic data to obtain a subsurface image with a high signal-to-noise ratio, to analyse how the gap parameter affects deconvolution after stack, and to determine the more appropriate of the two migration methods, finite differe...
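The role of the gap parameter in predictive deconvolution can be illustrated with a minimal single-trace sketch (not the processing flow used in the paper): a prediction filter is solved from the Toeplitz normal equations of the trace autocorrelation, `gap` sets the prediction distance, and `prewhitening` is a hypothetical stabilisation constant.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def gapped_predictive_decon(trace, filter_length, gap, prewhitening=0.001):
    """Predictive ('gap') deconvolution of a single trace.

    Solves the Toeplitz normal equations R f = r_gap for a prediction
    filter f, where r is the trace autocorrelation and r_gap its copy
    delayed by `gap` samples, then returns the prediction error
    trace[t] - sum_k f[k] * trace[t - gap - k]."""
    trace = np.asarray(trace, dtype=float)
    n = trace.size
    r = np.correlate(trace, trace, mode="full")[n - 1:]   # lags 0..n-1
    col = r[:filter_length].copy()
    col[0] *= 1.0 + prewhitening                          # stabilise the solve
    f = solve_toeplitz(col, r[gap:gap + filter_length])
    prediction = np.convolve(trace, f)[:n]
    error = trace.copy()
    error[gap:] -= prediction[:n - gap]
    return error

# Demonstration on a synthetic reverberatory trace: a multiple with a
# 3-sample period and strength 0.9. A gap shorter than the multiple period
# lets the filter remove the multiple while whitening the trace.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
for t in range(3, x.size):
    x[t] += 0.9 * x[t - 3]

out = gapped_predictive_decon(x, filter_length=30, gap=2)
```

A larger gap preserves more of the wavelet's early autocorrelation (less aggressive whitening); a short gap approaches spiking deconvolution.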

  20. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling method, bias correction, extreme value method, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from whether the annual maxima or the peak-over-threshold method was applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and that variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as on hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events in the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
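The annual-maxima branch of such an extreme value analysis can be sketched as follows, using SciPy's GEV distribution on synthetic data (all numbers hypothetical): the T-year return level is the quantile at non-exceedance probability 1 − 1/T, and refitting on resampled series would expose the fitting variance that, as the abstract notes, grows with return period.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "annual maxima" series standing in for forecasted streamflow
# maxima (units m^3/s, values hypothetical). In scipy's convention c < 0
# gives the heavy-tailed (Frechet-type) GEV.
rng = np.random.default_rng(7)
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60,
                               random_state=rng)

# Block-maxima approach: fit a GEV distribution to the annual maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# T-year return level = GEV quantile with non-exceedance probability 1 - 1/T.
return_periods = [2, 20, 100]
levels = {T: genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
          for T in return_periods}
```

Because the quantile function is monotone, the 100-year level always exceeds the 20-year level, which exceeds the 2-year level; the uncertainty of each level, however, widens with T since it depends ever more strongly on the fitted tail shape.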