WorldWideScience

Sample records for time variable model

  1. Verification of models for ballistic movement time and endpoint variability.

    Science.gov (United States)

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured movement times and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicted more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable errors and 88.3% of aiming-variable errors. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting the end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smartphone. The models allow better design of aiming tasks, for example, button sizes on mobile phones for different user populations.
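
    The square-root relation between movement time and amplitude in Gan and Hoffmann's ballistic model (MT = a + b*sqrt(A), as cited above) can be fitted with a one-line least-squares regression on the transformed amplitude. A minimal sketch, assuming that square-root form; the function name and synthetic data are illustrative only:

```python
import math

def fit_ballistic_mt(amplitudes, times):
    """Least-squares fit of the ballistic movement time model MT = a + b*sqrt(A)."""
    x = [math.sqrt(A) for A in amplitudes]
    n = len(x)
    mx, mt = sum(x) / n, sum(times) / n
    b = (sum((xi - mx) * (ti - mt) for xi, ti in zip(x, times))
         / sum((xi - mx) ** 2 for xi in x))
    a = mt - b * mx
    return a, b

# Synthetic data generated from MT = 50 + 20*sqrt(A) should be recovered exactly.
amps = [10.0, 40.0, 90.0, 160.0, 250.0]
mts = [50.0 + 20.0 * math.sqrt(A) for A in amps]
a, b = fit_ballistic_mt(amps, mts)
```

    The same fit applied to measured movement times would give the per-participant coefficients whose variance explained is reported in the abstract.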

  2. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
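
    The structure function used above to combine variability measurements is straightforward to compute from an observed light curve. A minimal sketch, assuming the common first-order definition SF(lag) as the root mean square magnitude difference over epoch pairs separated by approximately that lag; the function name and tolerance are illustrative:

```python
import math

def structure_function(times, mags, lag, tol=0.5):
    """First-order SF: sqrt of the mean squared magnitude difference over all
    epoch pairs whose time separation is within tol of the requested lag."""
    diffs = [(mags[j] - mags[i]) ** 2
             for i in range(len(times))
             for j in range(i + 1, len(times))
             if abs((times[j] - times[i]) - lag) < tol]
    return math.sqrt(sum(diffs) / len(diffs)) if diffs else float("nan")

# Sanity check on a light curve that brightens linearly by 0.01 mag per epoch:
times = list(range(100))
mags = [0.01 * t for t in times]
sf10 = structure_function(times, mags, lag=10.0)
```

    Evaluating this over a grid of lags, from days to Gyr in the rest frame, yields the SF plot the framework is built on.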

  3. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods, however, have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods, and we illustrate their use with data from a study of childhood wheezing. © The Author(s) 2016.
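
    The shrinkage-and-selection idea behind the proposed methods can be illustrated on a plain linear model rather than the cure-model likelihood used in the paper. A minimal coordinate-descent LASSO sketch (names and data are illustrative) showing how the l1 penalty zeroes out an irrelevant coefficient:

```python
def soft_threshold(rho, lam):
    """Proximal operator of the l1 penalty."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for min_w 0.5*||y - X w||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual (feature j excluded)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                      for i in range(n))
            w[j] = soft_threshold(rho, lam) / sum(X[i][j] ** 2 for i in range(n))
    return w

# y depends only on the first feature; the l1 penalty should zero out the second.
X = [[1.0, 1.0], [2.0, -1.0], [3.0, 1.0], [4.0, -1.0]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_cd(X, y, lam=0.1)
```

    In the paper the same penalty is applied to the cure-model likelihood instead of a squared-error loss, but the selection mechanism is this soft-thresholding step.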

  4. Variable slip wind generator modeling for real-time simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ

    2006-07-01

    A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable rotor resistance was synthesized by an external resistor in parallel with a diode rectifier; a forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power system elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided, and the model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. The waveforms showed good agreement, demonstrating the conformity of the real-time and off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was thus demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study compares five imputation methods for handling missing values. Second, we identify the key variables via factor analysis and then sequentially remove unimportant variables via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, applied with the selected variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection helps the five forecasting methods used here improve their forecasting capability.
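
    The first two foci, filling missing values and screening predictors, can be sketched in a few lines. A minimal illustration, assuming mean imputation and correlation-based ranking as simple stand-ins for the five imputation methods and the factor-analysis step described above; all names and data are hypothetical:

```python
import math

def impute_mean(column):
    """Fill missing entries (None) with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_top_k(features, target, k):
    """Keep the k predictors most correlated (in absolute value) with the target."""
    ranked = sorted(features, key=lambda name: abs(pearson(features[name], target)),
                    reverse=True)
    return ranked[:k]

rainfall = impute_mean([3.0, None, 5.0, 4.0])
features = {"rain": [1.0, 2.0, 3.0, 4.0], "noise": [5.0, -1.0, 2.0, 0.0]}
selected = select_top_k(features, target=[10.0, 20.0, 30.0, 40.0], k=1)
```

    The selected columns would then feed the Random Forest (or any of the listed forecasters) as its reduced input set.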

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study compares five imputation methods for handling missing values. Second, we identify the key variables via factor analysis and then sequentially remove unimportant variables via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, applied with the selected variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection helps the five forecasting methods used here improve their forecasting capability.

  7. Electrical Activity in a Time-Delay Four-Variable Neuron Model under Electromagnetic Induction

    Directory of Open Access Journals (Sweden)

    Keming Tang

    2017-11-01

    To investigate the effect of electromagnetic induction on the electrical activity of a neuron, a magnetic flux variable is used to extend the Hindmarsh–Rose neuron model. At the same time, because signal propagation between neurons, or even within one neuron, involves time-delay, it is important to study the role of time-delay in regulating the electrical activity of the neuron. To this end, a four-variable neuron model is proposed to investigate the effects of electromagnetic induction and time-delay. Simulation results suggest that the proposed neuron model can show multiple modes of electrical activity, depending on the time-delay and the external forcing current. This means that a suitable discharge mode can be obtained by selecting the time-delay or the external forcing current, which could be helpful for further investigation of the effects of electromagnetic radiation on biological neuronal systems.
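
    The numerical ingredient a time-delayed model needs is a history buffer: the derivative at time t depends on the state at t - tau. This can be illustrated on a much simpler one-variable system than the four-variable model above. A minimal Euler scheme for the delayed logistic equation x'(t) = r*x(t)*(1 - x(t - tau)); the equation and parameters are illustrative, not those of the paper:

```python
def delayed_logistic(r=0.5, tau=1.0, dt=0.01, t_end=60.0, x0=0.5):
    """Euler integration of the delayed logistic equation
    x'(t) = r * x(t) * (1 - x(t - tau)), using a buffer of past states."""
    delay_steps = int(round(tau / dt))
    xs = [x0] * (delay_steps + 1)        # constant history on [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        x, x_delayed = xs[-1], xs[-1 - delay_steps]
        xs.append(x + dt * r * x * (1.0 - x_delayed))
    return xs

# With r*tau = 0.5 < pi/2 the delayed feedback is stable and x settles at 1;
# pushing r*tau past pi/2 would destabilize the equilibrium into oscillations.
xs = delayed_logistic()
```

    The mode switching reported in the abstract is the same phenomenon in a higher-dimensional setting: varying the delay moves the system across such stability boundaries.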

  8. Modelling a variable valve timing spark ignition engine using different neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beham, M. [BMW AG, Munich (Germany); Yu, D.L. [John Moores University, Liverpool (United Kingdom). Control Systems Research Group

    2004-10-01

    In this paper different neural networks (NN) are compared for modelling a variable valve timing spark-ignition (VVT SI) engine. The overall system is divided for each output into five neural multi-input single output (MISO) subsystems. Three kinds of NN, multilayer Perceptron (MLP), pseudo-linear radial basis function (PLRBF), and local linear model tree (LOLIMOT) networks, are used to model each subsystem. Real data were collected when the engine was under different operating conditions and these data are used in training and validation of the developed neural models. The obtained models are finally tested in a real-time online model configuration on the test bench. The neural models run independently of the engine in parallel mode. The model outputs are compared with process output and compared among different models. These models performed well and can be used in the model-based engine control and optimization, and for hardware in the loop systems. (author)

  9. The first-passage time distribution for the diffusion model with variable drift

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, Miriam; Gondan, Matthias

    2017-01-01

    The Ratcliff diffusion model is now arguably the most widely applied model for response time data. Its major advantage is its description of both response times and the probabilities for correct as well as incorrect responses. The model assumes a Wiener process with drift between two constant boundaries, with the drift rate varying across trials. This extra flexibility allows accounting for slow errors that often occur in response time experiments. So far, the predicted response time distributions were obtained by numerical evaluation, as analytical solutions were not available. Here, we present an analytical expression for the cumulative first-passage time distribution in the diffusion model with normally distributed trial-to-trial variability in the drift. The solution is obtained with predefined precision, and its evaluation turns out to be extremely fast.
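
    The first-passage process described here is easy to approximate by Monte Carlo, which is also a handy cross-check for any analytical expression. A minimal sketch of a Wiener process between two constant boundaries with normally distributed trial-to-trial drift; all parameter values are illustrative:

```python
import math
import random

def simulate_ddm(v_mean, v_sd, a, z, sigma=1.0, dt=0.001, n_trials=2000, seed=1):
    """Monte Carlo first-passage of a Wiener process between boundaries 0 and a,
    with the drift redrawn each trial (normally distributed variability)."""
    rng = random.Random(seed)
    sqdt = math.sqrt(dt)
    upper_hits, rts = 0, []
    for _ in range(n_trials):
        v = rng.gauss(v_mean, v_sd)      # trial-specific drift
        x, t = z, 0.0
        while 0.0 < x < a and t < 10.0:  # 10 s cap as a safety net
            x += v * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
            t += dt
        rts.append(t)
        if x >= a:
            upper_hits += 1
    return upper_hits / n_trials, sum(rts) / len(rts)

# Strong positive drift: most trials should terminate at the upper boundary.
p_upper, mean_rt = simulate_ddm(v_mean=3.0, v_sd=0.5, a=1.0, z=0.5)
```

    An analytical CDF like the one the paper derives replaces thousands of such simulated trials with a single fast evaluation.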

  10. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  11. Three-Factor Market-Timing Models with Fama and French's Spread Variables

    Directory of Open Access Journals (Sweden)

    Joanna Olbryś

    2010-01-01

    The traditional performance measurement literature has attempted to distinguish security selection, or stock-picking ability, from market-timing, or the ability to predict overall market returns. However, the literature finds that it is not easy to separate ability into such dichotomous categories. Some researchers have developed models that allow the decomposition of manager performance into market-timing and selectivity skills. The main goal of this paper is to present modified versions of classical market-timing models with Fama and French's spread variables SMB and HML, in the case of Polish equity mutual funds.

  12. A stochastic fractional dynamics model of space-time variability of rain

    Science.gov (United States)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on the Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  13. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    Science.gov (United States)

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
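
    Once pathogen load is assumed to decay during depuration, the minimum depuration time to reach a risk-management level follows directly from the decay law. A sketch assuming simple first-order decay N(t) = N0*exp(-k*t); the rate constant and load values are illustrative, not values from the paper:

```python
import math

def min_depuration_time(initial_load, target_load, decay_rate):
    """Hours of depuration needed for first-order decay N(t) = N0*exp(-k*t)
    to bring the pathogen load down to the target level."""
    if initial_load <= target_load:
        return 0.0   # already within the risk-management level
    return math.log(initial_load / target_load) / decay_rate

# Illustrative numbers: a tenfold reduction with k chosen so that it takes 42 h,
# the minimum depuration time for class B harvest sites mentioned above.
k = math.log(10.0) / 42.0
t_min = min_depuration_time(1000.0, 100.0, k)
```

    The paper's model layers the initial between-animal distribution of loads on top of this decay step, so its predicted times apply to a population rather than a single shellfish.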

  14. An Epidemic Model of Computer Worms with Time Delay and Variable Infection Rate

    Directory of Open Access Journals (Sweden)

    Yu Yao

    2018-01-01

    With the rapid development of the Internet, network security issues have become increasingly serious. Temporary patches are put on infectious hosts, but these may occasionally lose efficacy, which leads to a time delay when vaccinated hosts change back to susceptible hosts. On the other hand, worm infection is usually a nonlinear process, so to reflect the actual situation a variable infection rate is introduced to describe the spread process of worms. Based on these aspects, we propose a time-delayed worm propagation model with variable infection rate. The existence condition and the stability of the positive equilibrium are then derived. Due to the existence of the time delay, the worm propagation system may be unstable and out of control. Moreover, the threshold τ0 of the Hopf bifurcation is obtained: the worm propagation system is stable if the time delay is less than τ0 and becomes unstable when the time delay exceeds τ0. In addition, numerical experiments have been performed, which match the conclusions we deduce. The numerical experiments also show that there exists a threshold in the parameter a, which implies that we should choose an appropriate infection rate β(t) to constrain worm prevalence. Finally, simulation experiments are carried out to prove the validity of our conclusions.
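
    The effect of a prevalence-dependent (variable) infection rate can be sketched with a discrete-time simulation. The saturating form beta = beta0*(1 - I/N)**eta below is an illustrative choice, not the β(t) of the paper, and the delay dynamics are omitted:

```python
def simulate_worm(N=10000, I0=10.0, beta0=0.5, gamma=0.1, eta=2, steps=300):
    """Discrete-time worm spread with a prevalence-dependent infection rate:
    beta falls as (1 - I/N)**eta, so growth slows as the worm saturates."""
    I, history = float(I0), [float(I0)]
    for _ in range(steps):
        beta = beta0 * (1.0 - I / N) ** eta        # variable infection rate
        I += beta * I * (N - I) / N - gamma * I    # new infections minus recoveries
        I = min(float(N), max(0.0, I))
        history.append(I)
    return history

counts = simulate_worm()
```

    With these parameters the infected count grows from the initial seed and settles at an interior equilibrium well below N, which is the kind of constrained prevalence the abstract argues an appropriate β(t) should produce.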

  15. Constructing the reduced dynamical models of interannual climate variability from spatial-distributed time series

    Science.gov (United States)

    Mukhin, Dmitry; Gavrilov, Andrey; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    We suggest a method for empirical forecast of climate dynamics based on the reconstruction of reduced dynamical models in the form of random dynamical systems [1,2] derived from observational time series. The construction of a proper embedding - the set of variables determining the phase space the model works in - is no doubt the most important step in such modeling, but this task is non-trivial due to the huge dimension of time series of typical climatic fields. An appropriate expansion of the observational time series is needed, yielding a number of principal components considered as phase variables that are efficient for the construction of a low-dimensional evolution operator. We emphasize two main features the reduced models should have for capturing the main dynamical properties of the system: (i) taking into account time-lagged teleconnections in the atmosphere-ocean system and (ii) reflecting the nonlinear nature of these teleconnections. In accordance with these principles, in this report we present a methodology that combines a new way of constructing an embedding by spatio-temporal data expansion with nonlinear model construction on the basis of artificial neural networks. The methodology is applied to NCEP/NCAR reanalysis data, including fields of sea level pressure, geopotential height, and wind speed, covering the Northern Hemisphere. Its efficiency for the interannual forecast of various climate phenomena, including ENSO, PDO, NAO and strong blocking event conditions over the mid-latitudes, is demonstrated. Also, we investigate the ability of the models to reproduce and predict the evolution of qualitative features of the dynamics, such as spectral peaks, critical transitions and statistics of extremes. This research was supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS). [1] Y. I. Molkov, E. M. Loskutov, D. N. Mukhin, and A. M. Feigin, "Random

  16. MODELING THE TIME VARIABILITY OF SDSS STRIPE 82 QUASARS AS A DAMPED RANDOM WALK

    International Nuclear Information System (INIS)

    MacLeod, C. L.; Ivezic, Z.; Bullock, E.; Kimball, A.; Sesar, B.; Westman, D.; Brooks, K.; Gibson, R.; Becker, A. C.; Kochanek, C. S.; Kozlowski, S.; Kelly, B.; De Vries, W. H.

    2010-01-01

    We model the time variability of ∼9000 spectroscopically confirmed quasars in SDSS Stripe 82 as a damped random walk (DRW). Using 2.7 million photometric measurements collected over 10 yr, we confirm the results of Kelly et al. and Kozlowski et al. that this model can explain quasar light curves at an impressive fidelity level (0.01-0.02 mag). The DRW model provides a simple, fast (O(N) for N data points), and powerful statistical description of quasar light curves in terms of a characteristic timescale (τ) and an asymptotic rms variability on long timescales (SF∞). We searched for correlations between these two variability parameters and physical parameters such as luminosity, black hole mass, and rest-frame wavelength. Our analysis shows SF∞ to increase with decreasing luminosity and rest-frame wavelength, as observed previously, and without a correlation with redshift. We find a correlation between SF∞ and black hole mass with a power-law index of 0.18 ± 0.03, independent of the anti-correlation with luminosity. We find that τ increases with increasing wavelength with a power-law index of 0.17, remains nearly constant with redshift and luminosity, and increases with increasing black hole mass with a power-law index of 0.21 ± 0.07. The amplitude of variability is anti-correlated with the Eddington ratio, which suggests a scenario where optical fluctuations are tied to variations in the accretion rate. However, we find an additional dependence on luminosity and/or black hole mass that cannot be explained by the trend with Eddington ratio. The radio-loudest quasars have systematically larger variability amplitudes by about 30%, when corrected for the other observed trends, while the distribution of their characteristic timescale is indistinguishable from that of the full sample. We do not detect any statistically robust differences in the characteristic timescale and variability amplitude between the full sample and the small subsample of quasars detected
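
    A damped random walk as parameterized above can be simulated exactly with the standard Ornstein-Uhlenbeck recursion, where τ sets the damping and SF∞ the asymptotic variability. A minimal sketch; the parameter values are illustrative, not fits from the paper:

```python
import math
import random

def simulate_drw(tau, sf_inf, dt, n_steps, seed=42):
    """Exact sampling of a damped random walk (Ornstein-Uhlenbeck process):
    X decays toward 0 on timescale tau, with asymptotic rms sf_inf/sqrt(2)."""
    rng = random.Random(seed)
    decay = math.exp(-dt / tau)
    step_sd = (sf_inf / math.sqrt(2.0)) * math.sqrt(1.0 - decay * decay)
    x, xs = 0.0, []
    for _ in range(n_steps):
        x = x * decay + rng.gauss(0.0, step_sd)
        xs.append(x)
    return xs

# tau = 100 d, SF_inf = 0.2 mag: long-run scatter approaches 0.2/sqrt(2) ~ 0.14 mag.
xs = simulate_drw(tau=100.0, sf_inf=0.2, dt=1.0, n_steps=20000)
```

    Fitting this two-parameter process to each observed light curve is what yields the (τ, SF∞) pairs whose correlations with luminosity, mass, and wavelength are reported above.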

  17. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    Science.gov (United States)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by

  18. Time variability of X-ray binaries: observations with INTEGRAL. Modeling

    International Nuclear Information System (INIS)

    Cabanac, Clement

    2007-01-01

    The exact origin of the observed X- and gamma-ray variability in X-ray binaries is still an open debate in high energy astrophysics. Among other behaviors, these objects show aperiodic and quasi-periodic luminosity variations on timescales as small as a millisecond. This erratic behavior must put constraints on the proposed emission processes occurring in the vicinity of the neutron star or the stellar-mass black hole hosted by these objects. We propose here to study their behavior in three different ways: first, we examine the evolution of a particular X-ray source discovered by INTEGRAL, IGR J19140+0951. Using timing and spectral data from different instruments, we show that the source is plausibly consistent with a High Mass X-ray Binary hosting a neutron star. Subsequently, we propose a new method dedicated to the study of timing data coming from coded mask aperture instruments. Applying it to INTEGRAL/ISGRI data, we detect the presence of periodic and quasi-periodic features in some pulsars and micro-quasars at energies as high as a hundred keV. Finally, we suggest a model designed to describe the low frequency variability of X-ray binaries in their hardest state. This model is based on thermal comptonization of soft photons by a warm corona in which a pressure wave propagates in cylindrical geometry. By means of numerical simulations and an analytical solution, we show that this model is suitable to describe some of the typical features observed in the power spectra of X-ray binaries in their hard state and their evolution, such as aperiodic noise and low frequency quasi-periodic oscillations. (author)

  19. Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions

    Science.gov (United States)

    Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei

    2017-09-01

    Rotor systems carried in transportation systems or subjected to seismic excitations are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that the base rotations introduce nonlinearities into the model. To verify the proposed model, a novel test rig which can simulate base angular movement is designed, and dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based upon these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on response behaviors are discussed by means of FFT, waterfall and frequency response curves and orbits of the rotor. The FFT and waterfall plots of the disk horizontal and vertical vibrations are marked with multiples of the base frequency and with sum and difference tones of the rotating frequency and the base frequency. Their amplitudes increase remarkably when they meet the whirling frequencies of the rotor system.

  20. Pulse timing for cataclysmic variables

    International Nuclear Information System (INIS)

    Chester, T.J.

    1979-01-01

    It is shown that present pulse timing measurements of cataclysmic variables can be explained by models of accretion disks in these systems, and thus such measurements can constrain disk models. The model for DQ Her correctly predicts the amplitude variation of the continuum pulsation and can perhaps also explain the asymmetric amplitude of the pulsed λ4686 emission line. Several other predictions can be made from the model. In particular, if pulse timing measurements that resolve emission lines both in wavelength and in binary phase can be made, the projected orbital radius of the white dwarf could be deduced.

  1. [Modelling the effect of local climatic variability on dengue transmission in Medellin (Colombia) by means of time series analysis].

    Science.gov (United States)

    Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita

    2013-09-01

    Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A large scientific consensus indicates that the strong association between climatic variables and the disease could be used to develop models that explain its incidence. The aim was to develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease, and climatic variables with significant association to the dependent variable were selected through ARIMA models. The model explains 34% of the observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, but with a 20-week delay. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model to help understand dengue transmission dynamics. This information will be useful to develop appropriate and timely strategies for dengue control.
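
    A delayed climate-disease association like the 20-week precipitation lag reported above is the kind of relationship a simple lagged-correlation screen can surface before fitting full ARIMA models. A minimal sketch with synthetic series; the lag search and data are illustrative, not the paper's method:

```python
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(driver, response, max_lag):
    """Lag (in samples) at which the driver series best explains the response."""
    return max(range(max_lag + 1),
               key=lambda lag: abs(pearson(driver[:len(driver) - lag], response[lag:])))

# Synthetic weekly series: incidence echoes precipitation 20 weeks later.
rng = random.Random(0)
precip = [rng.random() for _ in range(160)]
dengue = [0.0] * 20 + [0.8 * p + 5.0 for p in precip[:140]]
lag_hat = best_lag(precip, dengue, max_lag=30)
```

    In practice the candidate lags found this way would enter the ARIMA transfer-function model as lagged regressors rather than being used directly.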

  2. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, denoted Tc) when applying a double-precision computation of a variable parameters logistic map (VPLM) is studied. First, using the proposed method, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10,000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...

  3. Modeling category-level purchase timing with brand-level marketing variables

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard)

    2003-01-01

    textabstractPurchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from

  4. Modeling category-level purchase timing with brand-level marketing variables

    OpenAIRE

    Fok, D.; Paap, R.

    2003-01-01

    textabstractPurchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from the marketing mix of individual brands. In this paper we discuss two standard approaches suggested in the literature to solve this problem, that is, using individual choice shares as weights to aver...

  5. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. We also assume that the backorder rate depends on the length of the lead time through the amount of shortages, and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.

  6. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, denoted Tc) obtained by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tcs of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
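
The idea of a reliable computation time can be illustrated for a fixed-parameter logistic map by comparing a double-precision orbit against a higher-precision reference: Tc is roughly the first iteration at which the two diverge. The tolerance, map parameters, and 50-digit reference precision below are all invented for illustration; the study's VPLM uses time-varying parameters.

```python
from decimal import Decimal, getcontext

def reliable_time(x0=0.3, r=4.0, tol=1e-3, max_iter=200):
    """First iteration at which a double-precision logistic-map orbit
    diverges from a 50-digit reference by more than tol (a toy Tc)."""
    getcontext().prec = 50
    xd = x0                          # double-precision trajectory
    xr = Decimal(repr(x0))           # high-precision reference trajectory
    rr = Decimal(repr(r))
    for n in range(1, max_iter + 1):
        xd = r * xd * (1.0 - xd)
        xr = rr * xr * (1 - xr)
        if abs(Decimal(repr(xd)) - xr) > Decimal(repr(tol)):
            return n
    return max_iter
```

In the chaotic regime the initial rounding error of order 1e-16 roughly doubles each step, so the toy Tc lands at a few dozen iterations.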

  7. Modeling Time-Dependent Behavior of Concrete Affected by Alkali Silica Reaction in Variable Environmental Conditions.

    Science.gov (United States)

    Alnaggar, Mohammed; Di Luzio, Giovanni; Cusatis, Gianluca

    2017-04-28

    Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high-humidity and high-temperature regions. ASR is a slow process that develops over years to decades, and it is influenced by changes in the environmental and loading conditions of the structure. The problem becomes even more complicated once one recognizes that other phenomena, such as creep and shrinkage, are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, the coupling between creep, shrinkage, and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. To achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep, and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansion (zero macroscopic stress), a significant degree of coupling exists, because ASR-induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches.

  8. Pure radiation in space-time models that admit integration of the eikonal equation by the separation of variables method

    Science.gov (United States)

    Osetrin, Evgeny; Osetrin, Konstantin

    2017-11-01

    We consider space-time models with pure radiation, which admit integration of the eikonal equation by the method of separation of variables. For all types of these models, the equations of the energy-momentum conservation law are integrated. The resulting form of metric, energy density, and wave vectors of radiation as functions of metric for all types of spaces under consideration is presented. The solutions obtained can be used for any metric theories of gravitation.

  9. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models.

    Science.gov (United States)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M

    2017-12-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.

  10. Variable elasticity of substitution in a discrete time Solow–Swan growth model with differential saving

    International Nuclear Information System (INIS)

    Brianzoni, Serena; Mammana, Cristiana; Michetti, Elisabetta

    2012-01-01

    Highlights: ► One-dimensional piecewise smooth map: border collision bifurcations. ► Numerical simulations: complex dynamics. ► VES production function in the Solow–Swan growth model and comparison with the CES production function. - Abstract: We study the dynamics shown by the discrete time neoclassical one-sector growth model with differential savings, as in Bohm and Kaas, while assuming a VES production function in the form given by Revankar. It is shown that the model can exhibit unbounded endogenous growth despite the absence of exogenous technical change and the presence of non-reproducible factors if the elasticity of substitution is greater than one. We then consider the parameter range related to non-trivial dynamics (i.e., the elasticity of substitution is less than one and shareholders save more than workers) and focus on local and global bifurcations causing the transition to more and more complex asymptotic dynamics. In particular, as our map is non-differentiable in a subset of the state space, we show that border collision bifurcations occur. Several numerical simulations support the analysis.

  11. Effects of Variable Production Rate and Time-Dependent Holding Cost for Complementary Products in Supply Chain Model

    Directory of Open Access Journals (Sweden)

    Mitali Sarkar

    2017-01-01

    A recent major trend is to redesign production systems by controlling or varying the production rate within some fixed interval to maintain the optimal level. This strategy is more effective when the holding cost is time-dependent, as it is interrelated with the holding duration of products and the rate of production. An effort is made to build a supply chain model (SCM) showing the joint effect of a variable production rate and time-varying holding cost for a specific type of complementary products, where those products are made by two different manufacturers and a common retailer bundles them and sells the bundles to end customers. Demand for each product is specified by stochastic reservation prices with a known potential market size. The players in the SCM are considered to have unequal power. A Stackelberg game approach is employed to obtain the global optimum solution of the model. An illustrative numerical example, graphical representation, and managerial insights are given to illustrate the model. Results show that a variable production rate and time-dependent holding cost yield greater savings than models in the existing literature.

  12. Travel time variability and rational inattention

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Jiang, Gege

    2017-01-01

    This paper sets up a rational inattention model for the choice of departure time by a traveler facing random travel time. The traveler chooses how much information to acquire about the travel time outcome before choosing departure time. This reduces the cost of travel time variability compared...

  13. Modelling accuracy and variability of motor timing in treated and untreated Parkinson’s disease and healthy controls

    Directory of Open Access Journals (Sweden)

    Catherine Rhian Gwyn Jones

    2011-12-01

    Parkinson’s disease (PD) is characterised by difficulty with the timing of movements. Data collected using the synchronization-continuation paradigm, an established motor timing paradigm, have produced varying results, but most studies find impairment. Some of this inconsistency comes from variation in the medication state tested, in the inter-stimulus intervals (ISI) selected, and in the changeable focus on either the synchronization (tapping in time with a tone) or continuation (maintaining the rhythm in the absence of the tone) phase. We sought to revisit the paradigm by testing four groups of participants: healthy controls, medication-naïve de novo PD patients, and treated PD patients both ‘on’ and ‘off’ dopaminergic medication. Four finger tapping intervals (ISI) were used: 250 ms, 500 ms, 1000 ms, and 2000 ms. Categorical predictors (group, ISI, and phase) were used to predict accuracy and variability using a linear mixed model. Accuracy was defined as the relative error of a tap, and variability as the deviation of the participant’s tap from the group-predicted relative error. Our primary finding is that the treated PD group (PD patients ‘on’ and ‘off’ dopaminergic therapy) showed a significantly different pattern of accuracy compared to the de novo group and the healthy controls at the 250 ms interval. At this interval, the treated PD patients performed ‘ahead’ of the beat whilst the other groups performed ‘behind’ the beat. We speculate that this ‘hastening’ relates to the clinical phenomenon of motor festination. Across all groups, variability was smallest for both phases at the 500 ms interval, suggesting an innate preference for finger tapping within this range. Tapping variability for the two phases became increasingly divergent at the longer intervals, with worse performance in the continuation phase. The data suggest that patients with PD can be best discriminated from healthy controls on measures of
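
The record's accuracy and variability measures can be sketched directly from tap timestamps. This is a simplification: variability here is measured against the sample mean rather than the mixed-model-predicted relative error used in the study.

```python
import numpy as np

def timing_measures(taps_ms, isi_ms):
    """Per-tap relative error (accuracy) and its deviation from the
    mean relative error (a simplified stand-in for variability)."""
    iti = np.diff(taps_ms)                  # produced inter-tap intervals
    rel_err = (iti - isi_ms) / isi_ms       # negative = 'ahead' of the beat
    variability = rel_err - rel_err.mean()  # deviation from expected error
    return rel_err, variability
```

For perfectly paced taps both measures are zero; treated PD patients at ISI = 250 ms would show systematically negative relative error ('ahead' of the beat).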

  14. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
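
The space-for-time validation strategy can be sketched with synthetic data: fit a simple linear SWE model and evaluate it with a non-random, leave-one-region-out split, i.e. transfer to "ungauged" space rather than a random holdout. All values, coefficients, and the five-group structure below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SNOTEL-like data (all values invented): mean winter
# temperature (degC), cumulative winter precipitation (mm), April 1 SWE (mm)
n = 300
temp = rng.uniform(-10.0, 2.0, n)
precip = rng.uniform(200.0, 1500.0, n)
swe = 0.8 * precip - 40.0 * temp + rng.normal(0.0, 30.0, n)
region = rng.integers(0, 5, n)             # five spatial groups

# Leave-one-region-out cross-validation of a low-complexity linear model
X = np.column_stack([np.ones(n), temp, precip])
errs = []
for r in range(5):
    train, test = region != r, region == r
    beta, *_ = np.linalg.lstsq(X[train], swe[train], rcond=None)
    errs.append(np.sqrt(np.mean((X[test] @ beta - swe[test]) ** 2)))
rmse = float(np.mean(errs))
```

Because the held-out region's inputs overlap the training range, the simple model transfers well; a higher-complexity model fit the same way would be where the transferability tradeoff shows up.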

  15. Supervised learning based model for predicting variability-induced timing errors

    NARCIS (Netherlands)

    Jiao, X.; Rahimi, A.; Narayanaswamy, B.; Fatemi, H.; Pineda de Gyvez, J.; Gupta, R.K.

    2015-01-01

    Circuit designers typically combat variations in hardware and workload by increasing conservative guardbanding that leads to operational inefficiency. Reducing this excessive guardband is highly desirable, but causes timing errors in synchronous circuits. We propose a methodology for supervised

  16. On the ""early-time"" evolution of variables relevant to turbulence models for the Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant variables before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of mixing between two interpenetrating fluids to define the initial profiles for the turbulence model variables. Velocities and volume fractions used in the idealized mixing model are obtained, respectively, from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted profiles for the turbulence model variables and profiles of the variables obtained from low-Atwood-number three-dimensional simulations shows reasonable agreement.

  17. Time series analysis of dengue incidence in Guadeloupe, French West Indies: Forecasting models using climate variables as predictors

    Directory of Open Access Journals (Sweden)

    Ruche Guy

    2011-06-01

    better than humidity and rainfall. SARIMA models using climatic data as independent variables could be easily incorporated into an early (3 months ahead) and reliable monitoring system of dengue outbreaks. This approach, which is practicable for a surveillance system, has public health implications in helping to predict dengue epidemics and therefore the timely, appropriate, and efficient implementation of prevention activities.

  18. Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling.

    Science.gov (United States)

    Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant

    2016-05-15

    Many studies in the Aerosol Optical Depth (AOD) literature use Moderate Resolution Imaging Spectroradiometer (MODIS) derived data, but the accuracy of satellite data in comparison to ground data derived from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error (MaxAPE), normalized Bayesian information criterion (NBIC), and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict AOD by statistical modeling using time series obtained from past MODIS and AERONET data as input. Moreover, the results show that MODIS data can be estimated from AERONET data by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecast of AODs for the next four years (2013-2017) using the developed ARIMA model, it is concluded that the forecasted ground AOD has an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.
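
The additive-offset relationship reported between MODIS and AERONET AOD can be illustrated on synthetic series: estimate the bias as the mean difference, then check that subtracting it reduces the error metrics. The bias value 0.2516 is taken from the record; every series and noise level below is invented.

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def mape(actual, pred):
    return float(np.mean(np.abs((actual - pred) / actual)) * 100.0)

rng = np.random.default_rng(2)
aeronet = rng.uniform(0.2, 1.0, 500)                   # hypothetical ground AOD
modis = aeronet + 0.2516 + rng.normal(0.0, 0.05, 500)  # satellite = ground + bias

offset = float(np.mean(modis - aeronet))  # estimate the additive bias
corrected = modis - offset                # map satellite AOD onto ground AOD
```

The same error metrics the record lists (RMSE, MAPE) then quantify how much of the satellite-ground disagreement the constant offset explains.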

  19. Hybridizing fuzzy control and timed automata for modeling variable structure fuzzy systems

    NARCIS (Netherlands)

    Acampora, G.; Loia, V.; Vitiello, A.

    2010-01-01

    During the past several years, fuzzy control has emerged as one of the most suitable and efficient methods for designing and developing complex systems in environments characterized by high level of uncertainty and imprecision. Nowadays, this methodology is used to model systems in several

  20. Time-dependent excitation and ionization modelling of absorption-line variability due to GRB080310

    DEFF Research Database (Denmark)

    Vreeswijk, P.M.; De Cia, A.; Jakobsson, P.

    2013-01-01

    .42743. To estimate the rest-frame afterglow brightness as a function of time, we use a combination of the optical VRI photometry obtained by the RAPTOR-T telescope array, which is presented in this paper, and Swift's X-Ray Telescope (XRT) observations. Excitation alone, which has been successfully applied...

  1. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  2. A spatio-temporal nonparametric Bayesian variable selection model of fMRI data for clustering correlated time courses.

    Science.gov (United States)

    Zhang, Linlin; Guindani, Michele; Versace, Francesco; Vannucci, Marina

    2014-07-15

    In this paper we present a novel wavelet-based Bayesian nonparametric regression model for the analysis of functional magnetic resonance imaging (fMRI) data. Our goal is to provide a joint analytical framework that allows us to detect regions of the brain that exhibit neuronal activity in response to a stimulus and, simultaneously, to infer the association, or clustering, of spatially remote voxels that exhibit fMRI time series with similar characteristics. We start by modeling the data with a hemodynamic response function (HRF) with a voxel-dependent shape parameter. We detect regions of the brain activated in response to a given stimulus by using mixture priors with a spike at zero on the coefficients of the regression model. We account for the complex spatial correlation structure of the brain by using a Markov random field (MRF) prior on the parameters guiding the selection of the activated voxels, thereby capturing correlation among nearby voxels. To infer the association of the voxel time courses, we assume correlated errors, in particular long memory, and exploit the whitening properties of discrete wavelet transforms. Furthermore, we achieve clustering of the voxels by imposing a Dirichlet process (DP) prior on the parameters of the long memory process. For inference, we use Markov chain Monte Carlo (MCMC) sampling techniques that combine Metropolis-Hastings schemes employed in Bayesian variable selection with sampling algorithms for nonparametric DP models. We explore the performance of the proposed model on simulated data, with both block- and event-related designs, and on real fMRI data. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Smearing model and restoration of star image under conditions of variable angular velocity and long exposure time.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin

    2014-03-10

    The star tracker is one of the most promising attitude measurement devices, widely used in spacecraft for its high accuracy. Dynamic performance has become its major restriction and requires immediate attention. A star image restoration approach based on a motion degradation model with variable angular velocity is proposed in this paper. This method can overcome the problems of energy dispersion and signal-to-noise ratio (SNR) decrease resulting from the smearing of the star spot, thus preventing failed extraction and decreased star centroid accuracy. Simulations and laboratory experiments are conducted to verify the proposed methods. The restoration results demonstrate that the described method can recover the star spot from a long motion trail to the shape of a Gaussian distribution under conditions of variable angular velocity and long exposure time. The energy of the star spot can be concentrated to ensure high SNR and high position accuracy. These features are crucial to the subsequent star extraction and the overall performance of the star tracker.

  4. On the ""early-time"" evolution of variables relevant to turbulence models for Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant parameters before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of the mixing between two interpenetrating fluids to define the initial profiles for the turbulence model parameters. Velocities and volume fractions used in the idealized mixing model are obtained respectively from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted initial profiles for the turbulence model parameters and initial profiles of the parameters obtained from low Atwood number three dimensional simulations show reasonable agreement.

  5. Variability of the 2014-present inflation source at Mauna Loa volcano revealed using time-dependent modeling

    Science.gov (United States)

    Johanson, I. A.; Miklius, A.; Okubo, P.; Montgomery-Brown, E. K.

    2017-12-01

    Mauna Loa volcano is the largest active volcano on Earth and in the 20th century produced roughly one eruption every seven years. The 33-year quiescence since its last eruption in 1984 has been punctuated by three inflation episodes during which magma likely entered the shallow plumbing system but was not erupted. The most recent began in 2014 and is ongoing. Unlike prior inflation episodes, the current one is accompanied by a significant increase in shallow seismicity, a pattern similar to earlier pre-eruptive periods. We apply the Kalman filter based Network Inversion Filter (NIF) to the 2014-present inflation episode using data from a 27-station continuous GPS network on Mauna Loa. The model geometry consists of a point volume source and a tabular, dike-like body, which have previously been shown to provide a good fit to deformation data from a 2004-2009 inflation episode. The tabular body is discretized into 1 km x 1 km segments. For each day, the NIF solves for the rates of opening on the tabular body segments (subject to smoothing and positivity constraints), the volume change rate in the point source, and the slip rate on a deep décollement fault surface, which is constrained to be constant (no transient slip allowed). The Kalman filter in the NIF provides for smoothing both forwards and backwards in time. The model shows that the 2014-present inflation episode occurred as several sub-events, rather than steady inflation, with some spatial variability in the location of the inflation sub-events. In the model, opening in the tabular body is initially concentrated below the volcano's summit, in an area roughly outlined by shallow seismicity. In October 2015, opening in the tabular body shifted to be centered beneath the southwest portion of the summit, and seismicity became concentrated in this area. By late 2016, the opening rate on the tabular body had decreased and was once again under the central part of the summit. This modeling approach has allowed us to track these
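
The state-estimation step at the heart of the NIF can be illustrated with a minimal scalar Kalman filter tracking a slowly varying rate from noisy daily observations. Unlike the NIF, this toy filters forward in time only (no backward smoothing pass), and the process and observation variances are assumed values.

```python
import numpy as np

def kalman_filter_rate(obs, q=0.01, r=0.25):
    """Minimal scalar Kalman filter: track a random-walk rate from noisy
    observations. q: process variance, r: observation variance (assumed)."""
    x, p = obs[0], 1.0            # state estimate and its variance
    est = []
    for z in obs:
        p = p + q                 # predict: random-walk state grows uncertain
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new observation
        p = (1.0 - k) * p
        est.append(x)
    return np.array(est)
```

The filtered series has much lower variance than the raw observations, which is what lets the NIF resolve distinct inflation sub-events from noisy daily GPS solutions.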

  6. Time-adjusted variable resistor

    Science.gov (United States)

    Heyser, R. C.

    1972-01-01

    A timing mechanism was developed that acts on an extremely precise, high-resistance fixed resistor. Switches shunt all or a portion of the resistor; the effective resistance is varied over a time interval by adjusting the switch closure rate.

  7. Real Time Hybrid Model Predictive Control for the Current Profile of the Tokamak à Configuration Variable (TCV)

    Directory of Open Access Journals (Sweden)

    Izaskun Garrido

    2016-08-01

    Plasma stability is one of the obstacles on the path to the successful operation of fusion devices. Numerical control-oriented codes, such as the widely accepted RZIp, may be used within Tokamak simulations. The novelty of this article lies in the hierarchical development of a dynamic control loop. It is based on a current profile Model Predictive Control (MPC) algorithm within a multiloop structure, where an MPC is developed at each step so as to improve the Proportional Integral Derivative (PID) global scheme. The inner control loop is composed of a PID-based controller that acts on the Multiple Input Multiple Output (MIMO) system resulting from the RZIp plasma model of the Tokamak à Configuration Variable (TCV). The coefficients of this PID controller are initially tuned using an eigenmode reduction over the passive structure model. The control action corresponding to the state of interest is then optimized in the outer MPC loop. For the sake of comparison, both the traditionally used global PID controller and the multiloop enhanced MPC are applied to the same TCV shot. The results show that the proposed control algorithm presents superior performance over the conventional PID algorithm in terms of convergence. Furthermore, this enhanced MPC algorithm contributes to extending the discharge length and to overcoming the limited power availability restrictions that hinder the performance of advanced tokamaks.
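
The receding-horizon idea behind MPC can be illustrated on a toy scalar plant: at each step, pick the control input that minimizes a short-horizon cost, apply it, and repeat. This is a hedged sketch with invented dynamics and costs, not the RZIp MIMO model or the actual TCV controller.

```python
import numpy as np

# Toy plant x+ = a*x + b*u; cost sum of x^2 + rho*u^2 over a short horizon
a, b, rho, horizon = 0.9, 0.5, 0.1, 5
candidates = np.linspace(-2.0, 2.0, 81)  # coarse grid of control actions

def rollout_cost(x, u_first):
    """Cost of applying u_first now, then zero input for the rest of the
    horizon (a deliberately simple stand-in for the full MPC optimization)."""
    cost, xk, u = 0.0, x, u_first
    for _ in range(horizon):
        xk = a * xk + b * u
        cost += xk ** 2 + rho * u ** 2
        u = 0.0
    return cost

x = 1.0
for _ in range(20):                       # receding-horizon loop
    u = min(candidates, key=lambda c: rollout_cost(x, c))
    x = a * x + b * u                     # apply only the first action
```

Each iteration re-solves the optimization from the current state, which is what distinguishes MPC from a fixed-gain PID law.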

  8. Commuters’ valuation of travel time variability in Barcelona

    OpenAIRE

    Javier Asensio; Anna Matas

    2007-01-01

    The value given by commuters to the variability of travel times is empirically analysed using stated preference data from Barcelona (Spain). Respondents are asked to choose between alternatives that differ in terms of cost, average travel time, variability of travel times, and departure time. Different specifications of a scheduling choice model are used to measure the influence of various socioeconomic characteristics. Our results show that travel time variability...

  9. Indo-Pacific Variability on Seasonal to Multidecadal Time Scales. Part I: Intrinsic SST Modes in Models and Observations

    Science.gov (United States)

    Slawinska, Joanna; Giannakis, Dimitrios

    2017-07-01

    The variability of Indo-Pacific SST on seasonal to multidecadal timescales is investigated using a recently introduced technique called nonlinear Laplacian spectral analysis (NLSA). Through this technique, drawbacks associated with ad hoc pre-filtering of the input data are avoided, enabling recovery of low-frequency and intermittent modes not previously accessible via classical approaches. Here, a multiscale hierarchy of spatiotemporal modes is identified for Indo-Pacific SST in millennial control runs of CCSM4 and CM3 and in HadISST data. On interannual timescales, a mode with spatiotemporal patterns corresponding to the fundamental component of ENSO emerges, along with ENSO-modulated annual modes consistent with combination mode theory. The ENSO combination modes also feature prominent activity in the Indian Ocean, explaining a significant fraction of the SST variance in regions associated with the Indian Ocean dipole. A pattern resembling the tropospheric biennial oscillation emerges in addition to ENSO and the associated combination modes. On multidecadal timescales, the dominant NLSA mode in the model data is predominantly active in the western tropical Pacific. The interdecadal Pacific oscillation also emerges as a distinct NLSA mode, though with smaller explained variance than the western Pacific multidecadal mode. Analogous modes on interannual and decadal timescales are also identified in HadISST data for the industrial era, as well as in model data of comparable timespan, though decadal modes are either absent or of degraded quality in these datasets.

  10. A Predictive Model for Time-to-Flowering in the Common Bean Based on QTL and Environmental Variables

    Directory of Open Access Journals (Sweden)

    Mehul S. Bhakta

    2017-12-01

The common bean is a tropical facultative short-day legume that is now grown in tropical and temperate zones. This observation underscores how domestication and modern breeding can change the adaptive phenology of a species. A key adaptive trait is the optimal timing of the transition from the vegetative to the reproductive stage. This trait is responsive to genetically controlled signal transduction pathways and local climatic cues. A comprehensive characterization of this trait can be started by assessing the quantitative contribution of the genetic and environmental factors, and their interactions. This study aimed to locate significant QTL (G) and environmental (E) factors controlling time-to-flower in the common bean, and to identify and measure G × E interactions. Phenotypic data were collected from a biparental [Andean × Mesoamerican] recombinant inbred population (F11:14, 188 genotypes) grown at five environmentally distinct sites. QTL analysis using a dense linkage map revealed 12 QTL, five of which showed significant interactions with the environment. Dissection of G × E interactions using a linear mixed-effect model revealed that temperature, solar radiation, and photoperiod play major roles in controlling common bean flowering time directly, and indirectly by modifying the effect of certain QTL. The model predicts flowering time across five sites with an adjusted r-squared of 0.89 and a root-mean-square error of 2.52 d. The model provides the means to disentangle the environmental dependencies of complex traits, and presents an opportunity to identify in silico QTL allele combinations that could yield desired phenotypes under different climatic conditions.

  11. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    Science.gov (United States)

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-event outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-event survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction effects between variables are uncovered, while they may not be accompanied by their corresponding main effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using an RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930
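The additivity idea behind pairwise interaction detection can be illustrated without a forest at all: a pair of predictors interacts when a fitted model's response is not additive in them. The Python sketch below uses hypothetical hand-written "models" standing in for the paper's RSF estimator (not its actual method) and computes the standard four-point interaction contrast:

```python
# A minimal additivity check in the spirit of pairwise interaction detection:
# (x1, x2) interact when f(a,b) - f(a,b') - f(a',b) + f(a',b') != 0.

def interaction_contrast(f, x1_pair, x2_pair, rest):
    """Four-point contrast of f over two values of x1 and two values of x2,
    holding the remaining inputs fixed at `rest`."""
    (a, a2), (b, b2) = x1_pair, x2_pair
    return f(a, b, *rest) - f(a, b2, *rest) - f(a2, b, *rest) + f(a2, b2, *rest)

# Hypothetical fitted models: one additive, one with an x1*x2 interaction term
additive = lambda x1, x2, x3: 2.0 * x1 + 3.0 * x2 + x3
interacting = lambda x1, x2, x3: 2.0 * x1 + 3.0 * x2 + 1.5 * x1 * x2 + x3

c_add = interaction_contrast(additive, (0.0, 1.0), (0.0, 1.0), (5.0,))
c_int = interaction_contrast(interacting, (0.0, 1.0), (0.0, 1.0), (5.0,))
# c_add == 0.0 (no interaction); c_int == 1.5 (the interaction coefficient)
```

The paper's estimator averages an analogous quantity over the data and a forest ensemble, and assesses significance by randomization; this sketch only shows the contrast that such estimators build on.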

  12. Modelling and multi-objective optimization of a variable valve-timing spark-ignition engine using polynomial neural networks and evolutionary algorithms

    International Nuclear Information System (INIS)

    Atashkari, K.; Nariman-Zadeh, N.; Goelcue, M.; Khalkhali, A.; Jamali, A.

    2007-01-01

The main reason for the efficiency decrease at part-load conditions in four-stroke spark-ignition (SI) engines is the flow restriction at the cross-sectional area of the intake system. Traditionally, valve timing has been designed to optimize operation at high engine speed and wide-open-throttle conditions. Several investigations have demonstrated that improvements in engine performance at part-load conditions can be accomplished if the valve timing is variable. Controlling valve timing can be used to improve the torque and power curve as well as to reduce fuel consumption and emissions. In this paper, a group method of data handling (GMDH) type neural network and evolutionary algorithms (EAs) are first used for modelling the effects of intake valve timing (Vt) and engine speed (N) of a spark-ignition engine on both developed engine torque (T) and fuel consumption (Fc), using experimentally obtained training and test data. Using the polynomial neural network models thus obtained, a multi-objective EA (non-dominated sorting genetic algorithm, NSGA-II) with a new diversity-preserving mechanism is then used for Pareto-based optimization of the variable valve-timing engine, considering the two conflicting objectives of torque (T) and fuel consumption (Fc). The comparison results demonstrate the superiority of the GMDH-type models over feedforward neural network models in terms of the statistical measures on the training data and testing data and the number of hidden neurons. Further, it is shown that some interesting and important relationships, serving as useful optimal design principles, involved in the performance of the variable valve-timing four-stroke spark-ignition engine can be discovered by Pareto-based multi-objective optimization of the polynomial models. Such optimal principles would not have been obtained without the use of both GMDH-type neural network modelling and the multi-objective Pareto optimization approach.
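The Pareto step of such a procedure can be sketched independently of the GMDH surrogates. In the minimal Python example below, the surrogate models are replaced by a hypothetical table of (torque, fuel consumption) evaluations, and the non-dominated candidates are extracted directly from the dominance definition (maximize torque, minimize fuel consumption); the numbers are illustrative only:

```python
def dominates(a, b):
    """a dominates b when a is no worse in both objectives and strictly
    better in at least one.  Objectives: maximize torque (index 0),
    minimize fuel consumption (index 1)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Return the non-dominated subset of (torque, fuel_consumption) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (torque [N m], fuel consumption [kg/h]) evaluations of
# candidate (valve timing, engine speed) settings -- illustrative numbers only.
candidates = [(95, 8.2), (110, 9.5), (110, 9.0), (120, 11.0), (90, 7.5), (118, 12.0)]
front = pareto_front(candidates)
# (110, 9.5) and (118, 12.0) are dominated and drop out of the front
```

NSGA-II repeats this sorting over successive fronts and adds a crowding-distance diversity mechanism; this sketch shows only the dominance test at its core.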

  13. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J.

    2017-01-01

The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elem...

  14. Additive measures of travel time variability

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2011-01-01

This paper derives a measure of travel time variability for travellers equipped with scheduling preferences defined in terms of time-varying utility rates, and who choose departure time optimally. The corresponding value of travel time variability is a constant that depends only on preference parameters. The measure is unique in being additive with respect to independent parts of a trip. It has the variance of travel time as a special case. Extension is provided to the case of travellers who use a scheduled service with fixed headway.

  15. Can time be a discrete dynamical variable

    International Nuclear Information System (INIS)

    Lee, T.D.

    1983-01-01

    The possibility that time can be regarded as a discrete dynamical variable is examined through all phases of mechanics: from classical mechanics to nonrelativistic quantum mechanics, and to relativistic quantum field theories. (orig.)

  16. 99mTc-Annexin-V uptake in a rat model of variable ischemic severity and reperfusion time

    International Nuclear Information System (INIS)

    Taki, Junichi; Higuchi, Takahiro; Kawashima, Atsuhiro; Muramori, Akira; Nakajima, Kenichi; Tait, J.F.; Matsunari, Ichiro; Vanderheyden, J.L.; Strauss, H.W.

    2007-01-01

To determine whether mild to moderate ischemia that is not severe enough to induce myocardial infarction causes myocardial cell damage or apoptosis, 99mTc-Annexin-V (Tc-A) uptake was studied in groups of rats with various intervals of coronary occlusion and reperfusion times. After left coronary artery occlusion for 15 min (n=23), 10 min (n=23), or 5 min (n=12), Tc-A (80-150 MBq) was injected at 0.5, 1.5, 6, or 24 h after reperfusion. One hour later, to verify the area at risk, 201Tl (0.74 MBq) was injected just after left coronary artery re-occlusion, and the rats were killed 1 min later. Dual-tracer autoradiography was performed to assess Tc-A uptake and the area at risk. In all 5-min occlusion and reperfusion models, no significant Tc-A uptake was observed in the area at risk. Tc-A uptake ratios in the 15-min and 10-min ischemia models were 4.46±3.16 and 2.02±0.47 (p=0.078) at 0.5 h after reperfusion, 3.49±1.78 and 1.47±0.11 (p<0.05) at 1.5 h, 1.60±0.43 and 1.34±0.23 (p=0.24) at 6 h, and 1.50±0.33 and 1.28±0.33 (p=0.099) at 24 h, respectively. With 15-min ischemia, in 3 of the 5 rats there were a few micro-foci of myocardial cell degeneration and cell infiltration in less than 1% of the ischemic area at 24 h after reperfusion. No significant histological change was observed in rats with 10-min or 5-min ischemia. The data indicate that Tc-A binding depends on the severity of ischemia even in the absence of a significant amount of histological change or infarction. (author)

  17. Travel time variability and airport accessibility

    NARCIS (Netherlands)

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2011-01-01

    We analyze the cost of access travel time variability for air travelers. Reliable access to airports is important since the cost of missing a flight is likely to be high. First, the determinants of the preferred arrival times at airports are analyzed. Second, the willingness to pay (WTP) for

  18. Public transport travel time and its variability

    OpenAIRE

    Mazloumi Shomali, Ehsan

    2017-01-01

    Executive Summary Public transport agencies around the world are constantly trying to improve the performance of their service, and to provide passengers with a more reliable service. Two major measures to evaluate the performance of a transit system include travel time and travel time variability. Information on these two measures provides operators with a capacity to identify the problematic locations in a transport system and improve operating plans. Likewise, users can benefit through...

  19. Travel time variability and airport accessibility

    OpenAIRE

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2010-01-01

    This discussion paper resulted in a publication in Transportation Research Part B: Methodological (2011). Vol. 45(10), pages 1545-1559. This paper analyses the cost of access travel time variability for air travelers. Reliable access to airports is important since it is likely that the cost of missing a flight is high. First, the determinants of the preferred arrival times at airports are analyzed, including trip purpose, type of airport, flight characteristics, travel experience, type of che...

  20. Probabilistic model for the spoilage wine yeast Dekkera bruxellensis as a function of pH, ethanol and free SO2 using time as a dummy variable.

    Science.gov (United States)

    Sturm, M E; Arroyo-López, F N; Garrido-Fernández, A; Querol, A; Mercado, L A; Ramirez, M L; Combina, M

    2014-01-17

The present study uses a probabilistic model to determine the growth/no growth interfaces of the spoilage wine yeast Dekkera bruxellensis CH29 as a function of ethanol (10-15%, v/v), pH (3.4-4.0) and free SO2 (0-50 mg/l), using time (7, 14, 21 and 30 days) as a dummy variable. The model, built with a total of 756 growth/no growth data points obtained in a wine-like medium, could be applied in the winery industry to determine the wine conditions needed to inhibit the growth of this species. Thus, at 12.5% ethanol and pH 3.7, for a growth probability of 0.01 it is necessary to add 30 mg/l of free SO2 to inhibit yeast growth for 7 days. However, the concentration of free SO2 should be raised to 48 mg/l to achieve a probability of no growth of 0.99 for 30 days under the same wine conditions. Other combinations of environmental variables can also be determined using the mathematical model, depending on the needs of the industry.
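A growth/no-growth interface of this kind is typically a logistic regression in the environmental variables, so the SO2 dose needed for a target growth probability can be found by inverting the logit. The Python sketch below uses made-up coefficients (not the published model, which also includes the time dummy terms) and bisects for the dose at which P(growth) falls to 0.01:

```python
import math

# Hypothetical coefficients for a logistic growth/no-growth model of the form
# logit P(growth) = b0 + b1*ethanol + b2*pH + b3*SO2 -- illustrative values
# only, not those fitted in the study.
B0, B_ETH, B_PH, B_SO2 = 4.0, -0.9, 3.5, -0.25

def p_growth(ethanol, ph, so2):
    """Probability of growth under the hypothetical logistic model."""
    z = B0 + B_ETH * ethanol + B_PH * ph + B_SO2 * so2
    return 1.0 / (1.0 + math.exp(-z))

def so2_for_probability(ethanol, ph, target, lo=0.0, hi=200.0):
    """Bisect for the free-SO2 dose at which P(growth) equals `target`."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if p_growth(ethanol, ph, mid) > target:
            lo = mid          # still too likely to grow: need more SO2
        else:
            hi = mid
    return (lo + hi) / 2.0

dose = so2_for_probability(ethanol=12.5, ph=3.7, target=0.01)
```

With the real fitted coefficients, the same inversion reproduces the kind of dose recommendations quoted in the abstract.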

  1. Decomposing the heterogeneity of depression at the person-, symptom-, and time-level : Latent variable models versus multimode principal component analysis

    NARCIS (Netherlands)

    de Vos, Stijn; Wardenaar, Klaas J.; Bos, Elisabeth H.; Wit, Ernst C.; de Jonge, Peter

    2015-01-01

    Background: Heterogeneity of psychopathological concepts such as depression hampers progress in research and clinical practice. Latent Variable Models (LVMs) have been widely used to reduce this problem by identification of more homogeneous factors or subgroups. However, heterogeneity exists at

  2. An alternative approach to exact wave functions for time-dependent coupled oscillator model of charged particle in variable magnetic field

    International Nuclear Information System (INIS)

    Menouar, Salah; Maamache, Mustapha; Choi, Jeong Ryeol

    2010-01-01

The quantum states of a time-dependent coupled oscillator model for charged particles subjected to a variable magnetic field are investigated using invariant operator methods. To do this, we have taken advantage of an alternative method, the so-called unitary transformation approach, available in the framework of quantum mechanics, as well as a generalized canonical transformation method in the classical regime. The transformed quantum Hamiltonian is obtained using suitable unitary operators and is represented in terms of two independent harmonic oscillators which have the same frequencies as those of the classically transformed system. Starting from the wave functions in the transformed system, we have derived the full wave functions in the original system with the help of the unitary operators. One can readily obtain a complete description of how the charged particle behaves under the given Hamiltonian by taking advantage of these analytical wave functions.

  3. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. For the physical segmentation of the model, Lake Ontario was divided into two layers, the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several verification tests indicated that the model performed well: a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96). The results revealed significant differences in the concentrations of the state variables between the constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
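The core mechanics, coupled state equations advanced in time with a stoichiometric ratio converting between pools, can be sketched far more simply than the 16-state model. The toy Python example below integrates one phytoplankton and one dissolved-phosphorus pool with Euler steps; all rate constants and the variable-ratio formula are illustrative assumptions, not values from the study:

```python
def simulate(variable_q=True, days=60, dt=0.05):
    """Euler integration of a toy phytoplankton (P) / dissolved phosphorus (N)
    pair.  q is the P:C stoichiometric ratio converting carbon growth into
    nutrient uptake; it is either held fixed or allowed to track nutrient
    status (illustrative formula, not the study's)."""
    P, N = 1.0, 20.0                                  # arbitrary initial state
    mu_max, ks, loss, q_const = 0.8, 4.0, 0.15, 0.02  # illustrative constants
    for _ in range(int(days / dt)):
        # variable stoichiometry: cells are phosphorus-rich when N is plentiful
        q = q_const * (0.5 + N / (N + ks)) if variable_q else q_const
        growth = mu_max * N / (N + ks) * P            # Monod-limited growth
        P += dt * (growth - loss * P)
        N += dt * q * (loss * P - growth)             # uptake/recycling at ratio q
        N = max(N, 0.0)
    return P, N

P_var, N_var = simulate(variable_q=True)
P_const, N_const = simulate(variable_q=False)
```

Even in this two-state toy, the constant- and variable-ratio runs diverge, which is the qualitative effect the study reports for the full model.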

  4. Simulation of the Quantity, Variability, and Timing of Streamflow in the Dennys River Basin, Maine, by Use of a Precipitation-Runoff Watershed Model

    Science.gov (United States)

    Dudley, Robert W.

    2008-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Maine Department of Marine Resources Bureau of Sea Run Fisheries and Habitat, began a study in 2004 to characterize the quantity, variability, and timing of streamflow in the Dennys River. The study included a synoptic summary of historical streamflow data at a long-term streamflow gage, collecting data from an additional four short-term streamflow gages, and the development and evaluation of a distributed-parameter watershed model for the Dennys River Basin. The watershed model used in this investigation was the USGS Precipitation-Runoff Modeling System (PRMS). The Geographic Information System (GIS) Weasel was used to delineate the Dennys River Basin and subbasins and derive parameters for their physical geographic features. Calibration of the models used in this investigation involved a four-step procedure in which model output was evaluated against four calibration data sets using computed objective functions for solar radiation, potential evapotranspiration, annual and seasonal water budgets, and daily streamflows. The calibration procedure involved thousands of model runs and was carried out using the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The SCE method reliably produces satisfactory solutions for large, complex optimization problems. The primary calibration effort went into the Dennys main stem watershed model. Calibrated parameter values obtained for the Dennys main stem model were transferred to the Cathance Stream model, and a similar four-step SCE calibration procedure was performed; this effort was undertaken to determine the potential to transfer modeling information to a nearby basin in the same region. 
The calibrated Dennys main stem watershed model performed with Nash-Sutcliffe efficiency (NSE) statistic values for the calibration period and evaluation period of 0.79 and 0
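The Nash-Sutcliffe efficiency used to score these calibrations is a one-line statistic; a minimal Python version (with made-up streamflow numbers, not Dennys River data) looks like this:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / (variance about the observed mean).
    1.0 is a perfect fit; 0.0 means no better than predicting the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_mean

# Illustrative daily streamflows (arbitrary units, not Dennys River data)
obs = [10.0, 12.0, 9.0, 14.0, 11.0]
sim = [10.5, 11.5, 9.5, 13.0, 11.0]
nse = nash_sutcliffe(obs, sim)
```

An SCE-style calibration would repeatedly perturb model parameters and keep the set that maximizes this objective (alongside the solar radiation, evapotranspiration, and water-budget objectives the study lists).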

  5. Modeling the Variable Heliopause Location

    Science.gov (United States)

    Hensley, Kerry

    2018-03-01

In 2012, Voyager 1 zipped across the heliopause. Five and a half years later, Voyager 2 still hasn't followed its twin into interstellar space. Can models of the heliopause location help determine why? How Far to the Heliopause? Artist's conception of the heliosphere with the important structures and boundaries labeled. [NASA/Goddard/Walt Feimer] As our solar system travels through the galaxy, the solar outflow pushes against the surrounding interstellar medium, forming a bubble called the heliosphere. The edge of this bubble, the heliopause, is the outermost boundary of our solar system, where the solar wind and the interstellar medium meet. Since the solar outflow is highly variable, the heliopause is constantly moving, with the motion driven by changes in the Sun. NASA's twin Voyager spacecraft were poised to cross the heliopause after completing their tour of the outer planets in the 1980s. In 2012, Voyager 1 registered a sharp increase in the density of interstellar particles, indicating that the spacecraft had passed out of the heliosphere and into the interstellar medium. The slower-moving Voyager 2 was set to pierce the heliopause along a different trajectory, but so far no measurements have shown that the spacecraft has bid farewell to our solar system. In a recent study, a team of scientists led by Haruichi Washimi (Kyushu University, Japan and CSPAR, University of Alabama-Huntsville) argues that models of the heliosphere can help explain this behavior. Because the heliopause location is controlled by factors that vary on many spatial and temporal scales, Washimi and collaborators turn to three-dimensional, time-dependent magnetohydrodynamics simulations of the heliosphere. In particular, they investigate how the position of the heliopause along the trajectories of Voyager 1 and Voyager 2 changes over time. Modeled location of the heliopause along the paths of Voyagers 1 (blue) and 2 (orange).
The red star indicates the location at which Voyager

  6. Timing variability in children with early-treated congenital hypothyroidism

    NARCIS (Netherlands)

    Kooistra, L.; Snijders, T.A.B.; Schellekens, J.M.H.; Kalverboer, A.F.; Geuze, R.H.

    This study reports on central and peripheral determinants of timing variability in self-paced tapping by children with early-treated congenital hypothyroidism (CH). A theoretical model of the timing of repetitive movements developed by Wing and Kristofferson was applied to estimate the central

  7. Characterising an intense PM pollution episode in March 2015 in France from multi-site approach and near real time data: Climatology, variabilities, geographical origins and model evaluation

    Science.gov (United States)

    Petit, J.-E.; Amodeo, T.; Meleux, F.; Bessagnet, B.; Menut, L.; Grenier, D.; Pellan, Y.; Ockler, A.; Rocq, B.; Gros, V.; Sciare, J.; Favez, O.

    2017-04-01

During March 2015, a severe and large-scale particulate matter (PM) pollution episode occurred in France. Near-real-time measurements of the major chemical components at four different urban background sites across the country (Paris, Creil, Metz and Lyon) allowed the investigation of spatiotemporal variabilities during this episode. A climatology approach showed that all sites experienced a clearly unusual rain shortage, a pattern that is also found on a longer timescale, highlighting the role of synoptic conditions over Western Europe. This episode is characterized by a strong predominance of secondary pollution, and more particularly of ammonium nitrate, which accounted for more than 50% of submicron aerosols at all sites during the most intense period of the episode. Pollution advection is illustrated by similar variabilities in Paris and Creil (around 100 km apart), as well as by trajectory analyses applied to nitrate and sulphate. Local sources, especially wood burning, are however found to contribute to local/regional sub-episodes, notably in Metz. Finally, concentrations simulated by the chemistry-transport model CHIMERE were compared to observed ones. Results highlighted different patterns depending on the chemical component and the measuring site, reinforcing the need for such exercises over other pollution episodes and sites.

  8. THE TIME DOMAIN SPECTROSCOPIC SURVEY: VARIABLE SELECTION AND ANTICIPATED RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, Eric; Green, Paul J. [Harvard Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Anderson, Scott F.; Ruan, John J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Eracleous, Michael; Brandt, William Nielsen [Department of Astronomy and Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802 (United States); Kelly, Brandon [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Badenes, Carlos [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara St, Pittsburgh, PA 15260 (United States); Bañados, Eduardo [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Borissova, Jura [Instituto de Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa Ancha, Casilla 5030, and Millennium Institute of Astrophysics (MAS), Santiago (Chile); Burgett, William S. [GMTO Corp, Suite 300, 251 S. Lake Ave, Pasadena, CA 91101 (United States); Chambers, Kenneth, E-mail: emorganson@cfa.harvard.edu [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); and others

    2015-06-20

We present the selection algorithm and anticipated results for the Time Domain Spectroscopic Survey (TDSS). TDSS is a Sloan Digital Sky Survey (SDSS)-IV Extended Baryon Oscillation Spectroscopic Survey (eBOSS) subproject that will provide initial identification spectra of approximately 220,000 luminosity-variable objects (variable stars and active galactic nuclei) across 7,500 deg², selected from a combination of SDSS and multi-epoch Pan-STARRS1 photometry. TDSS will be the largest spectroscopic survey to explicitly target variable objects, avoiding pre-selection on the basis of colors or detailed modeling of specific variability characteristics. Kernel density estimate analysis of our target population performed on SDSS Stripe 82 data suggests our target sample will be 95% pure (meaning 95% of objects we select have genuine luminosity variability of a few magnitudes or more). Our final spectroscopic sample will contain roughly 135,000 quasars and 85,000 stellar variables, approximately 4,000 of which will be RR Lyrae stars which may be used as outer Milky Way probes. The variability-selected quasar population has a smoother redshift distribution than a color-selected sample, and variability measurements similar to those we develop here may be used to make more uniform quasar samples in large surveys. The stellar variable targets are distributed fairly uniformly across color space, indicating that TDSS will obtain spectra for a wide variety of stellar variables including pulsating variables, stars with significant chromospheric activity, cataclysmic variables, and eclipsing binaries. TDSS will serve as a pathfinder mission to identify and characterize the multitude of variable objects that will be detected photometrically in even larger variability surveys such as the Large Synoptic Survey Telescope.

  9. Periodicity and stability for variable-time impulsive neural networks.

    Science.gov (United States)

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

The paper considers a general neural network model with variable-time impulses. It is shown that each solution of the system intersects every discontinuous surface exactly once under several new, well-posed assumptions. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to establish the same stability properties between variable-time impulsive neural networks and the fixed-time ones. The new criteria are established by applying Schaefer's fixed point theorem combined with inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results.

  10. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
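The basic variability summaries underlying such studies, the standard deviation and coefficient of variation of stride times, are straightforward to compute; a small Python sketch with illustrative stride times (not data from the JNER series):

```python
import statistics

def gait_variability(stride_times):
    """Stride-to-stride variability summaries: sample SD and the coefficient
    of variation (SD as a percentage of the mean stride time)."""
    mean = statistics.fmean(stride_times)
    sd = statistics.stdev(stride_times)
    return {"mean": mean, "sd": sd, "cv_percent": 100.0 * sd / mean}

# Illustrative stride times in seconds
strides = [1.02, 0.98, 1.05, 1.00, 0.97, 1.03]
summary = gait_variability(strides)
```

The more advanced measures discussed in the series (detrended fluctuation analysis, multifractal indices) build on the same stride-time sequence but characterize its temporal ordering rather than just its spread.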

  11. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series, with the aim of suggesting an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of RF is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect of achieving higher predictive accuracy.
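One-step forecasting with a low number of recent lagged predictors amounts to building a lag-embedding matrix and regressing the next value on it. The Python sketch below shows only the embedding step; a regressor such as scikit-learn's RandomForestRegressor would then be fit on (X, y):

```python
def lag_matrix(series, n_lags):
    """Build (X, y) pairs for one-step forecasting: each row of X holds the
    n_lags most recent values, and y is the next value of the series."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return X, y

series = [1, 2, 3, 4, 5, 6]
X, y = lag_matrix(series, n_lags=2)
# X = [[1, 2], [2, 3], [3, 4], [4, 5]], y = [3, 4, 5, 6]
```

The paper's finding is that keeping n_lags small tends to maximize random-forest predictive performance on short series; the embedding itself is model-agnostic.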

  12. Natural climate variability in a coupled model

    International Nuclear Information System (INIS)

    Zebiak, S.E.; Cane, M.A.

    1990-01-01

Multi-century simulations with a simplified coupled ocean-atmosphere model are described. These simulations reveal an impressive range of variability on decadal and longer time scales, in addition to the dominant interannual El Niño/Southern Oscillation signal that the model was originally designed to simulate. Based on a very large sample of century-long simulations, it is nonetheless possible to identify distinct model parameter sensitivities, which are described here in terms of selected indices. Preliminary experiments motivated by general circulation model results for increasing greenhouse gases suggest a definite sensitivity to model global warming. While these results are not definitive, they strongly suggest that coupled air-sea dynamics figure prominently in global change and must be included in models for reliable predictions.

  13. Predicting the outbreak of hand, foot, and mouth disease in Nanjing, China: a time-series model based on weather variability

    Science.gov (United States)

    Liu, Sijun; Chen, Jiaping; Wang, Jianming; Wu, Zhuchao; Wu, Weihua; Xu, Zhiwei; Hu, Wenbiao; Xu, Fei; Tong, Shilu; Shen, Hongbing

    2017-10-01

    Hand, foot, and mouth disease (HFMD) is a significant public health issue in China, and accurate prediction of epidemics can improve the effectiveness of HFMD control. This study aims to develop a weather-based forecasting model for HFMD using information on climatic variables and HFMD surveillance in Nanjing, China. Daily data on HFMD cases and meteorological variables between 2010 and 2015 were acquired from the Nanjing Center for Disease Control and Prevention and the China Meteorological Data Sharing Service System, respectively. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the HFMD infection data into two datasets: the data from 2010 to 2013 were used to construct the model and those from 2014 to 2015 were used to validate it. Moreover, we made weekly predictions for the data between 1 January 2014 and 31 December 2015, using leave-one-week-out prediction to validate the predictive performance of the model. SARIMA(2,0,0)52 with average temperature at a lag of 1 week appeared to be the best model (R² = 0.936, BIC = 8.465), and showed non-significant autocorrelations in the residuals. In the validation, the predicted values matched the observed values reasonably well between 2014 and 2015, with a high agreement rate between them (sensitivity 80%, specificity 96.63%). This study suggests that the SARIMA model with average temperature could be used as an important tool for early detection and prediction of HFMD outbreaks in Nanjing, China.

  14. Inverse Ising problem in continuous time: A latent variable approach

    Science.gov (United States)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
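    The forward, data-generating side of this inverse problem, Glauber dynamics on an Ising network, can be simulated directly. The sketch below uses simple asynchronous spin updates with random symmetric couplings; it produces the kind of spin trajectories the paper's EM algorithm would take as input, not the inference itself.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
J = (J + J.T) / 2            # symmetric couplings
np.fill_diagonal(J, 0)       # no self-coupling

s = rng.choice([-1, 1], N)
trajectory = [s.copy()]
for _ in range(2000):
    i = rng.integers(N)                       # pick one spin at random (asynchronous update)
    h = J[i] @ s                              # local field on spin i
    p_up = 1 / (1 + np.exp(-2 * h))           # Glauber flip probability
    s[i] = 1 if rng.random() < p_up else -1
    trajectory.append(s.copy())
traj = np.array(trajectory)
print(traj.shape)
```

    Recovering `J` from `traj` is then the inverse problem the latent-variable EM scheme above addresses.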

  15. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of ²⁴⁰Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which depends on the random neutron intensity. (author)

  16. From Transition Systems to Variability Models and from Lifted Model Checking Back to UPPAAL

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Wasowski, Andrzej

    2017-01-01

    efficient lifted (family-based) model checking for real-time variability models. This reduces the cost of maintaining specialized family-based real-time model checkers. Real-time variability models can be model checked using the standard UPPAAL. We have implemented abstractions as syntactic source...

  17. A modelling study into the effects of variable valve timing on the gas exchange process and performance of a 4-valve DI homogeneous charge compression ignition (HCCI) engine

    International Nuclear Information System (INIS)

    Mahrous, A-F.M.; Potrzebowski, A.; Wyszynski, M.L.; Xu, H.M.; Tsolakis, A.; Luszcz, P.

    2009-01-01

    Homogeneous charge compression ignition (HCCI) combustion mode is a relatively new combustion technology that can be achieved by using specially designed cams with reduced lift and duration. The auto-ignition in HCCI engine can be facilitated by adjusting the timing of the exhaust-valve-closing and, to some extent, the timing of the intake-valve-opening so as to capture a proportion of the hot exhaust gases in the engine cylinder during the gas exchange process. The effects of variable valve timing strategy on the gas exchange process and performance of a 4-valve direct injection HCCI engine were computationally investigated using a 1D fluid-dynamic engine cycle simulation code. A non-typical intake valve strategy was examined; whereby the intake valves were assumed to be independently actuated with the same valve-lift profile but at different timings. Using such an intake valves strategy, the obtained results showed that the operating range of the exhaust-valve-timing within which the HCCI combustion can be facilitated and maintained becomes much wider than that of the typical intake-valve-timing case. Also it was found that the engine parameters such as load and volumetric efficiency are significantly modified with the use of the non-typical intake-valve-timing. Additionally, the results demonstrated the potential of the non-typical intake-valve strategy in achieving and maintaining the HCCI combustion at much lower loads within a wide range of valve timings. Minimizing the pumping work penalty, and consequently improving the fuel economy, was shown as an advantage of using the non-typical intake-valve-timing with the timing of the early intake valve coupled with a symmetric degree of exhaust-valve-closing timing.

  18. A modelling study into the effects of variable valve timing on the gas exchange process and performance of a 4-valve DI homogeneous charge compression ignition (HCCI) engine

    Energy Technology Data Exchange (ETDEWEB)

    Mahrous, A-F.M. [School of Mechanical Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom); Lecturer at the Department of Mechanical Power Engineering, Faculty of Engineering (Shebin El-Kom), Menoufiya University, Shebin El-Kom (Egypt); Potrzebowski, A.; Wyszynski, M.L.; Xu, H.M.; Tsolakis, A.; Luszcz, P. [School of Mechanical Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)

    2009-02-15

    Homogeneous charge compression ignition (HCCI) combustion mode is a relatively new combustion technology that can be achieved by using specially designed cams with reduced lift and duration. The auto-ignition in HCCI engine can be facilitated by adjusting the timing of the exhaust-valve-closing and, to some extent, the timing of the intake-valve-opening so as to capture a proportion of the hot exhaust gases in the engine cylinder during the gas exchange process. The effects of variable valve timing strategy on the gas exchange process and performance of a 4-valve direct injection HCCI engine were computationally investigated using a 1D fluid-dynamic engine cycle simulation code. A non-typical intake valve strategy was examined; whereby the intake valves were assumed to be independently actuated with the same valve-lift profile but at different timings. Using such an intake valves strategy, the obtained results showed that the operating range of the exhaust-valve-timing within which the HCCI combustion can be facilitated and maintained becomes much wider than that of the typical intake-valve-timing case. Also it was found that the engine parameters such as load and volumetric efficiency are significantly modified with the use of the non-typical intake-valve-timing. Additionally, the results demonstrated the potential of the non-typical intake-valve strategy in achieving and maintaining the HCCI combustion at much lower loads within a wide range of valve timings. Minimizing the pumping work penalty, and consequently improving the fuel economy, was shown as an advantage of using the non-typical intake-valve-timing with the timing of the early intake valve coupled with a symmetric degree of exhaust-valve-closing timing. (author)

  19. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. We define an example scenario, based on current Czech legislation, which is then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfactory results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.

  20. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. The first-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically, often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time, and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
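    For the plain Wiener baseline the first-passage-time distribution is unimodal (an inverse Gaussian), which is exactly what the observed intra-day second peak deviates from. A minimal Monte Carlo sketch of Wiener first-passage times, with an arbitrary barrier and step size chosen here for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, dt = 5000, 500, 0.01
barrier = 1.0

# Simulate Wiener paths and record the first time each crosses the barrier.
increments = rng.normal(0, np.sqrt(dt), (n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
hit = paths >= barrier
first_hit = np.where(hit.any(axis=1), hit.argmax(axis=1) * dt, np.nan)
fpt = first_hit[~np.isnan(first_hit)]      # drop paths that never reach the barrier
print(len(fpt), fpt.mean())
```

    Replacing the constant diffusion coefficient with the two-stage variable-diffusion form of Hua et al. is what produces the second intra-day peak reported in the abstract.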

  1. Valuing travel time variability: Characteristics of the travel time distribution on an urban road

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Fukuda, Daisuke

    2012-01-01

    This paper provides a detailed empirical investigation of the distribution of travel times on an urban road for valuation of travel time variability. Our investigation is premised on the use of a theoretical model with a number of desirable properties. The definition of the value of travel time...... variability depends on certain properties of the distribution of random travel times that require empirical verification. Applying a range of nonparametric statistical techniques to data giving minute-by-minute travel times for a congested urban road over a period of five months, we show that the standardized...... travel time is roughly independent of the time of day as required by the theory. Except for the extreme right tail, a stable distribution seems to fit the data well. The travel time distributions on consecutive links seem to share a common stability parameter such that the travel time distribution...

  2. Handbook of latent variable and related models

    CERN Document Server

    Lee, Sik-Yum

    2011-01-01

    This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables.
    - Covers a wide class of important models
    - Models and statistical methods described provide tools for analyzing a wide spectrum of complicated data
    - Includes illustrative examples with real data sets from business, education, medicine, public health and sociology
    - Demonstrates the use of a wide variety of statistical, computational, and mathematical techniques

  3. Spot Welding Characterizations With Time Variable

    International Nuclear Information System (INIS)

    Abdul Hafid; Pinitoyo, A.; History; Paidjo, Andryansyah; Sagino, Sudarmin; Tamzil, M.

    2001-01-01

    This research was carried out to obtain effective spot-welding data so that the operational time of the machine can be increased. The welding parameters are material classification, electrical current, and weld time; all of these factors determine welding quality. For a thicker plate, the weld time must be longer when the current is held constant. Other factors that determine welding quality are the surface condition of the electrode, the surface condition of the weld material, and the material classification. In this research, a welding machine of type IP32A2 VI (110 V), Rivoira trademark, was characterized.

  4. A comprehensive assessment of memory, delay aversion, timing, inhibition, decision making and variability in attention deficit hyperactivity disorder: advancing beyond the three-pathway models.

    Science.gov (United States)

    Coghill, D R; Seth, S; Matthews, K

    2014-07-01

    Although attention deficit hyperactivity disorder (ADHD) has been associated with a broad range of deficits across various neuropsychological domains, most studies have assessed only a narrow range of neuropsychological functions. Direct cross-domain comparisons are rare, with almost all studies restricted to less than four domains. Therefore, the relationships between these various domains remain undefined. In addition, almost all studies included previously medicated participants, limiting the conclusions that can be drawn. We present the first study to compare a large cohort of medication-naive boys with ADHD with healthy controls on a broad battery of neuropsychological tasks, assessing six key domains of neuropsychological functioning. The neuropsychological functioning of 83 medication-naive boys with well-characterized ADHD (mean age 8.9 years) was compared with that of 66 typically developing (TYP) boys (mean age 9.0 years) on a broad battery of validated neuropsychological tasks. Data reduction using confirmatory factor analysis (CFA) confirmed six distinct neuropsychological domains: working memory, inhibition, delay aversion, decision making, timing and response variability. Boys with ADHD performed less well across all six domains although, for each domain, only a minority of boys with ADHD had a deficit [effect size (% with deficit) ADHD versus TYP: working memory 0.95 (30.1), inhibition 0.61 (22.9), delay aversion 0.82 (36.1), decision making 0.55 (20.5), timing 0.71 (31.3), response variability 0.37 (18.1)]. The clinical syndrome of ADHD is neuropsychologically heterogeneous. These data highlight the complexity of the relationships between the different neuropsychological profiles associated with ADHD and the clinical symptoms and functional impairment.

  5. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  6. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object...... hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules...

  7. Investigations of space-time variability of the sea level in the Barents Sea and the White Sea by satellite altimetry data and results of hydrodynamic modelling

    Science.gov (United States)

    Lebedev, S. A.; Zilberstein, O. I.; Popov, S. K.; Tikhonova, O. V.

    2003-04-01

    The problem of retrieving sea level anomalies in the Barents and White Seas from satellite altimetry can be considered as two different problems. The first is to calculate the anomalies of sea level along the track, taking into account all corrections including tidal heights. The second is to obtain fields of the sea level anomalies on a grid over one cycle of the exact repeat altimetry mission. Experience shows that it is preferable to use a regional tidal model for calculating tidal heights. To construct the sea level anomaly fields over an exact repeat mission cycle (35 days for ERS-1 and ERS-2), when the density of coverage of the Barents and White Seas by satellite measurements reaches its maximum, it is necessary to minimize the error arising from the temporal spread of the measurements over one cycle and from the specifics of the hydrodynamic regime of both seas (tidal and storm surge variations, tidal currents). To solve this problem, the results of hydrodynamic modelling are used: the error is minimized by regressing the model results against the satellite measurements. As an alternative, the possibility of using a neural network trained on the model results to construct maps of the sea level anomalies is considered. A comparison of the model results and the satellite altimetry estimates of sea level variability in the Barents and White Seas shows good agreement between them. The satellite altimetry data of ERS-1/2 and TOPEX/POSEIDON from the Ocean Altimeter Pathfinder Project (NASA/GSFC) have been used in this study, along with results of the regional tidal model computations and a three-dimensional baroclinic model created at the Hydrometeocenter. This study also exploited atmospheric data from the REANALYSIS project. The research was undertaken with partial support from the Russian Basic Research Foundation (Project No. 01-07-90106).

  8. Latent variable models are network models.

    Science.gov (United States)

    Molenaar, Peter C M

    2010-06-01

    Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.

  9. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    A recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  10. Automatic classification of time-variable X-ray sources

    International Nuclear Information System (INIS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-01-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross validation accuracy of the training data is ∼97% on a 7 class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
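    The margin-based anomaly screening described above can be sketched with scikit-learn's RandomForestClassifier. The feature vectors and class structure here are synthetic stand-ins for the time-series and spectral features of the catalog; the margin (top-class probability minus runner-up) is one of the two outlier signals the study uses, the Random Forest proximity-based outlier measure being the other.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
# Hypothetical 4-dimensional feature vectors for three source classes.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 4)) for c in range(3)])
y = np.repeat([0, 1, 2], 50)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
proba = clf.predict_proba(X)
top2 = np.sort(proba, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]      # classification margin: top class minus runner-up
suspects = np.argsort(margin)[:5]     # smallest margins flag candidate anomalies
print(suspects, margin[suspects])
```

    Sources whose margin is small sit near class boundaries and are the natural candidates for manual follow-up.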

  11. Automatic classification of time-variable X-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia)

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross validation accuracy of the training data is ∼97% on a 7 class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  12. Is Reaction Time Variability in ADHD Mainly at Low Frequencies?

    Science.gov (United States)

    Karalunas, Sarah L.; Huang-Pollock, Cynthia L.; Nigg, Joel T.

    2013-01-01

    Background: Intraindividual variability in reaction times (RT variability) has garnered increasing interest as an indicator of cognitive and neurobiological dysfunction in children with attention deficit hyperactivity disorder (ADHD). Recent theory and research has emphasized specific low-frequency patterns of RT variability. However, whether…

  13. On-line scheme for parameter estimation of nonlinear lithium ion battery equivalent circuit models using the simplified refined instrumental variable method for a modified Wiener continuous-time model

    International Nuclear Information System (INIS)

    Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James

    2017-01-01

    Highlights:
    • Off-line estimation approach for continuous-time domain for non-invertible function.
    • Model reformulated to multi-input-single-output; nonlinearity described by sigmoid.
    • Method directly estimates parameters of nonlinear ECM from the measured data.
    • Iterative on-line technique leads to smoother convergence.
    • The model is validated off-line and on-line using an NCA battery.

    Abstract: The accuracy of identifying the parameters of models describing lithium ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs), where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time-domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, in this paper, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration where the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator such as the well-established least squares (LS) algorithm. In this paper, the recursive least squares (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite
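    The recursive least squares step adopted above for on-line estimation has a compact standard form. This sketch identifies two parameters of a hypothetical linear-in-the-parameters model; the battery-specific regressor construction from the Wiener reformulation is omitted, and `lam` is a forgetting factor.

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = np.array([1.5, -0.7])      # hypothetical true model parameters
theta = np.zeros(2)                     # RLS estimate
P = np.eye(2) * 1000                    # large initial covariance (weak prior)
lam = 0.999                             # forgetting factor

for _ in range(500):
    phi = rng.normal(size=2)            # regressor vector (e.g. filtered current/voltage terms)
    yk = phi @ theta_true + rng.normal(0, 0.01)   # noisy measurement
    k = P @ phi / (lam + phi @ P @ phi)           # gain vector
    theta = theta + k * (yk - phi @ theta)        # update estimate with prediction error
    P = (P - np.outer(k, phi @ P)) / lam          # update covariance

print(theta)
```

    The forgetting factor lets the estimate track slow parameter drift, which is what makes RLS attractive for ageing batteries.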

  14. Time variable cosmological constants from the age of universe

    International Nuclear Information System (INIS)

    Xu Lixin; Lu Jianbo; Li Wenbo

    2010-01-01

    In this Letter, a time variable cosmological constant, dubbed the age cosmological constant, is investigated, motivated by the fact that any cosmological length scale and time scale can introduce a cosmological constant or vacuum energy density into Einstein's theory. The age cosmological constant takes the form ρ_Λ = 3c²M_P²/t_Λ², where t_Λ is the age or conformal age of our universe. The effective equation of state (EoS) of the age cosmological constant is w_Λ^eff = -1 + (2/3)√(Ω_Λ)/c and w_Λ^eff = -1 + (2/3)(√(Ω_Λ)/c)(1+z) when the age and conformal age of the universe are taken as the cosmological time scales, respectively. These EoS are the same as in the so-called agegraphic dark energy models; however, the evolution histories are different from the agegraphic ones because their evolution equations differ.

  15. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...

  16. [Twenty-four hour time and frequency domain variability of systolic blood pressure and heart rate in an experimental model of arterial hypertension plus obesity].

    Science.gov (United States)

    Pelat, M; Verwaerde, P; Lazartiques, E; Cabrol, P; Galitzky, J; Berlan, M; Montastruc, J L; Senard, J M

    1998-08-01

    Modifications of heart rate (HR) and systolic blood pressure (SBP) variabilities (V) have been reported in the human syndrome arterial hypertension plus insulin-resistance. The aim of this study was to characterize the 24 h SBPV and HRV in both time and frequency domains during weight increase in dogs fed ad libitum with a high fat diet. Implantable transmitter units for measurement of blood pressure and heart rate were surgically implanted in five beagle male dogs. BP and HR were continuously recorded using telemetric measurements during 24 hours, before and after 6 and 9 weeks of hypercaloric diet in quiet animals submitted to a 12h light-dark cycle. To study nychtemeral cycle of SBP and HR, two periods were chosen: day (from 6.00 h to 19.00 h) and night (from 23.00 h to 6.00 h). Spontaneous baroreflex efficiency was measured using the sequence method. Spectral variability of HR and SBP was analyzed using a fast Fourier transformation on 512 consecutive values and normalized units of low (LF: 50-150 mHz, reflecting sympathetic activity) and high (HF: respiratory rate +/- 50 mHz, reflecting parasympathetic activity) frequency bands were calculated. The energy of total spectrum (from 0.004 to 1 Hz) was also studied. Body weight (12.4 +/- 0.9 vs 14.9 +/- 0.9 kg, p vs 147 +/- 1 mmHg, p vs night: 71 +/- 1 bpm) but not after 9 weeks (day: 91 +/- 4 bpm ; night: 86 +/- 2 bpm). Concomitantly, the efficiency of spontaneous baroreflex decreased at 6 weeks (36 +/- 1 vs 42 +/- 2 mmHg/ms, p energy of HRV was found after 6 but not after 9 weeks. LF energy of SBPV was increased at 6 but not at 9 weeks (table). [table: see text] In conclusion, this study shows that an hyperlipidic and hypercaloric diet induces transient variations in autonomic nervous system activity which could be the physiopathological link between obesity, insulin-resistance and arterial hypertension.
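    The LF/HF decomposition used in the study reduces to summing FFT power inside fixed frequency bands and normalizing. Below is a sketch on a synthetic signal with known components; the LF band follows the abstract (50-150 mHz), and the HF band is centred on the synthetic respiratory rate (0.25 Hz) ± 50 mHz. The signal itself is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 1.0                                 # one sample per second
n = 512                                  # 512 consecutive values, as in the study
t = np.arange(n) / fs
# Synthetic signal: a 0.1 Hz (LF, sympathetic-like) and a 0.25 Hz (HF, respiratory-like) component.
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.1, n)

spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)
lf = spec[(freqs >= 0.05) & (freqs < 0.15)].sum()   # 50-150 mHz band
hf = spec[(freqs >= 0.20) & (freqs < 0.30)].sum()   # respiratory rate +/- 50 mHz
lf_nu, hf_nu = lf / (lf + hf), hf / (lf + hf)       # normalized units
print(lf_nu, hf_nu)
```

    For real HR/SBP series the beat-to-beat values would first be resampled to an even grid before the FFT.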

  17. Changes in the structure and function of northern Alaskan ecosystems when considering variable leaf-out times across groupings of species in a dynamic vegetation model

    Science.gov (United States)

    Euskirchen, E.S.; Carman, T.B.; McGuire, Anthony David

    2013-01-01

    The phenology of arctic ecosystems is driven primarily by abiotic forces, with temperature acting as the main determinant of growing season onset and leaf budburst in the spring. However, while the plant species in arctic ecosystems require differing amounts of accumulated heat for leaf-out, dynamic vegetation models simulated over regional to global scales typically assume some average leaf-out for all of the species within an ecosystem. Here, we make use of air temperature records and observations of spring leaf phenology collected across dominant groupings of species (dwarf birch shrubs, willow shrubs, other deciduous shrubs, grasses, sedges, and forbs) in arctic and boreal ecosystems in Alaska. We then parameterize a dynamic vegetation model based on these data for four types of tundra ecosystems (heath tundra, shrub tundra, wet sedge tundra, and tussock tundra), as well as ecotonal boreal white spruce forest, and perform model simulations for the years 1970-2100. Over the course of the model simulations, we found changes in ecosystem composition under this new phenology algorithm compared to simulations with the previous phenology algorithm. These changes were the result of the differential timing of leaf-out, as well as the ability for the groupings of species to compete for nitrogen and light availability. Regionally, there were differences in the trends of the carbon pools and fluxes between the new phenology algorithm and the previous phenology algorithm, although these differences depended on the future climate scenario. These findings indicate the importance of leaf phenology data collection by species and across the various ecosystem types within the highly heterogeneous Arctic landscape, and that dynamic vegetation models should consider variation in leaf-out by groupings of species within these ecosystems to make more accurate projections of future plant distributions and carbon cycling in Arctic regions.

  18. Chaos synchronization in time-delayed systems with parameter mismatches and variable delay times

    International Nuclear Information System (INIS)

    Shahverdiev, E.M.; Nuriev, R.A.; Hashimov, R.H.; Shore, K.A.

    2004-06-01

We investigate synchronization between two unidirectionally, linearly coupled chaotic nonidentical time-delayed systems and show that parameter mismatches are of crucial importance in achieving synchronization. We establish that, independent of the relation between the delay time in the coupled systems and the coupling delay time, only retarded synchronization with the coupling delay time is obtained. We show that, with or without parameter mismatch, neither complete nor anticipating synchronization occurs. We derive existence and stability conditions for the retarded synchronization manifold. We demonstrate our approach using examples of the Ikeda and Mackey-Glass models. Also, for the first time, we investigate chaos synchronization in time-delayed systems with variable delay time and find both existence and sufficient stability conditions for the retarded synchronization manifold with the coupling-delay lag time. (author)

  19. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
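The exponent relationships above can be checked on synthetic data. The sketch below samples interevent times whose PDF decays as t^(−β) with β between 1 and 2, recovers β from the empirical survival function, and applies the paper's scaling law α + β = 2; it illustrates the statistics only and is not the task-queuing model itself.

```python
import math
import random

# Hedged sketch: sample interevent times with PDF ~ t^(-beta), 1 < beta < 2,
# then recover the tail exponent from the empirical survival function.
# This is a generic renewal-process illustration, not the paper's model.
random.seed(0)
beta = 1.5
n = 100_000

# Inverse-transform sampling: survival P(T > t) = t^(-(beta - 1)), t >= 1.
samples = sorted(random.random() ** (-1.0 / (beta - 1.0)) for _ in range(n))

# Least-squares slope of log-survival vs log-t estimates -(beta - 1).
xs = [math.log(t) for t in samples[:-1]]
ys = [math.log((n - i) / n) for i in range(n - 1)]  # empirical survival
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
beta_hat = 1.0 - slope

# Scaling law from the paper: autocorrelation exponent alpha = 2 - beta.
alpha_hat = 2.0 - beta_hat
```

By the paper's argument, an autocorrelation exponent measured to differ from 2 − β would signal long-range dependence beyond the renewal structure.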

  20. Spatial and temporal variability of interhemispheric transport times

    Science.gov (United States)

    Wu, Xiaokang; Yang, Huang; Waugh, Darryn W.; Orbe, Clara; Tilmes, Simone; Lamarque, Jean-Francois

    2018-05-01

    The seasonal and interannual variability of transport times from the northern midlatitude surface into the Southern Hemisphere is examined using simulations of three idealized age tracers: an ideal age tracer that yields the mean transit time from northern midlatitudes and two tracers with uniform 50- and 5-day decay. For all tracers the largest seasonal and interannual variability occurs near the surface within the tropics and is generally closely coupled to movement of the Intertropical Convergence Zone (ITCZ). There are, however, notable differences in variability between the different tracers. The largest seasonal and interannual variability in the mean age is generally confined to latitudes spanning the ITCZ, with very weak variability in the southern extratropics. In contrast, for tracers subject to spatially uniform exponential loss the peak variability tends to be south of the ITCZ, and there is a smaller contrast between tropical and extratropical variability. These differences in variability occur because the distribution of transit times from northern midlatitudes is very broad and tracers with more rapid loss are more sensitive to changes in fast transit times than the mean age tracer. These simulations suggest that the seasonal-interannual variability in the southern extratropics of trace gases with predominantly NH midlatitude sources may differ depending on the gases' chemical lifetimes.
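The lifetime-sensitivity argument can be made concrete: a tracer with uniform decay time τ weights the transit-time distribution G(t) by exp(−t/τ), so short-lived tracers sample mostly the fast transit times. In the sketch below, G is taken exponential with a 2-year mean purely as a stand-in; real transit-time distributions from such simulations are much broader.

```python
import math

# Hedged sketch: weight a toy transit-time distribution G(t) by the decay
# factor exp(-t/tau) of an idealized tracer. The exponential G with a
# 2-year mean is an illustrative assumption, not the simulated distribution.
mean_transit_days = 2 * 365.0
dt = 1.0                                   # 1-day resolution
ts = [i * dt for i in range(1, 20 * 365)]  # transit times up to ~20 years
G = [math.exp(-t / mean_transit_days) / mean_transit_days for t in ts]

def fraction_from_fast_paths(tau_days, cutoff_days=90.0):
    """Fraction of a decaying tracer's signal carried by transit times
    shorter than `cutoff_days`."""
    w = [g * math.exp(-t / tau_days) * dt for g, t in zip(G, ts)]
    fast = sum(wi for wi, t in zip(w, ts) if t < cutoff_days)
    return fast / sum(w)

frac_5day = fraction_from_fast_paths(5.0)    # 5-day decay tracer
frac_50day = fraction_from_fast_paths(50.0)  # 50-day decay tracer
```

The 5-day tracer's signal comes almost entirely from transit times under three months, while the 50-day tracer samples a broader range, consistent with the abstract's point that rapidly decaying tracers are more sensitive to changes in the fast transit times.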

  1. Predictor Variables for Marathon Race Time in Recreational Female Runners

    OpenAIRE

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Purpose We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Methods Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat as well as training variables such as volume and speed in running training were related to marathon race time using bi- and multi-varia...

  2. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  3. Galactic models with variable spiral structure

    International Nuclear Information System (INIS)

    James, R.A.; Sellwood, J.A.

    1978-01-01

    A series of three-dimensional computer simulations of disc galaxies has been run in which the self-consistent potential of the disc stars is supplemented by that arising from a small uniform Population II sphere. The models show variable spiral structure, which is more pronounced for thin discs. In addition, the thin discs form weak bars. In one case variable spiral structure associated with this bar has been seen. The relaxed discs are cool outside resonance regions. (author)

  4. The cost of travel time variability: three measures with properties

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2016-01-01

    This paper explores the relationships between three types of measures of the cost of travel time variability: measures based on scheduling preferences and implicit departure time choice, Bernoulli type measures based on a univariate function of travel time, and mean-dispersion measures. We...

  5. Variability of Travel Times on New Jersey Highways

    Science.gov (United States)

    2011-06-01

    This report presents the results of a link and path travel time study conducted on selected New Jersey (NJ) highways to produce estimates of the corresponding variability of travel time (VTT) by departure time of the day and days of the week. The tra...

  6. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussions of the differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior and heavy-tail phenomena, are also offered.

  7. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  8. Combination of a higher-tier flow-through system and population modeling to assess the effects of time-variable exposure of isoproturon on the green algae Desmodesmus subspicatus and Pseudokirchneriella subcapitata.

    Science.gov (United States)

    Weber, Denis; Schaefer, Dieter; Dorgerloh, Michael; Bruns, Eric; Goerlitz, Gerhard; Hammel, Klaus; Preuss, Thomas G; Ratte, Hans Toni

    2012-04-01

    A flow-through system was developed to investigate the effects of time-variable exposure of pesticides on algae. A recently developed algae population model was used for simulations supported and verified by laboratory experiments. Flow-through studies with Desmodesmus subspicatus and Pseudokirchneriella subcapitata under time-variable exposure to isoproturon were performed, in which the exposure patterns were based on the results of FOrum for Co-ordination of pesticide fate models and their USe (FOCUS) model calculations for typical exposure situations via runoff or drain flow. Different types of pulsed exposure events were realized, including a whole range of repeated pulsed and steep peaks as well as periods of constant exposure. Both species recovered quickly in terms of growth from short-term exposure and according to substance dissipation from the system. Even at a peak 10 times the maximum predicted environmental concentration of isoproturon, only transient effects occurred on algae populations. No modified sensitivity or reduced growth was observed after repeated exposure. Model predictions of algal growth in the flow-through tests agreed well with the experimental data. The experimental boundary conditions and the physiological properties of the algae were used as the only model input. No calibration or parameter fitting was necessary. The combination of the flow-through experiments with the algae population model was revealed to be a powerful tool for the assessment of pulsed exposure on algae. It allowed investigating the growth reduction and recovery potential of algae after complex exposure, which is not possible with standard laboratory experiments alone. The results of the combined approach confirm the beneficial use of population models as supporting tools in higher-tier risk assessments of pesticides. Copyright © 2012 SETAC.

  9. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.
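A minimal numerical sketch of the setting follows: a two-neuron discrete-time BAM pair with a bounded time-varying delay, using toy weights (not the Letter's LMI criterion) chosen so that a simple sufficient condition for exponential stability holds.

```python
import math

# Hedged sketch: a two-neuron discrete-time BAM system with a variable
# delay tau(k) in {1, 2, 3}. The weights are toy values chosen so that
# |a| + |b| < 1 and |c| + |d| < 1, a simple sufficient condition for
# exponential stability (tanh is 1-Lipschitz); this is an illustration,
# not the Letter's delay-dependent LMI criterion.
a, b = 0.5, 0.3   # x-layer: self-feedback and cross-layer weight
c, d = 0.4, 0.4   # y-layer

tau = lambda k: 1 + (k % 3)   # bounded time-varying delay
T = 600
x = [1.0] * 4     # histories long enough for the maximum delay
y = [-1.0] * 4

for k in range(3, T):
    x.append(a * x[k] + b * math.tanh(y[k - tau(k)]))
    y.append(c * y[k] + d * math.tanh(x[k - tau(k)]))

final_norm = abs(x[-1]) + abs(y[-1])
```

Both layers decay geometrically to the origin despite the delay varying from step to step, which is the qualitative behavior the stability criterion guarantees.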

  10. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  11. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
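The paper's approach of modelling HRV as a linear combination of Gaussians can be sketched with a generic 1-D expectation-maximization (EM) fit. The synthetic data, initial values, and component separation below are illustrative assumptions, not HRV recordings or the paper's fitted parameters.

```python
import math
import random

# Hedged sketch: EM for a 1-D Gaussian mixture, the kind of model the
# paper fits to heart-rate-variability data. Synthetic data only.
random.seed(1)
data = ([random.gauss(-5.0, 0.5) for _ in range(300)]
        + [random.gauss(0.0, 0.5) for _ in range(300)]
        + [random.gauss(5.0, 0.5) for _ in range(300)])

K = 3                           # three components, as in the paper
mu = [-4.0, 1.0, 4.0]           # deliberately rough initial means
sigma = [1.0] * K
weight = [1.0 / K] * K

def pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(50):
    # E-step: responsibility of each component for each point.
    r = []
    for x in data:
        p = [weight[k] * pdf(x, mu[k], sigma[k]) for k in range(K)]
        tot = sum(p)
        r.append([pk / tot for pk in p])
    # M-step: reestimate weights, means, and standard deviations.
    for k in range(K):
        nk = sum(ri[k] for ri in r)
        weight[k] = nk / len(data)
        mu[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
        var = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, data)) / nk
        sigma[k] = max(math.sqrt(var), 1e-6)

means = sorted(mu)
```

With well-separated components the fit recovers the generating means; on real interbeat-interval data the mixture parameters would instead summarize the stationary statistics of heart rate variability, as described in the abstract.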

  12. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  13. Variability in reaction time performance of younger and older adults.

    Science.gov (United States)

    Hultsch, David F; MacDonald, Stuart W S; Dixon, Roger A

    2002-03-01

Age differences in three basic types of variability were examined: variability between persons (diversity), variability within persons across tasks (dispersion), and variability within persons across time (inconsistency). Measures of variability were based on latency performance from four measures of reaction time (RT) performed by a total of 99 younger adults (ages 17-36 years) and 763 older adults (ages 54-94 years). Results indicated that all three types of variability were greater in older compared with younger participants even when group differences in speed were statistically controlled. Quantile-quantile plots showed age and task differences in the shape of the inconsistency distributions. Measures of within-person variability (dispersion and inconsistency) were positively correlated. Individual differences in RT inconsistency correlated negatively with level of performance on measures of perceptual speed, working memory, episodic memory, and crystallized abilities. Partial set correlation analyses indicated that inconsistency predicted cognitive performance independent of level of performance. The results indicate that variability of performance is an important indicator of cognitive functioning and aging.
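The three types of variability can be made concrete on a toy latency table indexed by person, task, and occasion. The numbers below are illustrative, not the study's data.

```python
import statistics

# Hedged sketch of the three variability types on toy RT data (ms):
# rt[person][task] = list of latencies across occasions.
rt = {
    "p1": {"simple": [300, 310, 305], "choice": [450, 460, 455]},
    "p2": {"simple": [320, 340, 330], "choice": [500, 540, 520]},
    "p3": {"simple": [280, 285, 290], "choice": [420, 425, 430]},
}

# Diversity: between-person SD of each person's overall mean latency.
person_means = [statistics.mean(sum(tasks.values(), []))
                for tasks in rt.values()]
diversity = statistics.stdev(person_means)

# Dispersion: within-person SD across task means, averaged over persons.
dispersion = statistics.mean(
    statistics.stdev([statistics.mean(o) for o in tasks.values()])
    for tasks in rt.values())

# Inconsistency: within-person, within-task SD across occasions, averaged.
inconsistency = statistics.mean(
    statistics.stdev(occasions)
    for tasks in rt.values() for occasions in tasks.values())
```

Separating the three measures this way is what lets the study ask whether each one independently increases with age.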

  14. Surgeon and type of anesthesia predict variability in surgical procedure times.

    Science.gov (United States)

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. 
This research identifies factors associated

  15. Predicting travel time variability for cost-benefit analysis

    NARCIS (Netherlands)

    Peer, S.; Koopmans, C.; Verhoef, E.T.

    2010-01-01

    Unreliable travel times cause substantial costs to travelers. Nevertheless, they are not taken into account in many cost-benefit-analyses (CBA), or only in very rough ways. This paper aims at providing simple rules on how variability can be predicted, based on travel time data from Dutch highways.

  16. Preliminary Multi-Variable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
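The reported technology trend, cost falling 50% every 17 years, corresponds to a simple exponential multiplier. The rate is the paper's; the example elapsed times are arbitrary.

```python
# Hedged sketch of the reported technology trend: a 50% cost reduction
# every 17 years implies a multiplier of 0.5 ** (years / 17).
def technology_cost_multiplier(years_elapsed):
    return 0.5 ** (years_elapsed / 17.0)

m17 = technology_cost_multiplier(17)  # one halving period
m34 = technology_cost_multiplier(34)  # two halving periods
```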

  17. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  18. Confounding of three binary-variables counterfactual model

    OpenAIRE

    Liu, Jingwei; Hu, Shuang

    2011-01-01

Confounding in a counterfactual model with three binary variables is discussed in this paper. Depending on the relationship between the control variable and the covariate, we investigate three counterfactual models: the control variable is independent of the covariate, the control variable affects the covariate, and the covariate affects the control variable. Using ancillary information based on conditional independence hypotheses, the sufficient conditions...

  19. Drag coefficient Variability and Thermospheric models

    Science.gov (United States)

    Moe, Kenneth

Satellite drag coefficients depend upon a variety of factors: The shape of the satellite, its altitude, the eccentricity of its orbit, the temperature and mean molecular mass of the ambient atmosphere, and the time in the sunspot cycle. At altitudes where the mean free path of the atmospheric molecules is large compared to the dimensions of the satellite, the drag coefficients can be determined from the theory of free-molecule flow. The dependence on altitude is caused by the concentration of atomic oxygen which plays an important role by its ability to adsorb on the satellite surface and thereby affect the energy loss of molecules striking the surface. The eccentricity of the orbit determines the satellite velocity at perigee, and therefore the energy of the incident molecules relative to the energy of adsorption of atomic oxygen atoms on the surface. The temperature of the ambient atmosphere determines the extent to which the random thermal motion of the molecules influences the momentum transfer to the satellite. The time in the sunspot cycle affects the ambient temperature as well as the concentration of atomic oxygen at a particular altitude. Tables and graphs will be used to illustrate the variability of drag coefficients. Before there were any measurements of gas-surface interactions in orbit, Izakov and Cook independently made an excellent estimate that the drag coefficient of satellites of compact shape would be 2.2. That numerical value, independent of altitude, was used by Jacchia to construct his model from the early measurements of satellite drag. Consequently, there is an altitude dependent bias in the model. From the sparse orbital experiments that have been done, we know that the molecules which strike satellite surfaces rebound in a diffuse angular distribution with an energy loss given by the energy accommodation coefficient. As more evidence accumulates on the energy loss, more realistic drag coefficients are being calculated. 
These improved drag

  20. Analysis of time variable gravity data over Africa

    International Nuclear Information System (INIS)

    Barletta, Valentina R.; Aoudia, Abdelkarim

    2010-01-01

    phenomena, climate and even to human activities. The time analysis approach proposed in this work is used to discriminate signals from different possible geographical sources mostly in the low latitude regions, where hydrology is strongest, as for example Africa. By applying our approach to hydrological models we can show similarities and differences with GRACE data. The similarities lie almost all in the geographical signatures of the periodic component even if there are differences in amplitudes and phase. While differences in the non-periodic component can be explained with the presence of other phenomena, the differences in the periodic component can be explained with missing groundwater or other defects in the hydrological models. In this way we tested the adequacy of some hydrological models commonly combined with gravity data to retrieve solid Earth geophysical signatures. Thus we show that analysis of time variable gravity data over Africa strongly requires a correct and reliable hydrological model. (author)

  1. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time τ is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter τ. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...

  2. Space and time evolution of two nonlinearly coupled variables

    International Nuclear Information System (INIS)

    Obayashi, H.; Totsuji, H.; Wilhelmsson, H.

    1976-12-01

A system of two coupled linear differential equations is studied, assuming that the coupling terms are proportional to the product of the dependent variables, representing e.g. intensities or populations. It is furthermore assumed that these variables experience different linear dissipation or growth. The derivations account for space as well as time dependence of the variables. It is found that certain particular solutions can be obtained for this system, whereas a full solution in space and time as an initial value problem is outside the scope of the present paper. The system has a nonlinear equilibrium solution for which the nonlinear coupling terms balance the terms of linear dissipation. The case of space and time evolution of a small perturbation of the nonlinear equilibrium state, given the initial one-dimensional spatial distribution of the perturbation, is also considered in some detail. (auth.)
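A concrete instance of such a system, with illustrative coefficients not taken from the paper, is a pair with linear growth/dissipation and product coupling; at the nonlinear equilibrium the coupling terms exactly balance the linear terms.

```python
# Hedged sketch: two variables with linear growth/dissipation and product
# coupling, as in the abstract. Coefficients are illustrative.
#   du/dt =  a*u - c*u*v   (linear growth a, coupling -c*u*v)
#   dv/dt = -b*v + d*u*v   (linear dissipation -b, coupling +d*u*v)
a, b, c, d = 1.0, 0.5, 2.0, 1.0

# Nonlinear equilibrium: coupling balances the linear terms.
u_eq, v_eq = b / d, a / c

du = a * u_eq - c * u_eq * v_eq    # both time derivatives vanish here
dv = -b * v_eq + d * u_eq * v_eq
```

Perturbations about (u_eq, v_eq), with their one-dimensional spatial distribution, are exactly the case the paper then analyzes.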

  3. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
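The one-covariate-at-a-time idea behind such a boosting algorithm can be sketched with componentwise L2 boosting. The paper's base learners are penalized splines; plain linear base learners on synthetic data are used here purely to keep the sketch short.

```python
import random

# Hedged sketch: componentwise L2 boosting with simple linear base
# learners. At each step, every covariate is fit to the current
# residuals and only the best-fitting one is updated, which performs
# variable selection automatically.
random.seed(2)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * v + random.gauss(0, 0.5) for v in x1]  # only x1 is relevant

X = [x1, x2]
coef = [0.0, 0.0]
resid = list(y)
nu = 0.1                         # step-length (shrinkage) factor

for _ in range(300):
    best_j, best_b, best_ss = None, 0.0, None
    for j, xj in enumerate(X):
        bj = sum(r * v for r, v in zip(resid, xj)) / sum(v * v for v in xj)
        ss = sum((r - bj * v) ** 2 for r, v in zip(resid, xj))
        if best_ss is None or ss < best_ss:
            best_j, best_b, best_ss = j, bj, ss
    coef[best_j] += nu * best_b
    resid = [r - nu * best_b * v for r, v in zip(resid, X[best_j])]
```

The irrelevant covariate is rarely selected, so its coefficient stays near zero while the relevant one converges toward its least-squares value.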

  4. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations, and the number of candidate variables is possibly larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...

  5. Variability of gastric emptying time using standardized radiolabeled meals

    International Nuclear Information System (INIS)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

To define the range of inter- and intra-subject variability on gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and has great inter- and intra-individual day-to-day differences

  6. Variability of gastric emptying time using standardized radiolabeled meals

    Energy Technology Data Exchange (ETDEWEB)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

To define the range of inter- and intra-subject variability on gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and has great inter- and intra-individual day-to-day differences.

  7. A Model for Positively Correlated Count Variables

    DEFF Research Database (Denmark)

    Møller, Jesper; Rubak, Ege Holger

    2010-01-01

    An α-permanental random field is, briefly speaking, a model for a collection of non-negative integer-valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields and their potential applications. The purpose of this paper is to summarize useful probabilistic results, study stochastic constructions and simulation techniques, and discuss some examples of α-permanental random fields. This should provide a useful basis for discussing the statistical aspects in future work.

  8. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    Science.gov (United States)

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  9. Predictor variables for marathon race time in recreational female runners.

    Science.gov (United States)

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-06-01

    We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat as well as training variables such as volume and speed in running training were related to marathon race time using bi- and multi-variate analysis in 29 female runners. The marathoners completed the marathon distance within 251 (26) min, running at a speed of 10.2 (1.1) km/h. Body mass (r = 0.37), body mass index (r = 0.46), the circumferences of thigh (r = 0.51) and calf (r = 0.41), the skin-fold thicknesses of front thigh (r = 0.38) and of medial calf (r = 0.40), the sum of eight skin-folds (r = 0.44) and body fat percentage (r = 0.41) were related to marathon race time. For the variables of training, maximal distance run per week (r = -0.38), number of running training sessions per week (r = -0.46) and the speed of the training sessions (r = -0.60) were related to marathon race time. In the multi-variate analysis, the circumference of calf (P = 0.02) and the speed of the training sessions (P = 0.0014) were related to marathon race time. Marathon race time might be partially (r² = 0.50) predicted by the following equation: Race time (min) = 184.4 + 5.0 × (circumference calf, cm) - 11.9 × (speed in running during training, km/h) for recreational female marathoners. Variables of both anthropometry and training were related to marathon race time in recreational female marathoners and cannot be reduced to one single predictor variable. For practical applications, a low circumference of calf and a high running speed in training are associated with a fast marathon race time in recreational female runners.
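The regression equation reported above drops straight into code; a minimal sketch (the function and variable names are ours, not the authors'):

```python
def predict_marathon_time(calf_circumference_cm, training_speed_kmh):
    """Predicted marathon race time (min) from the regression equation
    reported above (r^2 = 0.50, recreational female runners)."""
    return 184.4 + 5.0 * calf_circumference_cm - 11.9 * training_speed_kmh

# Example: a runner with a 36 cm calf who trains at 10 km/h.
t = predict_marathon_time(36.0, 10.0)  # 184.4 + 180.0 - 119.0 = 245.4 min
```

Note the equation explains only half the variance (r² = 0.50), so predictions carry wide uncertainty.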

  10. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    Science.gov (United States)

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
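The operational measure described above is simple to compute; a minimal sketch with made-up trial data:

```python
import statistics

def rt_variability(reaction_times_ms):
    """Intra-individual reaction time variability: the standard deviation
    of RTs across trials (the operational CNS-reliability measure above)."""
    return statistics.stdev(reaction_times_ms)

# Hypothetical trial RTs in ms; lower values = more 'reliable' on this measure.
trials = [412, 398, 405, 431, 389, 402]
print(round(rt_variability(trials), 1))  # 14.4
```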

  11. Modelling urban travel times

    NARCIS (Netherlands)

    Zheng, F.

    2011-01-01

    Urban travel times are intrinsically uncertain due to the many stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain

  12. Internal variability of a 3-D ocean model

    Directory of Open Access Journals (Sweden)

    Bjarne Büchmann

    2016-11-01

    Full Text Available The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve ‘model skill’ and overall performance. It has been an area of concern that the uncertainty inherent to the stochastic/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to the cases where the model is tuned during an iterative process, in which model results are fed back to improve model parameters, such as bathymetry. An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSUs and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the developing time scale is estimated for each region, and great regional differences are found – in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events. A particular event is examined in detail to shed light on how the ensemble ‘behaves’ in periods with large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range) as well as the ensemble distribution within that range seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate to large ensemble variability. These findings bear
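The ensemble-spread quantities described (standard deviation and range across members at each time step) can be sketched as follows, using a synthetic stand-in for the model output:

```python
import numpy as np

# Hypothetical ensemble: 10 members x 365 daily salinity values at one
# grid point (synthetic data, not from the operational model).
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=15.0, scale=0.5, size=(10, 365))  # PSU

# Ensemble variability through time, as discussed above: standard
# deviation and range across members at each time step.
spread_std = ensemble.std(axis=0)
spread_range = ensemble.max(axis=0) - ensemble.min(axis=0)
print(spread_std.shape, spread_range.shape)  # (365,) (365,)
```

Time steps where `spread_std` spikes would correspond to the short-lived, spatially limited high-variability events the study reports.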

  13. Long Pulse Integrator of Variable Integral Time Constant

    International Nuclear Information System (INIS)

    Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong

    2010-01-01

    A new type of long-pulse integrator was designed, based on a variable integral time constant and on deducting the integral drift by its slope. The integral time constant can be changed by selecting different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a given period can be calculated by digital signal processing and used to deduct the drift from the original integral signal in real time. Tests show that this long-pulse integrator reduces integral drift effectively and also eliminates the effects of changing the integral time constant. The integral time constant can be changed by remote control, and manual adjustment of the integral drift is avoided, which greatly improves experimental efficiency; the integrator can be used for electromagnetic measurements in Tokamak experiments. (authors)
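The slope-deduction idea can be illustrated with a hedged sketch: estimate the drift slope over a window where no physical signal is expected, then subtract the fitted ramp (the instrument does this in real time in DSP; the window choice and names here are illustrative):

```python
import numpy as np

def deduct_drift(signal, t, quiet):
    """Remove linear integrator drift: fit a slope over a 'quiet' window
    (where no physical signal is expected) and subtract the fitted ramp.
    A sketch of the slope-deduction idea, not the instrument's firmware."""
    slope, intercept = np.polyfit(t[quiet], signal[quiet], 1)
    return signal - (slope * t + intercept)

t = np.linspace(0.0, 10.0, 1001)
true = np.where(t < 1.0, 0.0, np.sin(t - 1.0))  # signal starts after t = 1 s
drifting = true + 0.02 * t + 0.1                # add linear drift + offset
corrected = deduct_drift(drifting, t, t < 1.0)  # recovers 'true'
```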

  14. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  15. Increased timing variability in schizophrenia and bipolar disorder.

    Directory of Open Access Journals (Sweden)

    Amanda R Bolbecker

    Full Text Available Theoretical and empirical evidence suggests that impaired time perception and the neural circuitry underlying internal timing mechanisms may contribute to severe psychiatric disorders, including psychotic and mood disorders. The degree to which alterations in temporal perceptions reflect deficits that exist across psychosis-related phenotypes and the extent to which mood symptoms contribute to these deficits is currently unknown. In addition, compared to schizophrenia, where timing deficits have been more extensively investigated, sub-second timing has been studied relatively infrequently in bipolar disorder. The present study compared sub-second duration estimates of schizophrenia (SZ), schizoaffective disorder (SA), non-psychotic bipolar disorder (BDNP), bipolar disorder with psychotic features (BDP), and healthy non-psychiatric controls (HC) on a well-established time perception task using sub-second durations. Participants included 66 SZ, 37 BDNP, 34 BDP, 31 SA, and 73 HC who participated in a temporal bisection task that required temporal judgements about auditory durations ranging from 300 to 600 milliseconds. Timing variability was significantly higher in SZ, BDP, and BDNP groups compared to healthy controls. The bisection point did not differ across groups. These findings suggest that both psychotic and mood symptoms may be associated with disruptions in internal timing mechanisms. Yet unexpected findings emerged. Specifically, the BDNP group had significantly increased variability compared to controls, but the SA group did not. In addition, these deficits appeared to exist independent of current symptom status. The absence of between-group differences in bisection point suggests that increased variability in the SZ and bipolar disorder groups is due to alterations in perceptual timing in the sub-second range, possibly mediated by the cerebellum, rather than cognitive deficits.

  16. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

    Science.gov (United States)

    Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

    2017-02-01

    The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.
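The core argument, that first moments of complexity can coincide while second moments differ, can be illustrated with synthetic data (the series below merely stand in for instantaneous Lyapunov estimates; they are not from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical streams of instantaneous complexity estimates (e.g., a
# dominant Lyapunov exponent tracked over time) for two subjects with
# the same average complexity but different time-variance.
healthy = 0.5 + 0.05 * rng.standard_normal(500)
patient = 0.5 + 0.20 * rng.standard_normal(500)

# Simple time-averaging barely separates the two series ...
print(abs(healthy.mean() - patient.mean()) < 0.05)
# ... but the second-order moment ('complexity variability') does.
print(healthy.var() < patient.var())  # True
```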

  17. Dynamics of macroeconomic and financial variables in different time horizons

    OpenAIRE

    Kim Karlsson, Hyunjoo

    2012-01-01

    This dissertation consists of an introductory chapter and four papers dealing with financial issues of open economies, which can be in two broad categorizations: 1) exchange rate movements and 2) stock market interdependence. The first paper covers how the exchange rate changes affect the prices of internationally traded goods. With the variables (the price of exports in exporters’ currency and the exchange rate, both of which are in logarithmic form) being cointegrated, a model with both lon...

  18. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-07-01

    Full Text Available The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle and volcanoes, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcasted to a certain extent, where MLP performs significantly better than the remaining models. However, variability remains that cannot be statistically hindcasted within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a winter 2011/12 with warm and weak vortex conditions. A vortex breakdown is predicted for late January, early February 2012.
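As a hedged illustration of the fit/hindcast setup (not the paper's models), a plain least-squares regression on synthetic external-factor proxies already shows the idea of fitting one period and hindcasting another:

```python
import numpy as np

# Synthetic stand-ins for external factors (ENSO, QBO, solar-cycle
# proxies) and a stratospheric anomaly driven by them plus noise.
# The paper's actual models (LDA, FEM-VARX, MLP, SVR) are far more
# flexible; this is only the simplest possible analogue.
rng = np.random.default_rng(2)
n = 312                                   # e.g. monthly values, 1980-2005
X = rng.standard_normal((n, 3))           # three external-factor proxies
anomaly = X @ np.array([1.0, -0.5, 0.3]) + 0.2 * rng.standard_normal(n)

split = 240                               # fit on the first 20 'years'
coef, *_ = np.linalg.lstsq(X[:split], anomaly[:split], rcond=None)
hindcast = X[split:] @ coef
skill = np.corrcoef(hindcast, anomaly[split:])[0, 1]
print(skill > 0.9)  # the linear part of the variability is recovered
```

Variability not driven by the regressors (the analogue of the 2009 major warming) would by construction remain unhindcastable in this framework.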

  19. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  20. Numerical counting ratemeter with variable time constant and integrated circuits

    International Nuclear Information System (INIS)

    Kaiser, J.; Fuan, J.

    1967-01-01

    We present here the prototype of a numerical counting ratemeter which is a special version of variable time-constant frequency meter (1). The originality of this work lies in the fact that the change in the time constant is carried out automatically. Since the criterion for this change is the accuracy of the annunciated result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 sec to 1 millisec for frequencies in the range 10 Hz to 10 MHz. This prototype is built entirely of MECL-type integrated circuits from Motorola and is thus contained in two relatively small boxes. (authors) [fr]
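The automatic time-constant selection can be sketched as follows; the accuracy criterion is simplified to "hold the number of averaged counts roughly constant", and all constants are illustrative rather than taken from the instrument:

```python
def integration_time(freq_hz, counts_target=10.0, t_min=1e-3, t_max=1.0):
    """Pick the integration time so that roughly `counts_target` events
    are averaged regardless of input frequency, clipped to the 1 ms-1 s
    range quoted above. A sketch of the accuracy criterion only."""
    if freq_hz <= 0:
        return t_max
    return min(t_max, max(t_min, counts_target / freq_hz))

print(integration_time(10))          # 1.0   (slow input -> longest constant)
print(integration_time(10_000_000))  # 0.001 (fast input -> shortest constant)
```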

  1. Error-in-variables models in calibration

    Science.gov (United States)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
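One conventional frequentist EIV technique of the kind summarized above is Deming regression, which fits a line when both stimulus and response carry measurement error; a self-contained sketch, assuming a known error-variance ratio `delta`:

```python
import math

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-variables) regression: slope and intercept when
    both the stimuli x and the responses y are measured with error.
    delta = ratio of the response to the stimulus error variances."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx

x = [1.0, 2.0, 3.0, 4.0, 5.0]        # nominal stimuli
y = [3.0, 5.0, 7.0, 9.0, 11.0]       # responses, exactly y = 2x + 1
slope, intercept = deming_fit(x, y)  # -> (2.0, 1.0)
```

Unlike ordinary least squares, the fitted slope is not attenuated when the stimuli are noisy, which is exactly why EIV models matter in calibration.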

  2. The new Toyota variable valve timing and lift system

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, K.; Fuwa, N.; Yoshihara, Y. [Toyota Motor Corporation (Japan); Hori, K. [Toyota Boshoku Corporation (Japan)

    2007-07-01

    A continuously variable valve timing (duration and phase) and lift system was developed. This system was applied to the valvetrain of a new 2.0L L4 engine (3ZRFAE) for the Japanese market. The system has rocker arms, which allow continuously variable timing and lift, situated between a conventional roller-rocker arm and the camshaft, an electromotor actuator to drive it and a phase mechanism for intake and exhaust camshafts (Dual VVT-i). The rocking center of the rocker arm is stationary, and the axial linear motion of a helical spline changes the initial phase of the rocker arm, which varies the timing and lift. The linear motion mechanism uses an original planetary roller screw and is driven by a brushless motor with a built-in electric control unit. Since the rocking center and the linear motion helical spline center coincide, a compact cylinder head design was possible, and the cylinder head shares a common design with a conventional engine. Since the ECU controls intake valve duration and timing, a maximum fuel economy gain of 10% (depending on driving conditions) is obtained by reducing light-to-medium load pumping losses. Intake efficiency was also maximized throughout the speed range, resulting in a power gain of 10%. Further, HC emissions were reduced due to increased air speed at low valve lift. (orig.)

  3. Quadratic time dependent Hamiltonians and separation of variables

    International Nuclear Information System (INIS)

    Anzaldo-Meneses, A.

    2017-01-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green’s function is obtained and a comparison with the classical Hamilton–Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei–Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü–Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems. - Highlights: • Exact unitary transformation reducing time dependent quadratic quantum Hamiltonian to zero. • New separation of variables method and simultaneous uncoupling of modes. • Explicit examples of transformations for one to four dimensional problems. • New general evolution equation for quadratic form in the action, respectively Green’s function.

  4. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses, including trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
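Classical AR(1) prewhitening, the operation whose side effects the paper examines, is a one-liner; the sketch below shows how much variance it strips from a persistent synthetic series:

```python
import numpy as np

def prewhiten_ar1(x):
    """Classical AR(1) prewhitening: estimate the lag-1 autocorrelation
    r1 and form the residual series y_t = x_t - r1 * x_{t-1}. This is
    the operation that can distort low-frequency variability."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    r1 = np.sum(xm[1:] * xm[:-1]) / np.sum(xm * xm)
    return x[1:] - r1 * x[:-1], r1

# A strongly persistent AR(1) series (phi = 0.9) as a stand-in for a
# hydroclimatic record or tree-ring chronology.
rng = np.random.default_rng(3)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

resid, r1 = prewhiten_ar1(x)
print(resid.var() < x.var())  # the residual series is far less variable
```

The multi-year excursions of `x` (the analogue of sustained droughts) are largely absent from `resid`, illustrating the paper's central caution.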

  5. Preference as a Function of Active Interresponse Times: A Test of the Active Time Model

    Science.gov (United States)

    Misak, Paul; Cleaveland, J. Mark

    2011-01-01

    In this article, we describe a test of the active time model for concurrent variable interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained in a multiple concurrent…

  6. Time and space variability of spectral estimates of atmospheric pressure

    Science.gov (United States)

    Canavero, Flavio G.; Einaudi, Franco

    1987-01-01

    The temporal and spatial behaviors of atmospheric pressure spectra over northern Italy and the Alpine massif were analyzed using surface pressure measurements carried out at two microbarograph stations in the Po Valley, one 50 km south of the Alps, the other in the foothills of the Dolomites. The first 15 days of the study overlapped with the ALPEX Intensive Observation Period. The pressure records were found to be intrinsically nonstationary and to display substantial time variability, implying that the statistical moments depend on time. The shape and energy content of the spectra depended on the time segment considered. In addition, important differences existed between spectra obtained at the two stations, indicating a substantial effect of topography, particularly for periods less than 40 min.

  7. Quadratic time dependent Hamiltonians and separation of variables

    Science.gov (United States)

    Anzaldo-Meneses, A.

    2017-06-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green's function is obtained and a comparison with the classical Hamilton-Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei-Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü-Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems.

  8. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews the creation of a preliminary multi-variable parametric model for the contract costs of building a space telescope. It discusses the data-collection methodology, defines the statistical analysis methodology, presents single-variable model results, tests historical models, and introduces the multi-variable models.

  9. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  10. BAYESIAN TECHNIQUES FOR COMPARING TIME-DEPENDENT GRMHD SIMULATIONS TO VARIABLE EVENT HORIZON TELESCOPE OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios, E-mail: junhankim@email.arizona.edu [Department of Astronomy and Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States)

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.

  11. How to get rid of W: a latent variables approach to modelling spatially lagged variables

    NARCIS (Netherlands)

    Folmer, H.; Oud, J.

    2008-01-01

    In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are

  12. How to get rid of W : a latent variables approach to modelling spatially lagged variables

    NARCIS (Netherlands)

    Folmer, Henk; Oud, Johan

    2008-01-01

    In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are

  13. A study of applying variable valve timing to highly rated diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Stone, C R; Leonard, H J [comps.; Brunel Univ., Uxbridge (United Kingdom); Charlton, S J [comp.; Bath Univ. (United Kingdom)

    1992-10-01

    The main objective of the research was to use Simulation Program for Internal Combustion Engines (SPICE) to quantify the potential offered by Variable Valve Timing (VVT) in improving engine performance. A model has been constructed of a particular engine using SPICE. The model has been validated with experimental data, and it has been shown that accurate predictions are made when the valve timing is changed. (author)

  14. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  15. Generating k-independent variables in constant time

    DEFF Research Database (Denmark)

    Christiani, Tobias Lybecker; Pagh, Rasmus

    2014-01-01

    The generation of pseudorandom elements over finite fields is fundamental to the time, space and randomness complexity of randomized algorithms and data structures. We consider the problem of generating k-independent random values over a finite field F in a word RAM model equipped with constant...
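The standard construction behind k-independent values, a random polynomial of degree k-1 evaluated over a prime field, can be sketched as follows (the paper's word-RAM constant-time evaluation techniques are not reproduced here):

```python
import random

def make_k_independent(k, p=(1 << 61) - 1, seed=0):
    """Return a hash function h: evaluating a random degree-(k-1)
    polynomial over the prime field F_p at distinct points yields
    k-wise independent values (the textbook construction)."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(p) for _ in range(k)]

    def h(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod p
            acc = (acc * x + c) % p
        return acc

    return h

h = make_k_independent(k=4)
values = [h(x) for x in range(10)]   # any 4 of these are independent
```

Evaluating the polynomial costs O(k) per point with this naive loop; the paper's contribution is precisely to drive the per-value cost down to a constant independent of k.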

  16. Stability Criteria for Differential Equations with Variable Time Delays

    Science.gov (United States)

    Schley, D.; Shail, R.; Gourley, S. A.

    2002-01-01

    Time delays are an important aspect of mathematical modelling, but often result in highly complicated equations which are difficult to treat analytically. In this paper it is shown how careful application of certain undergraduate tools such as the Method of Steps and the Principle of the Argument can yield significant results. Certain delay…

  17. Environmental versus demographic variability in stochastic predator–prey models

    International Nuclear Information System (INIS)

    Dobramysl, U; Täuber, U C

    2013-01-01

    In contrast to the neutral population cycles of the deterministic mean-field Lotka–Volterra rate equations, including spatial structure and stochastic noise in models for predator–prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization. (paper)
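
The stochastic dynamics described above can be illustrated with a minimal Gillespie-type simulation of the Lotka-Volterra reactions (a non-spatial sketch with made-up rates; the paper's lattice structure, quenched disorder, and inheritable predation efficiencies are omitted):

```python
import random

def gillespie_lv(prey, pred, mu=1.0, lam=0.02, sigma=0.5, t_max=5.0, seed=0):
    """Exact stochastic simulation of the three Lotka-Volterra reactions:
    prey birth (rate mu), predation (lam), predator death (sigma).
    Rate values here are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, prey, pred)]
    while t < t_max and (prey > 0 or pred > 0):
        rates = [mu * prey, lam * prey * pred, sigma * pred]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)        # waiting time to next event
        r = rng.uniform(0, total)          # pick which reaction fires
        if r < rates[0]:
            prey += 1
        elif r < rates[0] + rates[1]:
            prey -= 1
            pred += 1
        else:
            pred -= 1
        traj.append((t, prey, pred))
    return traj

traj = gillespie_lv(prey=100, pred=50)
```

Unlike the deterministic rate equations, individual runs show erratic oscillations of the kind the abstract discusses.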

  18. Correlates of adolescent sleep time and variability in sleep time: the role of individual and health related characteristics.

    Science.gov (United States)

    Moore, Melisa; Kirchner, H Lester; Drotar, Dennis; Johnson, Nathan; Rosen, Carol; Redline, Susan

    2011-03-01

    Adolescents are predisposed to short sleep duration and irregular sleep patterns due to certain host characteristics (e.g., age, pubertal status, gender, ethnicity, socioeconomic class, and neighborhood distress) and health-related variables (e.g., ADHD, asthma, birth weight, and BMI). The aim of the current study was to investigate the relationship between such variables and actigraphic measures of sleep duration and variability. Cross-sectional study of 247 adolescents (48.5% female, 54.3% ethnic minority, mean age of 13.7 years) involved in a larger community-based cohort study. Significant univariate predictors of sleep duration included gender, minority ethnicity, neighborhood distress, parent income, and BMI. In multivariate models, gender, minority status, and BMI were significantly associated with sleep duration (all p < .05), with girls, non-minority adolescents, and those of a lower BMI obtaining more sleep. Univariate models demonstrated that age, minority ethnicity, neighborhood distress, parent education, parent income, pubertal status, and BMI were significantly related to variability in total sleep time. In the multivariate model, age, minority status, and BMI were significantly related to variability in total sleep time (all p < .05), with non-minority adolescents and those of a lower BMI obtaining more regular sleep. These data show differences in sleep patterns in population sub-groups of adolescents which may be important in understanding pediatric health risk profiles. Sub-groups that may particularly benefit from interventions aimed at improving sleep patterns include boys, overweight, and minority adolescents. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are addressed.
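
The mean inactivity time on which such models build is a standard reliability quantity (a textbook definition, not quoted from the paper): for a lifetime X with distribution function F,

```latex
m(t) = \mathbb{E}\left[\,t - X \mid X \le t\,\right]
     = \frac{\int_0^{t} F(u)\,du}{F(t)}, \qquad t > 0 .
```

It measures the expected time elapsed since failure, given that failure occurred before time t.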

  20. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    . This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two target dwell time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects...... and the model has been extended to fit these data....

  1. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    Science.gov (United States)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity for grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations, and time resolutions. First, one year of five-minute wind power generation data from 29 wind farms spans 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate the dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high
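
The filter-then-correlate procedure described above can be sketched as follows (a crude moving-average high-pass stands in for the digital filters of the paper; the data are synthetic, not the Australian, Canadian, or US datasets):

```python
import numpy as np

def highpass(series, cutoff_steps):
    """Crude high-pass filter: subtract a centred moving average whose
    window corresponds to the cut-off time scale (a simplification of
    the digital filtering described in the abstract)."""
    kernel = np.ones(cutoff_steps) / cutoff_steps
    trend = np.convolve(series, kernel, mode="same")
    return series - trend

def site_correlation(a, b, cutoff_steps):
    fa, fb = highpass(a, cutoff_steps), highpass(b, cutoff_steps)
    return np.corrcoef(fa, fb)[0, 1]

# Synthetic example: two "sites" share a slow signal plus independent noise.
rng = np.random.default_rng(0)
slow = np.sin(np.linspace(0, 20 * np.pi, 5000))
a = slow + 0.5 * rng.standard_normal(5000)
b = slow + 0.5 * rng.standard_normal(5000)
r_low = site_correlation(a, b, cutoff_steps=5)  # short cut-off keeps only fast noise
r_all = np.corrcoef(a, b)[0, 1]                 # unfiltered correlation
```

With a short cut-off the shared slow signal is removed and the inter-site correlation collapses, mirroring the abstract's finding that higher cut-off frequencies require smaller separations for de-correlation.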

  2. Time-scales of stellar rotational variability and starspot diagnostics

    Science.gov (United States)

    Arkhypov, Oleksiy V.; Khodachenko, Maxim L.; Lammer, Helmut; Güdel, Manuel; Lüftinger, Teresa; Johnstone, Colin P.

    2018-01-01

    The difference in the stability of starspot distributions on global and hemispherical scales is studied in the rotational spot variability of 1998 main-sequence stars observed by the Kepler mission. It is found that the largest patterns are much more stable than smaller ones for cool, slow rotators, whereas the difference is less pronounced for hotter stars and/or faster rotators. This distinction is interpreted in terms of two mechanisms: (1) the diffusive decay of long-living spots in activity complexes of stars with saturated magnetic dynamos, and (2) spot emergence modulated by gigantic turbulent flows in the convection zones of stars with weaker magnetism. This opens a way for the investigation of deep stellar convection, which is as yet inaccessible to asteroseismology. Moreover, subdiffusion in stellar photospheres was revealed from observations for the first time. A diagnostic diagram is proposed that allows differentiation and selection of stars for more detailed studies of these phenomena.

  3. Analysis models for variables associated with breastfeeding duration

    Directory of Open Access Journals (Sweden)

    Edson Theodoro dos S. Neto

    2013-09-01

    OBJECTIVE To analyze the factors associated with breastfeeding duration by two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on food and sucking habits, socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, considering duration of breastfeeding as the dependent variable, and by logistic regression (the dependent variable was the presence of a breastfeeding child at different post-natal ages). RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were: pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between both models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different models of statistical regression. Cox regression models are adequate to analyze such factors in longitudinal studies.
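
The logistic arm of such a comparison can be sketched with a minimal maximum-likelihood fit on synthetic data (numpy only; the exposure name and simulated effect size are illustrative, not the study's data):

```python
import numpy as np

def fit_logistic(X, y, n_iter=500, lr=0.1):
    """Plain logistic regression by gradient ascent on the log-likelihood;
    a minimal stand-in for the logistic model used in the abstract."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))     # predicted probabilities
        beta += lr * X1.T @ (y - p) / len(y)     # average score step
    return beta

# Synthetic cohort: binary exposure with a true log-odds ratio of 1.2.
rng = np.random.default_rng(42)
pacifier = rng.integers(0, 2, 400)
logit = -1.0 + 1.2 * pacifier
weaned = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(float)

beta = fit_logistic(pacifier.reshape(-1, 1), weaned)
odds_ratio = np.exp(beta[1])   # estimate of the adjusted odds ratio
```

The Cox model differs in treating duration itself as the outcome and yielding hazard rather than odds ratios, which is why the two models can disagree on protective factors.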

  4. Preferences for travel time variability – A study of Danish car drivers

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Rich, Jeppe

    Travel time variability (TTV) is a measure of the extent of unpredictability in travel times. It is generally accepted that TTV has a negative effect on travellers’ wellbeing and overall utility of travelling, and valuation of variability is an important issue in transport demand modelling...... preferences, to exclude non-traders, and to avoid complicated issues related to scheduled public transport services. The survey uses customised Internet questionnaires, containing a series of questions related to the traveller’s most recent morning trip to work, e.g.: • Travel time experienced on this day......, • Number of stops along the way, their duration, and whether these stops involved restrictions on time of day, • Restrictions regarding departure time from home or arrival time at work, • How often such a trip was made within the last month and the range of experienced travel times, • What the traveller...

  5. Time dependent analysis of assay comparability: a novel approach to understand intra- and inter-site variability over time

    Science.gov (United States)

    Winiwarter, Susanne; Middleton, Brian; Jones, Barry; Courtney, Paul; Lindmark, Bo; Page, Ken M.; Clark, Alan; Landqvist, Claire

    2015-09-01

    We demonstrate here a novel use of statistical tools to study intra- and inter-site assay variability of five early drug metabolism and pharmacokinetics in vitro assays over time. Firstly, a tool for process control is presented. It shows the overall assay variability but allows also the following of changes due to assay adjustments and can additionally highlight other, potentially unexpected variations. Secondly, we define the minimum discriminatory difference/ratio to support projects to understand how experimental values measured at different sites at a given time can be compared. Such discriminatory values are calculated for 3 month periods and followed over time for each assay. Again assay modifications, especially assay harmonization efforts, can be noted. Both the process control tool and the variability estimates are based on the results of control compounds tested every time an assay is run. Variability estimates for a limited set of project compounds were computed as well and found to be comparable. This analysis reinforces the need to consider assay variability in decision making, compound ranking and in silico modeling.
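
One plausible reading of the two tools described, chart limits from repeated control-compound results and a minimum discriminatory ratio from assay noise, can be sketched as follows (the formulas are common choices, not necessarily the authors' exact definitions):

```python
import numpy as np

def control_limits(values, k=3):
    """Shewhart-style control-chart limits from repeated measurements of
    a control compound; points outside flag unexpected assay variation."""
    m, s = np.mean(values), np.std(values, ddof=1)
    return m - k * s, m + k * s

def min_discriminatory_ratio(log_sd, k=2):
    """Smallest fold-difference distinguishable from noise when results
    vary log-normally with standard deviation log_sd (base 10): the
    difference of two independent results has sd sqrt(2)*log_sd."""
    return 10 ** (k * np.sqrt(2) * log_sd)

lo, hi = control_limits([1.02, 0.98, 1.05, 0.97, 1.01])
ratio = min_discriminatory_ratio(0.15)
```

Tracking these quantities in 3-month windows, as the abstract describes, would reveal shifts after assay adjustments or harmonization efforts.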

  6. The added value of time-variable microgravimetry to the understanding of how volcanoes work

    Science.gov (United States)

    Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel

    2017-01-01

    During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.

  7. A new approach for modelling variability in residential construction projects

    Directory of Open Access Journals (Sweden)

    Mehrdad Arashpour

    2013-06-01

    The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.

  9. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...

  10. Perfectionistic Cognitions: Stability, Variability, and Changes Over Time.

    Science.gov (United States)

    Prestele, Elisabeth; Altstötter-Gleich, Christine

    2018-02-01

    The construct of perfectionistic cognitions is defined as a state-like construct resulting from a perfectionistic self-schema and activated by specific situational demands. Only a few studies have investigated whether and how perfectionistic cognitions change across different situations and whether they reflect stable between-person differences or also within-person variations over time. We conducted 2 studies to investigate the variability and stability of 3 dimensions of perfectionistic cognitions while situational demands changed (Study 1) and on a daily level during a highly demanding period of time (Study 2). The results of both studies revealed that stable between-person differences accounted for the largest proportion of variance in the dimensions of perfectionistic cognitions and that these differences were validly associated with between-person differences in affect. The frequency of perfectionistic cognitions increased during students' first semester at university, and these average within-person changes were different for the 3 dimensions of perfectionistic cognitions (Study 1). In addition, there were between-person differences in the within-person changes that were validly associated with concurrent changes in closely related constructs (unpleasant mood and tense arousal). Within-person variations in perfectionistic cognitions were also validly associated with variations in unpleasant mood and tense arousal from day to day (Study 2).

  11. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data.The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  12. An Atmospheric Variability Model for Venus Aerobraking Missions

    Science.gov (United States)

    Tolson, Robert T.; Prince, Jill L. H.; Konopliv, Alexander A.

    2013-01-01

    Aerobraking has proven to be an enabling technology for planetary missions to Mars and has been proposed to enable low cost missions to Venus. Aerobraking saves a significant amount of propulsion fuel mass by exploiting atmospheric drag to reduce the eccentricity of the initial orbit. The solar arrays have been used as the primary drag surface and only minor modifications have been made in the vehicle design to accommodate the relatively modest aerothermal loads. However, if atmospheric density is highly variable from orbit to orbit, the mission must either accept higher aerothermal risk, a slower pace for aerobraking, or a tighter corridor likely with increased propulsive cost. Hence, knowledge of atmospheric variability is of great interest for the design of aerobraking missions. The first planetary aerobraking was at Venus during the Magellan mission. After the primary Magellan science mission was completed, aerobraking was used to provide a more circular orbit to enhance gravity field recovery. Magellan aerobraking took place between local solar times of 1100 and 1800 hrs, and it was found that the Venusian atmospheric density during the aerobraking phase had less than 10% (1 sigma) orbit-to-orbit variability. On the other hand, at some latitudes and seasons, Martian variability can be as high as 40% (1 sigma). From both the MGN and PVO missions it was known that the atmosphere, above aerobraking altitudes, showed greater variability at night, but this variability was never quantified in a systematic manner. This paper proposes a model for atmospheric variability that can be used for aerobraking mission design until more complete data sets become available.

  13. Interdecadal variability in a global coupled model

    International Nuclear Information System (INIS)

    Storch, J.S. von.

    1994-01-01

    Interdecadal variations are studied in a 325-year simulation performed by a coupled atmosphere-ocean general circulation model. The patterns obtained in this study may be considered characteristic patterns for interdecadal variations. 1. The atmosphere: Interdecadal variations have no preferred time scales, but reveal well-organized spatial structures. They appear as two modes, one related to variations of the tropical easterlies and the other to the Southern Hemisphere westerlies. Both have red spectra. The amplitude of the associated wind anomalies is largest in the upper troposphere. The associated temperature anomalies are in thermal-wind balance with the zonal winds and are out of phase between the troposphere and the lower stratosphere. 2. The Pacific Ocean: The dominant mode in the Pacific appears to be wind-driven in the midlatitudes and is related to air-sea interaction processes during one stage of the oscillation in the tropics. Anomalies of this mode propagate westward in the tropics and then northward (southwestward) in the North (South) Pacific on a time scale of about 10 to 20 years. (orig.)

  14. X-ray spectra and time variability of active galactic nuclei

    International Nuclear Information System (INIS)

    Mushotzky, R.F.

    1984-02-01

    The X-ray spectra of broad line active galactic nuclei (AGN) of all types (Seyfert 1s, NELGs, broad-line radio galaxies) are well fit by a power law in the 0.5 to 100 keV band of mean energy slope alpha = 0.68 +/- 0.15. There is, as yet, no strong evidence for time variability of this slope in a given object. The constraints that this places on simple models of the central energy source are discussed. BL Lac objects have quite different X-ray spectral properties and show pronounced X-ray spectral variability. On time scales longer than 12 hours most radio-quiet AGN do not show strong (delta I/I > 0.5) variability. The probability of variability of these AGN seems to be inversely related to their luminosity. However, characteristic timescales for variability have not been measured for many objects. This general lack of variability may imply that most AGN are well below the Eddington limit. Radio-bright AGN tend to be more variable than radio-quiet AGN on long (tau approx. 6 month) timescales

  15. Time lags in biological models

    CERN Document Server

    MacDonald, Norman

    1978-01-01

    In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...

  16. Long time scale hard X-ray variability in Seyfert 1 galaxies

    Science.gov (United States)

    Markowitz, Alex Gary

    This dissertation examines the relationship between long-term X-ray variability characteristics, black hole mass, and luminosity of Seyfert 1 Active Galactic Nuclei. High dynamic range power spectral density functions (PSDs) have been constructed for six Seyfert 1 galaxies. These PSDs show "breaks" or characteristic time scales, typically on the order of a few days. There is a resemblance to PSDs of lower-mass Galactic X-ray binaries (XRBs), with the ratios of putative black hole masses and variability time scales approximately the same (10^6-7) between the two classes of objects. The data are consistent with a linear correlation between Seyfert PSD break time scale and black hole mass estimate; the relation extrapolates reasonably well over 6-7 orders of magnitude to XRBs. All of this strengthens the case for a physical similarity between Seyfert galaxies and XRBs. The first six years of RXTE monitoring of Seyfert 1s have been systematically analyzed to probe hard X-ray variability on multiple time scales in a total of 19 Seyfert 1s, in an expansion of the survey of Markowitz & Edelson (2001). Correlations between variability amplitude, luminosity, and black hole mass are explored; the data support the model of PSD movement with black hole mass suggested by the PSD survey. All of the continuum variability results are consistent with relatively more massive black holes hosting larger X-ray emission regions, resulting in 'slower' observed variability. Nearly all sources in the sample exhibit stronger variability towards softer energies, consistent with softening as they brighten. Direct time-resolved spectral fitting has been performed on continuous RXTE monitoring of seven Seyfert 1s to study long-term spectral variability and Fe K-alpha variability characteristics. The Fe K-alpha line displays a wide range of behavior but varies less strongly than the broadband continuum. Overall, however, there is no strong evidence for correlated variability between the line and
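
The PSD estimation underlying such break-time-scale measurements can be illustrated on an evenly sampled toy light curve (a simplified sketch; real monitoring data are unevenly sampled and require the more careful estimators used in work like this):

```python
import numpy as np

def periodogram(flux, dt):
    """Simple PSD estimate of an evenly sampled light curve via the
    squared modulus of the FFT (mean subtracted, zero frequency dropped)."""
    flux = flux - np.mean(flux)
    power = np.abs(np.fft.rfft(flux)) ** 2
    freqs = np.fft.rfftfreq(len(flux), d=dt)
    return freqs[1:], power[1:]

# Toy red-noise light curve: a random walk has a steep ~1/f^2 PSD,
# qualitatively like AGN variability above the break frequency.
rng = np.random.default_rng(3)
flux = np.cumsum(rng.standard_normal(4096))
freqs, power = periodogram(flux, dt=1.0)
```

A "break" would appear as a frequency below which the power-law slope of this spectrum flattens; locating it as a function of black hole mass is the measurement described above.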

  17. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  18. Protecting chips against hold time violations due to variability

    CERN Document Server

    Neuberger, Gustavo; Reis, Ricardo

    2013-01-01

    With the development of Very-Deep Sub-Micron technologies, process variability is becoming increasingly important and is a very important issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value, but become a random variable, with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence of this parameter variation is that circuit characteristics, as delay and power, also become random variables. Becaus

  19. Impedance models in time domain

    NARCIS (Netherlands)

    Rienstra, S.W.

    2005-01-01

    Necessary conditions for an impedance function are derived. Methods available in the literature are discussed. A format with recipe is proposed for an exact impedance condition in time domain on a time grid, based on the Helmholtz resonator model. An explicit solution is given of a pulse reflecting

  20. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  1. Predictor variable resolution governs modeled soil types

    Science.gov (United States)

    Soil mapping identifies different soil types by compressing a unique suite of spatial patterns and processes across multiple spatial scales. It can be quite difficult to quantify spatial patterns of soil properties with remotely sensed predictor variables. More specifically, matching the right scale...

  2. Modeling Coast Redwood Variable Retention Management Regimes

    Science.gov (United States)

    John-Pascal Berrill; Kevin O' Hara

    2007-01-01

    Variable retention is a flexible silvicultural system that provides forest managers with an alternative to clearcutting. While much of the standing volume is removed in one harvesting operation, residual stems are retained to provide structural complexity and wildlife habitat functions, or to accrue volume before removal during subsequent stand entries. The residual...

  3. Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables

    Science.gov (United States)

    Zanotti, Olindo; Dumbser, Michael

    2016-01-01

We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction and the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER
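The single conversion point the abstract describes is the standard change of variables for the governing system; for the 1D Euler equations it can be sketched as follows (a minimal illustration with our own variable names, not the authors' implementation):

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for an ideal gas (our assumption)

def cons_to_prim(u):
    """Conserved (rho, rho*v, E) -> primitive (rho, v, p)."""
    rho, mom, E = u
    v = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * v ** 2)
    return np.array([rho, v, p])

def prim_to_cons(w):
    """Primitive (rho, v, p) -> conserved (rho, rho*v, E)."""
    rho, v, p = w
    E = p / (GAMMA - 1.0) + 0.5 * rho * v ** 2
    return np.array([rho, rho * v, E])

# The two maps are inverses, so a scheme needs only one conversion
# per cell center, as in the approach described above.
w = np.array([1.0, 0.5, 2.0])  # rho, v, p
u = prim_to_cons(w)
```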

  4. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  5. Viscous dark energy models with variable G and Λ

    International Nuclear Information System (INIS)

    Arbab, Arbab I.

    2008-01-01

We consider a cosmological model with bulk viscosity η, a variable cosmological constant Λ ∝ ρ^(-α), α = const, and a variable gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy without requiring the equation of state p = -ρ. During the inflationary era the energy density ρ does not remain constant, as it does in the de Sitter case. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be very large, suggesting that all matter in the universe is created during inflation. (author)

  6. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

  7. Multi-wheat-model ensemble responses to interannual climatic variability

    DEFF Research Database (Denmark)

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  8. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

    Directory of Open Access Journals (Sweden)

    Pablo Rogers

    2015-01-01

The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. A sample of 555 individuals completed a self-completion questionnaire composed of psychological variables and scales. By adopting the methodology of logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; and (e) problems of self-control, identified by individuals who drink an average of more than four glasses of alcoholic beverage a day.

  9. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2016-01-01

This presentation documents Kennedy Space Center's Independent Assessment work on three assessments for the Ground Systems Development and Operations (GSDO) Program, completed to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews; it provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine whether a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.
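The survivability-versus-time idea can be illustrated with a generic Monte Carlo sketch: sample a hazard-onset time distribution and estimate the probability that egress completes first. The distribution and parameters below are hypothetical, not those of the KSC assessments:

```python
import numpy as np

rng = np.random.default_rng(0)

def survivability_curve(egress_times, hazard_onset_samples):
    """P(survive | egress takes t) = P(hazard onset later than t),
    estimated by Monte Carlo over sampled hazard-onset times."""
    return np.array([(hazard_onset_samples > t).mean() for t in egress_times])

# Hypothetical hazard-onset distribution: lognormal with a 10-minute median.
onset = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=100_000)
egress = np.linspace(0.0, 30.0, 7)      # candidate egress durations (minutes)
p_survive = survivability_curve(egress, onset)
```

Plotting `p_survive` against `egress` gives the survivability-versus-time curve; comparing curves for alternative egress systems mirrors the second assessment's approach.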

  10. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Science.gov (United States)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  11. A fire management simulation model using stochastic arrival times

    Science.gov (United States)

    Eric L. Smith

    1987-01-01

    Fire management simulation models are used to predict the impact of changes in the fire management program on fire outcomes. As with all models, the goal is to abstract reality without seriously distorting relationships between variables of interest. One important variable of fire organization performance is the length of time it takes to get suppression units to the...

  12. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    Science.gov (United States)

    Post, P.; Kärner, O.

    2009-04-01

Applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modelling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from nearly stationary longer-range variability. This is an indication of the fact that several geophysical time series show short-range non-stationary behaviour and stationary behaviour over longer ranges (Davis et al., 1996). In order to model such series, the choice of time step is crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model properly the long-range tendencies. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modelling. The longer scale can be called the climate scale, because such models are applicable at scales over some decades. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of the type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0
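A minimal sketch of the approach described above: aggregate a (synthetic) daily series to a 30-day step, then apply an ARIMA(0,1,1) model via its equivalent exponential-smoothing recursion. The data and parameter values below are illustrative assumptions, not the Estonian series:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "daily precipitation": 45 years of noisy gamma-distributed days
# (an assumption for illustration; not the Estonian data).
daily = rng.gamma(shape=0.3, scale=5.0, size=45 * 360)
monthly = daily.reshape(-1, 30).sum(axis=1)   # 30-day aggregation step

def arima_011_forecasts(x, theta):
    """One-step ARIMA(0,1,1) forecasts via the equivalent exponential
    smoothing recursion f[t+1] = f[t] + (1 + theta) * (x[t] - f[t])."""
    alpha = 1.0 + theta      # invertible theta in (-1, 0) gives alpha in (0, 1)
    f = np.empty_like(x)
    f[0] = x[0]
    for t in range(len(x) - 1):
        f[t + 1] = f[t] + alpha * (x[t] - f[t])
    return f

f = arima_011_forecasts(monthly, theta=-0.6)
rmse = np.sqrt(np.mean((monthly[1:] - f[1:]) ** 2))
```

The smoothing discounts short-range fluctuations, which is exactly why a long aggregation step plus ARIMA(0,1,1) suits the climate-scale variability discussed above.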

  13. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities for integrating different kinds of problem solving methods, obtaining many benefits from the user's point of view. In particular, we propose a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  14. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle, generally producing an acceptable set of variables; nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, as is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
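A toy version of GA-based variable selection can be sketched as below. The paper's tutorial uses R and logistic regression; this sketch uses Python and, for brevity, an ordinary-least-squares AIC as the fitness, on synthetic data in which only features 0 and 3 matter (our assumptions throughout):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: only features 0 and 3 truly drive the response.
n, p = 300, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

def aic(mask):
    """AIC of an OLS fit using the features selected by the boolean mask."""
    k = mask.sum()
    if k == 0:
        rss = np.sum((y - y.mean()) ** 2)
    else:
        beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        rss = np.sum((y - X[:, mask] @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)

def ga_select(pop=40, gens=60, mut=0.05):
    """Tiny genetic algorithm over feature-inclusion bitmasks."""
    popn = rng.random((pop, p)) < 0.5
    for _ in range(gens):
        fit = np.array([aic(m) for m in popn])
        order = np.argsort(fit)                  # lower AIC is better
        parents = popn[order[: pop // 2]]
        kids = parents.copy()
        cx = rng.integers(1, p, size=len(kids))  # one-point crossover
        for i in range(0, len(kids) - 1, 2):
            kids[i, cx[i]:], kids[i + 1, cx[i]:] = (
                kids[i + 1, cx[i]:].copy(), kids[i, cx[i]:].copy())
        kids ^= rng.random(kids.shape) < mut     # bit-flip mutation
        popn = np.vstack([parents, kids])
    fit = np.array([aic(m) for m in popn])
    return popn[np.argmin(fit)]

best = ga_select()
```

The returned bitmask should include the informative features while dropping most noise variables; swapping AIC for a cross-validated logistic-regression deviance recovers the paper's setting.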

  15. Resting heart rate variability is associated with ex-Gaussian metrics of intra-individual reaction time variability.

    Science.gov (United States)

    Spangler, Derek P; Williams, DeWayne P; Speller, Lassiter F; Brooks, Justin R; Thayer, Julian F

    2018-03-01

The relationships between vagally mediated heart rate variability (vmHRV) and the cognitive mechanisms underlying performance can be elucidated with ex-Gaussian modeling, an approach that quantifies two different forms of intra-individual variability (IIV) in reaction time (RT). To this end, the current study examined relations of resting vmHRV to whole-distribution and ex-Gaussian IIV. Subjects (N = 83) completed a 5-minute baseline while vmHRV (root mean square of successive differences; RMSSD) was measured. Ex-Gaussian (sigma, tau) and whole-distribution (standard deviation) estimates of IIV were derived from reaction times on a Stroop task. Resting vmHRV was found to be inversely related to tau (exponential IIV) but not to sigma (Gaussian IIV) or the whole-distribution standard deviation of RTs. Findings suggest that individuals with high vmHRV can better prevent attentional lapses but not difficulties with motor control. These findings inform the differential relationships of cardiac vagal control to the cognitive processes underlying human performance. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
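The ex-Gaussian parameters sigma and tau can be estimated in several ways; below is a simple method-of-moments sketch (not necessarily the estimation method used in the study) applied to simulated reaction times:

```python
import numpy as np

rng = np.random.default_rng(3)

def exgauss_moments(rt):
    """Method-of-moments ex-Gaussian estimates (mu, sigma, tau), using
    mean = mu + tau, var = sigma^2 + tau^2 and
    skew = 2*tau^3 / (sigma^2 + tau^2)**1.5."""
    m, v = rt.mean(), rt.var()
    skew = np.mean((rt - m) ** 3) / v ** 1.5
    tau = (max(skew, 1e-9) / 2.0) ** (1.0 / 3.0) * np.sqrt(v)
    sigma2 = max(v - tau ** 2, 1e-9)
    return m - tau, np.sqrt(sigma2), tau

# Simulated reaction times (ms): Gaussian(400, 40) plus Exponential(tau = 120).
rt = rng.normal(400.0, 40.0, 50_000) + rng.exponential(120.0, 50_000)
mu, sigma, tau = exgauss_moments(rt)
```

Here tau captures the exponential tail (attentional lapses in the study's interpretation) while sigma captures the Gaussian spread.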

  16. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from an observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission may intermittently fail. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series, created by applying a random missing-value pattern to the original (complete) time series, is utilized. There are two main performance measures used in this work: (1) error measures between the actual
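The core machinery, time-delay phase-space reconstruction plus prediction from dynamical neighbors, can be sketched as follows (a simplified local-averaging model, not the authors' implementation):

```python
import numpy as np

def embed(x, dim, lag):
    """Time-delayed phase-space reconstruction: row j is the delay vector
    (x[j], x[j+lag], ..., x[j+(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

def local_predict(x, dim=3, lag=1, k=4):
    """Predict x[-1] from the successors of the k nearest dynamical
    neighbors of the last delay vector built from x[:-1]."""
    Y = embed(x[:-1], dim, lag)
    query = Y[-1]
    dists = np.linalg.norm(Y[:-1] - query, axis=1)
    idx = np.argsort(dists)[:k]
    # Each neighbor votes with the observation that followed it.
    return x[idx + (dim - 1) * lag + 1].mean()

# Sanity check on a noiseless oscillation.
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))
pred = local_predict(x)
```

With missing values, the non-imputing variant would simply drop delay vectors containing gaps before the neighbor search.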

  17. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  18. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model, for vectors of limited dependent variables, I suggest a model, which follows from explicitly using fixed costs, if appropriate of course, in the utility function of the decision-maker.

  19. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

To understand how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

  20. Numerical Solution of the Time-Dependent Navier–Stokes Equation for Variable Density–Variable Viscosity. Part I

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Xin, H.; Neytcheva, M.

    2015-01-01

    Roč. 20, č. 2 (2015), s. 232-260 ISSN 1392-6292 Institutional support: RVO:68145535 Keywords : variable density * phase-field model * Navier-Stokes equations * preconditioning * variable viscosity Subject RIV: BA - General Mathematics Impact factor: 0.468, year: 2015 http://www.tandfonline.com/doi/abs/10.3846/13926292.2015.1021395

  1. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  2. Modeling dynamic effects of promotion on interpurchase times

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

In this paper we put forward a duration model to analyze the dynamic effects of marketing-mix variables on interpurchase times. We extend the accelerated failure-time model with an autoregressive structure. An important feature of our model is that it allows for different long-run and

  3. DYNAMIC RESPONSE OF THICK PLATES ON TWO PARAMETER ELASTIC FOUNDATION UNDER TIME VARIABLE LOADING

    OpenAIRE

    Ozgan, Korhan; Daloglu, Ayse T.

    2014-01-01

In this paper, the behavior of foundation plates with transverse shear deformation under time variable loading is presented using the modified Vlasov foundation model. The finite element formulation of thick plates on an elastic foundation is derived using an 8-noded finite element based on Mindlin plate theory. A selective reduced integration technique is used to avoid the shear locking problem, which arises when a smaller plate thickness is considered in the evaluation of the stiffness matrices. After comparis...

  4. Leisure time physical activity, screen time, social background, and environmental variables in adolescents.

    Science.gov (United States)

    Mota, Jorge; Gomes, Helena; Almeida, Mariana; Ribeiro, José Carlos; Santos, Maria Paula

    2007-08-01

    This study analyzes the relationships between leisure time physical activity (LTPA), sedentary behaviors, socioeconomic status, and perceived environmental variables. The sample comprised 815 girls and 746 boys. In girls, non-LTPA participants reported significantly more screen time. Girls with safety concerns were more likely to be in the non-LTPA group (OR = 0.60) and those who agreed with the importance of aesthetics were more likely to be in the active-LTPA group (OR = 1.59). In girls, an increase of 1 hr of TV watching was a significant predictor of non-LTPA (OR = 0.38). LTPA for girls, but not for boys, seems to be influenced by certain modifiable factors of the built environment, as well as by time watching TV.

  5. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
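The surrogate-generation idea, fitting an AR model and regenerating the series with randomly permuted residuals, can be sketched in its time-invariant form (the paper's TV AR version additionally lets the coefficients vary over time):

```python
import numpy as np

rng = np.random.default_rng(4)

def ar_fit(x, order=2):
    """Least-squares AR(order) fit; returns coefficients (lags 1..order)
    and the model residuals."""
    X = np.column_stack([x[order - k: len(x) - k] for k in range(1, order + 1)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

def ar_surrogate(x, order=2):
    """Constrained surrogate: same fitted AR coefficients, residuals
    randomly permuted (the time-invariant version of the procedure)."""
    coef, resid = ar_fit(x, order)
    e = rng.permutation(resid)
    s = list(x[:order])                       # seed with the original start
    for t in range(len(e)):
        s.append(np.dot(coef, s[-1: -order - 1: -1]) + e[t])
    return np.array(s)

# An AR(2) test signal with oscillatory dynamics.
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + rng.normal()
s = ar_surrogate(x)
```

The surrogate preserves the linear (AR) structure while destroying any nonlinear dependence in the residual ordering, which is what the discriminating statistic then tests.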

  6. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
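Linear propagation of variance (the delta method) maps a parameter covariance through a model via its gradient: var(f) ≈ g·Σ·gᵀ. A generic sketch with a hypothetical square-law drain-current model (not the EKV model itself; all parameter values are assumptions):

```python
import numpy as np

def propagate_variance(f, p0, cov, eps=1e-6):
    """First-order (delta-method) variance of f at p0: g @ cov @ g,
    with g the numerically estimated gradient."""
    p0 = np.asarray(p0, float)
    g = np.empty_like(p0)
    for i in range(len(p0)):
        dp = np.zeros_like(p0)
        dp[i] = eps
        g[i] = (f(p0 + dp) - f(p0 - dp)) / (2.0 * eps)
    return g @ cov @ g

# Hypothetical square-law drain-current model (illustration only).
def i_d(p, vgs=1.0):
    k, vth = p
    return 0.5 * k * (vgs - vth) ** 2

cov = np.diag([1e-4, 1e-4])           # assumed parameter covariance
var_id = propagate_variance(i_d, [2e-3, 0.4], cov)
```

In the methodology above, `cov` would come from the spatial variability patterns extracted from the transistor arrays, and `i_d` from the compact model evaluated at the bias point of interest.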

  7. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
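The idea of extending the ODE by one extra dimension for the density value can be shown on a one-dimensional example. Along a characteristic of dx/dt = f(x), the density obeys dp/dt = -p f'(x); for f(x) = -x this gives dp/dt = +p, so both equations integrate together (illustrative sketch, not an example from the paper):

```python
import math

def evolve_density(x0, p0, t_end, dt=1e-3):
    """Integrate the ODE dx/dt = -x together with the density value
    carried along the characteristic: dp/dt = -p * d/dx(-x) = +p."""
    x, p = x0, p0
    for _ in range(int(round(t_end / dt))):
        x += dt * (-x)   # forward-Euler step of the original ODE
        p += dt * p      # extra dimension: the density value
    return x, p

# Exact solution: x(t) = x0*exp(-t), p(t) = p0*exp(t); as the flow
# compresses toward x = 0, the density grows so probability is conserved.
x, p = evolve_density(x0=2.0, p0=0.5, t_end=1.0)
```

Unlike Monte Carlo sampling, a single such trajectory yields the density value exactly at its endpoint, which is what makes low-probability regions cheap to probe.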

  8. Geochemical Modeling Of F Area Seepage Basin Composition And Variability

    International Nuclear Information System (INIS)

    Millings, M.; Denham, M.; Looney, B.

    2012-01-01

chemistry and variability included: (1) the nature or chemistry of the waste streams, (2) the open system of the basins, and (3) the duration of discharge of the waste stream types. Mixing models of the archetype waste streams indicated that the overall basin system would likely remain acidic much of the time. Only an extended period of predominantly alkaline waste discharge (e.g., >70% alkaline waste) would dramatically alter the average pH of wastewater entering the basins. Short-term and long-term variability were evaluated by performing multiple stepwise modeling runs to calculate the oscillation of bulk chemistry in the basins in response to short-term variations in waste stream chemistry. Short-term (1/2 month and 1 month) oscillations in the waste stream types only affected the chemistry in Basin 1; little variation was observed in Basins 2 and 3. As the largest basin, Basin 3 is considered the primary source to the groundwater. Modeling showed that the fluctuation in chemistry of the waste streams is not directly representative of the source term to the groundwater (i.e., Basin 3). The sequence of receiving basins and the large volume of water in Basin 3 'smooth' or nullify the short-term variability in waste stream composition. As part of this study, a technically based 'charge-balanced' nominal source term chemistry was developed for Basin 3 for a narrow range of pH (2.7 to 3.4). An example is also provided of how these data could be used to quantify uncertainty in the long-term variations in waste stream chemistry and, hence, Basin 3 chemistry.

  9. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01-01

Problems related to metal fatigue modelling and testing are treated here in four different papers. In the first paper, different views of the subject are summarised in a literature survey. In the second paper, a new model for fatigue life is investigated. Experimental results are established which are promising for further development of the model. In the third paper, a method is presented that generates a stochastic process suitable for fatigue testing. The process is designed to resemble certain fatigue-related features in service life processes. In the fourth paper, fatigue problems in transport vibrations are treated.

  10. modelling relationship between rainfall variability and yields

    African Journals Online (AJOL)

... factors to rice yield. Adebayo and Adebayo (1997) developed a double log multiple regression model to predict rice yield in Adamawa State, Nigeria. The general form of ... the second are the crop yield/values for millet and sorghum ...

  11. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition, an extensive simulation...

  12. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    Science.gov (United States)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate the time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time-variable velocity field. Then we estimate the fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, and connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time-invariant long-term seismic hazard is proportional to the long-term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long-term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal
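The robustness of a median-of-slopes estimator like MIDAS can be illustrated with a simplified sketch: pair observations about one year apart (cancelling the annual cycle) and take the median slope, which resists outliers and steps. This is our simplification for illustration, not the published MIDAS algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

def midas_like_rate(t, y, k=365):
    """Median of slopes over data pairs k samples (about one year) apart.
    Pairing at one year cancels seasonal terms; the median is insensitive
    to outliers and to steps that affect only a minority of pairs."""
    slopes = (y[k:] - y[:-k]) / (t[k:] - t[:-k])
    return np.median(slopes)

# Synthetic daily position (mm): 5 mm/yr trend + 3 mm annual cycle
# + white noise + 2% gross outliers + a 10 mm offset (equipment step).
t = np.arange(0.0, 6.0, 1.0 / 365.25)
y = 5.0 * t + 3.0 * np.sin(2.0 * np.pi * t) + rng.normal(0.0, 1.0, len(t))
y[rng.random(len(t)) < 0.02] += 30.0
y[t > 4.0] += 10.0
rate = midas_like_rate(t, y)
```

An ordinary least-squares fit to the same series is pulled well away from the true 5 mm/yr by the step, while the median of annual-pair slopes stays close.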

  13. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique by modelling the arrival of Korean tourists to Bali. The number of Korean tourists who visited Bali in the period January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
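A TVP regression with random-walk coefficients estimated by a Kalman filter can be sketched as below, on synthetic data with a drifting coefficient (the series, noise levels and variances are our assumptions, not the Korean-arrivals data):

```python
import numpy as np

rng = np.random.default_rng(6)

def kalman_tvp(y, X, q=1e-2, r=0.09):
    """Time-varying-parameter regression y[t] = X[t] @ beta[t] + e[t],
    with beta[t] following a random walk, estimated by a Kalman filter."""
    n, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 10.0                 # diffuse initial uncertainty
    betas = np.empty((n, k))
    for t in range(n):
        P = P + q * np.eye(k)            # random-walk prediction step
        x = X[t]
        S = x @ P @ x + r                # innovation variance
        K = P @ x / S                    # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas

# Synthetic regression with a slowly drifting slope (hypothetical data).
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_slope = np.linspace(1.0, 3.0, n)
y = 0.5 + true_slope * X[:, 1] + rng.normal(0.0, 0.3, n)
betas = kalman_tvp(y, X)
```

The filtered `betas` track the drifting coefficient over time, which is exactly what a fixed-coefficient regression cannot do.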

  14. Unified models of interactions with gauge-invariant variables

    International Nuclear Information System (INIS)

    Zet, Gheorghe

    2000-01-01

    A model of gauge theory is formulated in terms of gauge-invariant variables over a 4-dimensional space-time. Namely, we define a metric tensor $g_{\mu\nu}$ ($\mu,\nu = 0,1,2,3$) starting from the components $F^a_{\mu\nu}$ and $\tilde F^a_{\mu\nu}$ of the tensor associated with the Yang-Mills fields and its dual: $g_{\mu\nu} = \frac{1}{3\Delta^{1/3}}\,\varepsilon_{abc}\, F^a_{\mu\alpha} \tilde F^{b\,\alpha\beta} F^c_{\beta\nu}$. Here $\Delta$ is a scale factor which can be chosen of a convenient form so that the theory may be self-dual or not. The components $g_{\mu\nu}$ are interpreted as new gauge-invariant variables. The model is applied to the case where the gauge group is SU(2). For the space-time we choose two different manifolds: (i) the space-time is $R \times S^3$, where $R$ is the real line and $S^3$ is the three-dimensional sphere; (ii) the space-time is endowed with axial symmetry. We calculate the components $g_{\mu\nu}$ of the new metric for the two cases in terms of SU(2) gauge potentials. Imposing the supplementary condition that the new metric coincide with the initial metric of the space-time, we obtain the field equations (of first order in derivatives) for the gauge fields. In addition, we determine the scale factor $\Delta$ introduced in the definition of $g_{\mu\nu}$ to ensure the property of self-duality for our SU(2) gauge theory, namely $\frac{1}{2\sqrt{g}}\,\varepsilon^{\alpha\beta\sigma\tau} g_{\mu\alpha} g_{\nu\beta} F^a_{\sigma\tau} = F^a_{\mu\nu}$, with $g = \det(g_{\mu\nu})$. In case (i) we show that the space-time $R \times S^3$ is not compatible with a self-dual SU(2) gauge theory, but in case (ii) the condition of self-duality is satisfied. The model developed in our work can be considered a possible route to the unification of general relativity and Yang-Mills theories: the gauge theory can be formulated in close analogy with general relativity, i.e., the Yang-Mills equations are equivalent to Einstein equations with a right-hand side of simple form. (authors)

  15. On time-frequency analysis of heart rate variability

    NARCIS (Netherlands)

    H.G. van Steenis (Hugo)

    2002-01-01

    The aim of this research is to develop a time-frequency method suitable to study HRV in greater detail. The following approach was used: • two known time-frequency representations were applied to HRV to understand their advantages and disadvantages in describing HRV in frequency and in

  16. A geometric model for magnetizable bodies with internal variables

    Directory of Open Access Journals (Sweden)

    Restuccia, L

    2005-11-01

    In a geometrical framework for thermo-elasticity of continua with internal variables we consider a model of magnetizable media previously discussed and investigated by Maugin. We assume as state variables the magnetization together with its space gradient, subjected to evolution equations depending on both internal and external magnetic fields. We calculate the entropy function and necessary conditions for its existence.

  17. Variable-Structure Control of a Model Glider Airplane

    Science.gov (United States)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.
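The switching feedback law at the heart of variable-structure control can be sketched on a toy double-integrator model: the input switches at high rate on the sign of a sliding surface, driving the state onto the surface and then along it to the origin. The surface, gain, and dynamics below are illustrative assumptions, not the glider's actual control law:

```python
import numpy as np

# Toy double-integrator state: x1 = deviation (e.g. from wings-level flight),
# x2 = its rate. The control switches on the sign of the sliding surface
# s = c*x1 + x2; once on the surface, x1 decays as exp(-c*t).
c, U, dt = 1.0, 5.0, 1e-3
x1, x2 = 1.0, 0.0                       # initial disturbance
for _ in range(5000):                   # 5 s of simulated time
    s = c * x1 + x2                     # sliding surface
    u = -U * np.sign(s)                 # switching (variable-structure) law
    x1 += x2 * dt                       # Euler integration of the dynamics
    x2 += u * dt
```

The residual "chattering" around the surface, with amplitude on the order of U·dt, is the well-known practical cost of the hard switching.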

  18. Squeezing more information out of time variable gravity data with a temporal decomposition approach

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Bordoni, A.; Aoudia, A.

    2012-01-01

    A measure of the Earth's gravity contains contributions from the solid Earth as well as climate-related phenomena, which cannot be easily distinguished in either time or space. After more than 7 years, the GRACE gravity data available now support more elaborate analysis of the time series. We propose an explorative approach based on a suitable time series decomposition, which does not rely on predefined time signatures. The comparison and validation against the fitting approach commonly used in GRACE literature shows a very good agreement for what concerns trends and periodic signals on one side... The approach is used to assess the possibility of finding evidence of meaningful geophysical signals different from hydrology over Africa in GRACE data. In this case we conclude that hydrological phenomena are dominant and so time variable gravity data in Africa can be directly used to calibrate hydrological models.
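The "fitting approach commonly used in GRACE literature" that the decomposition is validated against is typically a least-squares fit of a bias, a linear trend, and seasonal harmonics with fixed periods. A minimal sketch on synthetic data (all values invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Monthly equivalent-water-height series (cm): trend + annual cycle + noise.
t = np.arange(0, 8, 1 / 12)                       # 8 years, monthly, in years
ewh = 0.8 * t + 4.0 * np.cos(2 * np.pi * t) + 2.0 * np.sin(2 * np.pi * t)
ewh += rng.normal(0.0, 1.0, t.size)

# Least squares on predefined time signatures: [1, t, cos(2*pi*t), sin(2*pi*t)].
A = np.column_stack([np.ones_like(t), t,
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, ewh, rcond=None)
trend = coef[1]                                   # cm/yr
annual_amp = np.hypot(coef[2], coef[3])           # cm
```

A signature-free decomposition of the kind proposed in the record would recover trend and periodic components without fixing the harmonic periods in advance.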

  19. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Science.gov (United States)

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased; the direction and magnitude of the bias require further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.

  20. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
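The idea of within-subject variability as a parameter that varies randomly over time can be sketched by letting the log event rate follow a driftless SDE (simulated with Euler-Maruyama) and drawing daily Poisson counts from it. This is a generic illustration, not the published seizure-count or Likert-score models; all values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Within-subject variability: the individual's log event rate is not fixed
# but wanders over time, dX = sigma * dW (Euler-Maruyama, dt = 1 day);
# observed daily counts are Poisson given the current rate.
n_days, sigma, dt = 365, 0.05, 1.0
log_lam = np.empty(n_days)
log_lam[0] = np.log(2.0)                         # baseline: 2 events/day
for i in range(1, n_days):
    log_lam[i] = log_lam[i - 1] + sigma * np.sqrt(dt) * rng.normal()
counts = rng.poisson(np.exp(log_lam))
```

A model that ignores this random-walk component would attribute the resulting slow drift in counts entirely to observation noise, which is the misspecification the dIOV and SDE approaches are designed to diagnose.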

  1. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

  2. Handling Time-dependent Variables : Antibiotics and Antibiotic Resistance

    NARCIS (Netherlands)

    Munoz-Price, L. Silvia; Frencken, Jos F.; Tarima, Sergey; Bonten, Marc

    2016-01-01

    Elucidating quantitative associations between antibiotic exposure and antibiotic resistance development is important. In the absence of randomized trials, observational studies are the next best alternative to derive such estimates. Yet, as antibiotics are prescribed for varying time periods,

  3. Using structural equation modeling to investigate relationships among ecological variables

    Science.gov (United States)

    Malaeb, Z.A.; Summers, J. Kevin; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships among latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0
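The effect decomposition reported above (direct -0.3251, indirect +0.4509 through the mediator) can be reproduced by path tracing with the matrix identity total = (I - B)^-1 - I, where B holds the direct path coefficients of a recursive model. The split of the indirect path into two coefficients below is hypothetical, chosen only so that their product equals the reported indirect effect:

```python
import numpy as np

# Variables ordered: 0 = natural variability, 1 = biodiversity, 2 = growth potential.
# B[i, j] is the direct path coefficient from variable j to variable i.
# The direct effect -0.3251 and indirect effect 0.4509 are from the abstract;
# the factorization 0.6 * 0.7515 of the indirect path is purely illustrative.
B = np.array([
    [0.0,     0.0,    0.0],
    [0.6,     0.0,    0.0],   # natural variability -> biodiversity (hypothetical)
    [-0.3251, 0.7515, 0.0],   # direct path; biodiversity -> growth (hypothetical)
])
I = np.eye(3)
total = np.linalg.inv(I - B) - I   # total effects = sum over all directed paths
```

Here `total[2, 0]` sums the direct path and the mediated product, giving the net positive total effect of natural variability on growth potential that the abstract describes.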

  4. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

  5. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    We have studied five-dimensional homogeneous cosmological models with variable gravitational constant G and bulk viscosity in Lyra geometry. Exact solutions for the field equations have been obtained and physical properties of the models are discussed. It has been observed that the results of new models are well within the observational ...

  6. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  8. Statistical Learning and Adaptive Decision-Making Underlie Human Response Time Variability in Inhibitory Control

    Directory of Open Access Journals (Sweden)

    Ning Ma

    2015-08-01

    Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task, in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making.

  10. Real-time laser cladding control with variable spot size

    Science.gov (United States)

    Arias, J. L.; Montealegre, M. A.; Vidal, F.; Rodríguez, J.; Mann, S.; Abels, P.; Motmans, F.

    2014-03-01

    Laser cladding processing has been used in different industries to improve surface properties or to reconstruct damaged pieces. In order to cover areas considerably larger than the diameter of the laser beam, successive partially overlapping tracks are deposited. Without control of the process variables, this leads to an increase in temperature, which can degrade the mechanical properties of the laser-cladded material. Commonly, the process is monitored and controlled by a PC using cameras, but this control suffers from a lack of speed caused by the image-processing step. The aim of this work is to design and develop an FPGA-based laser cladding control system. This system is intended to modify the laser beam power according to the melt pool width, which is measured using a CMOS camera. All control and monitoring tasks are carried out by an FPGA, taking advantage of its abundant resources and speed of operation. The robustness of the image-processing algorithm is assessed, as well as the control system performance. Laser power is decreased as substrate temperature increases, thus maintaining a constant clad width. This FPGA-based control system is integrated into an adaptive laser cladding system, which also includes an adaptive optical system that controls the laser focus distance on the fly. The whole system will constitute an efficient instrument for repairing parts with complex geometries and coating selective surfaces. This will be a significant step forward toward full industrial implementation of an automated laser cladding process.
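The control behavior described (laser power reduced as the substrate heats up, holding the clad width constant) can be sketched as a simple feedback loop. The plant model, coefficients, and gains below are invented for illustration; they are not the paper's FPGA implementation or process data:

```python
# Toy closed-loop sketch: hold the melt pool width at a setpoint by
# adjusting laser power as the substrate temperature rises between tracks.

def melt_pool_width_mm(power_w, substrate_temp_k):
    # Assumed plant: the pool widens with laser power and with substrate
    # temperature (both coefficients are hypothetical).
    return 0.002 * power_w + 0.0015 * (substrate_temp_k - 300.0)

target_mm = 2.0
power_w = 1000.0
for step in range(2000):                      # one control update per camera frame
    temp_k = 300.0 + 0.15 * step              # substrate heats as tracks overlap
    width_mm = melt_pool_width_mm(power_w, temp_k)
    power_w += 40.0 * (target_mm - width_mm)  # integral-type correction
    power_w = min(max(power_w, 0.0), 3000.0)  # actuator limits
```

The controller ends up commanding markedly less power at the hot end of the run while the measured width stays at the setpoint, which is the qualitative behavior the abstract reports.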

  11. Concentrated Hitting Times of Randomized Search Heuristics with Variable Drift

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Witt, Carsten

    2014-01-01

    Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated annealing etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target...
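Variable drift can be made concrete with randomized local search on the classic OneMax benchmark: at Hamming distance d from the optimum, the expected progress (drift) per step is d/n, and a variable-drift bound integrates 1/h(d) to give roughly n·H_n expected steps. The simulation below is a generic illustration of that setting, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Randomized local search (one-bit flips) on OneMax with n bits.
# At Hamming distance d from the all-ones optimum, the probability of an
# improving flip is d/n, so the drift h(d) = d/n is state-dependent.
n = 64
x = rng.integers(0, 2, n)
steps = 0
while x.sum() < n:
    i = rng.integers(0, n)       # choose a uniformly random bit
    y = x.copy()
    y[i] ^= 1                    # flip it
    if y.sum() >= x.sum():       # accept if not worse
        x = y
    steps += 1
```

Averaged over runs, the hitting time matches the n·H_n scale that the variable drift theorem predicts from h(d) = d/n.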

  12. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    Science.gov (United States)

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  13. Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology

    Science.gov (United States)

    Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus

    2013-01-01

    Changing trends in ecosystem productivity can be quantified using satellite observations of the Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on the analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annually aggregated time series or on a seasonal-trend model perform better than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods to long-term NDVI time series. In particular, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be made more robust against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
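The recommendation to estimate trends on annually aggregated series can be sketched as follows: averaging each year removes the seasonal cycle before the slope is fitted. The NDVI values and the trend below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(11)

# Monthly NDVI with a seasonal cycle, observation noise, and a small
# greening trend (all values invented for illustration).
years = 16
months = np.arange(years * 12)
ndvi = (0.55 + 0.003 * (months / 12.0)                # trend: +0.003 NDVI/yr
        + 0.15 * np.sin(2 * np.pi * months / 12.0)    # seasonal cycle
        + rng.normal(0.0, 0.02, months.size))         # observation noise

# Annual aggregation: the 12-month mean cancels the seasonal harmonic
# exactly, so the slope is estimated on a deseasonalized series.
annual = ndvi.reshape(years, 12).mean(axis=1)
slope_per_year = np.polyfit(np.arange(years), annual, 1)[0]
```

Fitting the slope directly to the monthly series would work here too, but the annual aggregate is far less sensitive to phase errors in how the seasonal cycle is removed, which is the robustness property the record emphasizes.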

  14. Transient modelling of a natural circulation loop under variable pressure

    International Nuclear Information System (INIS)

    Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian; Instituto de Engenharia Nuclear

    2017-01-01

    The objective of the present work is to model the transient operation of a natural circulation loop that is one-tenth scale in height to a typical Passive Residual Heat Removal (PRHR) system of an Advanced Pressurized Water Nuclear Reactor and was designed to meet its single- and two-phase flow similarity criteria. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was observed experimentally. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated in the air trapped in the expansion tank, which was compressed by the dilatation of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the calculation of the buoyancy term. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the heating of the coolant during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was implemented computationally as a set of finite-difference equations. The corresponding solution algorithm was explicit and marching in time, with an upwind scheme for the spatial discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, with the coolant flow rate and the heating power as control parameters.
The variables used in the
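The isentropic self-pressurization model can be sketched in a few lines from p·V^γ = const: as the heated loop water dilates into the expansion tank, the trapped air volume shrinks and its pressure rises. The volumes below are illustrative, not the loop's dimensions:

```python
# Self-pressurization sketch: trapped air compressed isentropically
# (p * V**gamma = const) by the thermal dilatation of the loop water.
# All numbers are illustrative assumptions, not experimental data.
gamma = 1.4                  # ratio of specific heats for air
p1 = 101_325.0               # initial air pressure, Pa
v_air = 10.0                 # initial trapped-air volume, L
dv_water = 1.0               # water dilatation pushed into the tank, L
p2 = p1 * (v_air / (v_air - dv_water)) ** gamma
```

With these numbers, a 10% loss of air volume raises the pressure by roughly 16%, illustrating why the loop pressurizes steadily as the water heats without any self-regulating mechanism.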

  16. Multidecadal Variability in Surface Albedo Feedback Across CMIP5 Models

    Science.gov (United States)

    Schneider, Adam; Flanner, Mark; Perket, Justin

    2018-02-01

    Previous studies quantify surface albedo feedback (SAF) in climate change, but few assess its variability on decadal time scales. Using the Coupled Model Intercomparison Project Phase 5 (CMIP5) multimodel ensemble data set, we calculate time-evolving SAF in multiple decades from surface albedo and temperature linear regressions. Results are meaningful when temperature change exceeds 0.5 K. Decadal-scale SAF is strongly correlated with century-scale SAF during the 21st century. Throughout the 21st century, the multimodel ensemble mean SAF increases from 0.37 to 0.42 W m⁻² K⁻¹. These results suggest that models' mean decadal-scale SAFs are good estimates of their century-scale SAFs if there is at least 0.5 K temperature change. Persistent SAF into the late 21st century indicates ongoing capacity for Arctic albedo decline despite there being less sea ice. If the CMIP5 multimodel ensemble results are representative of the Earth, we cannot expect decreasing Arctic sea ice extent to suppress SAF in the 21st century.
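A decadal SAF estimate of the kind described can be sketched as the regression slope of surface albedo on surface temperature, converted to W m⁻² K⁻¹ with a radiative sensitivity. The synthetic series and the 80 W m⁻² per unit albedo kernel value below are assumptions for illustration, not CMIP5 numbers:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic 30-year series: warming of ~2 K with a small albedo decline.
years = 30
temp = np.linspace(0.0, 2.0, years) + rng.normal(0.0, 0.15, years)  # K anomaly
albedo = 0.35 - 0.005 * temp + rng.normal(0.0, 0.002, years)

# The abstract notes results are meaningful only when the temperature
# change exceeds 0.5 K; this synthetic series easily satisfies that.
temp_range = temp.max() - temp.min()
dalpha_dT = np.polyfit(temp, albedo, 1)[0]   # albedo change per K
saf = -dalpha_dT * 80.0                      # W m-2 K-1, assumed kernel
```

With the assumed slope of -0.005 per K, the estimate lands near 0.4 W m⁻² K⁻¹, the same order as the ensemble-mean values quoted in the record.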

  17. Start time variability and predictability in railroad train and engine freight and passenger service employees.

    Science.gov (United States)

    2014-04-01

    Start time variability in work schedules is often hypothesized to be a cause of railroad employee fatigue because unpredictable work start times prevent employees from planning sleep and personal activities. This report examines work start time diffe...

  18. Holocene Climate Variability on the Centennial and Millennial Time Scale

    Directory of Open Access Journals (Sweden)

    Eun Hee Lee

    2014-12-01

    There have been many suggestions and much debate about climate variability during the Holocene. However, its complex forcing factors and mechanisms have not yet been clearly identified. In this paper, we have examined the Holocene climate cycles and features based on wavelet analyses of ¹⁴C, ¹⁰Be, and ¹⁸O records. The wavelet results for the ¹⁴C and ¹⁰Be data show that cycles of ~2180-2310, ~970, ~500-520, ~350-360, and ~210-220 years are dominant, and the ~1720 and ~1500 year cycles are relatively weak and subdominant. In particular, the ~2180-2310 year periodicity corresponding to the Hallstatt cycle is consistently significant throughout the Holocene, while the ~970 year cycle corresponding to the Eddy cycle is mainly prominent in the early half of the Holocene. In addition, distinctive signals of the ~210-220 year period corresponding to the de Vries cycle appear recurrently in the wavelet distributions of ¹⁴C and ¹⁰Be, coinciding with the grand solar minima periods. These de Vries cycle events occurred every ~2270 years on average, implying a connection with the Hallstatt cycle. In contrast, the wavelet results of the ¹⁸O data show that cycles of ~1900-2000, ~900-1000, and ~550-560 years are dominant, while the ~2750 and ~2500 year cycles are subdominant. The periods of ~2750, ~2500, and ~1900 years, derived from the ¹⁸O records of the NGRIP, GRIP, and GISP2 ice cores, respectively, are rather longer or shorter than the Hallstatt cycle derived from the ¹⁴C and ¹⁰Be records.
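The cycle detection described can be illustrated with a simple FFT periodogram on a synthetic, annually sampled proxy series containing de Vries-scale and Hallstatt-scale components; the wavelet analysis used in the record additionally resolves *when* each cycle is active, which a global periodogram cannot. All amplitudes and noise levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annually sampled proxy: a strong ~210-yr (de Vries-scale)
# cycle, a weaker ~2300-yr (Hallstatt-scale) cycle, and noise.
n_years = 8192
t = np.arange(n_years)
proxy = (np.sin(2 * np.pi * t / 210.0)
         + 0.5 * np.sin(2 * np.pi * t / 2300.0)
         + rng.normal(0.0, 0.5, n_years))

spec = np.abs(np.fft.rfft(proxy)) ** 2        # periodogram
freq = np.fft.rfftfreq(n_years, d=1.0)        # cycles per year
peak = np.argmax(spec[1:]) + 1                # skip the zero-frequency bin
period = 1.0 / freq[peak]                     # dominant period, years
```

The dominant spectral peak recovers the ~210-year component even under substantial noise, which is why the de Vries signal stands out so clearly in the ¹⁴C and ¹⁰Be analyses.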

  19. Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System.

    Science.gov (United States)

    Gruppen, Larry D; Ten Cate, Olle; Lingard, Lorelei A; Teunissen, Pim W; Kogan, Jennifer R

    2018-03-01

    Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment both from researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.

  20. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; Heijden, van der M.C.

    1996-01-01

    In many multi-echelon inventory systems the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process which generates such lead times is usually not

  1. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; van der Heijden, M.C.

    1997-01-01

    In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well

  2. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected for, causes misleading statistical inference and analysis. Our goal is therefore to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying a Bayesian formulation of the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
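The structure of such a Bayesian EIV model can be sketched with a small, stdlib-only Metropolis-within-Gibbs sampler. This is a hedged illustration: all data sizes, priors, and step sizes below are invented, and it does not reproduce the paper's flexible parametric extension of Hossain and Gustafson's method.

```python
import math
import random

random.seed(7)

def poisson_draw(lam):
    # Knuth's method; adequate for the small rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# Synthetic data (invented sizes/values): true covariate x, observed
# surrogate w = x + error, Poisson outcome y depending on the true x
n, b0_true, b1_true, err_sd = 80, 0.5, 0.8, 0.5
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
w = [xi + random.gauss(0.0, err_sd) for xi in x_true]
y = [poisson_draw(math.exp(b0_true + b1_true * xi)) for xi in x_true]

def log_post(b0, b1, x):
    # flat priors on b0, b1; x_i ~ N(0, 1); w_i | x_i ~ N(x_i, err_sd^2)
    lp = 0.0
    for xi, wi, yi in zip(x, w, y):
        eta = b0 + b1 * xi
        lp += yi * eta - math.exp(eta)          # Poisson log-likelihood (up to a constant)
        lp += -0.5 * xi * xi                    # prior on the latent true covariate
        lp += -0.5 * ((wi - xi) / err_sd) ** 2  # measurement (surrogate) model
    return lp

# Metropolis-within-Gibbs over (b0, b1) and the latent x_i
b0, b1 = 0.0, 0.0
x = list(w)            # initialise the latent covariates at the surrogate values
cur = log_post(b0, b1, x)
chain = []
for _ in range(2000):
    # joint random-walk proposal for the regression coefficients
    p0, p1 = b0 + random.gauss(0.0, 0.1), b1 + random.gauss(0.0, 0.1)
    cand = log_post(p0, p1, x)
    if random.random() < math.exp(min(0.0, cand - cur)):
        b0, b1, cur = p0, p1, cand
    # random-walk update of one randomly chosen latent x_i
    i = random.randrange(n)
    old = x[i]
    x[i] = old + random.gauss(0.0, 0.3)
    cand = log_post(b0, b1, x)
    if random.random() < math.exp(min(0.0, cand - cur)):
        cur = cand
    else:
        x[i] = old
    chain.append((b0, b1))

mean_b1 = sum(b for _, b in chain[1000:]) / len(chain[1000:])
```

The sampler treats each unobserved exposure x_i as an extra parameter, which is what distinguishes the EIV formulation from a naive regression on the surrogate w.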

  3. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling clinical cancer registries with their complex event patterns.
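The combination rule, one score-test p-value per Cox model for each covariate pooled by Fisher's method, can be sketched generically (not the authors' implementation; the example p-values are invented):

```python
import math

def fisher_combine(pvals):
    """Combine independent p-values with Fisher's method.

    The statistic -2 * sum(log p_i) follows a chi-square distribution with
    2k degrees of freedom; since 2k is even, its survival function has the
    closed form exp(-x/2) * sum_{i<k} (x/2)^i / i!.
    """
    stat = -2.0 * sum(math.log(p) for p in pvals)
    k = len(pvals)
    term = s = 1.0
    for i in range(1, k):
        term *= (stat / 2.0) / i
        s += term
    return stat, min(1.0, math.exp(-stat / 2.0) * s)

# e.g. one covariate's score-test p-values from two cause-specific Cox models
stat, p = fisher_combine([0.04, 0.10])   # hypothetical p-values
```

For m models the statistic has 2m degrees of freedom, which is why the even-degrees-of-freedom closed form suffices here; covariates can then be ranked by the combined p-value.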

  4. Interannual modes of variability of Southern Hemisphere atmospheric circulation in CMIP3 models

    International Nuclear Information System (INIS)

    Grainger, S; Frederiksen, C S; Zheng, X

    2010-01-01

    The atmospheric circulation acts as a bridge between large-scale sources of climate variability, and climate variability on regional scales. Here a statistical method is applied to monthly mean Southern Hemisphere 500hPa geopotential height to separate the interannual variability of the seasonal mean into intraseasonal and slowly varying (time scales of a season or longer) components. Intraseasonal and slow modes of variability are estimated from realisations of models from the Coupled Model Intercomparison Project Phase 3 (CMIP3) twentieth century coupled climate simulation (20c3m) and are evaluated against those estimated from reanalysis data. The intraseasonal modes of variability are generally well reproduced across all CMIP3 20c3m models for both Southern Hemisphere summer and winter. The slow modes are in general less well reproduced than the intraseasonal modes, and there are larger differences between realisations than for the intraseasonal modes. New diagnostics are proposed to evaluate model variability. It is found that differences between realisations from each model are generally less than inter-model differences. Differences between model-mean diagnostics are found. The results obtained are applicable to assessing the reliability of changes in atmospheric circulation variability in CMIP3 models and for their suitability for further studies of regional climate variability.

  5. Modeling Variable Phanerozoic Oxygen Effects on Physiology and Evolution.

    Science.gov (United States)

    Graham, Jeffrey B; Jew, Corey J; Wegner, Nicholas C

    2016-01-01

    Geochemical approximation of Earth's atmospheric O2 level over geologic time prompts hypotheses linking hyper- and hypoxic atmospheres to transformative events in the evolutionary history of the biosphere. Such correlations, however, remain problematic due to the relative imprecision of the timing and scope of oxygen change and the looseness of its overlay on the chronology of key biotic events such as radiations, evolutionary innovation, and extinctions. There are nevertheless general attributions of atmospheric oxygen concentration to key evolutionary changes among groups having a primary dependence upon oxygen diffusion for respiration. These include the occurrence of Devonian hypoxia and the accentuation of air-breathing dependence leading to the origin of vertebrate terrestriality, the occurrence of Carboniferous-Permian hyperoxia and the major radiation of early tetrapods and the origins of insect flight and gigantism, and the Mid-Late Permian oxygen decline accompanying the Permian extinction. However, because of variability between and error within different atmospheric models, there is little basis for postulating correlations outside the Late Paleozoic. Other problems arising in the correlation of paleo-oxygen with significant biological events include tendencies to ignore the role of blood pigment affinity modulation in maintaining homeostasis, the slow rates of O2 change that would have allowed for adaptation, and significant respiratory and circulatory modifications that can and do occur without changes in atmospheric oxygen. The purpose of this paper is thus to refocus thinking about basic questions central to the biological and physiological implications of O2 change over geological time.

  6. Loss given default models incorporating macroeconomic variables for credit cards

    OpenAIRE

    Crook, J.; Bellotti, T.

    2012-01-01

    Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account level data, including Tobit, a decision tree model, a Beta and fractional logit transformation. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...

  7. Bidecadal North Atlantic ocean circulation variability controlled by timing of volcanic eruptions.

    Science.gov (United States)

    Swingedouw, Didier; Ortega, Pablo; Mignot, Juliette; Guilyardi, Eric; Masson-Delmotte, Valérie; Butler, Paul G; Khodri, Myriam; Séférian, Roland

    2015-03-30

    While bidecadal climate variability has been evidenced in several North Atlantic paleoclimate records, its drivers remain poorly understood. Here we show that the subset of CMIP5 historical climate simulations that produce such bidecadal variability exhibits a robust synchronization, with a maximum in Atlantic Meridional Overturning Circulation (AMOC) 15 years after the 1963 Agung eruption. The mechanisms at play involve salinity advection from the Arctic and explain the timing of Great Salinity Anomalies observed in the 1970s and the 1990s. Simulations, as well as Greenland and Iceland paleoclimate records, indicate that coherent bidecadal cycles were excited following five Agung-like volcanic eruptions of the last millennium. Climate simulations and a conceptual model reveal that destructive interference caused by the Pinatubo 1991 eruption may have damped the observed decreasing trend of the AMOC in the 2000s. Our results imply a long-lasting climatic impact and predictability following the next Agung-like eruption.

  8. Space-time modeling of soil moisture

    Science.gov (United States)

    Chen, Zijuan; Mohanty, Binayak P.; Rodriguez-Iturbe, Ignacio

    2017-11-01

    A physically derived space-time mathematical representation of the soil moisture field is carried out via the soil moisture balance equation driven by stochastic rainfall forcing. The model incorporates spatial diffusion and, in its original version, is shown to be unable to reproduce the relatively fast decay in the spatial correlation functions observed in empirical data. This decay, resulting from variations in local topography as well as in local soil and vegetation conditions, is well reproduced via a jitter process acting multiplicatively on the space-time soil moisture field. The jitter is a multiplicative noise acting on the soil moisture dynamics with the objective of deflating its correlation structure at small spatial scales, which are not embedded in the probabilistic structure of the rainfall process that drives the dynamics. These scales, of the order of several meters to several hundred meters, are of great importance in ecohydrologic dynamics. Properties of the space-time correlation functions and spectral densities of the model with jitter are explored analytically, and the influence of the jitter parameters, reflecting variabilities of soil moisture at different spatial and temporal scales, is investigated. A case study fitting the derived model to a soil moisture dataset is presented in detail.
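The effect of the multiplicative jitter (deflating correlation at small scales while preserving the large-scale structure of the driving process) can be illustrated on a 1-D toy field. The moving-average surrogate and all parameters below are invented; this is not the paper's soil moisture balance model.

```python
import math
import random

random.seed(3)

# 1-D toy surrogate for a smooth soil-moisture field: a moving average of
# white noise (all sizes and parameters invented for illustration)
N, W = 4000, 25
eps = [random.gauss(0.0, 1.0) for _ in range(N + W)]
smooth = [sum(eps[i:i + W]) / W for i in range(N)]

# multiplicative jitter: i.i.d. positive noise applied point-wise, which
# deflates the correlation at small scales but keeps the large-scale signal
jitter_sd = 0.5
jittered = [s * math.exp(random.gauss(0.0, jitter_sd)) for s in smooth]

def corr_lag1(z):
    # sample lag-1 autocorrelation
    m = sum(z) / len(z)
    num = sum((z[i] - m) * (z[i + 1] - m) for i in range(len(z) - 1))
    den = sum((v - m) ** 2 for v in z)
    return num / den

c_smooth = corr_lag1(smooth)      # close to (W-1)/W = 0.96 for this process
c_jittered = corr_lag1(jittered)  # reduced by roughly exp(-jitter_sd**2)
```

The jittered field keeps the same driving process but its short-lag correlation is visibly deflated, which is the qualitative behaviour the paper exploits.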

  9. Variable valve timing in a homogenous charge compression ignition engine

    Science.gov (United States)

    Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.

    2004-08-03

    The present invention relates generally to the field of homogenous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.

  10. Time-dependent inelastic analysis of metallic media using constitutive relations with state variables

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, V; Mukherjee, S [Cornell Univ., Ithaca, N.Y. (USA)]

    1977-03-01

    A computational technique in terms of stress, strain and displacement rates is presented for the solution of boundary value problems for metallic structural elements at uniform elevated temperatures subjected to time varying loads. This method can accommodate any number of constitutive relations with state variables recently proposed by other researchers to model the inelastic deformation of metallic media at elevated temperatures. Numerical solutions are obtained for several structural elements subjected to steady loads. The constitutive relations used for these numerical solutions are due to Hart. The solutions are discussed in the context of the computational scheme and Hart's theory.

  11. Interacting ghost dark energy models with variable G and Λ

    Science.gov (United States)

    Sadeghi, J.; Khurshudyan, M.; Movsisyan, A.; Farahani, H.

    2013-12-01

    In this paper we consider several phenomenological models of variable Λ. A model of a flat Universe with variable Λ and G is adopted. It is well known that varying G and Λ gives rise to modified field equations and modified conservation laws, which has led to many different manipulations and assumptions in the literature. We consider a two-component fluid whose parameters enter Λ. The interaction between fluids with energy densities ρ1 and ρ2 is assumed to be Q = 3Hb(ρ1+ρ2). We numerically analyze important cosmological parameters such as the EoS parameter of the composed fluid and the deceleration parameter q of the model.

  12. a modified intervention model for gross domestic product variable

    African Journals Online (AJOL)

    observations on a variable that have been measured at ... assumption that successive values in the data file ... these interventions, one may try to evaluate the effect of ... generalized series by comparing the distinct periods. A ... the process of checking for adequacy of the model based .... As a result, the model's forecast will.

  13. Simple model for crop photosynthesis in terms of weather variables ...

    African Journals Online (AJOL)

    A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.

  14. Model for expressing leaf photosynthesis in terms of weather variables

    African Journals Online (AJOL)

    A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...

  15. Efficient Business Service Consumption by Customization with Variability Modelling

    Directory of Open Access Journals (Sweden)

    Michael Stollberg

    2010-07-01

    The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application context. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work on Service Variability Modelling presented in a previous publication, where we described an approach for customizing services for individual application contexts by creating simplified variants, based on model-driven variability management. Here we present our revised service variability metamodel, new features of the variability tools, and an applicability study, which reveals that substantial improvements in the efficiency of standard business service consumption can be achieved under both usability and economic aspects.

  16. Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model

    Science.gov (United States)

    Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho

    2016-06-01

    Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence black pod disease incidence, and it is important to quantify the variation in black pod disease due to these variables. Time series analysis, especially the auto-regressive integrated moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence and to forecast the right time to control the disease. However, an ARIMA model does not capture some turning points in cocoa black pod incidence. To improve forecasting performance, explanatory variables such as climate variables can be included in the ARIMA model, giving an ARIMAX model. This paper therefore studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings show that an ARIMAX model using an MA(1) term and relative humidity at a lag of 7 days, RHt-7, gave a better R-squared value than an ARIMA model using MA(1) alone, and could be used to forecast black pod incidence to help farmers time fungicide spraying and cultural practices to control the disease.
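The exogenous-lag idea, regressing incidence on relative humidity seven days earlier, can be sketched on synthetic data (invented numbers, not the Sabah records). A full ARIMAX fit would add the MA(1) error term on top of the lagged regressor, e.g. with a dedicated time series package.

```python
import math
import random

random.seed(0)

# Synthetic daily series (invented numbers, not the Sabah data): relative
# humidity, and a black pod incidence that responds to humidity 7 days earlier
T, LAG = 200, 7
rh = [70.0 + 15.0 * math.sin(2 * math.pi * t / 30) + random.gauss(0.0, 3.0)
      for t in range(T)]
inc = [(0.4 * rh[t - LAG] if t >= LAG else 20.0) + random.gauss(0.0, 2.0)
       for t in range(T)]

# Regress incidence on the 7-day-lagged humidity (plain OLS; a full ARIMAX
# fit would add the MA(1) error term on top of this exogenous regressor)
xs = rh[:T - LAG]       # RH at time t - 7
ys = inc[LAG:]          # incidence at time t
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
sxx = sum((a - mx) ** 2 for a in xs)
slope = sxy / sxx       # recovers the planted coefficient of about 0.4
intercept = my - slope * mx
```

The recovered slope is the quantity of interest for timing interventions: a significant coefficient on RHt-7 means humidity a week earlier predicts incidence today.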

  17. Modelling the effects of spatial variability on radionuclide migration

    International Nuclear Information System (INIS)

    1998-01-01

    The NEA workshop reflects the present status of national waste management programmes, specifically regarding spatial variability and performance assessment of geologic disposal sites for deep repository systems. The four sessions were: Spatial Variability: Its Definition and Significance to Performance Assessment and Site Characterisation; Experience with the Modelling of Radionuclide Migration in the Presence of Spatial Variability in Various Geological Environments; New Areas for Investigation: Two Personal Views; and What is Wanted and What is Feasible: Views and Future Plans in Selected Waste Management Organisations. The 26 papers presented in the four oral sessions and in the poster session have been abstracted and indexed individually for the INIS database. (R.P.)

  18. Variable Neighbourhood Search and Mathematical Programming for Just-in-Time Job-Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Sunxin Wang

    2014-01-01

    This paper presents a combination of variable neighbourhood search and mathematical programming to minimize the sum of earliness and tardiness penalty costs of all operations for the just-in-time job-shop scheduling problem (JITJSSP). Unlike the classical E/T scheduling problem, in which each job has its earliness or tardiness penalty cost, each operation in this paper has its own earliness and tardiness penalties, which are paid if the operation is completed before or after its due date. Our hybrid algorithm combines (i) a variable neighbourhood search procedure to explore the huge feasible solution space efficiently by alternating the swap and insertion neighbourhood structures and (ii) a mathematical programming model to optimize the completion times of the operations for a given solution in each iteration. Additionally, a threshold accepting mechanism is proposed to diversify the local search of the variable neighbourhood search. Computational results on the 72 benchmark instances show that our algorithm obtains the best known solution for 40 problems and updates the best known solutions for 33 problems.
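The alternating-neighbourhood skeleton can be sketched on a toy single-machine earliness/tardiness instance. The data are invented, and this is a plain variable neighbourhood descent using only the swap and insertion neighbourhoods; the paper's mathematical programming subproblem and threshold-accepting diversification are omitted.

```python
# Toy single-machine instance (invented data): processing times and due dates;
# cost = total earliness + tardiness when jobs run back to back from time 0
proc = [3, 5, 2, 7, 4]
due = [6, 9, 4, 21, 12]

def cost(seq):
    t = total = 0
    for j in seq:
        t += proc[j]
        total += abs(t - due[j])    # earliness or tardiness of job j
    return total

def swap_neighbors(seq):
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            s = seq[:]
            s[i], s[j] = s[j], s[i]
            yield s

def insertion_neighbors(seq):
    for i in range(len(seq)):
        for j in range(len(seq)):
            if i != j:
                s = seq[:]
                s.insert(j, s.pop(i))
                yield s

def vnd(seq):
    # descend through the neighbourhoods, restarting from the first
    # whenever any of them yields an improvement
    best, best_c = seq[:], cost(seq)
    hoods = [swap_neighbors, insertion_neighbors]
    k = 0
    while k < len(hoods):
        improved = False
        for cand in hoods[k](best[:]):
            c = cost(cand)
            if c < best_c:
                best, best_c, improved = cand, c, True
        k = 0 if improved else k + 1
    return best, best_c

best, best_c = vnd([0, 1, 2, 3, 4])   # initial sequence costs 23
```

The descent terminates when neither neighbourhood improves the incumbent; the paper's hybrid would additionally re-time the operations of each incumbent with its mathematical programming model.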

  19. Dissecting Time- from Tumor-Related Gene Expression Variability in Bilateral Breast Cancer

    Directory of Open Access Journals (Sweden)

    Maurizio Callari

    2018-01-01

    Metachronous (MBC) and synchronous (SBC) bilateral breast tumors are mostly distinct primaries, whereas paired primaries and their local recurrences (LRC) share a common origin. Intra-pair gene expression variability in MBC, SBC, and LRC derives from time/tumor microenvironment-related and tumor genetic background-related factors, and pairs represent an ideal model for trying to dissect tumor-related from microenvironment-related variability. Pairs of tumors derived from women with SBC (n = 18), MBC (n = 11), and LRC (n = 10) undergoing local-regional treatment were profiled for gene expression; similarity between pairs was measured using an intraclass correlation coefficient (ICC) computed for each gene and compared using analysis of variance (ANOVA). When considering biologically unselected genes, the highest correlations were found for primaries and paired LRC, and the lowest for MBC pairs. By instead limiting the analysis to the breast cancer intrinsic genes, correlations between primaries and paired LRC were enhanced, while lower similarities were observed for SBC and MBC. Focusing on stromal-related genes, the ICC values decreased for MBC and were significantly different from SBC. These findings indicate that it is possible to dissect intra-pair gene expression variability into components that are associated with genetic origin or with time and microenvironment by using specific gene subsets.
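The per-gene similarity measure can be sketched as a one-way random-effects ICC over tumor pairs (invented expression values; the record does not state which ICC variant the authors used, so treat this as illustrative):

```python
def icc_oneway(pairs):
    """One-way random-effects ICC for two measurements per subject
    (here: one gene's expression in the two tumors of each pair)."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (k * n)
    # between-pair and within-pair mean squares
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - (a + b) / k) ** 2 + (b - (a + b) / k) ** 2
              for a, b in pairs) / n
    return (msb - msw) / (msb + (k - 1) * msw)

# A perfectly concordant "gene" vs. a discordant one (invented values)
concordant = icc_oneway([(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
discordant = icc_oneway([(1.0, 2.0), (2.0, 1.0)])
```

Computing one such ICC per gene and comparing the resulting distributions across the SBC, MBC, and LRC groups is the kind of analysis the ANOVA in the abstract refers to.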

  20. Effects of implementing time-variable postgraduate training programmes on the organization of teaching hospital departments.

    Science.gov (United States)

    van Rossum, Tiuri R; Scheele, Fedde; Sluiter, Henk E; Paternotte, Emma; Heyligers, Ide C

    2018-01-31

    As competency-based education has gained currency in postgraduate medical education, it is acknowledged that trainees, having individual learning curves, acquire the desired competencies at different paces. To accommodate their different learning needs, time-variable curricula have been introduced making training no longer time-bound. This paradigm has many consequences and will, predictably, impact the organization of teaching hospitals. The purpose of this study was to determine the effects of time-variable postgraduate education on the organization of teaching hospital departments. We undertook exploratory case studies into the effects of time-variable training on teaching departments' organization. We held semi-structured interviews with clinical teachers and managers from various hospital departments. The analysis yielded six effects: (1) time-variable training requires flexible and individual planning, (2) learners must be active and engaged, (3) accelerated learning sometimes comes at the expense of clinical expertise, (4) fast-track training for gifted learners jeopardizes the continuity of care, (5) time-variable training demands more of supervisors, and hence, they need protected time for supervision, and (6) hospital boards should support time-variable training. Implementing time-variable education affects various levels within healthcare organizations, including stakeholders not directly involved in medical education. These effects must be considered when implementing time-variable curricula.

  1. The Role of a PMI-Prediction Model in Evaluating Forensic Entomology Experimental Design, the Importance of Covariates, and the Utility of Response Variables for Estimating Time Since Death

    Directory of Open Access Journals (Sweden)

    Jeffrey Wells

    2017-05-01

    The most common forensic entomological application is the estimation of some portion of the time since death, or postmortem interval (PMI). To our knowledge, a PMI estimate is almost never accompanied by an associated probability. Statistical methods are now available for calculating confidence limits for an insect-based prediction of PMI for both succession and development data. In addition to it now being possible to employ these approaches in validation experiments and casework, it is also now possible to use the criterion of prediction performance to guide training experiments, i.e., to modify carrion insect development or succession experiment design in ways likely to improve the performance of PMI predictions using the resulting data. In this paper, we provide examples, derived from our research program on calculating PMI estimate probabilities, of how training data experiment design can influence the performance of a statistical model for PMI prediction.

  2. Time-variable medical education innovation in context

    Directory of Open Access Journals (Sweden)

    Stamy CD

    2018-06-01

    Christopher D Stamy,1 Christine C Schwartz,1 Danielle A Phillips,2 Aparna S Ajjarapu,3 Kristi J Ferguson,4,5 Debra A Schwinn6–8 1University of Iowa Carver College of Medicine, Iowa City, 2Des Moines University Osteopathic Medical Center, Des Moines, 3University of Iowa, 4Office of Consultation & Research in Medical Education, University of Iowa Carver College of Medicine, 5Department of Internal Medicine, 6Department of Anesthesia, 7Department of Biochemistry, 8Department of Pharmacology, University of Iowa Health Care, Iowa City, IA, USA Background: Medical education is undergoing robust curricular reform with several innovative models emerging. In this study, we examined current trends in 3-year Doctor of Medicine (MD) education and place these programs in context. Methods: A survey was conducted among Deans of U.S. allopathic medical schools using structured phone interviews regarding the current availability of a 3-year MD pathway, and/or other variations in curricular innovation, within their institution. Those with 3-year programs answered additional questions. Results: Data from 107 institutions were obtained (75% survey response rate). The most common variation in the length of medical education today is the accelerated 3-year pathway. Since 2010, 9 medical schools have introduced parallel 3-year MD programs and another 4 are actively developing such programs. However, the total number of students in 3-year MD tracks remains small (n=199 students, or 0.2% of all medical students). Family medicine and general internal medicine are the most common residency programs selected. Benefits of 3-year MD programs generally include reduction in student debt, stability of guaranteed residency positions, and potential for increasing physician numbers in rural/underserved areas. Drawbacks include concern about fatigue/burnout, difficulty in providing guaranteed residency positions, and additional expense in teaching 2 parallel curricula. Four vignettes of

  3. Earth System Data Records of Mass Transport from Time-Variable Gravity Data

    Science.gov (United States)

    Zlotnicki, V.; Talpe, M.; Nerem, R. S.; Landerer, F. W.; Watkins, M. M.

    2014-12-01

    Satellite measurements of time variable gravity have revolutionized the study of Earth by measuring the ice losses of Greenland, Antarctica and land glaciers, changes in groundwater including unsustainable losses due to extraction of groundwater, and the mass and currents of the oceans and their redistribution during El Niño events, among other findings. Satellite measurements of gravity have been made primarily by four techniques: satellite tracking from land stations using either lasers or Doppler radio systems, satellite positioning by GNSS/GPS, satellite-to-satellite tracking over distances of a few hundred km using microwaves, and a gravity gradiometer (radar altimeters also measure the gravity field, but over the oceans only). We discuss the challenges in the measurement of gravity by different instruments, especially time-variable gravity. A special concern is how to bridge a possible gap in time between the end of life of the current GRACE satellite pair, launched in 2002, and a future GRACE Follow-On pair to be launched in 2017. One challenge in combining data from different measurement systems consists of their different spatial and temporal resolutions and the different ways in which they alias short-time-scale signals. Typically, satellite measurements of gravity are expressed in spherical harmonic coefficients (although expansions in terms of 'mascons', the masses of small spherical caps, have certain advantages). Taking advantage of correlations among spherical harmonic coefficients described by empirical orthogonal functions and derived from GRACE data, it is possible to localize the otherwise coarse spatial resolution of the laser- and Doppler-derived gravity models. This presentation discusses the issues facing a climate data record of time-variable mass flux using these different data sources, including its validation.

  4. Modeling and Simulation of Variable Mass, Flexible Structures

    Science.gov (United States)

    Tobbe, Patrick A.; Matras, Alex L.; Wilson, Heath E.

    2009-01-01

    The advent of the new Ares I launch vehicle has highlighted the need for advanced dynamic analysis tools for variable mass, flexible structures. This system is composed of interconnected flexible stages or components undergoing rapid mass depletion through the consumption of solid or liquid propellant. In addition to large rigid body configuration changes, the system simultaneously experiences elastic deformations. In most applications, the elastic deformations are compatible with linear strain-displacement relationships and are typically modeled using the assumed modes technique. The deformation of the system is approximated through the linear combination of the products of spatial shape functions and generalized time coordinates. Spatial shape functions are traditionally composed of normal mode shapes of the system or even constraint modes and static deformations derived from finite element models of the system. Equations of motion for systems undergoing coupled large rigid body motion and elastic deformation have previously been derived through a number of techniques [1]. However, in these derivations, the mode shapes or spatial shape functions of the system components were considered constant. But with the Ares I vehicle, the structural characteristics of the system are changing with the mass of the system. Previous approaches to solving this problem involve periodic updates to the spatial shape functions or interpolation between shape functions based on system mass or elapsed mission time. These solutions often introduce misleading or even unstable numerical transients into the system. Plus, interpolation on a shape function is not intuitive. This paper presents an approach in which the shape functions are held constant and operate on the changing mass and stiffness matrices of the vehicle components. Each vehicle stage or component finite element model is broken into dry structure and propellant models. 
A library of propellant models is used to describe the
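The assumed-modes expansion described above can be sketched in a few lines. The sine shape functions and numeric values below are illustrative placeholders, not the Ares I model:

```python
import numpy as np

# Assumed-modes sketch: the elastic deformation is approximated as a linear
# combination of fixed spatial shape functions phi_i(x) and generalized time
# coordinates q_i(t). The sine shapes here are illustrative placeholders.
def deformation(x, q, length=1.0):
    """u(x, t) = sum_i phi_i(x) * q_i(t) for pinned-pinned sine shapes."""
    modes = np.arange(1, len(q) + 1)
    phi = np.sin(np.outer(modes, np.pi * x / length))  # (n_modes, n_points)
    return q @ phi

x = np.linspace(0.0, 1.0, 5)
q = np.array([0.02, -0.005])   # generalized coordinates at one instant
u = deformation(x, q)
print(u.shape)                 # one deflection per spatial point: (5,)
```

In the approach the paper describes, these shape functions stay constant while the mass and stiffness matrices they operate on change with propellant depletion.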

  5. On Extending Temporal Models in Timed Influence Networks

    Science.gov (United States)

    2009-06-01

    among variables in a system. A situation where the impact of a variable takes some time to reach the affected variable(s) cannot be modeled by either of… The posterior probability of B captures the impact of an affecting event on B and can be plotted as a

  6. Modelling carbon and nitrogen turnover in variably saturated soils

    Science.gov (United States)

    Batlle-Aguilar, J.; Brovelli, A.; Porporato, A.; Barry, D. A.

    2009-04-01

    Natural ecosystems provide services such as ameliorating the impacts of deleterious human activities on both surface and groundwater. For example, several studies have shown that a healthy riparian ecosystem can reduce the nutrient loading of agricultural wastewater, thus protecting the receiving surface water body. As a result, in order to develop better protection strategies and/or restore natural conditions, there is a growing interest in understanding ecosystem functioning, including feedbacks and nonlinearities. Biogeochemical transformations in soils are heavily influenced by microbial decomposition of soil organic matter. Carbon and nutrient cycles are in turn strongly sensitive to environmental conditions, primarily soil moisture and temperature. These two physical variables affect the reaction rates of almost all soil biogeochemical transformations, including microbial and fungal activity, nutrient uptake and release from plants, etc. Soil water saturation and temperature are not constant, but vary both in space and time, thus further complicating the picture. In order to interpret field experiments and elucidate the different mechanisms taking place, numerical tools are beneficial. In this work we developed a 3D numerical reactive-transport model as an aid in investigating the complex physical, chemical and biological interactions occurring in soils. The new code couples the USGS models (MODFLOW 2000-VSF, MT3DMS and PHREEQC) using an operator-splitting algorithm, and is a further development of the existing reactive/density-dependent flow model PHWAT. The model was tested using simplified test cases. Following verification, a process-based biogeochemical reaction network describing the turnover of carbon and nitrogen in soils was implemented. Using this tool, we investigated the coupled effect of moisture content and temperature fluctuations on nitrogen and organic matter cycling in the riparian zone, in order to help understand the relative
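The operator-splitting coupling mentioned above alternates a transport solve and a reaction solve within each time step. A toy sketch of that structure (illustrative kinetics, not the actual MODFLOW/MT3DMS/PHREEQC coupling):

```python
import numpy as np

# Operator-splitting sketch: each step advances transport, then reactions.
# The kinetics are toy stand-ins for the carbon/nitrogen reaction network.
def transport_step(c, dt, k_exchange=0.1):
    # mass-conserving relaxation toward the mean of periodic neighbours
    return c + dt * k_exchange * (0.5 * (np.roll(c, 1) + np.roll(c, -1)) - c)

def reaction_step(c, dt, k_decay=0.05):
    # first-order decay standing in for microbial turnover
    return c * np.exp(-k_decay * dt)

c = np.array([1.0, 0.0, 0.0, 0.0])   # initial solute mass per cell
for _ in range(10):
    c = reaction_step(transport_step(c, dt=1.0), dt=1.0)
print(round(c.sum(), 4))             # 0.6065 = exp(-0.5): only decay removes mass
```

The appeal of the split is exactly what the abstract exploits: each operator can be handled by a specialized existing code.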

  7. Mediterranean climate modelling: variability and climate change scenarios

    International Nuclear Information System (INIS)

    Somot, S.

    2005-12-01

    Air-sea fluxes, open-sea deep convection and cyclo-genesis are studied in the Mediterranean with the development of a regional coupled model (AORCM). It accurately simulates these processes, and their climate variabilities are quantified and studied. The regional coupling shows a significant impact on the number of winter intense cyclo-genesis events as well as on the associated air-sea fluxes and precipitation. A lower inter-annual variability than in non-coupled models is simulated for fluxes and deep convection, and the feedbacks driving this variability are identified. The climate change response is then analysed for the 21st century with the non-coupled models: cyclo-genesis decreases, and the associated precipitation increases in spring and autumn and decreases in summer. Moreover, a warming and salting of the Mediterranean as well as a strong weakening of its thermohaline circulation occur. This study also concludes with the necessity of using AORCMs to assess climate change impacts on the Mediterranean. (author)

  8. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  9. Similar star formation rate and metallicity variability time-scales drive the fundamental metallicity relation

    Science.gov (United States)

    Torrey, Paul; Vogelsberger, Mark; Hernquist, Lars; McKinnon, Ryan; Marinacci, Federico; Simcoe, Robert A.; Springel, Volker; Pillepich, Annalisa; Naiman, Jill; Pakmor, Rüdiger; Weinberger, Rainer; Nelson, Dylan; Genel, Shy

    2018-06-01

    The fundamental metallicity relation (FMR) is a postulated correlation between galaxy stellar mass, star formation rate (SFR), and gas-phase metallicity. At its core, this relation posits that offsets from the mass-metallicity relation (MZR) at a fixed stellar mass are correlated with galactic SFR. In this Letter, we use hydrodynamical simulations to quantify the time-scales over which populations of galaxies oscillate about the average SFR and metallicity values at fixed stellar mass. We find that Illustris and IllustrisTNG predict that galaxy offsets from the star formation main sequence and MZR oscillate over similar time-scales, are often anticorrelated in their evolution, evolve with the halo dynamical time, and produce a pronounced FMR. Our models indicate that galaxies oscillate about equilibrium SFR and metallicity values - set by the galaxy's stellar mass - and that SFR and metallicity offsets evolve in an anticorrelated fashion. This anticorrelated variability of the metallicity and SFR offsets drives the existence of the FMR in our models. In contrast to Illustris and IllustrisTNG, we speculate that the SFR and metallicity evolution tracks may become decoupled in galaxy formation models dominated by feedback-driven globally bursty SFR histories, which could weaken the FMR residual correlation strength. This opens the possibility of discriminating between bursty and non-bursty feedback models based on the strength and persistence of the FMR - especially at high redshift.

  10. The analytical description of high temperature tensile creep for cavitating materials subjected to time variable loads

    International Nuclear Information System (INIS)

    Bocek, M.

    A phenomenological cavitation model is presented by means of which the life time as well as the creep curve equations can be calculated for cavitating materials subjected to time-variable tensile loads. The model relates the damage A and the damage rate (dA/dt) through the life time function tau, which is derived from static stress rupture tests and contains the loading conditions. From this model the life fraction rule (LFR) is derived. The model is used to calculate the creep curves of cavitating materials subjected at high temperatures to non-stationary tensile loading conditions. In the present paper the following loading procedures are considered: creep at constant load F and true stress s; creep at linear load increase ((dF/dt) = const); and creep at constant load amplitude cycling (CLAC). For these loading procedures the creep equations for cavitating and non-cavitating specimens are derived. Under comparable conditions the creep rate of cavitating materials is higher than that of non-cavitating ones. (author)

  11. Time-variable gravity potential components for optical clock comparisons and the definition of international time scales

    International Nuclear Information System (INIS)

    Voigt, C.; Denker, H.; Timmen, L.

    2016-01-01

    The latest generation of optical atomic clocks is approaching the level of one part in 10^18 in terms of frequency stability and uncertainty. For clock comparisons and the definition of international time scales, a relativistic redshift effect of the clock frequencies has to be taken into account at a corresponding uncertainty level of about 0.1 m^2 s^-2 and 0.01 m in terms of gravity potential and height, respectively. Besides the predominant static part of the gravity potential, temporal variations must be considered in order to avoid systematic frequency shifts. Time-variable gravity potential components induced by tides and non-tidal mass redistributions are investigated with regard to the level of one part in 10^18. The magnitudes and dominant time periods of the individual gravity potential contributions are investigated globally and for specific laboratory sites together with the related uncertainty estimates. The basics of the computation methods are presented along with the applied models, data sets and software. Solid Earth tides contribute by far the most dominant signal, with a global maximum amplitude of 4.2 m^2 s^-2 for the potential and a range (maximum-to-minimum) of up to 1.3 and 10.0 m^2 s^-2 in terms of potential differences between specific laboratories over continental and intercontinental scales, respectively. Amplitudes of the ocean tidal loading potential can amount to 1.25 m^2 s^-2, while the range of the potential between specific laboratories is 0.3 and 1.1 m^2 s^-2 over continental and intercontinental scales, respectively. These are the only two contributors relevant at the 10^-17 level. However, several other time-variable potential effects can particularly affect clock comparisons at the 10^-18 level. Besides solid Earth pole tides, these are non-tidal mass redistributions in the atmosphere, the oceans and the continental water storage. (authors)
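The quoted potential uncertainties map to fractional frequency shifts through the standard weak-field relation df/f = dU/c^2; a quick sanity check of the magnitudes (textbook physics, not the paper's software):

```python
# Weak-field relativistic redshift: df/f = dU / c^2. This is textbook physics,
# used here to sanity-check the magnitudes quoted in the abstract.
C = 299_792_458.0  # speed of light, m/s

def fractional_shift(delta_potential):
    """Fractional frequency shift for a potential difference in m^2 s^-2."""
    return delta_potential / C**2

print(f"{fractional_shift(0.1):.2e}")  # 0.1 m^2 s^-2 target -> ~1.1e-18
print(f"{fractional_shift(4.2):.2e}")  # solid Earth tide amplitude -> ~4.7e-17
```

This is why a 0.1 m^2 s^-2 potential uncertainty is the relevant target for clocks at the 10^-18 level.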

  12. Evaluation of Online Log Variables That Estimate Learners' Time Management in a Korean Online Learning Context

    Science.gov (United States)

    Jo, Il-Hyun; Park, Yeonjeong; Yoon, Meehyun; Sung, Hanall

    2016-01-01

    The purpose of this study was to identify the relationship between the psychological variables and online behavioral patterns of students, collected through a learning management system (LMS). As the psychological variable, time and study environment management (TSEM), one of the sub-constructs of MSLQ, was chosen to verify a set of time-related…

  13. Constructing Proxy Variables to Measure Adult Learners' Time Management Strategies in LMS

    Science.gov (United States)

    Jo, Il-Hyun; Kim, Dongho; Yoon, Meehyun

    2015-01-01

    This study describes the process of constructing proxy variables from recorded log data within a Learning Management System (LMS), which represents adult learners' time management strategies in an online course. Based on previous research, three variables of total login time, login frequency, and regularity of login interval were selected as…

  14. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    Science.gov (United States)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Science Meeting to be held in January 2005. Since the submittal of these abstracts we are continuing refinement of the model coefficients derived for the case of a variable Turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition the variable Schmidt number formulation was incorporated in the code and we are in the process of determining the model constants.

  15. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant.

  16. Turbine modelling for real time simulators

    International Nuclear Information System (INIS)

    Oliveira Barroso, A.C. de; Araujo Filho, F. de

    1992-01-01

    A model for vapor turbines and their peripherals has been developed. All the important variables have been included, and emphasis has been given to computational efficiency in order to obtain a model able to simulate all the modeled equipment in real time. (A.C.A.S.)

  17. Space-time modeling of electricity spot prices

    DEFF Research Database (Denmark)

    Abate, Girum Dagnachew; Haldrup, Niels

    In this paper we derive a space-time model for electricity spot prices. A general spatial Durbin model that incorporates the temporal as well as spatial lags of spot prices is presented. Joint modeling of space-time effects is necessarily important when prices and loads are determined in a network… in the spot price dynamics. Estimation of the spatial Durbin model shows that the spatial lag variable is as important as the temporal lag variable in describing the spot price dynamics. We use the partial derivatives impact approach to decompose the price impacts into direct and indirect effects, and we show… that price effects transmit to neighboring markets and decline with distance. In order to examine the evolution of the spatial correlation over time, a time-varying parameters spot price spatial Durbin model is estimated using recursive estimation. It is found that the spatial correlation within the Nord…

  18. Between-centre variability versus variability over time in DXA whole body measurements evaluated using a whole body phantom

    Energy Technology Data Exchange (ETDEWEB)

    Louis, Olivia [Department of Radiology, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)]. E-mail: olivia.louis@az.vub.ac.be; Verlinde, Siska [Belgian Study Group for Pediatric Endocrinology (Belgium); Thomas, Muriel [Belgian Study Group for Pediatric Endocrinology (Belgium); De Schepper, Jean [Department of Pediatrics, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)

    2006-06-15

    This study aimed to compare the variability of whole body measurements, using dual energy X-ray absorptiometry (DXA), among geographically distinct centres versus that over time in a given centre. A Hologic-designed 28 kg modular whole body phantom was used, including high density polyethylene, gray polyvinylchloride and aluminium. It was scanned on seven Hologic QDR 4500 DXA devices, located in seven centres and was also repeatedly (n = 18) scanned in the reference centre, over a time span of 5 months. The mean between-centre coefficient of variation (CV) ranged from 2.0 (lean mass) to 5.6% (fat mass) while the mean within-centre CV ranged from 0.3 (total mass) to 4.7% (total area). Between-centre variability compared well with within-centre variability for total area, bone mineral content and bone mineral density, but was significantly higher for fat (p < 0.001), lean (p < 0.005) and total mass (p < 0.001). Our results suggest that, even when using the same device, the between-centre variability remains a matter of concern, particularly where body composition is concerned.
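The between- and within-centre figures above are coefficients of variation; as a reminder of the computation (the scan values below are made up for illustration, not study data):

```python
import statistics

# Coefficient of variation as used for the phantom comparisons:
# CV(%) = 100 * standard deviation / mean, computed across centres
# (between-centre) or across repeat scans at one centre (within-centre).
def cv_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

repeat_scans_kg = [27.9, 28.1, 28.0, 27.8, 28.2]   # illustrative, not study data
print(round(cv_percent(repeat_scans_kg), 2))       # 0.56
```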

  19. Ocean carbon and heat variability in an Earth System Model

    Science.gov (United States)

    Thomas, J. L.; Waugh, D.; Gnanadesikan, A.

    2016-12-01

    Ocean carbon and heat content are very important for regulating global climate. Furthermore, due to a lack of observations and a dependence on parameterizations, there has been little consensus in the modeling community on the magnitude of realistic ocean carbon and heat content variability, particularly in the Southern Ocean. We assess the differences between global oceanic heat and carbon content variability in GFDL ESM2Mc using a 500-year, pre-industrial control simulation. The global carbon and heat content are directly out of phase with each other; however, in the Southern Ocean the heat and carbon content are in phase. The global heat multi-decadal variability is primarily explained by variability in the tropics and mid-latitudes, while the variability in global carbon content is primarily explained by Southern Ocean variability. In order to test the robustness of this relationship, we use three additional pre-industrial control simulations with different mesoscale mixing parameterizations, in which the along-isopycnal diffusion coefficient (Aredi) is set to constant values of 400, 800 (control) and 2400 m^2 s^-1. These values for Aredi are within the range of parameter settings commonly used in modeling groups. Finally, one pre-industrial control simulation is conducted where the minimum in the Gent-McWilliams parameterization closure scheme (AGM) is increased to 600 m^2 s^-1. We find that the different simulations have very different multi-decadal variability, especially in the Weddell Sea, where the characteristics of deep convection are drastically changed. While the temporal frequency and amplitude of the global heat and carbon content changes vary significantly, the overall spatial pattern of variability remains unchanged between the simulations.

  20. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two further methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
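The leaping-Tick idea can be illustrated with a small sketch (a hypothetical helper, not the paper's model-checker encoding): rather than visiting every time unit, the clock jumps to the next instant at which some timer can expire:

```python
# Leaping-Tick sketch (illustrative, not the paper's model-checker encoding):
# the clock jumps straight to the next pending deadline instead of ticking
# one unit at a time, shrinking the explored state space.
def run_until(deadlines, horizon):
    """Yield the clock values a leaping Tick process would visit."""
    clock = 0
    for d in sorted(set(d for d in deadlines if 0 < d <= horizon)):
        clock = d          # one tick leaps (d - previous clock) units
        yield clock
    if clock < horizon:
        yield horizon

print(list(run_until([3, 7, 7, 12], horizon=10)))  # [3, 7, 10] vs. 10 unit ticks
```

A unit-ticking process would generate one state per time unit; the leap visits only the instants where the model's behaviour can change.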

  1. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  2. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  3. Modeling temporal and spatial variability of traffic-related air pollution: Hourly land use regression models for black carbon

    Science.gov (United States)

    Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc

    2013-08-01

    Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, both in shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between different locations, but also over the day. Furthermore, the diurnal pattern is different for street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44 compared to 0.77 in the annual model). The dataset with hourly concentrations of black carbon can be used to recalibrate the annual model, but this results in many of the original explanatory variables losing their statistical significance and in certain variables having the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to these issues. R^2 values for hourly LUR models are mostly smaller than the R^2 of the annual model, ranging from 0.07 to 0.8. Between 6 a.m. and 10 p.m. on weekdays the R^2 approximates the annual model R^2. Even though models of consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve the models' performance.
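An hour-dummy LUR of the kind described is, at its core, an ordinary least-squares fit with hour indicators alongside the other covariates. A synthetic sketch (made-up data generator and coefficients, not the Flanders dataset):

```python
import numpy as np

# Hour-dummy LUR sketch on synthetic data (made-up generator, not the Flanders
# measurements): black carbon responds to a static covariate plus a morning peak.
rng = np.random.default_rng(0)
n = 500
hour = rng.integers(0, 24, n)
traffic = rng.gamma(2.0, 50.0, n)                      # hypothetical covariate
bc = 0.01 * traffic + 0.5 * ((hour >= 7) & (hour <= 9)) + rng.normal(0, 0.1, n)

# Design matrix: intercept, covariate, 23 hour dummies (hour 0 is the base).
X = np.column_stack([np.ones(n), traffic] +
                    [(hour == h).astype(float) for h in range(1, 24)])
beta, *_ = np.linalg.lstsq(X, bc, rcond=None)
r2 = 1.0 - (bc - X @ beta).var() / bc.var()
print(r2 > 0.9)    # dummies recover the diurnal signal on this toy data
```

On real data the hourly R^2 is far lower, as the abstract reports, because the true diurnal signal is noisier and varies by site type.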

  4. Multi-Objective Flexible Flow Shop Scheduling Problem Considering Variable Processing Time due to Renewable Energy

    Directory of Open Access Journals (Sweden)

    Xiuli Wu

    2018-03-01

    Renewable energy is an alternative to non-renewable energy for reducing the carbon footprint of manufacturing systems. Determining how to construct an energy-efficient scheduling solution when production is driven by both renewable and non-renewable energy is therefore of great importance. In this paper, a multi-objective flexible flow shop scheduling problem that considers variable processing time due to renewable energy (MFFSP-VPTRE) is studied. First, the optimization model of the MFFSP-VPTRE is formulated, considering the periodicity of renewable energy and the limitations of energy storage capacity. Then, a hybrid non-dominated sorting genetic algorithm with variable local search (HNSGA-II) is proposed to solve the MFFSP-VPTRE. An operation- and machine-based encoding method is employed and a low-carbon scheduling algorithm is presented. Besides crossover and mutation, a variable local search is used to improve the offspring's Pareto set. The offspring and the parents are combined, and the solutions that dominate more are selected to continue evolving. Finally, two groups of experiments are carried out. The results show that the low-carbon scheduling algorithm can effectively reduce the carbon footprint under the premise of makespan optimization, and that the HNSGA-II outperforms the traditional NSGA-II and can solve the MFFSP-VPTRE effectively and efficiently.
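The non-dominated selection step that NSGA-II-style algorithms such as HNSGA-II rely on reduces to a Pareto-dominance test. A minimal sketch (illustrative objective values, assuming both objectives are minimized):

```python
# Pareto-dominance sketch underlying non-dominated sorting (both objectives,
# e.g. makespan and carbon footprint, are minimized; values are illustrative).
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pop = [(10, 5.0), (12, 4.0), (11, 6.0)]
front = [p for p in pop if not any(dominates(q, p) for q in pop)]
print(front)   # (11, 6.0) is dominated by (10, 5.0)
```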

  5. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    The thesis has two parts; one clinical part: studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part: investigating how to predict scores of latent variables so these can be used in subsequent regression… patient's characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogenous groups of patients based on sets of variables, for example symptoms. As no gold standard exists for diagnosing HALS, the normally applied diagnostic models cannot be used. Latent class models, which have never before been used to diagnose HALS, make it possible, under certain assumptions, to: statistically evaluate the number of phenotypes, test for mixing of HALS with other processes…

  6. Selective attrition and intraindividual variability in response time moderate cognitive change.

    Science.gov (United States)

    Yao, Christie; Stawski, Robert S; Hultsch, David F; MacDonald, Stuart W S

    2016-01-01

    Selection of a developmental time metric is useful for understanding causal processes that underlie aging-related cognitive change and for the identification of potential moderators of cognitive decline. Building on research suggesting that time to attrition is a metric sensitive to non-normative influences of aging (e.g., subclinical health conditions), we examined reason for attrition and intraindividual variability (IIV) in reaction time as predictors of cognitive performance. Three hundred and four community dwelling older adults (64-92 years) completed annual assessments in a longitudinal study. IIV was calculated from baseline performance on reaction time tasks. Multilevel models were fit to examine patterns and predictors of cognitive change. We show that time to attrition was associated with cognitive decline. Greater IIV was associated with declines on executive functioning and episodic memory measures. Attrition due to personal health reasons was also associated with decreased executive functioning compared to that of individuals who remained in the study. These findings suggest that time to attrition is a useful metric for representing cognitive change, and reason for attrition and IIV are predictive of non-normative influences that may underlie instances of cognitive loss in older adults.
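The IIV measure discussed here is within-person dispersion of reaction times across trials. A bare-bones sketch (raw standard deviation only, with made-up trial values; published IIV measures typically first remove practice and mean-level effects):

```python
import statistics

# Bare-bones IIV score: the within-person standard deviation of reaction times
# across trials. (Published IIV measures usually first remove practice and
# mean-level effects; this raw version only illustrates the idea.)
def iiv(reaction_times_ms):
    return statistics.stdev(reaction_times_ms)

steady  = [412, 405, 418, 410, 407]   # hypothetical trial-level RTs, ms
erratic = [350, 520, 390, 610, 405]
print(iiv(steady) < iiv(erratic))     # True: the erratic responder scores higher
```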

  7. Internal variability in a regional climate model over West Africa

    Energy Technology Data Exchange (ETDEWEB)

    Vanvyve, Emilie; Ypersele, Jean-Pascal van [Universite catholique de Louvain, Institut d' astronomie et de geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hall, Nicholas [Laboratoire d' Etudes en Geophysique et Oceanographie Spatiales/Centre National d' Etudes Spatiales, Toulouse Cedex 9 (France); Messager, Christophe [University of Leeds, Institute for Atmospheric Science, Environment, School of Earth and Environment, Leeds (United Kingdom); Leroux, Stephanie [Universite Joseph Fourier, Laboratoire d' etude des Transferts en Hydrologie et Environnement, BP53, Grenoble Cedex 9 (France)

    2008-02-15

    Sensitivity studies with regional climate models are often performed on the basis of a few simulations for which the difference is analysed and the statistical significance is often taken for granted. In this study we present some simple measures of the confidence limits for these types of experiments by analysing the internal variability of a regional climate model run over West Africa. Two 1-year long simulations, differing only in their initial conditions, are compared. The difference between the two runs gives a measure of the internal variability of the model and an indication of which timescales are reliable for analysis. The results are analysed for a range of timescales and spatial scales, and quantitative measures of the confidence limits for regional model simulations are diagnosed for a selection of study areas for rainfall, low level temperature and wind. As the averaging period or spatial scale is increased, the signal due to internal variability gets smaller and confidence in the simulations increases. This occurs more rapidly for variations in precipitation, which appear essentially random, than for dynamical variables, which show some organisation on larger scales. (orig.)

  8. Automatic Welding Control Using a State Variable Model.

    Science.gov (United States)

    1979-06-01

    Naval Postgraduate School, Monterey, CA; Jun 79; unclassified (OCR-damaged record). …traverse drive unit / joint path / fixed track (servomotor positioning). Additional controls of heave (vertical), roll (angular rotation about the

  9. Viscous cosmological models with a variable cosmological term ...

    African Journals Online (AJOL)

    Einstein's field equations for a Friedmann-Lemaître-Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term Λ, described by the full Israel-Stewart theory, are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust free model …

  10. Appraisal and Reliability of Variable Engagement Model Prediction ...

    African Journals Online (AJOL)

    The variable engagement model, based on the stress-crack opening displacement relationship, which describes the behaviour of randomly oriented steel fibre composites subjected to uniaxial tension, has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with …

  11. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    variable G and bulk viscosity in Lyra geometry. Exact solutions for … a comparative study of Robertson-Walker models with a constant deceleration … where H is defined as H = (Ȧ/A) + (1/3)(Ḃ/B) and β0, H0 represent the present values of β …

  12. Identifying Variability in Mental Models Within and Between Disciplines Caring for the Cardiac Surgical Patient.

    Science.gov (United States)

    Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R

    2017-07-01

    The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using Delphi technique methodology, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for

  13. Instrumental variables estimation under a structural Cox model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn

    2017-01-01

    Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heurist...

  14. Oscillating shells: A model for a variable cosmic object

    OpenAIRE

    Nunez, Dario

    1997-01-01

    A model for a possible variable cosmic object is presented. The model consists of a massive shell surrounding a compact object. The gravitational and self-gravitational forces tend to collapse the shell, but the internal tangential stresses oppose the collapse. The combined action of the two types of forces is studied and several cases are presented. In particular, we investigate the spherically symmetric case in which the shell oscillates radially around a central compact object.

  15. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

    Science.gov (United States)

    Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

    2016-02-01

    This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg—a 300 km2 mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) on 6 hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the

  16. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Online Synthesis for Operation Execution Time Variability on Digital Microfluidic Biochips

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2014-01-01

have assumed that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets. In this paper we propose ... an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, thus obtaining shorter application execution times. The proposed strategy has been evaluated using several benchmarks ...

  18. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the

  19. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy

    Directory of Open Access Journals (Sweden)

    Augusto C. M. Souza

    2018-04-01

While fermentation is usually done at a fixed temperature, in this study, the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in a controlled variable temperature model, the temperature decreased with time from an initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.

  20. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy.

    Science.gov (United States)

    Souza, Augusto C M; Mousaviraad, Mohammad; Mapoka, Kenneth O M; Rosentrater, Kurt A

    2018-04-24

While fermentation is usually done at a fixed temperature, in this study, the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in a controlled variable temperature model, the temperature decreased with time from an initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.
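A minimal sketch of the kind of temperature-dependent batch kinetics the two records above describe. This is a generic Monod-style model with an Arrhenius-like temperature factor and a thermal death term; every coefficient is invented for illustration, not the authors' fitted corn-fermentation system:

```python
import math

def simulate_batch(T_profile, dt=0.1, hours=60.0):
    """Euler integration of a minimal batch fermentation model.

    States: X biomass, S substrate, P ethanol (all g/L). Growth follows a
    Monod form scaled by an Arrhenius-like temperature factor, and cells
    decay faster at high temperature. All parameters are illustrative.
    """
    X, S, P = 1.0, 150.0, 0.0
    for step in range(int(hours / dt)):
        T = T_profile(step * dt)                       # deg C at time t
        growth_factor = math.exp(0.08 * (T - 30.0))    # faster when warm
        death = 0.002 * math.exp(0.25 * (T - 35.0))    # decay, fast when hot
        mu = 0.25 * growth_factor * S / (1.0 + S)
        dX = (mu - death) * X
        dS = -2.0 * mu * X
        dP = 0.9 * mu * X
        X = max(X + dX * dt, 0.0)
        S = max(S + dS * dt, 0.0)
        P += dP * dt
    return X, S, P

# Fixed 20 C batch vs a controlled profile cooling from 40 C over 60 h.
_, _, p_fixed = simulate_batch(lambda t: 20.0)
_, _, p_var = simulate_batch(lambda t: 40.0 - 20.0 * min(t / 60.0, 1.0))
print(p_fixed, p_var)
```

The controlled profile trades the fast early growth of a hot batch against slower cell decay as it cools, which is the mechanism the abstract exploits.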

  1. The use of ZIP and CART to model cryptosporidiosis in relation to climatic variables.

    Science.gov (United States)

    Hu, Wenbiao; Mengersen, Kerrie; Fu, Shiu-Yun; Tong, Shilu

    2010-07-01

    This research assesses the potential impact of weekly weather variability on the incidence of cryptosporidiosis disease using time series zero-inflated Poisson (ZIP) and classification and regression tree (CART) models. Data on weather variables, notified cryptosporidiosis cases and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Both time series ZIP and CART models show a clear association between weather variables (maximum temperature, relative humidity, rainfall and wind speed) and cryptosporidiosis disease. The time series CART models indicated that, when weekly maximum temperature exceeded 31 degrees C and relative humidity was less than 63%, the relative risk of cryptosporidiosis rose by 13.64 (expected morbidity: 39.4; 95% confidence interval: 30.9-47.9). These findings may have applications as a decision support tool in planning disease control and risk-management programs for cryptosporidiosis disease.
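The ZIP formulation used in this record mixes a point mass at zero with a Poisson count whose rate is linked to weather covariates. A minimal sketch, with a made-up log-linear link and coefficients standing in for the fitted regression:

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: with prob pi an excess zero, else Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    if y == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

def weekly_rate(tmax, humidity):
    """Toy log-linear link from weather to the weekly case rate (invented)."""
    return math.exp(-3.0 + 0.12 * tmax - 0.02 * humidity)

lam = weekly_rate(tmax=31.0, humidity=63.0)
pmf_sum = sum(zip_pmf(y, lam, pi=0.3) for y in range(50))
print(lam, pmf_sum)  # the pmf sums to ~1 over the support
```

The extra zero mass is what lets the model absorb the many weeks with no notified cases without distorting the weather effect on the count part.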

  2. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

We propose a two-step variable selection procedure for high-dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimensional one to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
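Under an orthonormal design, each LASSO step reduces to coordinate-wise soft thresholding, which makes the two-step idea easy to sketch. The numbers below are toy marginal estimates, far from the paper's ultra-high-dimensional setting:

```python
def soft_threshold(z, t):
    """One-coordinate lasso solution under an orthonormal design."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

# Step 1: plain LASSO screening with penalty t1 removes small coefficients.
ols = [5.0, 3.0, 0.4, 0.05, -0.3]   # toy marginal estimates
t1 = 0.5
step1 = [soft_threshold(z, t1) for z in ols]
kept = [j for j, b in enumerate(step1) if b != 0.0]

# Step 2: adaptive LASSO reweights the penalty by 1/|beta_step1|, so
# coefficients that survived step 1 strongly are penalized less.
t2 = 0.2
step2 = {j: soft_threshold(ols[j], t2 / abs(step1[j])) for j in kept}
print(kept, step2)
```

Step 1 shrinks the candidate set to the order of the true model; the data-dependent weights in step 2 then let the remaining strong signals escape most of the shrinkage while weak survivors are pushed back to zero.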

  3. Excitation of Earth Rotation Variations "Observed" by Time-Variable Gravity

    Science.gov (United States)

    Chao, Ben F.; Cox, C. M.

    2005-01-01

Time variable gravity measurements have been made over the past two decades using the space geodetic technique of satellite laser ranging, and more recently by the GRACE satellite mission with improved spatial resolutions. The degree-2 harmonic components of the time-variable gravity contain important information about the Earth's length-of-day and polar motion excitation functions, in a way independent of the traditional "direct" Earth rotation measurements made by, for example, very-long-baseline interferometry and GPS. In particular, the (degree = 2, order = 1) components give the mass term of the polar motion excitation; the (2,0) component, under certain mass conservation conditions, gives the mass term of the length-of-day excitation. Combining these with yet another independent source of angular momentum estimation calculated from global geophysical fluid models (for example the atmospheric angular momentum, in both mass and motion terms) can in principle lead to new insights into the dynamics, particularly the role (or the lack thereof) of the cores, in the excitation processes of the Earth rotation variations.

  4. Dissociating neural variability related to stimulus quality and response times in perceptual decision-making.

    Science.gov (United States)

    Bode, Stefan; Bennett, Daniel; Sewell, David K; Paton, Bryan; Egan, Gary F; Smith, Philip L; Murawski, Carsten

    2018-03-01

    According to sequential sampling models, perceptual decision-making is based on accumulation of noisy evidence towards a decision threshold. The speed with which a decision is reached is determined by both the quality of incoming sensory information and random trial-by-trial variability in the encoded stimulus representations. To investigate those decision dynamics at the neural level, participants made perceptual decisions while functional magnetic resonance imaging (fMRI) was conducted. On each trial, participants judged whether an image presented under conditions of high, medium, or low visual noise showed a piano or a chair. Higher stimulus quality (lower visual noise) was associated with increased activation in bilateral medial occipito-temporal cortex and ventral striatum. Lower stimulus quality was related to stronger activation in posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC). When stimulus quality was fixed, faster response times were associated with a positive parametric modulation of activation in medial prefrontal and orbitofrontal cortex, while slower response times were again related to more activation in PPC, DLPFC and insula. Our results suggest that distinct neural networks were sensitive to the quality of stimulus information, and to trial-to-trial variability in the encoded stimulus representations, but that reaching a decision was a consequence of their joint activity. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λ_min properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λ_min. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
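The subsampling logic of stability selection can be sketched independently of the base learner. Here a toy correlation-based selector stands in for the lasso base algorithm the paper uses, and the data are synthetic with two truly relevant features:

```python
import random

def stability_selection(X, y, base_select, n_sub=100, frac=0.5, tau=0.6):
    """Selection frequencies over random subsamples; keep features >= tau."""
    n, p = len(X), len(X[0])
    counts = [0] * p
    for _ in range(n_sub):
        idx = random.sample(range(n), int(frac * n))
        Xs = [X[i] for i in idx]
        ys = [y[i] for i in idx]
        for j in base_select(Xs, ys):
            counts[j] += 1
    freqs = [c / n_sub for c in counts]
    return [j for j, f in enumerate(freqs) if f >= tau], freqs

def top2_by_corr(X, y):
    """Toy base learner: the 2 features with largest |correlation| with y."""
    p = len(X[0])
    my = sum(y) / len(y)
    scores = []
    for j in range(p):
        xj = [row[j] for row in X]
        mx = sum(xj) / len(xj)
        cov = sum((a - mx) * (b - my) for a, b in zip(xj, y))
        denom = (sum((a - mx) ** 2 for a in xj)
                 * sum((b - my) ** 2 for b in y)) ** 0.5
        scores.append(abs(cov) / denom if denom else 0.0)
    return sorted(range(p), key=lambda j: -scores[j])[:2]

random.seed(1)
n, p = 200, 6
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[0] - 1.5 * row[1] + random.gauss(0, 0.5) for row in X]
stable, freqs = stability_selection(X, y, top2_by_corr)
print(stable)  # the two truly relevant features survive the threshold
```

Features that are only selected on a few lucky subsamples fall below the frequency threshold tau, which is how the procedure controls false discoveries.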

  6. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  7. Hidden Markov latent variable models with multivariate longitudinal data.

    Science.gov (United States)

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

    Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.
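The hidden Markov backbone of such a model rests on the standard forward recursion for the marginal likelihood of an observation sequence. A minimal discrete sketch with invented transition and emission probabilities (the paper's model layers latent variables on top of this machinery):

```python
def forward(obs, pi, A, B):
    """Forward algorithm: marginal likelihood P(obs) of a discrete HMM."""
    m = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(m)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(m)) * B[j][o]
            for j in range(m)
        ]
    return sum(alpha)

# Toy 2-state chain: 0 = abstinent, 1 = using; observation 1 = positive test.
# Transitions are bidirectional, mirroring the reversible addiction states.
pi = [0.8, 0.2]
A = [[0.9, 0.1], [0.3, 0.7]]
B = [[0.95, 0.05], [0.2, 0.8]]
obs = [0, 0, 1, 1]
p_obs = forward(obs, pi, A, B)
print(p_obs)
```

The recursion sums over all hidden state paths in linear time, which is what makes likelihood-based estimation (as in the MCECM algorithm mentioned above) tractable.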

  8. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so for each data set 9 CoMFA models were built. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS need only a few seconds to run. Also, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields. Copyright © Bentham Science Publishers.

  9. Universe before Planck time: A quantum gravity model

    International Nuclear Information System (INIS)

    Padmanabhan, T.

    1983-01-01

A model for quantum gravity can be constructed by treating the conformal degree of freedom of spacetime as a quantum variable. An isotropic, homogeneous cosmological solution in this quantum gravity model is presented. The spacetime is nonsingular for all three possible values of the three-space curvature, and agrees with the classical solution for time scales larger than the Planck time scale. The possibility of quantum fluctuations creating the matter in the universe is suggested

  10. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...

  11. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

    Directory of Open Access Journals (Sweden)

    Sabine eDemotes-Mainard

    2013-10-01

Bush rose architecture, among other factors such as plant health, determines plant visual quality. The commercial product is the individual plant, and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs, to develop a model reproducing the organ expansion of real crops from nondestructive input variables. We took into account interplant variability in expansion kinetics and the model's ability to simulate this variability. Changes in leaflet and internode dimensions over thermal time were recorded for primary shoot expansion, on 83 plants from three crops grown in different climatic conditions and densities. An empirical model was developed to reproduce organ expansion kinetics for individual plants of a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions, and between plants at a given phytomer position, were due mostly to large differences in time of organ expansion and expansion rate, rather than differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in rose bush to capture interplant variability.
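The logistic-of-thermal-time form described in this record is straightforward to sketch. Mirroring the paper's design, final size and expansion timing vary per plant while the duration-related steepness parameter is shared; all parameter values below are illustrative:

```python
import math

def organ_length(tt, L_final, t_mid, k):
    """Logistic expansion of leaflet/internode length vs thermal time."""
    return L_final / (1.0 + math.exp(-k * (tt - t_mid)))

# Duration-related parameter k is shared across plants; final size and
# expansion timing are per-plant input data (invented values here).
k_shared = 0.05
plant_a = [organ_length(tt, 40.0, 150.0, k_shared) for tt in range(0, 400, 50)]
plant_b = [organ_length(tt, 55.0, 220.0, k_shared) for tt in range(0, 400, 50)]
print(plant_a[-1], plant_b[-1])  # each approaches its own final length
```

Shifting t_mid moves the whole curve along the thermal-time axis without changing its shape, which is exactly how the model separates timing variability from duration.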

  12. Age and Sex Differences in Intra-Individual Variability in a Simple Reaction Time Task

    Science.gov (United States)

    Ghisletta, Paolo; Renaud, Olivier; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2018-01-01

    While age effects in reaction time (RT) tasks across the lifespan are well established for level of performance, analogous findings have started appearing also for indicators of intra-individual variability (IIV). Children are not only slower, but also display more variability than younger adults in RT. Yet, little is known about potential…

  13. Influence of variable heat transfer coefficient of fireworks and crackers on thermal explosion critical ambient temperature and time to ignition

    Directory of Open Access Journals (Sweden)

    Guo Zerong

    2016-01-01

To study the effect of a variable heat transfer coefficient of fireworks and crackers on the thermal explosion critical ambient temperature and time to ignition, the heat transfer coefficient is considered a power function of temperature, and steady-state and unsteady-state mathematical thermal explosion models of finite cylindrical fireworks and crackers with complex shell structures are established based on two-dimensional steady-state thermal explosion theory. The influence of the variable heat transfer coefficient on the critical ambient temperature and time to ignition is analyzed. When the heat transfer coefficient changes with temperature under natural convection, the critical ambient temperature decreases and the time to ignition shortens. If the ambient temperature is close to the critical ambient temperature, the influence of the variable heat transfer coefficient on time to ignition becomes large. For the firework with an inner barrel in the example analysis, the critical ambient temperature of the propellant is 463.88 K and the time to ignition is 4054.9 s at 466 K, which are 0.26 K and 450.8 s less, respectively, than when the change of the heat transfer coefficient is neglected. The calculation results show that the influence of the variable heat transfer coefficient on time to ignition is large in this example. Therefore, the effect of a variable heat transfer coefficient should be considered in the thermal safety evaluation of fireworks to reduce potential safety hazards.
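A lumped-capacitance sketch of the effect described above: Arrhenius self-heating balanced against losses with a heat transfer coefficient that is either constant or a power function of temperature. All coefficients are invented in normalized units, not the paper's fireworks data; the variable coefficient is chosen below the constant one so the ignition time shortens, the direction reported in the abstract:

```python
import math

def time_to_ignition(T_amb, h_of_T, T_ign=600.0, dt=1.0, t_max=200000.0):
    """Lumped energy balance: dT/dt = A*exp(-E/T) - h(T)*(T - T_amb).

    Returns the time at which T crosses T_ign, or None if the body
    settles at a subcritical steady state instead of running away.
    """
    T, t = T_amb, 0.0
    while T < T_ign:
        q_gen = 5.0e6 * math.exp(-8000.0 / T)
        q_loss = h_of_T(T) * (T - T_amb)
        T += (q_gen - q_loss) * dt
        t += dt
        if t > t_max:
            return None  # subcritical: no thermal runaway
    return t

h_const = lambda T: 0.005                       # fixed coefficient
h_var = lambda T: 0.005 * (450.0 / T) ** 0.25   # power function of T (< const)

t_const = time_to_ignition(470.0, h_const)
t_var = time_to_ignition(470.0, h_var)
print(t_const, t_var)  # weaker cooling shortens the time to ignition
```

Lowering the ambient temperature enough makes generation and losses balance below the ignition point, which is how a critical ambient temperature emerges from the same equation.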

  14. Pre-quantum mechanics. Introduction to models with hidden variables

    International Nuclear Information System (INIS)

    Grea, J.

    1976-01-01

Within the context of hidden-variable formalisms, the author considers the models used to describe mechanical systems before the introduction of the quantum model. An account is given of the characteristics of the theoretical models and their relationships with experimental methodology. The models of analytical, pre-ergodic, stochastic and thermodynamic mechanics are studied in succession. At each stage the physical hypothesis is enunciated by a postulate corresponding to the model's type of description of reality. Starting from this postulate, the physical propositions which are meaningful for the model under consideration are defined and their logical structure is indicated. It is then found that on passing from one level of description to another, one obtains successively Boolean lattices embedded in lattices of continuous geometric type, which are themselves embedded in Boolean lattices. It is therefore possible to envisage a more detailed description than that given by the quantum lattice and to construct it by analogy. (Auth.)

  15. Time-dependent inhomogeneous jet models for BL Lac objects

    Science.gov (United States)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  16. Time variability of C-reactive protein: implications for clinical risk stratification.

    Directory of Open Access Journals (Sweden)

    Peter Bogaty

    Full Text Available C-reactive protein (CRP) is proposed as a screening test for predicting risk and guiding preventive approaches in coronary artery disease (CAD). However, the stability of repeated CRP measurements over time in subjects with and without CAD is not well defined. We sought to determine the stability of serial CRP measurements in stable subjects with distinct CAD manifestations and a group without CAD while carefully controlling for known confounders. We prospectively studied 4 groups of 25 stable subjects each: (1) a history of recurrent acute coronary events; (2) a single myocardial infarction ≥7 years ago; (3) longstanding CAD (≥7 years) that had never been unstable; (4) no CAD. Fifteen measurements of CRP were obtained to cover 21 time-points: 3 times during one day; 5 consecutive days; 4 consecutive weeks; 4 consecutive months; and every 3 months over the year. The CRP risk threshold was set at 2.0 mg/L. We estimated variance across time-points using standard descriptive statistics and Bayesian hierarchical models. Median CRP values of the 4 groups and their pattern of variability did not differ substantially, so all subjects were analyzed together. The median individual standard deviations (SD) of CRP values within-day, within-week, between-weeks and between-months were 0.07, 0.19, 0.36 and 0.63 mg/L, respectively. Forty-six percent of subjects changed CRP risk category at least once, and 21% had ≥4 weekly and monthly CRP values in both the low- and high-risk categories. Considering its large intra-individual variability, it may be problematic to rely on CRP values for CAD risk prediction and therapeutic decision-making in individual subjects.

  17. Fractional derivatives of constant and variable orders applied to anomalous relaxation models in heat transfer problems

    Directory of Open Access Journals (Sweden)

    Yang Xiao-Jun

    2017-01-01

    Full Text Available In this paper, we address a class of fractional derivatives of constant and variable order for the first time. Fractional-order relaxation equations of constant and variable order in the sense of the Caputo type are modeled from a mathematical point of view. Comparative results for the anomalous relaxation among the various fractional derivatives are also given. They are very efficient in describing the complex phenomena arising in heat-transfer problems.
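    The constant-order Caputo relaxation equation D^α u(t) = -λ u(t) with u(0) = 1 has the closed-form solution u(t) = E_α(-λ t^α), where E_α is the one-parameter Mittag-Leffler function; for α = 1 it reduces to ordinary exponential relaxation. A minimal numerical sketch of this solution (the series truncation is an illustrative choice, not taken from the paper):

    ```python
    import math

    def mittag_leffler(z, alpha, n_terms=60):
        # E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1), truncated series
        return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

    def caputo_relaxation(t, lam=1.0, alpha=0.8):
        # u(t) = E_alpha(-lam * t**alpha) solves the Caputo relaxation equation
        return mittag_leffler(-lam * t ** alpha, alpha)
    ```

    For α = 1 the series reproduces exp(-λt); for 0 < α < 1 the decay is anomalous (slower than exponential at late times), which is the relaxation behavior the paper compares across derivative definitions.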

  18. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Becker, Andrew C. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Sobolewska, Malgosia [Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716, Warsaw (Poland); Siemiginowska, Aneta [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Uttley, Phil [Astronomical Institute Anton Pannekoek, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands)

    2014-06-10

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
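    The PSD of a CARMA(p, q) process is the rational function P(f) = σ² |β(2πif)|² / |α(2πif)|², where α and β are the autoregressive and moving-average polynomials; the partial-fraction expansion of this ratio is the sum of Lorentzians mentioned above. A minimal sketch of that formula (coefficient convention assumed, lowest order first; this is not the authors' likelihood code):

    ```python
    import math

    def carma_psd(freqs, ar_coefs, ma_coefs, sigma=1.0):
        # PSD(f) = sigma^2 * |beta(2*pi*i*f)|^2 / |alpha(2*pi*i*f)|^2
        out = []
        for f in freqs:
            w = 2j * math.pi * f
            num = abs(sum(b * w ** j for j, b in enumerate(ma_coefs))) ** 2
            den = abs(sum(a * w ** k for k, a in enumerate(ar_coefs))) ** 2
            out.append(sigma ** 2 * num / den)
        return out
    ```

    The simplest case, CARMA(1,0) with `ar_coefs=[a0, 1.0]` and `ma_coefs=[1.0]`, gives a single Lorentzian σ²/((2πf)² + a0²), i.e. the Ornstein-Uhlenbeck (damped random walk) spectrum.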

  19. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia; Siemiginowska, Aneta; Uttley, Phil

    2014-01-01

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.

  20. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  1. Multiscale thermohydrologic model: addressing variability and uncertainty at Yucca Mountain

    International Nuclear Information System (INIS)

    Buscheck, T; Rosenberg, N D; Gansemer, J D; Sun, Y

    2000-01-01

    Performance assessment and design evaluation require a modeling tool that simultaneously accounts for processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and for behavior at the scale of the mountain. Many processes and features must be considered, including non-isothermal, multiphase flow in rock of variable saturation and thermal radiation in open cavities. Also, given the nature of the fractured rock at Yucca Mountain, a dual-permeability approach is needed to represent permeability. A monolithic numerical model with all these features requires too large a computational cost to be an effective simulation tool, one that is used to examine sensitivity to key model assumptions and parameters. We have developed a multi-scale modeling approach that effectively simulates 3D discrete-heat-source, mountain-scale thermohydrologic behavior at Yucca Mountain and captures the natural variability of the site consistent with what we know from site characterization, as well as waste-package-to-waste-package variability in heat output. We describe this approach and present results examining the role of infiltration flux, the most important natural-system parameter with respect to how thermohydrologic behavior influences the performance of the repository.

  2. Important variables in explaining real-time peak price in the independent power market of Ontario

    International Nuclear Information System (INIS)

    Rueda, I.E.A.; Marathe, A.

    2005-01-01

    This paper uses a support vector machine (SVM) based learning algorithm to select important variables that help explain the real-time peak electricity price in the Ontario market. The Ontario market was opened to competition only in May 2002. Due to the limited number of observations available, finding a set of variables that can explain the independent power market of Ontario (IMO) real-time peak price is a significant challenge for traders and analysts. The kernel regressions of the explanatory variables on the IMO real-time average peak price show that non-linear dependencies exist between the explanatory variables and the IMO price. This non-linear relationship combined with the low variable-observation ratio rules out conventional statistical analysis. Hence, we use an alternative machine learning technique to find the important explanatory variables for the IMO real-time average peak price. Results based on SVM sensitivity analysis indicate that the IMO's predispatch average peak price, the actual import peak volume, the peak load of the Ontario market and the net available supply after accounting for load (energy excess) are among the most important variables in explaining the real-time average peak price in the Ontario electricity market. (author)

  3. Analysis of Modal Travel Time Variability Due to Mesoscale Ocean Structure

    National Research Council Canada - National Science Library

    Smith, Amy

    1997-01-01

    .... First, for an open ocean environment away from strong boundary currents, the effects of randomly phased linear baroclinic Rossby waves on acoustic travel time are shown to produce a variable overall...

  4. Modeling nonstationarity in space and time.

    Science.gov (United States)

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  5. Mixing times towards demographic equilibrium in insect populations with temperature variable age structures.

    Science.gov (United States)

    Damos, Petros

    2015-08-01

    In this study, we use entropy-related mixing rate modules to measure the effects of temperature on insect population stability and demographic breakdown. The uncertainty in the age of the mother of a randomly chosen newborn, and how it evolves after a finite number of time steps, is modeled using a stochastic transformation of the Leslie matrix. Age classes are represented as a cycle graph, and its transitions towards the stable age distribution are described as an exact Markov chain. The dynamics of divergence from a non-equilibrium state towards equilibrium are evaluated using the Kolmogorov-Sinai entropy. Moreover, the Kullback-Leibler distance is applied as an information-theoretic measure to estimate exact mixing times of age transition probabilities towards equilibrium. Using empirical data, initial conditions, and simulated projections through time, we show that population entropy can effectively be applied to detect demographic variability towards equilibrium under different temperature conditions. Changes in entropy are correlated with the fluctuations of the insect population decay rates (i.e. demographic stability towards equilibrium). Moreover, shorter mixing times are directly linked to lower entropy rates and vice versa. This may be linked to the properties of the insect model system, which, in contrast to warm-blooded animals, has the ability to greatly change its metabolic and demographic rates. Moreover, population entropy and the related distance measures that are applied provide a means to measure these rates. The current results and model projections provide clear biological evidence why dynamic population entropy may be useful to measure population stability. Copyright © 2015 Elsevier Inc. All rights reserved.
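    The convergence-to-equilibrium idea can be sketched numerically: project an initial age vector with a Leslie matrix, renormalize it to an age distribution each step, and define the mixing time as the first step at which the Kullback-Leibler distance to the stable age distribution drops below a tolerance. A hedged stand-alone sketch (the Leslie matrix, tolerance, and power-iteration shortcut below are hypothetical, not the paper's parameterization):

    ```python
    import math

    def project(leslie, v):
        # One projection step w = L v
        return [sum(row[j] * v[j] for j in range(len(v))) for row in leslie]

    def normalize(v):
        s = sum(v)
        return [x / s for x in v]

    def kl(p, q):
        # Kullback-Leibler distance between two age distributions
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def mixing_time(leslie, v0, tol=1e-8, max_iter=5000):
        # Stable age distribution via long power iteration
        stable = normalize(v0)
        for _ in range(max_iter):
            stable = normalize(project(leslie, stable))
        # Steps until the evolving distribution is within KL distance tol of stable
        v = normalize(v0)
        for k in range(1, max_iter + 1):
            v = normalize(project(leslie, v))
            if kl(v, stable) < tol:
                return k
        return None
    ```

    For a primitive Leslie matrix the KL distance shrinks geometrically at a rate set by the subdominant eigenvalue, which is why mixing time and entropy-rate measures track each other in the abstract above.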

  6. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  7. Speech-discrimination scores modeled as a binomial variable.

    Science.gov (United States)

    Thornton, A R; Raffin, M J

    1978-09-01

    Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
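    Under the binomial model, an observed proportion-correct score p on an n-word list has standard deviation sqrt(p(1-p)/n), so shorter lists produce noisier scores. A rough normal-approximation sketch of the resulting critical-difference logic (the original paper tabulates exact binomial critical differences; this simplified z-test is an assumption for illustration):

    ```python
    import math

    def score_sd(p, n):
        # Binomial standard deviation of a proportion-correct score on an n-word list
        return math.sqrt(p * (1.0 - p) / n)

    def scores_differ(p1, p2, n, z_crit=1.96):
        # Rough two-sided normal-approximation test for two independent n-word scores
        se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
        if se == 0.0:
            return p1 != p2
        return abs(p1 - p2) / se > z_crit
    ```

    For example, 80% vs. 60% is not a significant difference on a 25-word list (z ≈ 1.58) but is on a 50-word list (z ≈ 2.24), which captures the 25- vs. 50-word debate the abstract refers to.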

  8. Dissociable effects of practice variability on learning motor and timing skills.

    Science.gov (United States)

    Caramiaux, Baptiste; Bevilacqua, Frédéric; Wanderley, Marcelo M; Palmer, Caroline

    2018-01-01

    Motor skill acquisition inherently depends on the way one practices the motor task. The amount of motor task variability during practice has been shown to foster transfer of the learned skill to other similar motor tasks. In addition, variability in a learning schedule, in which a task and its variations are interweaved during practice, has been shown to help the transfer of learning in motor skill acquisition. However, there is little evidence on how motor task variations and variability schedules during practice act on the acquisition of complex motor skills such as music performance, in which a performer learns both the right movements (motor skill) and the right time to perform them (timing skill). This study investigated the impact of rate (tempo) variability and the schedule of tempo change during practice on timing and motor skill acquisition. Complete novices, with no musical training, practiced a simple musical sequence on a piano keyboard at different rates. Each novice was assigned to one of four learning conditions designed to manipulate the amount of tempo variability across trials (large or small tempo set) and the schedule of tempo change (randomized or non-randomized order) during practice. At test, the novices performed the same musical sequence at a familiar tempo and at novel tempi (testing tempo transfer), as well as two novel (but related) sequences at a familiar tempo (testing spatial transfer). We found that practice conditions had little effect on learning and transfer performance of timing skill. Interestingly, practice conditions influenced motor skill learning (reduction of movement variability): lower temporal variability during practice facilitated transfer to new tempi and new sequences; non-randomized learning schedule improved transfer to new tempi and new sequences. Tempo (rate) and the sequence difficulty (spatial manipulation) affected performance variability in both timing and movement. These findings suggest that there is a

  9. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than ``the brute force'' approach, where all individual systems are verified using... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate...

  10. Using exogenous variables in testing for monotonic trends in hydrologic time series

    Science.gov (United States)

    Alley, William M.

    1988-01-01

    One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
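    The two-stage procedure the paper critiques can be sketched directly: fit an ordinary least-squares regression of the tested variable on the exogenous variable, then apply the Kendall test to the residuals against time. A self-contained sketch (the adjusted variable Kendall test the paper recommends is not reproduced here; variable names are illustrative):

    ```python
    def kendall_tau(u, v):
        # Kendall rank correlation from concordant/discordant pair counts (ties count 0)
        n = len(u)
        s = 0
        for i in range(n):
            for j in range(i + 1, n):
                du = (u[j] > u[i]) - (u[j] < u[i])
                dv = (v[j] > v[i]) - (v[j] < v[i])
                s += du * dv
        return 2.0 * s / (n * (n - 1))

    def residual_kendall_trend(y, exog):
        # Stage 1: OLS regression of y on the exogenous variable
        n = len(y)
        mx = sum(exog) / n
        my = sum(y) / n
        b = sum((x - mx) * (yi - my) for x, yi in zip(exog, y)) \
            / sum((x - mx) ** 2 for x in exog)
        resid = [yi - my - b * (x - mx) for x, yi in zip(exog, y)]
        # Stage 2: Kendall test of the residuals against time
        return kendall_tau(list(range(n)), resid)
    ```

    As the abstract notes, when the exogenous variable is itself correlated with time, stage 1 absorbs part of the trend into the regression slope, so the residual-based tau understates the true trend.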

  11. Predictive modeling and reducing cyclic variability in autoignition engines

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  12. Arctic energy budget in relation to sea-ice variability on monthly to annual time scales

    Science.gov (United States)

    Krikken, Folmer; Hazeleger, Wilco

    2015-04-01

    The strong decrease in Arctic sea-ice in recent years has triggered a strong interest in Arctic sea-ice predictions on seasonal to decadal time scales. Hence, it is key to understand physical processes that provide enhanced predictability beyond persistence of sea-ice anomalies. The authors report on an analysis of natural variability of Arctic sea-ice from an energy budget perspective, using 15 CMIP5 climate models, and comparing these results to atmospheric and oceanic reanalysis data. We quantify the persistence of sea-ice anomalies and the cross-correlation with the surface and top energy budget components. The Arctic energy balance components primarily indicate the important role of the seasonal sea-ice albedo feedback, in which sea-ice anomalies in the melt season reemerge in the growth season. This is a robust anomaly reemergence mechanism among all 15 climate models. The role of the ocean lies mainly in storing heat content anomalies in spring and releasing them in autumn. Ocean heat flux variations only play a minor role. The role of clouds is further investigated. We demonstrate that there is no direct atmospheric response of clouds to spring sea-ice anomalies, but a delayed response is evident in autumn. Hence, there is no cloud-ice feedback in late spring and summer, but there is a cloud-ice feedback in autumn, which strengthens the ice-albedo feedback. Anomalies in insolation are positively correlated with sea-ice variability. This is primarily a result of reduced multiple reflection of insolation due to an albedo decrease. This effect counteracts the sea-ice albedo effect by up to 50%. ERA-Interim and ORAS4 confirm the main findings from the climate models.

  13. Time-variable gravity fields and ocean mass change from 37 months of kinematic Swarm orbits

    Science.gov (United States)

    Lück, Christina; Kusche, Jürgen; Rietbroek, Roelof; Löcher, Anno

    2018-03-01

    Measuring the spatiotemporal variation of ocean mass allows for partitioning of volumetric sea level change, sampled by radar altimeters, into mass-driven and steric parts. The latter is related to ocean heat change and the current Earth's energy imbalance. Since 2002, the Gravity Recovery and Climate Experiment (GRACE) mission has provided monthly snapshots of the Earth's time-variable gravity field, from which one can derive ocean mass variability. However, GRACE has reached the end of its lifetime; data degradation and several gaps occurred during its final years, and there will be a prolonged gap until the launch of the follow-on mission, GRACE-FO. Therefore, efforts focus on generating a long and consistent ocean mass time series by analyzing kinematic orbits from other low-flying satellites, i.e. extending the GRACE time series. Here we utilize data from the European Space Agency's (ESA) Swarm Earth Explorer satellites to derive and investigate ocean mass variations. For this aim, we use the integral equation approach with short arcs (Mayer-Gürr, 2006) to compute more than 500 time-variable gravity fields with different parameterizations from kinematic orbits. We investigate the potential to bridge the gap between the GRACE and the GRACE-FO mission and to substitute missing monthly solutions with Swarm results of significantly lower resolution. Our monthly Swarm solutions have a root mean square error (RMSE) of 4.0 mm with respect to GRACE, whereas directly estimating constant, trend, annual, and semiannual (CTAS) signal terms leads to an RMSE of only 1.7 mm. Concerning monthly gaps, our CTAS Swarm solution appears better than interpolating existing GRACE data in 13.5% of all cases, when artificially removing one solution. In the case of an 18-month artificial gap, 80.0% of all CTAS Swarm solutions were found closer to the observed GRACE data compared to interpolated GRACE data. Furthermore, we show that precise modeling of non-gravitational forces
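    The CTAS estimation mentioned above amounts to a least-squares fit of constant, trend, annual, and semiannual terms to a time series. A minimal sketch under that interpretation (the design matrix and solver below are generic illustrations, not the authors' processing chain):

    ```python
    import math

    def solve(A, b):
        # Gaussian elimination with partial pivoting for a small dense system
        n = len(b)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return x

    def fit_ctas(t_years, y):
        # Least-squares fit of constant, trend, annual and semiannual terms
        def row(t):
            w = 2.0 * math.pi * t
            return [1.0, t, math.cos(w), math.sin(w), math.cos(2 * w), math.sin(2 * w)]
        X = [row(t) for t in t_years]
        n, p = len(X), 6
        XtX = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)] for r in range(p)]
        Xty = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
        return solve(XtX, Xty)
    ```

    Fitting these six parameters jointly over the whole 37-month series, rather than estimating each month independently, is what suppresses the monthly noise in the lower-resolution Swarm solutions.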

  14. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

    The main problem of hydrology and design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change for computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for assessment of water resources, maxima, minima runoff, etc.) as well as a new one characterizing an intra-annual function or intra-annual runoff distribution. For this aim a linear model has been developed which has two coefficients connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter which characterizes the intensity of synoptic and macro-synoptic fluctuations inside a year. Effective statistical methods have been developed for the separation of climate variability and climate change and the extraction of homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For an assessment of the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with a generalization of the results of detected climate changes over the area and spatial modeling. For determination of homogeneous region with the same

  15. Are revised models better models? A skill score assessment of regional interannual variability

    Science.gov (United States)

    Sperber, Kenneth R.; Participating AMIP Modelling Groups

    1999-05-01

    Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.

  16. Change in intraindividual variability over time as a key metric for defining performance-based cognitive fatigability.

    Science.gov (United States)

    Wang, Chao; Ding, Mingzhou; Kluger, Benzi M

    2014-03-01

    Cognitive fatigability is conventionally quantified as the increase over time in either mean reaction time (RT) or error rate from two or more time periods during sustained performance of a prolonged cognitive task. There is evidence indicating that these mean performance measures may not sufficiently reflect the response characteristics of cognitive fatigue. We hypothesized that changes in intraindividual variability over time would be a more sensitive and ecologically meaningful metric for investigations of fatigability of cognitive performance. To test the hypothesis fifteen young adults were recruited. Trait fatigue perceptions in various domains were assessed with the Multidimensional Fatigue Index (MFI). Behavioral data were then recorded during performance of a three-hour continuous cued Stroop task. Results showed that intraindividual variability, as quantified by the coefficient of variation of RT, increased linearly over the course of three hours and demonstrated a significantly greater effect size than mean RT or accuracy. Change in intraindividual RT variability over time was significantly correlated with relevant subscores of the MFI including reduced activity, reduced motivation and mental fatigue. While change in mean RT over time was also correlated with reduced motivation and mental fatigue, these correlations were significantly smaller than those associated with intraindividual RT variability. RT distribution analysis using an ex-Gaussian model further revealed that change in intraindividual variability over time reflects an increase in the exponential component of variance and may reflect attentional lapses or other breakdowns in cognitive control. These results suggest that intraindividual variability and its change over time provide important metrics for measuring cognitive fatigability and may prove useful for inferring the underlying neuronal mechanisms of both perceptions of fatigue and objective changes in performance. Copyright © 2014
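    The proposed metric is straightforward to compute: split the session into time blocks, take the coefficient of variation (SD/mean) of reaction times within each block, and fit a linear slope of CV against block index; a positive slope indicates growing intraindividual variability. A minimal sketch of that computation (block structure and variable names are illustrative, not the study's analysis code):

    ```python
    import statistics

    def cv(xs):
        # Coefficient of variation: SD divided by mean
        return statistics.stdev(xs) / statistics.mean(xs)

    def fatigability_slope(rt_blocks):
        # OLS slope of per-block CV of reaction times against block index
        cvs = [cv(b) for b in rt_blocks]
        n = len(cvs)
        mi = (n - 1) / 2.0
        mc = sum(cvs) / n
        num = sum((i - mi) * (c - mc) for i, c in enumerate(cvs))
        den = sum((i - mi) ** 2 for i in range(n))
        return num / den
    ```

    Blocks with constant spread yield a slope of zero, while blocks whose RT spread widens over time yield a positive slope, mirroring the linear increase in CV reported in the abstract.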

  17. A Variable Flow Modelling Approach To Military End Strength Planning

    Science.gov (United States)

    2016-12-01

    function. The MLRPS is more complex than the variable flow model as it has to cater for a force structure that is much larger than just the MT branch...essential positions in a Ship’s complement, or by the biggest current deficit in forecast end strength. The model can be adjusted to cater for any of these...is unlikely that the RAN will be able to cater for such an increase in hires, so this scenario is not likely to solve their problem. Each transition

  18. Variable sound speed in interacting dark energy models

    Science.gov (United States)

    Linton, Mark S.; Pourtsidou, Alkistis; Crittenden, Robert; Maartens, Roy

    2018-04-01

    We consider a self-consistent and physical approach to interacting dark energy models described by a Lagrangian, and identify a new class of models with variable dark energy sound speed. We show that if the interaction between dark energy in the form of quintessence and cold dark matter is purely momentum exchange this generally leads to a dark energy sound speed that deviates from unity. Choosing a specific sub-case, we study its phenomenology by investigating the effects of the interaction on the cosmic microwave background and linear matter power spectrum. We also perform a global fitting of cosmological parameters using CMB data, and compare our findings to ΛCDM.

  19. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    Science.gov (United States)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2015-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.

  20. Optimal variable-grid finite-difference modeling for porous media

    International Nuclear Information System (INIS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-01-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane-wave theory, and the FD coefficients were then obtained using Taylor expansions. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs. (paper)
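
    The "FD coefficients obtained using Taylor expansions" step can be reproduced exactly: matching Taylor terms of a staggered-grid first-derivative stencil gives a small linear system for the coefficients. A minimal sketch in exact rational arithmetic (function names are illustrative):

```python
from fractions import Fraction

def staggered_fd_coeffs(M):
    """Coefficients a_1..a_M of the (2M)th-order staggered-grid first
    derivative  f'(0) ≈ (1/h) * sum_m a_m * (f((2m-1)h/2) - f(-(2m-1)h/2)).
    Taylor matching requires sum_m a_m*(2m-1)^(2k+1) = 1 if k == 0 else 0."""
    A = [[Fraction((2 * m - 1) ** (2 * k + 1)) for m in range(1, M + 1)]
         for k in range(M)]
    b = [Fraction(1)] + [Fraction(0)] * (M - 1)
    # Gaussian elimination with back-substitution, in exact rationals.
    for col in range(M):
        piv = max(range(col, M), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, M):
            f = A[r][col] / A[col][col]
            for c in range(col, M):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [Fraction(0)] * M
    for r in range(M - 1, -1, -1):
        s = b[r] - sum(A[r][c] * x[c] for c in range(r + 1, M))
        x[r] = s / A[r][r]
    return x

print(staggered_fd_coeffs(2))  # 4th order: [Fraction(9, 8), Fraction(-1, 24)]
```

    A variable-grid scheme applies such operators with different grid-spacing h (and time-step) in different regions; the dispersion analysis then checks the resulting error region by region.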

  1. Time-of-flight depth image enhancement using variable integration time

    Science.gov (United States)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration time, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiments show that the proposed method can effectively remove motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
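
    The depth transfer function described above can be approximated by a binned conditional mean over co-registered pixels; a simplified, dependency-free sketch of that joint-histogram step (names and bin settings are illustrative):

```python
def depth_transfer(short_depths, long_depths, n_bins=64, d_max=10.0):
    """For each depth bin of the short-integration image, average the
    corresponding long-integration depths; the resulting lookup maps
    short-integration depths onto the long-integration depth scale."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for s, l in zip(short_depths, long_depths):
        b = min(int(s / d_max * n_bins), n_bins - 1)
        sums[b] += l
        counts[b] += 1
    # Bins with no samples stay None; a real implementation would interpolate.
    return [sums[b] / counts[b] if counts[b] else None for b in range(n_bins)]
```

    In the paper the calibrated images are additionally fused in the wavelet domain; this sketch covers only the calibration lookup.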

  2. Variable School Start Times and Middle School Student's Sleep Health and Academic Performance.

    Science.gov (United States)

    Lewin, Daniel S; Wang, Guanghai; Chen, Yao I; Skora, Elizabeth; Hoehn, Jessica; Baylor, Allison; Wang, Jichuan

    2017-08-01

    Improving sleep health among adolescents is a national health priority and implementing healthy school start times (SSTs) is an important strategy to achieve these goals. This study leveraged the differences in middle school SST in a large district to evaluate associations between SST, sleep health, and academic performance. This cross-sectional study draws data from a county-wide surveillance survey. Participants were three cohorts of eighth graders (n = 26,440). The school district is unique because SST ranged from 7:20 a.m. to 8:10 a.m. Path analysis and probit regression were used to analyze associations between SST and self-report measures of weekday sleep duration, grades, and homework controlling for demographic variables (sex, race, and socioeconomic status). The independent contributions of SST and sleep duration to academic performance were also analyzed. Earlier SST was associated with decreased sleep duration (χ² = 173, p academic performance, and academic effort. Path analysis models demonstrated the independent contributions of sleep duration, SST, and variable effects for demographic variables. This is the first study to evaluate the independent contributions of SST and sleep to academic performance in a large sample of middle school students. Deficient sleep was prevalent, and the earliest SST was associated with decrements in sleep and academics. These findings support the prioritization of policy initiatives to implement healthy SST for younger adolescents and highlight the importance of sleep health education disparities among race and gender groups. Copyright © 2017 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  3. Bootstrapping a time series model

    International Nuclear Information System (INIS)

    Son, M.S.

    1984-01-01

    The bootstrap is a methodology for estimating standard errors. The idea is to use a Monte Carlo simulation experiment based on a nonparametric estimate of the error distribution. The main objective of this dissertation was to demonstrate the use of the bootstrap to attach standard errors to coefficient estimates and multi-period forecasts in a second-order autoregressive model fitted by least squares and maximum likelihood estimation. A secondary objective was to present the bootstrap in the context of two econometric equations describing the unemployment rate and individual income tax in the state of Oklahoma. As it turns out, the conventional asymptotic formulae (for both the least squares and maximum likelihood estimates) appear to overestimate the true standard errors. But there are two problems: 1) the first two observations y₁ and y₂ have been fixed, and 2) the residuals have not been inflated. After these two factors are accounted for in the trial and bootstrap experiments, both the conventional maximum likelihood and bootstrap estimates of the standard errors appear to perform quite well. At present, there does not seem to be a good rule of thumb for deciding when the conventional asymptotic formulae will give acceptable results.
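
    The residual bootstrap for an autoregressive model is easy to sketch; the code below assumes a zero-mean AR(2) without intercept for brevity (the dissertation's exact setup may differ), and keeps the first two observations fixed as described in the text:

```python
import random

def fit_ar2(y):
    """Least-squares fit of y_t = a1*y_{t-1} + a2*y_{t-2} + e_t
    via the 2x2 normal equations (no intercept, for brevity)."""
    x1, x2, t = y[1:-1], y[:-2], y[2:]
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    r1 = sum(a * b for a, b in zip(x1, t))
    r2 = sum(a * b for a, b in zip(x2, t))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

def bootstrap_se(y, n_boot=200, seed=1):
    """Residual bootstrap: resample centred residuals, regenerate the
    series keeping y1 and y2 fixed, refit, and report the standard
    deviation of the refitted coefficients."""
    rng = random.Random(seed)
    a1, a2 = fit_ar2(y)
    res = [y[t] - a1 * y[t - 1] - a2 * y[t - 2] for t in range(2, len(y))]
    m = sum(res) / len(res)
    res = [r - m for r in res]  # centring; residual inflation would rescale here
    draws = []
    for _ in range(n_boot):
        yb = [y[0], y[1]]
        for _ in range(2, len(y)):
            yb.append(a1 * yb[-1] + a2 * yb[-2] + rng.choice(res))
        draws.append(fit_ar2(yb))
    def sd(v):
        mu = sum(v) / len(v)
        return (sum((x - mu) ** 2 for x in v) / (len(v) - 1)) ** 0.5
    return sd([d[0] for d in draws]), sd([d[1] for d in draws])
```

    The bootstrap standard errors can then be compared with the conventional asymptotic formulae, which is exactly the comparison the dissertation performs.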

  4. Quantifying intrinsic and extrinsic variability in stochastic gene expression models.

    Science.gov (United States)

    Singh, Abhyudai; Soltani, Mohammad

    2013-01-01

    Genetically identical cell populations exhibit considerable intercellular variation in the level of a given protein or mRNA. Both intrinsic and extrinsic sources of noise drive this variability in gene expression. More specifically, extrinsic noise is the expression variability that arises from cell-to-cell differences in cell-specific factors such as enzyme levels, cell size and cell cycle stage. In contrast, intrinsic noise is the expression variability that is not accounted for by extrinsic noise, and typically arises from the inherent stochastic nature of biochemical processes. Two-color reporter experiments are employed to decompose expression variability into its intrinsic and extrinsic noise components. Analytical formulas for intrinsic and extrinsic noise are derived for a class of stochastic gene expression models, where variations in cell-specific factors cause fluctuations in model parameters, in particular, transcription and/or translation rate fluctuations. Assuming mRNA production occurs in random bursts, transcription rate is represented by either the burst frequency (how often the bursts occur) or the burst size (number of mRNAs produced in each burst). Our analysis shows that fluctuations in the transcription burst frequency enhance extrinsic noise but do not affect the intrinsic noise. On the contrary, fluctuations in the transcription burst size or mRNA translation rate dramatically increase both intrinsic and extrinsic noise components. Interestingly, simultaneous fluctuations in transcription and translation rates arising from randomness in ATP abundance can decrease intrinsic noise measured in a two-color reporter assay. Finally, we discuss how these formulas can be combined with single-cell gene expression data from two-color reporter experiments for estimating model parameters.
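
    The two-color decomposition referenced above has standard (Elowitz-style) estimators; a minimal sketch with illustrative names, using paired reporter levels measured in the same cells:

```python
def two_color_noise(r, g):
    """Decompose expression noise from paired reporter levels r_i, g_i.
    Intrinsic noise comes from the mismatch between the two reporters
    within the same cell; extrinsic noise from their covariation across
    cells. Returns (intrinsic, extrinsic, total) squared CVs."""
    n = len(r)
    mr, mg = sum(r) / n, sum(g) / n
    intrinsic = sum((a - b) ** 2 for a, b in zip(r, g)) / (2 * n * mr * mg)
    extrinsic = (sum(a * b for a, b in zip(r, g)) / n - mr * mg) / (mr * mg)
    return intrinsic, extrinsic, intrinsic + extrinsic

# Perfectly matched reporters: all variability is extrinsic.
print(two_color_noise([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])[0])  # → 0.0
```

    The analytical formulas in the paper predict how parameter fluctuations (burst frequency, burst size, translation rate) move these two components; estimators like the above are what connect those formulas to two-color data.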

  5. Predictor variables for a half marathon race time in recreational male runners.

    Science.gov (United States)

    Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Lepers, Romuald; Rosemann, Thomas

    2011-01-01

    The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict a half marathon race time for future novice recreational male half marathoners. Eighty-four male finishers in the 'Half Marathon Basel' completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiacal (r = 0.36) and medial calf skin fold (r = 0.53) were related to race time. For the variables of training and previous experience, running speed during the training sessions (r = -0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and running speed during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r² = 0.44): race time (min) = 72.91 + 3.045 × (body mass index, kg/m²) - 3.884 × (running speed during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable.
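
    The reported regression equation can be applied directly; a short sketch (the input values are hypothetical examples, not study data):

```python
def predicted_half_marathon_time(bmi, training_speed_kmh):
    """Race time (min) = 72.91 + 3.045*BMI - 3.884*training speed,
    the equation reported in the abstract (r² = 0.44)."""
    return 72.91 + 3.045 * bmi - 3.884 * training_speed_kmh

# A runner with BMI 23.0 who trains at 11.0 km/h:
print(round(predicted_half_marathon_time(23.0, 11.0), 1))  # → 100.2
```

    With r² = 0.44, more than half of the variance remains unexplained, so such predictions are rough guides rather than individual forecasts.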

  6. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  7. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  8. Modeling key processes causing climate change and variability

    Energy Technology Data Exchange (ETDEWEB)

    Henriksson, S.

    2013-09-01

    Greenhouse gas warming, internal climate variability and aerosol climate effects are studied, and the importance of understanding these key processes and being able to separate their influence on the climate is discussed. The aerosol-climate model ECHAM5-HAM and the COSMOS millennium model, consisting of atmospheric, ocean, carbon cycle and land-use models, are applied and the results compared to measurements. Topics in focus are climate sensitivity, quasiperiodic variability with a period of 50-80 years and variability at other timescales, climate effects due to aerosols over India, and climate effects of northern hemisphere mid- and high-latitude volcanic eruptions. The main findings of this work are (1) pointing out the remaining challenges in reducing climate sensitivity uncertainty from observational evidence, (2) estimates for the amplitude of a 50-80 year quasiperiodic oscillation in global mean temperature ranging from 0.03 K to 0.17 K and for its phase progression, as well as the synchronising effect of external forcing, (3) identifying a power-law shape S(f) ∝ f^(-α) for the spectrum of global mean temperature with α ≈ 0.8 between multidecadal and El Nino timescales, with a smaller exponent in modelled climate without external forcing, (4) separating aerosol properties and climate effects in India by season and location, (5) the more efficient dispersion of secondary sulfate aerosols than primary carbonaceous aerosols in the simulations, (6) an increase in monsoon rainfall in northern India due to aerosol light absorption and a probably larger decrease due to aerosol dimming effects, and (7) an estimate of a mean maximum cooling of 0.19 K due to larger northern hemisphere mid- and high-latitude volcanic eruptions. The results could be applied or useful in better isolating the human-caused climate change signal, in studying the processes further and in more detail, in decadal climate prediction, in model evaluation and in emission policy.
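
    The power-law finding in (3) corresponds to a least-squares slope in log-log space; a self-contained sketch on a synthetic spectrum (all values illustrative):

```python
import math

def spectral_slope(freqs, power):
    """Least-squares slope of log S(f) versus log f; for S(f) ∝ f^(-α)
    the fitted slope equals -α, so return its negative."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return -beta

# A synthetic spectrum with α = 0.8 is recovered exactly.
f = [0.01 * k for k in range(1, 50)]
S = [fi ** -0.8 for fi in f]
print(round(spectral_slope(f, S), 3))  # → 0.8
```

    On real temperature series the spectrum must first be estimated (e.g. by periodogram averaging), and the fit restricted to the multidecadal-to-El-Nino frequency band discussed in the abstract.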

  9. A stochastic surplus production model in continuous time

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte

    2017-01-01

    Surplus production modelling has a long history as a method for managing data-limited fish stocks. Recent advancements have cast surplus production models as state-space models that separate random variability of stock dynamics from error in observed indices of biomass. We present a stochastic surplus production model in continuous time (SPiCT), which in addition to stock dynamics also models the dynamics of the fisheries. This enables error in the catch process to be reflected in the uncertainty of estimated model parameters and management quantities. Benefits of the continuous-time state-space model formulation include the ability to provide estimates of exploitable biomass and fishing mortality at any point in time from data sampled at arbitrary and possibly irregular intervals. We show in a simulation that the ability to analyse subannual data can increase the effective sample size…

  10. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the required training time, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
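
    The select-then-evaluate loop described above can be sketched without the RF machinery; here absolute Pearson correlation stands in for the random-forest importance score purely to keep the example dependency-free (all names are illustrative):

```python
def pearson(a, b):
    """Sample Pearson correlation (assumes non-constant inputs)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def select_inputs(candidates, target, k):
    """candidates: dict mapping variable name -> series. Return the k
    names with the highest relevance score, mimicking the importance
    ranking that precedes the forecasting-model evaluation."""
    ranked = sorted(candidates,
                    key=lambda name: -abs(pearson(candidates[name], target)))
    return ranked[:k]

speeds = [1.0, 2.0, 3.0, 4.0]
cands = {"speed_lag1": [1.1, 2.0, 2.9, 4.2],
         "pressure_noise": [2.0, -1.0, 2.0, -1.0]}
print(select_inputs(cands, speeds, 1))  # → ['speed_lag1']
```

    In the paper, the RF importance ranking plays this role, and the retained subset is then scored with a kernel-based extreme learning machine.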

  11. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  12. Modelling the Spatial Isotope Variability of Precipitation in Syria

    Energy Technology Data Exchange (ETDEWEB)

    Kattan, Z.; Kattaa, B. [Department of Geology, Atomic Energy Commission of Syria (AECS), Damascus (Syrian Arab Republic)

    2013-07-15

    Attempts were made to model the spatial variability of the environmental isotope (¹⁸O, ²H and ³H) compositions of precipitation in Syria. Rainfall samples periodically collected on a monthly basis from 16 different stations were used for processing and demonstrating the spatial distributions of these isotopes, together with those of deuterium excess (d) values. Mathematically, the modelling process was based on applying simple polynomial models that take into consideration the effects of the major geographic factors (Lon.E., Lat.N., and altitude). The modelling results for the spatial distribution of the stable isotopes (¹⁸O and ²H) were generally good, as shown by the high correlation coefficients (R² = 0.7-0.8) calculated between the observed and predicted values. In the case of the deuterium excess and tritium distributions, the results were only approximate (R² = 0.5-0.6). Improving the simulation of spatial isotope variability probably requires the incorporation of other local meteorological factors, such as relative air humidity, precipitation amount and vapour pressure, which are supposed to play an important role in such an arid country. (author)
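
    The deuterium excess mentioned above has a standard definition, and the models described are low-order polynomial surfaces in the geographic factors; a minimal sketch (the polynomial coefficients shown are hypothetical, not the study's fitted values):

```python
def deuterium_excess(delta_2h, delta_18o):
    """Dansgaard's deuterium excess, d = δ²H - 8·δ¹⁸O (in ‰)."""
    return delta_2h - 8.0 * delta_18o

def geo_polynomial(lon, lat, alt, coeffs):
    """First-order polynomial surface value = c0 + c1*lon + c2*lat + c3*alt;
    the study fits models of this kind per isotope (and higher orders
    where needed)."""
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * lon + c2 * lat + c3 * alt

# δ²H = -40 ‰ and δ¹⁸O = -6 ‰ give d = 8 ‰.
print(deuterium_excess(-40.0, -6.0))  # → 8.0
```

    Fitting the coefficients against station data (and checking R² between observed and predicted values) reproduces the study's validation step.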

  13. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

    This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  14. Seychelles Dome variability in a high resolution ocean model

    Science.gov (United States)

    Nyadjro, E. S.; Jensen, T.; Richman, J. G.; Shriver, J. F.

    2016-02-01

    The Seychelles-Chagos Thermocline Ridge (SCTR; 5ºS-10ºS, 50ºE-80ºE) in the tropical Southwest Indian Ocean (SWIO) has been recognized as a region of prominence with regard to climate variability in the Indian Ocean. Convective activities in this region have regional consequences, as they affect the socio-economic livelihoods of people, especially in the countries along the Indian Ocean rim. The SCTR is characterized by a quasi-permanent upwelling that is often associated with thermocline shoaling. This upwelling affects sea surface temperature (SST) variability. We present results on the variability and dynamics of the SCTR as simulated by the 1/12º high-resolution HYbrid Coordinate Ocean Model (HYCOM). It is observed that, locally, wind stress affects SST via Ekman pumping of cooler subsurface waters, mixing, and anomalous zonal advection. Remotely, wind stress curl in the eastern equatorial Indian Ocean generates westward-propagating Rossby waves that impact the depth of the thermocline, which in turn affects SST variability in the SCTR region. The variability of the contributions of these processes, especially with regard to the Indian Ocean Dipole (IOD), is further examined. In a typical positive IOD (PIOD) year, the net vertical velocity in the SCTR is negative year-round as easterlies along the region are intensified, leading to a strong positive curl. This vertical velocity is caused mainly by anomalous local Ekman downwelling (peaking during September-November), in direct opposition to the climatological scenario in which local Ekman pumping is positive (upwelling favorable) year-round. The anomalous remote contribution to the vertical velocity changes is minimal, especially during the developing and peak stages of PIOD events. In a typical negative IOD (NIOD) year, anomalous vertical velocity is positive almost year-round, with peaks in May and October. The remote contribution is positive, in contrast to the climatology and most PIOD years.

  15. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed-memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers extended with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly eases parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming techniques.

  16. Geospatial models of climatological variables distribution over Colombian territory

    International Nuclear Information System (INIS)

    Baron Leguizamon, Alicia

    2003-01-01

    Several studies have dealt with the relation between air temperature, precipitation, and altitude; nevertheless, these have been point-specific or regional analyses, and none of them has become a tool that reproduces the spatial distribution of temperature or precipitation, takes orography into account, and allows data on these variables to be obtained for a given place. Based on this relation and on the multi-annual monthly records of air temperature and precipitation, the vertical temperature gradients were calculated and precipitation was related to altitude. Then, based on the altitude data provided by the DEM, values of temperature and precipitation were calculated, and those values were interpolated to generate monthly geospatial models.

  17. Real-time variables dictionary (RTVD), and expert system for development of real-time applications in nuclear power plants

    International Nuclear Information System (INIS)

    Senra Martinez, A.; Schirru, R.; Dutra Thome Filho, Z.

    1990-01-01

    This paper presents a computerized methodology based on a data dictionary managed by an expert system, called the Real-Time Variables Dictionary (RTVD). The system is very useful for the development of real-time applications in nuclear power plants. The RTVD functions and its implementation on a VAX 8600 computer are described in detail. The artificial intelligence concepts used in the RTVD are also pointed out.

  18. Stability of Delayed Hopfield Neural Networks with Variable-Time Impulses

    Directory of Open Access Journals (Sweden)

    Yangjun Pei

    2014-01-01

    Full Text Available In this paper, global exponential stability criteria for delayed Hopfield neural networks with variable-time impulses are established. The proposed criteria can also be applied to Hopfield neural networks with fixed-time impulses. A numerical example is presented to illustrate the effectiveness of our theoretical results.

  19. Lyapunov-based constrained engine torque control using electronic throttle and variable cam timing

    NARCIS (Netherlands)

    Feru, E.; Lazar, M.; Gielen, R.H.; Kolmanovsky, I.V.; Di Cairano, S.

    2012-01-01

    In this paper, predictive control of a spark ignition engine equipped with an electronic throttle and a variable cam timing actuator is considered. The objective is to adjust the throttle angle and the engine cam timing in order to reduce the exhaust gas emissions while maintaining fast and

  20. Norm-times : a design for production time and variability reduction for Faes Cases

    NARCIS (Netherlands)

    Karandeinos, Georgios

    2008-01-01

    This project deals with the production process of Faes Cases business unit. This company is producing custom-made packaging and sells standard solutions with customized interior. During the last years, it was observed that the throughput time of the production is increasing and is hard to forecast

  1. Simulating variable-density flows with time-consistent integration of Navier-Stokes equations

    Science.gov (United States)

    Lu, Xiaoyi; Pantano, Carlos

    2017-11-01

    In this talk, we present several features of a high-order semi-implicit variable-density low-Mach Navier-Stokes solver. A new formulation for solving the pressure Poisson-like equation of variable-density flows is highlighted. With this formulation of the numerical method, we are able to solve all variables with a uniform order of accuracy in time (consistent with the time integrator being used). The solver is primarily designed to perform direct numerical simulations of turbulent premixed flames. Therefore, we also address other important elements, such as energy-stable boundary conditions, synthetic turbulence generation, and the flame anchoring method. Numerical examples include classical non-reacting constant/variable-density flows, as well as turbulent premixed flames.

  2. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book. -Hsun-Hsien Chang, Computing Reviews, March 2012. My favorite chapters were on dynamic linear models and vector AR and vector ARMA models. -William Seaver, Technometrics, August 2011. … a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  3. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective: whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations about that change. One purpose of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  4. Variability in Benthic Exchange Rate, Depth, and Residence Time Beneath a Shallow Coastal Estuary

    Science.gov (United States)

    Russoniello, Christopher J.; Heiss, James W.; Michael, Holly A.

    2018-03-01

    Hydrodynamically driven benthic exchange of water between the water column and the shallow seabed aquifer is a significant and dynamic component of coastal and estuarine fluid budgets. The associated exchange of solutes promotes ecologically important chemical reactions, so quantifying benthic exchange rates, depths, and residence times constrains estimates of coastal chemical cycling. We present the first combined field, numerical, and analytical modeling investigation of wave-induced exchange. Temporal variability of exchange was calculated with data collected by instruments deployed in a shallow estuary for 11 days. Differential pressure sensors recorded pressure gradients across the seabed, and up- and down-looking ADCPs recorded currents and pressures to determine wave parameters, surface-water currents, and water depth. Wave-induced exchange was calculated (1) directly from differential pressure measurements, and indirectly with an analytical model based on wave parameters from (2) ADCP and (3) wind data. Wave-induced exchange from pressure measurements and ADCP-measured wave parameters matched well, but both exceeded wind-based values. Exchange induced by tidal pumping and current-bed form interaction, the other primary drivers in shallow coastal waters, was calculated from tidal stage variation and ADCP-measured currents. Exchange from waves (mean = 20.0 cm/d; range = 1.75-92.3 cm/d) greatly exceeded exchange due to tides (mean = 3.7 cm/d) and current-bed form interaction (mean = 6.5 × 10⁻² cm/d). Groundwater flow models showed that aquifer properties affect wave-driven benthic exchange: residence time and depth increased and exchange rates decreased with increasing hydraulic diffusivity (the ratio of aquifer permeability to compressibility). This new understanding of benthic exchange will help managers assess its control over chemical fluxes to marine systems.

  5. Energy decay of a variable-coefficient wave equation with nonlinear time-dependent localized damping

    Directory of Open Access Journals (Sweden)

    Jieqiong Wu

    2015-09-01

    Full Text Available We study the energy decay for the Cauchy problem of the wave equation with nonlinear time-dependent and space-dependent damping. The damping is localized in a bounded domain and near infinity, and the principal part of the wave equation has a variable-coefficient. We apply the multiplier method for variable-coefficient equations, and obtain an energy decay that depends on the property of the coefficient of the damping term.
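
    A generic model of the kind this abstract describes can be written as follows; the form is illustrative (the paper's exact hypotheses on A, a, and g are more specific):

```latex
u_{tt} - \operatorname{div}\!\left(A(x)\nabla u\right) + a(x,t)\,g(u_t) = 0,
\qquad (x,t) \in \mathbb{R}^n \times (0,\infty),
```

    where A(x) is the variable coefficient of the principal part, a(x,t) ≥ 0 localizes the time-dependent damping in a bounded domain and near infinity, and g is the nonlinearity; the multiplier method for variable-coefficient equations then yields the energy decay rate.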

  6. Reaction Time Variability in Children With ADHD Symptoms and/or Dyslexia

    OpenAIRE

    Gooch, Debbie; Snowling, Margaret J.; Hulme, Charles

    2012-01-01

    Reaction time (RT) variability on a Stop Signal task was examined among children with attention deficit hyperactivity disorder (ADHD) symptoms and/or dyslexia in comparison to typically developing (TD) controls. Children’s go-trial RTs were analyzed using a novel ex-Gaussian method. Children with ADHD symptoms had increased variability in the fast but not the slow portions of their RT distributions compared to those without ADHD symptoms. The RT distributions of children with d...

  7. Competency-Based, Time-Variable Education in the Health Professions: Crossroads.

    Science.gov (United States)

    Lucey, Catherine R; Thibault, George E; Ten Cate, Olle

    2018-03-01

Health care systems around the world are transforming to align with the needs of 21st-century patients and populations. Transformation must also occur in the educational systems that prepare the health professionals who deliver care, advance discovery, and educate the next generation of physicians in these evolving systems. Competency-based, time-variable education, a comprehensive educational strategy guided by the roles and responsibilities that health professionals must assume to meet the needs of contemporary patients and communities, has the potential to catalyze optimization of educational and health care delivery systems. By designing educational and assessment programs that require learners to meet specific competencies before transitioning between the stages of formal education and into practice, this framework assures the public that every physician is capable of providing high-quality care. By engaging learners as partners in assessment, competency-based, time-variable education prepares graduates for careers as lifelong learners. While the medical education community has embraced the notion of competencies as a guiding framework for educational institutions, the structure and conduct of formal educational programs remain more aligned with a time-based, competency-variable paradigm. The authors outline the rationale behind this recommended shift to a competency-based, time-variable education system. They then introduce the other articles included in this supplement to Academic Medicine, which summarize the history of, theories behind, examples demonstrating, and challenges associated with competency-based, time-variable education in the health professions.

  8. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
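The spline-smoothing idea can be illustrated with a minimal sketch: a partial linear model y = Xβ + f(z) + ε fitted by ordinary least squares on a truncated-power cubic spline basis. This uses an identity link and no penalization, so it omits the paper's quasi-likelihood and nonconcave-penalty machinery; all data, knots, and coefficients below are simulated/hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 2))          # parametric covariates
z = rng.uniform(0, 1, n)             # nonparametric covariate
beta_true = np.array([1.5, -2.0])
y = x @ beta_true + np.sin(2 * np.pi * z) + rng.normal(0, 0.3, n)

# Truncated-power cubic spline basis for f(z) with interior knots
knots = np.linspace(0.1, 0.9, 9)
B = np.column_stack([z, z**2, z**3] +
                    [np.clip(z - k, 0, None) ** 3 for k in knots])

# Joint least-squares fit of [beta | intercept | spline coefficients]
design = np.column_stack([x, np.ones(n), B])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[:2]
print(beta_hat)
```

The spline basis absorbs the smooth nonparametric part, leaving estimates of the linear parameters close to their true values.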

  9. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  10. Remote sensing of the Canadian Arctic: Modelling biophysical variables

    Science.gov (United States)

    Liu, Nanfeng

    It is anticipated that Arctic vegetation will respond in a variety of ways to altered temperature and precipitation patterns expected with climate change, including changes in phenology, productivity, biomass, cover and net ecosystem exchange. Remote sensing provides data and data processing methodologies for monitoring and assessing Arctic vegetation over large areas. The goal of this research was to explore the potential of hyperspectral and high spatial resolution multispectral remote sensing data for modelling two important Arctic biophysical variables: Percent Vegetation Cover (PVC) and the fraction of Absorbed Photosynthetically Active Radiation (fAPAR). A series of field experiments were conducted to collect PVC and fAPAR at three Canadian Arctic sites: (1) Sabine Peninsula, Melville Island, NU; (2) Cape Bounty Arctic Watershed Observatory (CBAWO), Melville Island, NU; and (3) Apex River Watershed (ARW), Baffin Island, NU. Linear relationships between biophysical variables and Vegetation Indices (VIs) were examined at different spatial scales using field spectra (for the Sabine Peninsula site) and high spatial resolution satellite data (for the CBAWO and ARW sites). At the Sabine Peninsula site, hyperspectral VIs exhibited a better performance for modelling PVC than multispectral VIs due to their capacity for sampling fine spectral features. The optimal hyperspectral bands were located at important spectral features observed in Arctic vegetation spectra, including leaf pigment absorption in the red wavelengths and at the red-edge, leaf water absorption in the near infrared, and leaf cellulose and lignin absorption in the shortwave infrared. At the CBAWO and ARW sites, field PVC and fAPAR exhibited strong correlations (R2 > 0.70) with the NDVI (Normalized Difference Vegetation Index) derived from high-resolution WorldView-2 data. Similarly, high spatial resolution satellite-derived fAPAR was correlated to MODIS fAPAR (R2 = 0.68), with a systematic
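The VI-based modelling step can be sketched generically: compute NDVI from red and near-infrared reflectance and regress PVC on it, reporting R². The reflectance model and coefficients below are hypothetical, purely to illustrate the workflow, not the study's field data.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Synthetic plot data: percent vegetation cover drives reflectance
rng = np.random.default_rng(2)
pvc = rng.uniform(5, 95, 60)                        # percent cover
red = 0.25 - 0.002 * pvc + rng.normal(0, 0.005, 60)  # red absorbed by pigments
nir = 0.20 + 0.004 * pvc + rng.normal(0, 0.01, 60)   # NIR scattered by canopy

vi = ndvi(nir, red)
slope, intercept = np.polyfit(vi, pvc, 1)           # linear PVC ~ NDVI model
pred = slope * vi + intercept
r2 = 1 - np.sum((pvc - pred) ** 2) / np.sum((pvc - pvc.mean()) ** 2)
print(round(r2, 3))
```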

  11. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    , although this is often desired. I have proposed a new method for predicting class membership that, in contrast to methods based on posterior probabilities of class membership, yields consistent estimates when regressed on explanatory variables in a subsequent analysis. There are four different basic models...... analyses. Part 1: HALS engages different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy.  There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include...

  12. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    Science.gov (United States)

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.

  13. The swan song in context: long-time-scale X-ray variability of NGC 4051

    Science.gov (United States)

    Uttley, P.; McHardy, I. M.; Papadakis, I. E.; Guainazzi, M.; Fruscione, A.

    1999-07-01

On 1998 May 9-11, the highly variable, low-luminosity Seyfert 1 galaxy NGC 4051 was observed in an unusual low-flux state by BeppoSAX, RXTE and EUVE. We present fits of the 4-15 keV RXTE spectrum and BeppoSAX MECS spectrum obtained during this observation, which are consistent with the interpretation that the source had switched off, leaving only the spectrum of pure reflection from distant cold matter. We place this result in context by showing the X-ray light curve of NGC 4051 obtained by our RXTE monitoring campaign over the past two and a half years, which shows that the low state lasted for ~150 d before the May observations (implying that the reflecting material is >10^17 cm from the continuum source) and forms part of a light curve showing distinct variations in long-term average flux over time-scales of months or more. We show that the long-time-scale component of X-ray variability is intrinsic to the primary continuum and is probably distinct from the variability at shorter time-scales. The long-time-scale component of variability may be associated with variations in the accretion flow of matter on to the central black hole. As the source approaches the low state, the variability process becomes non-linear. NGC 4051 may represent a microcosm of all X-ray variability in radio-quiet active galactic nuclei (AGNs), displaying in a few years a variety of flux states and variability properties which more luminous AGNs may pass through on time-scales of decades to thousands of years.
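Long-term variability in a monitoring light curve like this is often quantified with a first-order structure function, SF(tau) = <(x(t+tau) - x(t))^2>, averaged over epoch pairs in a lag bin. A minimal sketch, using a synthetic random-walk flux rather than the actual RXTE data:

```python
import numpy as np

def structure_function(t, x, lags, dlag):
    """First-order structure function over binned time lags."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    dt = np.abs(t[:, None] - t[None, :])          # pairwise separations
    dx2 = (x[:, None] - x[None, :]) ** 2          # pairwise squared differences
    sf = []
    for lag in lags:
        mask = (dt > lag - dlag / 2) & (dt <= lag + dlag / 2)
        sf.append(dx2[mask].mean() if mask.any() else np.nan)
    return np.array(sf)

# Synthetic monitoring campaign: random-walk flux sampled every 3 days
rng = np.random.default_rng(3)
t = np.arange(0, 900, 3.0)
x = np.cumsum(rng.normal(0, 1, t.size))
sf = structure_function(t, x, lags=[10, 100, 400], dlag=6.0)
print(sf)
```

For a random-walk (red-noise) process the structure function keeps rising with lag, the signature of power on long time-scales.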

  14. Variable recruitment fluidic artificial muscles: modeling and experiments

    International Nuclear Information System (INIS)

    Bryant, Matthew; Meller, Michael A; Garcia, Ephrahim

    2014-01-01

    We investigate taking advantage of the lightweight, compliant nature of fluidic artificial muscles to create variable recruitment actuators in the form of artificial muscle bundles. Several actuator elements at different diameter scales are packaged to act as a single actuator device. The actuator elements of the bundle can be connected to the fluidic control circuit so that different groups of actuator elements, much like individual muscle fibers, can be activated independently depending on the required force output and motion. This novel actuation concept allows us to save energy by effectively impedance matching the active size of the actuators on the fly based on the instantaneous required load. This design also allows a single bundled actuator to operate in substantially different force regimes, which could be valuable for robots that need to perform a wide variety of tasks and interact safely with humans. This paper proposes, models and analyzes the actuation efficiency of this actuator concept. The analysis shows that variable recruitment operation can create an actuator that reduces throttling valve losses to operate more efficiently over a broader range of its force–strain operating space. We also present preliminary results of the design, fabrication and experimental characterization of three such bioinspired variable recruitment actuator prototypes. (paper)
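The recruitment logic can be sketched as a small search over bundle subsets: activate the combination of elements with the smallest total capacity that still meets the instantaneous load, analogous to matching active actuator size to demand. The element forces and the brute-force search below are purely illustrative, not the paper's design.

```python
from itertools import combinations

def recruit(element_forces, required_force):
    """Pick the bundle subset with the smallest total capacity that still
    meets the required force, mimicking selective fiber recruitment.
    Returns (capacity, element indices) or None if infeasible."""
    best = None
    elems = list(range(len(element_forces)))
    for r in range(1, len(elems) + 1):
        for combo in combinations(elems, r):
            cap = sum(element_forces[i] for i in combo)
            if cap >= required_force and (best is None or cap < best[0]):
                best = (cap, combo)
    return best

# Hypothetical three-element bundle (N): small, medium, large actuators
forces = [10.0, 20.0, 40.0]
print(recruit(forces, 15.0))   # → (20.0, (1,)): medium element alone
print(recruit(forces, 65.0))   # → (70.0, (0, 1, 2)): full bundle
```

Keeping the active capacity close to the load is what reduces the throttling-valve losses discussed above.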

  15. Uncertainty importance measure for models with correlated normal variables

    International Nuclear Information System (INIS)

    Hao, Wenrui; Lu, Zhenzhou; Wei, Pengfei

    2013-01-01

In order to explore the contributions of correlated input variables to the variance of the model output, the contribution decomposition of the correlated input variables based on Mara's definition is investigated in detail. Taking a quadratic polynomial output without cross terms as an illustration, the solution of the contribution decomposition is derived analytically using statistical inference theory. After the analytical solutions are validated against numerical examples, they are applied to two engineering examples to demonstrate their wide applicability. The derived analytical solutions can be used directly to recognize the contributions of the correlated input variables in the case of a quadratic or linear polynomial output without cross terms, and the analytical inference method can be extended to higher order polynomial outputs. Additionally, the origins of the interaction contribution of the correlated inputs are analyzed, and comparisons of the existing contribution indices are presented, from which engineers can select suitable indices to obtain the necessary information. Finally, the degeneration of the correlated inputs to uncorrelated ones and some computational issues are discussed.
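For the simplest case considered here, a linear output Y = a^T X without cross terms and X jointly normal with covariance Sigma, the total variance is a^T Sigma a, and a covariance-based share for input i is a_i (Sigma a)_i / Var(Y); the shares sum to one and can be negative under correlation. This is a generic covariance decomposition for illustration, not necessarily Mara's exact indices.

```python
import numpy as np

# Linear model without cross terms: Y = a @ X, X ~ N(0, Sigma)
a = np.array([2.0, -1.0, 0.5])
Sigma = np.array([[1.0, 0.6, 0.0],
                  [0.6, 1.0, 0.3],
                  [0.0, 0.3, 1.0]])

var_y = a @ Sigma @ a                  # total output variance: a^T Sigma a
# Covariance-based contribution of each input: cov(a_i X_i, Y) / Var(Y)
shares = a * (Sigma @ a) / var_y       # note: can be negative under correlation
print(var_y, shares, shares.sum())
```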

  16. Investigation of clinical pharmacokinetic variability of an opioid antagonist through physiologically based absorption modeling.

    Science.gov (United States)

    Ding, Xuan; He, Minxia; Kulkarni, Rajesh; Patel, Nita; Zhang, Xiaoyu

    2013-08-01

Identifying the source of inter- and/or intrasubject variability in pharmacokinetics (PK) provides fundamental information for understanding the pharmacokinetic-pharmacodynamic relationship of a drug and for projecting its efficacy and safety in clinical populations. This identification process can be challenging given that a large number of potential causes could lead to PK variability. Here we present an integrated approach of physiologically based absorption modeling to investigate the root cause of the unexpectedly high PK variability of a Phase I clinical trial drug. LY2196044 exhibited high intersubject variability in the absorption phase of plasma concentration-time profiles in humans. This could not be explained by in vitro measurements of drug properties or by the excellent bioavailability with low variability observed in preclinical species. GastroPlus™ modeling suggested that the compound's optimal solubility and permeability characteristics would enable rapid and complete absorption in preclinical species and in humans. However, simulations of human plasma concentration-time profiles indicated that despite sufficient solubility and rapid dissolution of LY2196044 in humans, permeability and/or transit in the gastrointestinal (GI) tract may have been negatively affected. It was concluded that the clinical PK variability was potentially due to the drug's antagonism of opioid receptors, which affected its transit and absorption in the GI tract. Copyright © 2013 Wiley Periodicals, Inc.

  17. Changes in Southern Hemisphere circulation variability in climate change modelling experiments

    International Nuclear Information System (INIS)

    Grainger, Simon; Frederiksen, Carsten; Zheng, Xiaogu

    2007-01-01

The seasonal mean of a climate variable can be considered as a statistical random variable consisting of signal and noise components (Madden 1976). The noise component consists of internal intraseasonal variability and is not predictable on time-scales of a season or more ahead. The signal consists of slowly varying external and internal variability and is potentially predictable on seasonal time-scales. The method of Zheng and Frederiksen (2004) has been applied to monthly time series of 500 hPa geopotential height from models submitted to the Coupled Model Intercomparison Project (CMIP3) experiment to obtain covariance matrices of the intraseasonal and slow components of covariability for summer and winter. The Empirical Orthogonal Functions (EOFs) of the intraseasonal and slow covariance matrices for the second half of the 20th century are compared with those observed by Frederiksen and Zheng (2007). The leading EOF in summer and winter for both the intraseasonal and slow components of covariability is the Southern Annular Mode (see, e.g., Kiladis and Mo 1998). This is generally reproduced by the CMIP3 models, although with different variance amounts. The observed secondary intraseasonal covariability modes of wave 4 patterns in summer and wave 3 or blocking in winter are also generally seen in the models, although the actual spatial pattern differs. For the slow covariability, the models are less successful in reproducing the two observed ENSO modes, with generally only one of them being represented among the leading EOFs. However, most models reproduce the observed South Pacific wave pattern. The intraseasonal and slow covariance matrices of 500 hPa geopotential height under three climate change scenarios are also analysed and compared with those found for the second half of the 20th century. Through aggregating the results from a number of CMIP3 models, a consensus estimate of the changes in Southern Hemisphere variability, and their
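The EOF analysis underlying this kind of comparison can be sketched as an SVD of the anomaly matrix of a (time, space) field. The field below is synthetic, standing in for monthly 500 hPa geopotential height, with one dominant standing pattern plus noise.

```python
import numpy as np

def eofs(field):
    """EOFs of a (time, space) field via SVD of the anomaly matrix.
    Returns spatial patterns (rows of vt), principal-component time
    series, and the fraction of variance explained by each mode."""
    anom = field - field.mean(axis=0)            # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pcs = u * s                                  # PC time series
    explained = s**2 / np.sum(s**2)
    return vt, pcs, explained

# Synthetic field: one dominant standing wave pattern plus white noise
rng = np.random.default_rng(4)
nt, nx = 240, 50
pattern = np.sin(np.linspace(0, np.pi, nx))
field = np.outer(rng.normal(0, 3, nt), pattern) + rng.normal(0, 0.5, (nt, nx))
patterns, pcs, explained = eofs(field)
print(explained[:3])
```

The leading mode recovers the imposed pattern and dominates the explained variance, mirroring how the Southern Annular Mode dominates the observed covariance matrices.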

  18. Time-dependence in relativistic collisionless shocks: theory of the variable

    Energy Technology Data Exchange (ETDEWEB)

    Spitkovsky, A

    2004-02-05

    We describe results from time-dependent numerical modeling of the collisionless reverse shock terminating the pulsar wind in the Crab Nebula. We treat the upstream relativistic wind as composed of ions and electron-positron plasma embedded in a toroidal magnetic field, flowing radially outward from the pulsar in a sector around the rotational equator. The relativistic cyclotron instability of the ion gyrational orbit downstream of the leading shock in the electron-positron pairs launches outward propagating magnetosonic waves. Because of the fresh supply of ions crossing the shock, this time-dependent process achieves a limit-cycle, in which the waves are launched with periodicity on the order of the ion Larmor time. Compressions in the magnetic field and pair density associated with these waves, as well as their propagation speed, semi-quantitatively reproduce the behavior of the wisp and ring features described in recent observations obtained using the Hubble Space Telescope and the Chandra X-Ray Observatory. By selecting the parameters of the ion orbits to fit the spatial separation of the wisps, we predict the period of time variability of the wisps that is consistent with the data. When coupled with a mechanism for non-thermal acceleration of the pairs, the compressions in the magnetic field and plasma density associated with the optical wisp structure naturally account for the location of X-ray features in the Crab. We also discuss the origin of the high energy ions and their acceleration in the equatorial current sheet of the pulsar wind.

  19. Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors

    OpenAIRE

    Okuma, Takanori; Yasuura, Hiroto

    2001-01-01

This paper presents a real-time OS based on μITRON using a proposed voltage scheduling algorithm for variable voltage processors, which can vary the supply voltage dynamically. The proposed voltage scheduling algorithms assign a voltage level to each task dynamically in order to minimize energy consumption under timing constraints. Using the presented real-time OS, running tasks at low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...
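The energy argument can be sketched with the standard dynamic-power model E ≈ C·V²·cycles, together with the simplifying (hypothetical) assumption that attainable clock frequency scales linearly with supply voltage: stretching a task to its deadline lets the scheduler lower both frequency and voltage. All numbers below are illustrative, not from the paper.

```python
def dynamic_energy(cycles, voltage, c_eff=1.0):
    """Dynamic switching energy: E = C_eff * V^2 * cycles (normalized units)."""
    return c_eff * voltage ** 2 * cycles

def scaled_voltage(cycles, deadline, f_max, v_max):
    """Lowest voltage that still meets the deadline, assuming clock
    frequency scales linearly with supply voltage (a simplification)."""
    f_needed = min(cycles / deadline, f_max)   # required clock rate
    return v_max * f_needed / f_max

cycles, deadline = 2e6, 0.02                   # 2 Mcycles due in 20 ms
f_max, v_max = 2e8, 3.3                        # 200 MHz at 3.3 V
v = scaled_voltage(cycles, deadline, f_max, v_max)  # run just fast enough
e_scaled = dynamic_energy(cycles, v)
e_full = dynamic_energy(cycles, v_max)
print(v, e_scaled / e_full)                    # half speed → quarter energy
```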

  20. Continuous performance task in ADHD: Is reaction time variability a key measure?

    Science.gov (United States)

    Levy, Florence; Pipingas, Andrew; Harris, Elizabeth V; Farrow, Maree; Silberstein, Richard B

    2018-01-01

    To compare the use of the Continuous Performance Task (CPT) reaction time variability (intraindividual variability or standard deviation of reaction time), as a measure of vigilance in attention-deficit hyperactivity disorder (ADHD), and stimulant medication response, utilizing a simple CPT X-task vs an A-X-task. Comparative analyses of two separate X-task vs A-X-task data sets, and subgroup analyses of performance on and off medication were conducted. The CPT X-task reaction time variability had a direct relationship to ADHD clinician severity ratings, unlike the CPT A-X-task. Variability in X-task performance was reduced by medication compared with the children's unmedicated performance, but this effect did not reach significance. When the coefficient of variation was applied, severity measures and medication response were significant for the X-task, but not for the A-X-task. The CPT-X-task is a useful clinical screening test for ADHD and medication response. In particular, reaction time variability is related to default mode interference. The A-X-task is less useful in this regard.
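The coefficient of variation applied in the abstract is simply SD(RT)/mean(RT), which normalizes raw variability by overall speed. A sketch with hypothetical medicated and unmedicated go-trial RT samples (all distribution parameters invented for illustration):

```python
import numpy as np

def rt_variability(rts):
    """Return mean RT, SD of RT, and coefficient of variation (SD / mean)."""
    rts = np.asarray(rts, dtype=float)
    m, sd = rts.mean(), rts.std(ddof=1)
    return m, sd, sd / m

rng = np.random.default_rng(5)
off_med = rng.normal(450, 120, 200)   # hypothetical unmedicated RTs (ms)
on_med = rng.normal(430, 70, 200)     # hypothetical medicated RTs: tighter spread
_, sd_off, cv_off = rt_variability(off_med)
_, sd_on, cv_on = rt_variability(on_med)
print(cv_off, cv_on)
```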

  1. Discounting Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.
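A concrete instance of such a representation is exponential discounting of an outcome flow, V = integral over [0, T] of exp(-r t) x(t) dt. The sketch below evaluates it numerically with a trapezoidal rule; the exponential form is an illustrative choice, not the paper's general discounting function.

```python
import numpy as np

def discounted_value(times, outcomes, rate):
    """V = integral over [0, T] of exp(-rate * t) * x(t) dt,
    evaluated with the trapezoidal rule."""
    w = np.exp(-rate * np.asarray(times)) * np.asarray(outcomes)
    return float(np.sum((w[1:] + w[:-1]) * np.diff(times)) / 2.0)

t = np.linspace(0.0, 10.0, 2001)
x = np.ones_like(t)                    # a constant unit outcome flow
v = discounted_value(t, x, rate=0.1)
print(v)                               # analytic value: (1 - e^{-1}) / 0.1 ≈ 6.321
```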

  2. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic, as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  3. Expressing Environment Assumptions and Real-time Requirements for a Distributed Embedded System with Shared Variables

    DEFF Research Database (Denmark)

    Tjell, Simon; Fernandes, João Miguel

    2008-01-01

    In a distributed embedded system, it is often necessary to share variables among its computing nodes to allow the distribution of control algorithms. It is therefore necessary to include a component in each node that provides the service of variable sharing. For that type of component, this paper...... for the component. The CPN model can be used to validate the environment assumptions and the requirements. The validation is performed by execution of the model during which traces of events and states are automatically generated and evaluated against the requirements....

  4. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-10-31

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  6. Modelling the diurnal variability of SST and its vertical extent

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.; Donlon, Craig J.

    2014-01-01

Sea Surface Temperature (SST) is a key variable in air-sea interactions, partly controlling the oceanic uptake of CO2 and the heat exchange between the ocean and the atmosphere, amongst others. Satellite SSTs are representative of skin and sub-skin temperature, i.e. of the upper millimetres of the water column where most of the heat is absorbed and where the exchange of heat and momentum with the atmosphere occurs. During day-time and under favourable conditions of low winds and high insolation, diurnal warming of the upper layer poses challenges for validating and calibrating satellite sensors and merging SST time series. When radiometer signals, typically from satellites, are validated with in situ measurements from drifting and moored buoys, a general mismatch is found, associated with the different reference depth of each type of measurement. A generally preferred approach to bridge the gap ...

  7. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence for the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  8. On the intra-seasonal variability within the extratropics in the ECHAM3 general circulation model

    International Nuclear Information System (INIS)

    May, W.

    1994-01-01

First we consider the GCM's capability to reproduce the midlatitude variability on intra-seasonal time scales by a comparison with observational data (ECMWF analyses). Secondly we assess the possible influence of Sea Surface Temperatures on the intra-seasonal variability by comparing estimates obtained from different simulations performed with ECHAM3 with varying and fixed SST as boundary forcing. The intra-seasonal variability as simulated by ECHAM3 is underestimated over most of the Northern Hemisphere. While the contributions of the high-frequency transient fluctuations are reasonably well captured by the model, ECHAM3 fails to reproduce the observed level of low-frequency intra-seasonal variability. This is mainly due to the model's underestimation of the variability caused by the ultra-long planetary waves in the Northern Hemisphere midlatitudes. In the Southern Hemisphere midlatitudes, on the other hand, the intra-seasonal variability as simulated by ECHAM3 is generally underestimated in the area north of about 50° southern latitude, but overestimated at higher latitudes. This is the case for the contributions of both the high-frequency and the low-frequency transient fluctuations. Further, the model shows a strong tendency toward zonal symmetry, in particular with respect to the high-frequency transient fluctuations. While the two sets of simulations with varying and fixed Sea Surface Temperatures as boundary forcing reveal only small regional differences in the Southern Hemisphere, there is a strong response in the Northern Hemisphere. The contributions of the high-frequency transient fluctuations to the intra-seasonal variability are generally stronger in the simulations with fixed SST. Further, the Pacific storm track is shifted slightly poleward in this set of simulations. For the low-frequency intra-seasonal variability the model gives a strong, but regional, response to the interannual variations of the SST. (orig.)

  9. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  10. Modeling the variability of shapes of a human placenta.

    Science.gov (United States)

    Yampolsky, M; Salafia, C M; Shlakhter, O; Haas, D; Eucker, B; Thorp, J

    2008-09-01

Placentas are generally round/oval in shape, but "irregular" shapes are common. In the Collaborative Perinatal Project data, irregular shapes were associated with lower birth weight for placental weight, suggesting that variably shaped placentas have altered function. (I) Using a 3D one-parameter model of placental vascular growth based on Diffusion Limited Aggregation (an accepted model for generating highly branched fractals), models were run with a branching density growth parameter either fixed or perturbed at either 5-7% or 50% of model growth. (II) In a data set with detailed measures of 1207 placental perimeters, radial standard deviations of placental shapes were calculated from the umbilical cord insertion and from the centroid of the shape (a biologically arbitrary point). These two were compared to the difference between the observed scaling exponent and the Kleiber scaling exponent (0.75), considered optimal for vascular fractal transport systems. Spearman's rank correlation considered p<0.05 significant; radial standard deviation from the centroid was associated with differences from the Kleiber exponent (p=0.006). A dynamical DLA model recapitulates multilobate and "star" placental shapes via changing fractal branching density. We suggest that (1) irregular placental outlines reflect deformation of the underlying placental fractal vascular network, (2) such irregularities in placental outline indicate sub-optimal branching structure of the vascular tree, and (3) this accounts for the lower birth weight observed in non-round/oval placentas in the Collaborative Perinatal Project.

  11. Evidence for a time-invariant phase variable in human ankle control.

    Directory of Open Access Journals (Sweden)

    Robert D Gregg

    Full Text Available Human locomotion is a rhythmic task in which patterns of muscle activity are modulated by state-dependent feedback to accommodate perturbations. Two popular theories have been proposed for the underlying embodiment of phase in the human pattern generator: a time-dependent internal representation or a time-invariant feedback representation (i.e., reflex mechanisms). In either case the neuromuscular system must update or represent the phase of locomotor patterns based on the system state, which can include measurements of hundreds of variables. However, a much simpler representation of phase has emerged in recent designs for legged robots, which control joint patterns as functions of a single monotonic mechanical variable, termed a phase variable. We propose that human joint patterns may similarly depend on a physical phase variable, specifically the heel-to-toe movement of the Center of Pressure under the foot. We found that when the ankle is unexpectedly rotated to a position it would have encountered later in the step, the Center of Pressure also shifts forward to the corresponding later position, and the remaining portion of the gait pattern ensues. This phase shift suggests that the progression of the stance ankle is controlled by a biomechanical phase variable, motivating future investigations of phase variables in human locomotor control.

  12. Variability of African Farming Systems from Phenological Analysis of NDVI Time Series

    Science.gov (United States)

    Vrieling, Anton; deBeurs, K. M.; Brown, Molly E.

    2011-01-01

    Food security exists when people have access to sufficient, safe and nutritious food at all times to meet their dietary needs. The natural resource base is one of the many factors affecting food security. Its variability and decline create problems for local food production. In this study we characterize vegetation phenology for sub-Saharan Africa and assess variability and trends of phenological indicators based on NDVI time series from 1982 to 2006. We focus on cumulated NDVI over the season (cumNDVI) which is a proxy for net primary productivity. Results are aggregated at the level of major farming systems, while also determining spatial variability within farming systems. High temporal variability of cumNDVI occurs in semiarid and subhumid regions. The results show a large area of positive cumNDVI trends between Senegal and South Sudan. These correspond to positive CRU rainfall trends and relate to recovery after the droughts of the 1980s. We find significant negative cumNDVI trends near the south coast of West Africa (Guinea coast) and in Tanzania. For each farming system, causes of change and variability are discussed based on available literature (Appendix A). Although food security comprises more than the local natural resource base, our results can provide an input for food security analysis by identifying zones of high variability or downward trends. Farming systems are found to be a useful level of analysis. Diversity and trends found within farming system boundaries underline that farming systems are dynamic.

  13. Analytical and Numerical solutions of a nonlinear alcoholism model via variable-order fractional differential equations

    Science.gov (United States)

    Gómez-Aguilar, J. F.

    2018-03-01

    In this paper, we analyze an alcoholism model which involves the impact of Twitter via Liouville-Caputo and Atangana-Baleanu-Caputo fractional derivatives with constant- and variable-order. Two fractional mathematical models are considered, with and without delay. Special solutions using an iterative scheme via the Laplace and Sumudu transforms were obtained. We studied the uniqueness and existence of the solutions employing the fixed point theorem. The generalized model with variable-order was solved numerically via the Adams method and the Adams-Bashforth-Moulton scheme. Stability and convergence of the numerical solutions are presented in detail. Numerical examples of the approximate solutions are provided to show that the numerical methods are computationally efficient. By including both fractional derivatives and finite time delays in the alcoholism model, we believe that we have established a more complete and more realistic description of the dynamics of alcoholism and of the spread of drinking.
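The Adams-Bashforth-Moulton scheme for Caputo fractional ODEs is the predictor-corrector method of Diethelm and co-workers. A minimal constant-order sketch for D^α y = f(t, y) (the paper's variable-order and delayed models are more involved; the test equation and step size are illustrative):

```python
from math import gamma

def fracode_abm(f, y0, alpha, h, n):
    """Fractional Adams (predictor-corrector) method for the Caputo
    fractional ODE  D^alpha y = f(t, y), y(0) = y0, with 0 < alpha <= 1."""
    y = [y0]
    fs = [f(0.0, y0)]
    c_pred = h**alpha / gamma(alpha + 1)
    c_corr = h**alpha / gamma(alpha + 2)
    for k in range(n):
        t_next = (k + 1) * h
        # predictor: fractional rectangle rule over the full history
        pred = y0 + c_pred * sum(((k + 1 - j)**alpha - (k - j)**alpha) * fs[j]
                                 for j in range(k + 1))
        # corrector: fractional trapezoidal weights
        a0 = k**(alpha + 1) - (k - alpha) * (k + 1)**alpha
        s = a0 * fs[0] + sum(((k - j + 2)**(alpha + 1) + (k - j)**(alpha + 1)
                              - 2 * (k - j + 1)**(alpha + 1)) * fs[j]
                             for j in range(1, k + 1))
        y_next = y0 + c_corr * (s + f(t_next, pred))
        y.append(y_next)
        fs.append(f(t_next, y_next))
    return y

# relaxation equation D^alpha y = -y; alpha = 1 recovers exp(-t)
sol = fracode_abm(lambda t, y: -y, 1.0, alpha=1.0, h=0.01, n=100)
sol_half = fracode_abm(lambda t, y: -y, 1.0, alpha=0.5, h=0.01, n=100)
```

Note the sums over the whole history: the memory of the fractional derivative is what makes such schemes more expensive than their integer-order counterparts.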

  14. A formal method for identifying distinct states of variability in time-varying sources: SGR A* as an example

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, L.; Witzel, G.; Ghez, A. M. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Longstaff, F. A. [UCLA Anderson School of Management, University of California, Los Angeles, CA 90095-1481 (United States)

    2014-08-10

    Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. The methodology reported here will prove useful for assessing the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.
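The hidden-state construction described above is in the spirit of a hidden Markov model, whose likelihood is evaluated with the forward algorithm. A minimal Gaussian-HMM sketch (the two "noise dominated"/"source dominated" states echo the abstract, but every numerical parameter below is invented for illustration):

```python
import math

def hmm_loglik(obs, trans, means, sds, init):
    """Log-likelihood of an observation sequence under a Gaussian hidden
    Markov model, computed with the scaled forward algorithm.
    trans[i][j] = P(state j at t+1 | state i at t)."""
    n = len(init)

    def emit(x, i):  # Gaussian emission density for state i
        z = (x - means[i]) / sds[i]
        return math.exp(-0.5 * z * z) / (sds[i] * math.sqrt(2.0 * math.pi))

    alpha = [init[i] * emit(obs[0], i) for i in range(n)]
    loglik = 0.0
    for x in obs[1:]:
        c = sum(alpha)                      # scale to avoid underflow
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit(x, j)
                 for j in range(n)]
    return loglik + math.log(sum(alpha))

# two hypothetical states: low-flux "noise" vs. high-flux "source"
ll = hmm_loglik([0.1, 0.2, 2.1, 2.3, 0.0],
                trans=[[0.9, 0.1], [0.2, 0.8]],
                means=[0.0, 2.0], sds=[0.5, 0.5], init=[0.5, 0.5])
```

Comparing such log-likelihoods for one-state versus two-state parameterizations is one simple way to ask, as the authors do, how many intrinsic states a flux series supports.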

  15. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher. Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Introduction: Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales.

  16. Seasonal variability of the Red Sea, from GRACE time-variable gravity and altimeter sea surface height measurements

    Science.gov (United States)

    Wahr, John; Smeed, David; Leuliette, Eric; Swenson, Sean

    2014-05-01

    Seasonal variability of sea surface height and mass within the Red Sea occurs mostly through the exchange of heat with the atmosphere and wind-driven inflow and outflow of water through the Strait of Bab el Mandab, which opens into the Gulf of Aden to the south. The seasonal effects of precipitation and evaporation, of water exchange through the Suez Canal to the north, and of runoff from the adjacent land, are all small. The flow through the Bab el Mandab involves a net mass transfer into the Red Sea during the winter and a net transfer out during the summer. But that flow has a multi-layer pattern, so that in the summer there is actually an influx of cool water at intermediate (~100 m) depths. Thus, summer water in the southern Red Sea is warmer near the surface due to higher air temperatures, but cooler at intermediate depths (especially in the far south). Summer water in the northern Red Sea experiences warming by air-sea exchange only. The temperature profile affects the water density, which impacts the sea surface height but has no effect on vertically integrated mass. Here, we study this seasonal cycle by combining GRACE time-variable mass estimates, altimeter (Jason-1, Jason-2, and Envisat) measurements of sea surface height, and steric sea surface height contributions derived from depth-dependent, climatological values of temperature and salinity obtained from the World Ocean Atlas. We find good consistency, particularly in the northern Red Sea, between these three data types. Among the general characteristics of our results are: (1) the mass contributions to seasonal sea surface height variations are much larger than the steric contributions; (2) the mass signal is largest in winter, consistent with winds pushing water into the Red Sea through the Strait of Bab el Mandab in winter, and out during the summer; and (3) the steric signal is largest in summer, consistent with summer sea surface warming.

  17. Predictor variables for a half marathon race time in recreational male runners

    Directory of Open Access Journals (Sweden)

    Rüst CA

    2011-08-01

    Full Text Available Christoph Alexander Rüst1, Beat Knechtle1,2, Patrizia Knechtle2, Ursula Barandun1, Romuald Lepers3, Thomas Rosemann1; 1Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland; 2Gesundheitszentrum St Gallen, St Gallen, Switzerland; 3INSERM U887, University of Burgundy, Faculty of Sport Sciences, Dijon, France. Abstract: The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict a half marathon race time for future novice recreational male half marathoners. Eighty-four male finishers in the ‘Half Marathon Basel’ completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiacal (r = 0.36) and medial calf skin fold (r = 0.53) were related to race time. For the variables of training and previous experience, the running speed of the training sessions (r = –0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and running speed during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r² = 0.44): Race time (min) = 72.91 + 3.045 × (body mass index, kg/m²) – 3.884 × (speed in running during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable. Keywords: anthropometry, body fat, skin-folds, training, endurance
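The reported regression is straightforward to apply; a sketch (coefficients taken from the abstract, example inputs invented):

```python
def predicted_half_marathon_time(bmi, training_speed_kmh):
    """Predicted half marathon race time in minutes from the abstract's
    regression (r^2 = 0.44):
    72.91 + 3.045 * BMI (kg/m^2) - 3.884 * training running speed (km/h)."""
    return 72.91 + 3.045 * bmi - 3.884 * training_speed_kmh

# e.g. a hypothetical runner with BMI 23 kg/m^2 who trains at 12 km/h
t = predicted_half_marathon_time(23, 12)
```

With r² = 0.44, over half the variance is unexplained, so such a prediction is a rough planning aid rather than an individual forecast.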

  18. Study of solar radiation prediction and modeling of relationships between solar radiation and meteorological variables

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Zhao, Na; Zeng, Xiaofan; Yan, Dong

    2015-01-01

    Highlights: • We investigate relationships between solar radiation and meteorological variables. • A strong relationship exists between solar radiation and sunshine duration. • Daily global radiation can be estimated accurately with ARMAX–GARCH models. • MGARCH model was applied to investigate time-varying relationships. - Abstract: The traditional approaches that employ the correlations between solar radiation and other measured meteorological variables are commonly utilized in studies. It is important to investigate the time-varying relationships between meteorological variables and solar radiation to determine which variables have the strongest correlations with solar radiation. In this study, the nonlinear autoregressive moving average with exogenous variable–generalized autoregressive conditional heteroscedasticity (ARMAX–GARCH) and multivariate GARCH (MGARCH) time-series approaches were applied to investigate the associations between solar radiation and several meteorological variables. For these investigations, the long-term daily global solar radiation series measured at three stations from January 1, 2004 until December 31, 2007 were used. Stronger relationships were observed between global solar radiation and sunshine duration than between solar radiation and temperature difference. The results show that 82–88% of the temporal variations of the global solar radiation were captured by the sunshine-duration-based ARMAX–GARCH models and 55–68% of daily variations were captured by the temperature-difference-based ARMAX–GARCH models. The advantages of the ARMAX–GARCH models were also confirmed by comparison with Auto-Regressive and Moving Average (ARMA) and neural network (ANN) models in the estimation of daily global solar radiation. The strong heteroscedastic persistency of the global solar radiation series was revealed by the AutoRegressive Conditional Heteroscedasticity (ARCH) and Generalized ARCH (GARCH) models.
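The "heteroscedastic persistency" these models detect is volatility clustering, which even the simplest ARCH(1) process reproduces. A minimal simulation (parameter values are illustrative, not the fitted ones):

```python
import numpy as np

def simulate_arch1(omega, alpha, n, seed=0):
    """Simulate an ARCH(1) series: e_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * e_{t-1}^2 and z_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    e = np.empty(n)
    e[0] = z[0] * np.sqrt(omega / (1.0 - alpha))   # start at unconditional sd
    for t in range(1, n):
        e[t] = z[t] * np.sqrt(omega + alpha * e[t - 1] ** 2)
    return e

e = simulate_arch1(omega=0.2, alpha=0.5, n=200_000)
# the unconditional variance is omega / (1 - alpha) = 0.4, and squared
# values are positively autocorrelated: calm and turbulent stretches cluster
```

Fitting such a variance equation alongside the ARMAX mean equation is what lets the authors model days of persistently noisy radiation rather than assuming constant error variance.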

  19. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

    Directory of Open Access Journals (Sweden)

    Wang Xiaolong

    2016-01-01

    Full Text Available Large complex thin-walled guide rails have a complicated structure and non-uniform, low rigidity. Traditional cutting simulations are time-consuming because of the huge computation involved, especially for large workpieces. To solve these problems, a more efficient variable stiffness analysis model has been proposed, which can obtain quantitative stiffness values of the machining surface. By applying simulated cutting forces at sampling points using the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. The variable stiffness matrix is then proposed by analyzing the multi-direction coupled variable stiffness rules. Combined with the cutting force values in the three directions, the reasonability of existing processing parameters can be verified and optimized cutting parameters can be designed.

  20. [Hypothesis on the equilibrium point and variability of amplitude, speed and time of single-joint movement].

    Science.gov (United States)

    Latash, M; Gottleib, G

    1990-01-01

    Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda-model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts'-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.
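The speed-accuracy trade-off referred to above is classically summarized by Fitts' law, MT = a + b·log2(2A/W). A sketch (the constants a and b are hypothetical; in practice they are fitted per device and population):

```python
from math import log2

def fitts_mt(amplitude, width, a=0.1, b=0.15):
    """Fitts' law movement time (s): MT = a + b * ID, where the index of
    difficulty ID = log2(2A/W) is measured in bits.
    a and b are empirical constants (hypothetical values here)."""
    return a + b * log2(2.0 * amplitude / width)

# doubling the amplitude at fixed target width adds one bit of difficulty,
# i.e. exactly b seconds of predicted movement time
delta = fitts_mt(16, 2) - fitts_mt(8, 2)
```

Experiments like the one in the abstract are interesting precisely because they probe when observed movements deviate from this logarithmic trade-off.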

  1. Survey of time preference, delay discounting models

    Directory of Open Access Journals (Sweden)

    John R. Doyle

    2013-03-01

    Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson; Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade-off model.
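The two simplest families in the survey, Exponential and Hyperbolic, already show the key behavioral difference. A sketch (the discount rate k is illustrative):

```python
from math import exp

def exponential_discount(amount, k, delay):
    """V = A * exp(-k * D): constant discount rate, time-consistent."""
    return amount * exp(-k * delay)

def hyperbolic_discount(amount, k, delay):
    """V = A / (1 + k * D): the effective rate falls with delay,
    which can produce preference reversals."""
    return amount / (1.0 + k * delay)

# with the same nominal k, the hyperbolic curve decays far more slowly
for d in (1, 10, 50):
    print(d, exponential_discount(100, 0.1, d), hyperbolic_discount(100, 0.1, d))
```

At short delays the two curves nearly coincide; at long delays the exponential value collapses while the hyperbolic value declines only polynomially, which is the pattern human choice data tend to favor.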

  2. Use of variability modes to evaluate AR4 climate models over the Euro-Atlantic region

    Energy Technology Data Exchange (ETDEWEB)

    Casado, M.J.; Pastor, M.A. [Agencia Estatal de Meteorologia (AEMET), Madrid (Spain)

    2012-01-15

    This paper analyzes the ability of the multi-model simulations from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) to simulate the main leading modes of variability over the Euro-Atlantic region in winter: the North-Atlantic Oscillation (NAO), the Scandinavian mode (SCAND), the East/Atlantic Oscillation (EA) and the East Atlantic/Western Russia mode (EA/WR). These modes of variability have been evaluated both spatially, by analyzing the intensity and location of their anomaly centres, as well as temporally, by focusing on the probability density functions and e-folding time scales. The choice of variability modes as a tool for climate model assessment can be justified by the fact that modes of variability determine local climatic conditions and their likely change may have important implications for future climate changes. It is found that all the models considered are able to simulate reasonably well these four variability modes, the SCAND being the mode which is best spatially simulated. From a temporal point of view the NAO and SCAND modes are the best simulated. UKMO-HadGEM1 and CGCM3.1(T63) are the models best at reproducing spatial characteristics, whereas CCSM3 and CGCM3.1(T63) are the best ones with regard to the temporal features. GISS-AOM is the model showing the worst performance, in terms of both spatial and temporal features. These results may bring new insight into the selection and use of specific models to simulate Euro-Atlantic climate, with some models being clearly more successful in simulating patterns of temporal and spatial variability than others. (orig.)

  3. An agent-based model of cellular dynamics and circadian variability in human endotoxemia.

    Directory of Open Access Journals (Sweden)

    Tung T Nguyen

    Full Text Available As cellular variability and circadian rhythmicity play critical roles in immune and inflammatory responses, we present in this study an agent-based model of human endotoxemia to examine the interplay between circadian controls, cellular variability and stochastic dynamics of inflammatory cytokines. The model is qualitatively validated by its ability to reproduce circadian dynamics of inflammatory mediators and critical inflammatory responses after endotoxin administration in vivo. Novel computational concepts are proposed to characterize the cellular variability and synchronization of inflammatory cytokines in a population of heterogeneous leukocytes. Our results suggest that there is a decrease in cell-to-cell variability of inflammatory cytokines while their synchronization is increased after endotoxin challenge. Model parameters that are responsible for IκB production stimulated by NFκB activation and for the production of anti-inflammatory cytokines have large impacts on system behaviors. Additionally, examining time-dependent systemic responses revealed that the system is least vulnerable to endotoxin in the early morning and most vulnerable around midnight. Although much remains to be explored, proposed computational concepts and the model we have pioneered will provide important insights for future investigations and extensions, especially for single-cell studies to discover how cellular variability contributes to clinical implications.

  4. Estimating inter-annual variability in winter wheat sowing dates from satellite time series in Camargue, France

    Science.gov (United States)

    Manfron, Giacinto; Delmotte, Sylvestre; Busetto, Lorenzo; Hossard, Laure; Ranghetti, Luigi; Brivio, Pietro Alessandro; Boschetti, Mirco

    2017-05-01

    Crop simulation models are commonly used to forecast the performance of cropping systems under different hypotheses of change. Their use on a regional scale is generally constrained, however, by a lack of information on the spatial and temporal variability of environment-related input variables (e.g., soil) and agricultural practices (e.g., sowing dates) that influence crop yields. Satellite remote sensing data can shed light on such variability by providing timely information on crop dynamics and conditions over large areas. This paper proposes a method for analyzing time series of MODIS satellite data in order to estimate the inter-annual variability of winter wheat sowing dates. A rule-based method was developed to automatically identify a reliable sample of winter wheat field time series, and to infer the corresponding sowing dates. The method was designed for a case study in the Camargue region (France), where winter wheat is characterized by vernalization, as in other temperate regions. The detection criteria were chosen on the grounds of agronomic expertise and by analyzing high-confidence time-series vegetation index profiles for winter wheat. This automatic method identified the target crop on more than 56% (four-year average) of the cultivated areas, with low commission errors (11%). It also captured the seasonal variability in sowing dates with errors of ±8 and ±16 days in 46% and 66% of cases, respectively. Extending the analysis to the years 2002-2012 showed that sowing in the Camargue was usually done on or around November 1st (±4 days). Comparing inter-annual sowing date variability with the main local agro-climatic drivers showed that the type of preceding crop and the weather conditions during the summer season before the wheat sowing had a prominent role in influencing winter wheat sowing dates.
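A rule-based estimate of season start from a vegetation-index profile can be sketched as a threshold crossing on a smoothed series (the synthetic NDVI curve, 3-point smoothing and 20% threshold below are illustrative assumptions, not the paper's actual detection criteria):

```python
import math

def moving_average(xs, w=3):
    """Simple centered moving average with shrinking windows at the edges."""
    half = w // 2
    return [sum(xs[max(0, i - half):i + half + 1])
            / len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

def detect_green_up(ndvi, frac=0.2):
    """First index before the seasonal peak where smoothed NDVI rises
    above min + frac * (max - min) while increasing."""
    s = moving_average(ndvi)
    lo, hi = min(s), max(s)
    thresh = lo + frac * (hi - lo)
    peak = s.index(hi)
    for i in range(1, peak + 1):
        if s[i] >= thresh and s[i] > s[i - 1]:
            return i
    return None

# synthetic dekadal profile: flat baseline, sigmoid green-up, senescence
ndvi = [0.2 + 0.5 / (1.0 + math.exp(-(t - 12))) * math.exp(-((t - 20) / 12.0) ** 2)
        for t in range(36)]
start = detect_green_up(ndvi)
```

An inferred sowing date would then be this green-up index shifted back by a crop-specific emergence lag, which is one reason such methods need the agronomic expertise the authors mention.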

  5. Uniform Estimate of the Finite-Time Ruin Probability for All Times in a Generalized Compound Renewal Risk Model

    Directory of Open Access Journals (Sweden)

    Qingwu Gao

    2012-01-01

    Full Text Available We discuss the uniformly asymptotic estimate of the finite-time ruin probability for all times in a generalized compound renewal risk model, where the interarrival times of successive accidents and all the claim sizes caused by an accident are two sequences of random variables following a wide dependence structure. This wide dependence structure allows random variables to be either negatively dependent or positively dependent.
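In the classical independent special case (compound Poisson arrivals, exponential claim sizes), the finite-time ruin probability is easy to estimate by Monte Carlo; a sketch (all parameters illustrative; the paper's wide dependence structure is not simulated here):

```python
import random

def ruin_probability(u, c, lam, mean_claim, horizon, n_paths=20_000, seed=0):
    """Estimate P(ruin before `horizon`) for the surplus process
    U(t) = u + c*t - S(t), with Poisson(lam) claim arrivals and
    exponentially distributed claim sizes. Ruin can only occur at claim times."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, float(u)
        while True:
            w = rng.expovariate(lam)            # waiting time to next claim
            t += w
            if t > horizon:
                break                            # survived the horizon
            surplus += c * w                     # premium income since last claim
            surplus -= rng.expovariate(1.0 / mean_claim)
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

p_poor = ruin_probability(u=0, c=1.1, lam=1.0, mean_claim=1.0, horizon=10.0)
p_rich = ruin_probability(u=5, c=1.1, lam=1.0, mean_claim=1.0, horizon=10.0)
```

Asymptotic results like the one in the abstract matter because, for large initial surplus u, ruin becomes a rare event and plain Monte Carlo of this kind becomes prohibitively expensive.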

  6. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a particular…

  7. A search for time variability and its possible regularities in linear polarization of Be stars

    International Nuclear Information System (INIS)

    Huang, L.; Guo, Z.H.; Hsu, J.C.; Huang, L.

    1989-01-01

    Linear polarization measurements are presented for 14 Be stars obtained at McDonald Observatory during four observing runs from June to November of 1983. Methods of observation and data reduction are described. Seven of the eight program stars which were observed on six or more nights exhibited obvious polarimetric variations on time-scales of days or months. The incidence is estimated as 50% and may be as high as 93%. No connection could be found between polarimetric variability and rapid periodic light or spectroscopic variability for our stars. Ultra-rapid variability on time-scales of minutes was searched for, with negative results. In all cases the position angles also show variations, indicating that the axis of symmetry of the circumstellar envelope changes its orientation in space. For the Be binary CX Dra the variations in polarization seem to have a period which is just half of the orbital period.

  8. On the physical processes which lie at the bases of time variability of GRBs

    International Nuclear Information System (INIS)

    Ruffini, R.; Bianco, C. L.; Fraschetti, F.; Xue, S-S.

    2001-01-01

    The relative-space-time-transformation (RSTT) paradigm and the interpretation of the burst-structure (IBS) paradigm are applied to probe the origin of the time variability of GRBs. Again GRB 991216 is used as a prototypical case, thanks to the precise data from the CGRO, RXTE and Chandra satellites. It is found that, with the exception of the relatively inconspicuous but scientifically very important signal originating from the initial proper gamma ray burst (P-GRB), all the other spikes and time variabilities can be explained by the interaction of the accelerated-baryonic-matter pulse with inhomogeneities in the interstellar matter. This can be demonstrated by using the RSTT and IBS paradigms to trace a typical spike observed in arrival time back to the corresponding one in laboratory time. Using these paradigms, the identification of the physical nature of the time variability of GRBs can be made most convincingly. The dependence of (a) the intensities of the afterglow, (b) the spike amplitudes and (c) the actual time structure on the Lorentz gamma factor of the accelerated-baryonic-matter pulse is made explicit. In principle it is possible to read off from the spike structure the detailed density contrast of the interstellar medium in the host galaxy, even at very high redshift.

  9. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another relevant weighting factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue with the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
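The computationally cheap bootstrap error estimate can be sketched for the simplest sampling-based BME estimator, the arithmetic mean of likelihoods over prior draws (the prior, likelihood and sample sizes below are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def bme_with_error(datum, sigma, n_prior=50_000, n_boot=200, seed=0):
    """Sampling-based Bayesian model evidence: the arithmetic mean of the
    Gaussian likelihood over prior draws, plus a bootstrap standard error."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_prior)          # prior: theta ~ N(0, 1)
    lik = (np.exp(-0.5 * ((datum - theta) / sigma) ** 2)
           / (sigma * np.sqrt(2 * np.pi)))
    bme = lik.mean()
    # bootstrap: resample the likelihood values and re-average
    boot = rng.choice(lik, size=(n_boot, n_prior), replace=True).mean(axis=1)
    return bme, boot.std()

bme, se = bme_with_error(datum=0.5, sigma=1.0)
# analytic check for this toy model: the marginal of the datum is N(0, 1 + sigma^2)
```

A slow model allows fewer prior draws per time budget, so its bootstrap standard error grows, which is exactly the sampling-error imbalance the abstract proposes to fold into the model weights.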

  10. Convolutions of Heavy Tailed Random Variables and Applications to Portfolio Diversification and MA(1) Time Series

    NARCIS (Netherlands)

    J.L. Geluk (Jaap); L. Peng (Liang); C.G. de Vries (Casper)

    1999-01-01

    textabstractThe paper characterizes first and second order tail behavior of convolutions of i.i.d. heavy tailed random variables with support on the real line. The result is applied to the problem of risk diversification in portfolio analysis and to the estimation of the parameter in a MA(1) model.
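For i.i.d. heavy (e.g. Pareto) tails, the first-order result is P(X1 + X2 > x) ≈ 2·P(X1 > x) for large x: the sum is large mainly because a single summand is. A Monte Carlo check (tail index and threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.5, 400_000
u = rng.uniform(size=(2, n))
x = u ** (-1.0 / alpha)   # standard Pareto: P(X > t) = t**(-alpha) for t >= 1

t = 100.0                 # far in the tail: P(X > 100) = 1e-3 here
p_single = (x[0] > t).mean()
p_sum = (x[0] + x[1] > t).mean()
ratio = p_sum / p_single  # should be close to 2
```

This "single big jump" behavior is why diversifying across heavy-tailed positions reduces extreme risk far less than Gaussian intuition suggests.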

  11. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given.

  12. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.
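A TAR model with an asymmetric ARCH error term of the kind described can be simulated directly (the regime coefficients and variance parameters below are illustrative; asymmetry enters through the sign-dependent variance term):

```python
import numpy as np

def simulate_tar_aarch(n=1000, seed=0):
    """y_t follows a 2-regime TAR(1) (regime chosen by the sign of y_{t-1});
    its shock e_t has an asymmetric ARCH(1) variance in which negative
    shocks raise next-period volatility more than positive ones."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    e_prev = 0.0
    for t in range(1, n):
        # asymmetric ARCH(1) variance: extra loading on negative shocks
        sigma2 = 0.1 + (0.3 + 0.3 * (e_prev < 0)) * e_prev ** 2
        e = np.sqrt(sigma2) * rng.standard_normal()
        # threshold autoregression: coefficient switches at y_{t-1} = 0
        phi = 0.7 if y[t - 1] < 0 else -0.3
        y[t] = phi * y[t - 1] + e
        e_prev = e
    return y

y = simulate_tar_aarch()
```

Estimation then amounts to choosing the threshold and fitting the regime and variance parameters, typically by (quasi-)maximum likelihood.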

  13. Time-variable gravity fields derived from GPS tracking of Swarm

    Czech Academy of Sciences Publication Activity Database

    Bezděk, Aleš; Sebera, Josef; da Encarnacao, J.T.; Klokočník, Jaroslav

    2016-01-01

    Roč. 205, č. 3 (2016), s. 1665-1669 ISSN 0956-540X R&D Projects: GA MŠk LG14026; GA ČR GA13-36843S Institutional support: RVO:67985815 Keywords : satellite geodesy * time variable gravity * global change from geodesy Subject RIV: DD - Geochemistry Impact factor: 2.414, year: 2016

  14. Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales

    Science.gov (United States)

    Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.

    2009-01-01

    The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…

  15. Resolving structural variability in network models and the brain.

    Directory of Open Access Journals (Sweden)

    Florian Klimm

    2014-03-01

    Full Text Available Large-scale white matter pathways crisscrossing the cortex create a complex pattern of connectivity that underlies human cognitive function. Generative mechanisms for this architecture have been difficult to identify in part because little is known in general about mechanistic drivers of structured networks. Here we contrast network properties derived from diffusion spectrum imaging data of the human brain with 13 synthetic network models chosen to probe the roles of physical network embedding and temporal network growth. We characterize both the empirical and synthetic networks using familiar graph metrics, but presented here in a more complete statistical form, as scatter plots and distributions, to reveal the full range of variability of each measure across scales in the network. We focus specifically on the degree distribution, degree assortativity, hierarchy, topological Rentian scaling, and topological fractal scaling, in addition to several summary statistics, including the mean clustering coefficient, the shortest path-length, and the network diameter. The models are investigated in a progressive, branching sequence, aimed at capturing different elements thought to be important in the brain, and range from simple random and regular networks to models that incorporate specific growth rules and constraints. We find that synthetic models that constrain the network nodes to be physically embedded in anatomical brain regions tend to produce distributions that are most similar to the corresponding measurements for the brain. We also find that network models hardcoded to display one network property (e.g., assortativity) do not in general simultaneously display a second (e.g., hierarchy). This relative independence of network properties suggests that multiple neurobiological mechanisms might be at play in the development of human brain network architecture. Together, the network models that we develop and employ provide a potentially useful...

  16. Predictor variables for half marathon race time in recreational female runners

    OpenAIRE

    Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rosemann, Thomas; Lepers, Romuald

    2011-01-01

    INTRODUCTION: The relationship between skin-fold thickness and running performance has been investigated from 100 m to the marathon distance, except the half marathon distance. OBJECTIVE: To investigate whether anthropometric characteristics or training practices were related to race time in 42 recreational female half marathoners, to determine the predictor variables of half-marathon race time and to inform future novice female half marathoners. METHODS: Observational field study at the ‘Half ...

  17. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment

    OpenAIRE

    Bunce, D; Haynes, BI; Lord, SR; Gschwind, YJ; Kochan, NA; Reppermund, S; Brodaty, H; Sachdev, PS; Delbaere, K

    2017-01-01

    Background: Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI)...

  18. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  19. Bounds of Double Integral Dynamic Inequalities in Two Independent Variables on Time Scales

    Directory of Open Access Journals (Sweden)

    S. H. Saker

    2011-01-01

    Full Text Available Our aim in this paper is to establish some explicit bounds of the unknown function in a certain class of nonlinear dynamic inequalities in two independent variables on time scales which are unbounded above. These bounds on the one hand generalize existing results and on the other hand furnish a handy tool for the study of qualitative as well as quantitative properties of solutions of partial dynamic equations on time scales. Some examples are considered to demonstrate the applications of the results.

  20. Short time-scale optical variability properties of the largest AGN sample observed with Kepler/K2

    Science.gov (United States)

    Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.

    2018-05-01

    We present the first short time-scale (~hours to days) optical variability study of a large sample of active galactic nuclei (AGNs) observed with the Kepler/K2 mission. The sample contains 252 AGN observed over four campaigns with ~30 min cadence selected from the Million Quasar Catalogue with R magnitude <19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of our sample. A variety of power-law slopes were found indicating that there is not a universal slope for all AGNs. We find that the rest-frame amplitude variability in the frequency range of 6 × 10⁻⁶ to 10⁻⁴ Hz varies from 1 to 10 per cent with an average of 1.7 per cent. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift. We attribute this effect to the known `bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler data. This study also enables us to distinguish between Seyferts and blazars and confirm AGN candidates. For our study, we have compared results obtained from light curves extracted using different aperture sizes and with and without detrending. We find that limited detrending of the optimal photometric precision light curve is the best approach, although some systematic effects still remain present.
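
    The core PSD fitting step can be sketched as follows: simulate an evenly sampled light curve with a known power-law spectrum, compute its periodogram, and fit the slope in log-log space. This is a simplified stand-in for the Monte Carlo procedure used in the paper; the cadence and slope are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def power_law_noise(n, slope=-2.0, dt=1800.0):
    # Shape random phases with a power-law amplitude in Fourier space
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (slope / 2)
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    return np.fft.irfft(amp * np.exp(1j * phases), n=n)

def psd_slope(x, dt=1800.0):
    # Least-squares power-law fit to the periodogram in log-log space
    freqs = np.fft.rfftfreq(x.size, d=dt)[1:]   # drop the DC bin
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    return np.polyfit(np.log10(freqs), np.log10(power), 1)[0]

lc = power_law_noise(4096)   # ~30-min cadence, K2-like sampling
est = psd_slope(lc)          # recovers a slope close to the input -2
```

    For unevenly sampled ground-based data, a Lomb-Scargle periodogram would replace the FFT-based estimate.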

  1. Constitutive model with time-dependent deformations

    DEFF Research Database (Denmark)

    Krogsbøll, Anette

    1998-01-01

    are common in time as well as size. This problem is addressed by means of a new constitutive model for soils. It is able to describe the behavior of soils at different deformation rates. The model defines time-dependent and stress-related deformations separately. They are related to each other and they occur...... was the difference in time scale between the geological process of deposition (millions of years) and the laboratory measurements of mechanical properties (minutes or hours). In addition, the time scale relevant to the production history of the oil field was interesting (days or years)....

  2. On the explaining-away phenomenon in multivariate latent variable models.

    Science.gov (United States)

    van Rijn, Peter; Rijmen, Frank

    2015-02-01

    Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
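
    Explaining away is easy to reproduce with a toy model: two independent binary latent causes A and B and one observed noisy-OR indicator X. The prior and noise probabilities below are arbitrary choices for illustration, not values from the paper.

```python
from itertools import product

p_a, p_b = 0.3, 0.3         # independent priors on latent causes A and B
leak, strength = 0.01, 0.9  # noisy-OR parameters for the observed variable X

def p_x1_given(a, b):
    # P(X = 1 | a, b): each active cause independently triggers X with prob `strength`
    fail = 1 - leak
    if a:
        fail *= 1 - strength
    if b:
        fail *= 1 - strength
    return 1 - fail

def posterior_a(b_observed=None):
    # P(A = 1 | X = 1 [, B = b_observed]) by direct enumeration
    num = den = 0.0
    for a, b in product([0, 1], repeat=2):
        if b_observed is not None and b != b_observed:
            continue
        w = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b) * p_x1_given(a, b)
        den += w
        num += w * a
    return num / den

# Observing X = 1 raises belief in A, but additionally learning B = 1
# "explains away" the evidence and pulls A back toward its prior.
print(posterior_a(), posterior_a(b_observed=1))  # prints approximately 0.589 0.320
```

    The two latent causes are independent a priori, yet become negatively dependent once their common effect is observed, which is the seemingly counterintuitive conditional dependency the paper discusses.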

  3. Drivers and potential predictability of summer time North Atlantic polar front jet variability

    Science.gov (United States)

    Hall, Richard J.; Jones, Julie M.; Hanna, Edward; Scaife, Adam A.; Erdélyi, Róbert

    2017-06-01

    The variability of the North Atlantic polar front jet stream is crucial in determining summer weather around the North Atlantic basin. Recent extreme summers in western Europe and North America have highlighted the need for greater understanding of this variability, in order to aid seasonal forecasting and mitigate societal, environmental and economic impacts. Here we find that simple linear regression and composite models based on a few predictable factors are able to explain up to 35 % of summertime jet stream speed and latitude variability from 1955 onwards. Sea surface temperature forcings impact predominantly on jet speed, whereas solar and cryospheric forcings appear to influence jet latitude. The cryospheric associations come from the previous autumn, suggesting the survival of an ice-induced signal through the winter season, whereas solar influences lead jet variability by a few years. Regression models covering the earlier part of the twentieth century are much less effective, presumably due to decreased availability of data, and increased uncertainty in observational reanalyses. Wavelet coherence analysis identifies that associations fluctuate over the study period but it is not clear whether this is just internal variability or genuine non-stationarity. Finally we identify areas for future research.

  4. Time series sightability modeling of animal populations.

    Science.gov (United States)

    ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R

    2018-01-01

    Logistic regression models, or "sightability models," fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
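
    A minimal numeric sketch of the sightability workflow, with a hand-rolled logistic fit to keep the example self-contained; the covariate (vegetation cover), the trial data, and the survey values are all hypothetical:

```python
import numpy as np

def fit_logistic(x, y, iters=2000, lr=0.5):
    # Plain gradient ascent on the Bernoulli log-likelihood (illustration only;
    # a real analysis would use a GLM routine with standard errors)
    b0 = b1 = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 += lr * np.mean(y - p)
        b1 += lr * np.mean((y - p) * x)
    return b0, b1

# Detection/non-detection trials on marked animals: cover vs. detected
cover = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9])
seen  = np.array([1,   1,   0,   1,   1,   0,   1,   0])
b0, b1 = fit_logistic(cover, seen)

# Later detection-only survey: per-detection covariate and group size
survey_cover = np.array([0.2, 0.5, 0.7])
group_size   = np.array([3, 5, 2])
p_detect = 1.0 / (1.0 + np.exp(-(b0 + b1 * survey_cover)))

# Modified Horvitz-Thompson: each detected group counts for 1/p_detect groups
N_hat = np.sum(group_size / p_detect)
```

    Because every detection probability is below one, the mHT estimate necessarily exceeds the raw count, which is exactly the visibility-bias correction the abstract describes.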

  5. Spontaneous variability of pre-dialysis concentrations of uremic toxins over time in stable hemodialysis patients.

    Directory of Open Access Journals (Sweden)

    Sunny Eloot

    Full Text Available Numerous outcome studies and interventional trials in hemodialysis (HD) patients are based on uremic toxin concentrations determined at one single or a limited number of time points. The reliability of these studies however entirely depends on how representative these cross-sectional concentrations are. We therefore investigated the variability of predialysis concentrations of uremic toxins over time. Prospectively collected predialysis serum samples of the midweek session of week 0, 1, 2, 3, 4, 8, 12, and 16 were analyzed for a panel of uremic toxins in stable chronic HD patients (N = 18) while maintaining dialyzer type and dialysis mode during the study period. Concentrations of the analyzed uremic toxins varied substantially between individuals, but also within stable HD patients (intra-patient variability). For urea, creatinine, beta-2-microglobulin, and some protein-bound uremic toxins, the Intra-class Correlation Coefficient (ICC) was higher than 0.7. However, for phosphorus, uric acid, symmetric and asymmetric dimethylarginine, and the protein-bound toxins hippuric acid and indoxyl sulfate, ICC values were below 0.7, implying a concentration variability within the individual patient even exceeding 65% of the observed inter-patient variability. Intra-patient variability may affect the interpretation of the association between a single concentration of certain uremic toxins and outcomes. When performing future outcome and interventional studies with uremic toxins other than described here, one should quantify their intra-patient variability and take into account that for solutes with a large intra-patient variability associations could be missed.
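
    The ICC criterion used above can be illustrated with a one-way random-effects ICC(1,1) on synthetic predialysis concentrations; the variance components below are invented to mimic a "stable" and a "noisy" solute, not the study's data.

```python
import numpy as np

def icc_oneway(data):
    # One-way random-effects ICC(1,1); rows = patients, columns = repeated samples
    n, k = data.shape
    grand = data.mean()
    ms_between = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(1)
patient_mean = rng.normal(50, 10, size=(12, 1))          # large between-patient spread
stable = patient_mean + rng.normal(0, 2, size=(12, 8))   # small within-patient noise
noisy  = patient_mean + rng.normal(0, 15, size=(12, 8))  # large within-patient noise
```

    A solute with small intra-patient variability yields an ICC close to 1, meaning a single predialysis sample is representative of the patient; large intra-patient variability drags the ICC down toward the 0.7 cutoff and below, which is how associations can be missed.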

  6. An introduction to latent variable growth curve modeling concepts, issues, and application

    CERN Document Server

    Duncan, Terry E; Strycker, Lisa A

    2013-01-01

    This book provides a comprehensive introduction to latent variable growth curve modeling (LGM) for analyzing repeated measures. It presents the statistical basis for LGM and its various methodological extensions, including a number of practical examples of its use. It is designed to take advantage of the reader's familiarity with analysis of variance and structural equation modeling (SEM) in introducing LGM techniques. Sample data, syntax, input and output, are provided for EQS, Amos, LISREL, and Mplus on the book's CD. Throughout the book, the authors present a variety of LGM techniques that are useful for many different research designs, and numerous figures provide helpful diagrams of the examples.Updated throughout, the second edition features three new chapters-growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group is...

  7. Integrated assessment of space, time, and management-related variability of soil hydraulic properties

    Energy Technology Data Exchange (ETDEWEB)

    Es, H.M. van; Ogden, C.B.; Hill, R.L.; Schindelbeck, R.R.; Tsegaye, T.

    1999-12-01

    Computer-based models that simulate soil hydrologic processes and their impacts on crop growth and contaminant transport depend on accurate characterization of soil hydraulic properties. Soil hydraulic properties have numerous sources of variability related to spatial, temporal, and management-related processes. Soil type is considered to be the dominant source of variability, and parameterization is typically based on soil survey databases. This study evaluated the relative significance of other sources of variability: spatial and temporal at multiple scales, and management-related factors. Identical field experiments were conducted for 3 yr at two sites in New York on clay loam and silt loam soils, and at two sites in Maryland on silt loam and sandy loam soils, all involving replicated plots with plow-till and no-till treatments. Infiltrability was determined from 2054 measurements, and Campbell's a and b parameters were determined from water-retention data from 875 soil cores. Variance component analysis showed that differences among the sites were the most important source of variability for a (coefficient of variation, CV = 44%) and b (CV = 23%). Tillage practices were the most important source of variability for infiltrability (CV = 10%). For all properties, temporal variability was more significant than field-scale spatial variability. Temporal and tillage effects were more significant for the medium- and fine-textured soils, and correlated to initial soil water conditions. The parameterization of soil hydraulic properties solely based on soil type may not be appropriate for agricultural lands since soil-management factors are more significant. Sampling procedures should give adequate recognition to soil-management and temporal processes as significant sources of variability to avoid biased results.

  8. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior...... can lead to favorable results. We focus on modeling the priors using minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) dataset show that our proposed method beats four baselines: For i-vector extraction using an already...... trained matrix, for the short2-short3 task in SRE’08, five out of eight female and four out of eight male common conditions were improved. For the core-extended task in SRE’10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information...

  9. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have just been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size, and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, while in general model-based approaches present serious drawbacks on this point

  11. Development and evaluation of a stochastic daily rainfall model with long-term variability

    Science.gov (United States)

    Kamal Chowdhury, A. F. M.; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony S.; Parana Manage, Nadeeka

    2017-12-01

    The primary objective of this study is to develop a stochastic rainfall generation model that can match not only the short resolution (daily) variability but also the longer resolution (monthly to multiyear) variability of observed rainfall. This study has developed a Markov chain (MC) model, which uses a two-state MC process with two parameters (wet-to-wet and dry-to-dry transition probabilities) to simulate rainfall occurrence and a gamma distribution with two parameters (mean and standard deviation of wet day rainfall) to simulate wet day rainfall depths. Starting with the traditional MC-gamma model with deterministic parameters, this study has developed and assessed four other variants of the MC-gamma model with different parameterisations. The key finding is that if the parameters of the gamma distribution are randomly sampled each year from fitted distributions rather than fixed parameters with time, the variability of rainfall depths at both short and longer temporal resolutions can be preserved, while the variability of wet periods (i.e. number of wet days and mean length of wet spell) can be preserved by decadally varied MC parameters. This is a straightforward enhancement to the traditional simplest MC model and is both objective and parsimonious.
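
    The model variant described above can be sketched directly from its four parameters. In this sketch the "fitted distributions" from which the gamma parameters are resampled each year are stand-in normal distributions with invented values:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_year(p_ww, p_dd, mean_depth, sd_depth, n_days=365):
    # Gamma parameterised by the mean and standard deviation of wet-day rainfall
    shape = (mean_depth / sd_depth) ** 2
    scale = sd_depth ** 2 / mean_depth
    rain = np.zeros(n_days)
    wet = False
    for d in range(n_days):
        # Two-state Markov chain occurrence: wet-to-wet vs. dry-to-wet transition
        p_wet = p_ww if wet else 1 - p_dd
        wet = rng.random() < p_wet
        if wet:
            rain[d] = rng.gamma(shape, scale)
    return rain

# Variant highlighted in the abstract: resample the gamma parameters each year
years = []
for _ in range(20):
    mean_depth = max(rng.normal(8.0, 1.5), 0.5)   # hypothetical fitted distribution
    sd_depth = max(abs(rng.normal(10.0, 2.0)), 0.5)
    years.append(simulate_year(p_ww=0.6, p_dd=0.8,
                               mean_depth=mean_depth, sd_depth=sd_depth))
annual_totals = np.array([y.sum() for y in years])
```

    The decadally varied MC parameters described in the abstract would be added analogously, holding each sampled (p_ww, p_dd) pair fixed for a decade at a time.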

  12. Electricity price modeling with stochastic time change

    International Nuclear Information System (INIS)

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

    In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean reverting jump diffusion and the time change as an absolutely continuous stochastic process with seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes, into the model. • We use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change. • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
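
    A stripped-down illustration of the idea: an Ornstein-Uhlenbeck-type mean reversion with exponential jumps, run on a seasonally modulated activity rate that stands in for the temperature-driven time change. All coefficients are invented, not the paper's EEX calibration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days, kappa, mu, sigma = 365, 5.0, 40.0, 8.0
jump_rate, jump_mean = 10.0, 15.0   # jump intensity per unit of "business time"

# Activity rate of the time change: seasonal part plus noise
# (temperature as the driver is the paper's choice; this shape is a stand-in)
t = np.arange(n_days) / 365.0
activity = 1.0 + 0.5 * np.cos(2 * np.pi * t) + 0.1 * np.abs(rng.standard_normal(n_days))
dtau = activity / 365.0             # increments of the stochastic clock

price = np.empty(n_days)
price[0] = mu
for i in range(1, n_days):
    d = dtau[i]
    # Compound jump term approximated by (count x one exponential draw);
    # counts rarely exceed 1 at this intensity
    jumps = rng.poisson(jump_rate * d) * rng.exponential(jump_mean)
    price[i] = (price[i - 1] + kappa * (mu - price[i - 1]) * d
                + sigma * np.sqrt(d) * rng.standard_normal() + jumps)
```

    When the activity rate is high (e.g., cold spells), business time runs faster, so both volatility and the effective jump frequency rise, which is how the model generates seasonally clustered spikes.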

  13. Field Soil Water Retention of the Prototype Hanford Barrier and Its Variability with Space and Time

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Z. F.

    2015-08-14

    Engineered surface barriers are used to isolate underlying contaminants from water, plants, animals, and humans. To understand the flow processes within a barrier and the barrier’s ability to store and release water, the field hydraulic properties of the barrier need to be known. In situ measurement of soil hydraulic properties and their variation over time is challenging because most measurement methods are destructive. A multiyear test of the Prototype Hanford Barrier (PHB) has yielded in situ soil water content and pressure data for a nine-year period. The upper 2 m layer of the PHB is a silt loam. Within this layer, water content and water pressure were monitored at multiple depths at 12 water balance stations using a neutron probe and heat dissipation units. Valid monitoring data from 1995 to 2003 for 4 depths at 12 monitoring stations were used to determine the field water retention of the silt loam layer. The data covered a wide range of wetness, from near saturation to the permanent wilt point, and each retention curve contained 51 to 96 data points. The data were described well with the commonly used van Genuchten water retention model. It was found that the spatial variation of the saturated and residual water content and the pore size distribution parameter were relatively small, while that of the van Genuchten alpha was relatively large. The effects of spatial variability of the retention properties appeared to be larger than the combined effects of added 15% w/w pea gravel and plant roots on the properties. Neither of the primary hydrological processes nor time had a detectable effect on the water retention of the silt loam barrier.
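
    For reference, the van Genuchten model used above relates water content θ to suction head h as θ(h) = θr + (θs − θr) / [1 + (α|h|)^n]^(1−1/n). The parameter values below are generic silt-loam-style values for illustration, not the fitted PHB parameters:

```python
import numpy as np

def van_genuchten(h, theta_r=0.07, theta_s=0.45, alpha=0.02, n=1.41):
    """Water content theta as a function of suction head h (cm); illustrative params."""
    m = 1 - 1 / n
    return theta_r + (theta_s - theta_r) / (1 + (alpha * np.abs(h)) ** n) ** m

# Suctions from near saturation toward the permanent wilting point (~15,000 cm)
h = np.logspace(0, 4.2, 50)
theta = van_genuchten(h)
```

    Fitting this curve to each station/depth series, as the study does, reduces a 51-96 point retention data set to four interpretable parameters whose spatial and temporal variation can then be compared.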

  14. Variability in benthic exchange rate, depth, and residence time beneath a shallow coastal estuary

    Science.gov (United States)

    Russoniello, C. J.; Michael, H. A.; Heiss, J.

    2017-12-01

    Hydrodynamically-driven exchange of water between the water column and shallow seabed aquifer, benthic exchange, is a significant and dynamic component of coastal and estuarine fluid budgets, but wave-induced benthic exchange has not been measured in the field. Mixing between surface water and groundwater solutes promotes ecologically important chemical reactions, so quantifying benthic exchange rates, depths, and residence times constrains estimates of coastal chemical cycling. In this study, we present the first field-based direct measurements of wave-induced exchange and compare it to exchange induced by the other primary drivers of exchange: tides and currents. We deployed instruments in a shallow estuary to measure benthic exchange and temporal variability over an 11-day period. Differential pressure sensors recorded pressure gradients across the seabed, and up- and down-looking ADCPs recorded currents and pressures from which wave parameters, surface-water currents, and water depth were determined. Wave-induced exchange was calculated directly from 1) differential pressure measurements, and indirectly with an analytical solution based on wave parameters from 2) ADCP and 3) weather station data. Groundwater flow models were used to assess the effects of aquifer properties on benthic exchange depth and residence time. Benthic exchange driven by tidal pumping or current-bedform interaction was calculated from tidal stage variation and from ADCP-measured currents at the bed, respectively. Waves were the primary benthic exchange driver (average = 20.0 cm/d, maximum = 92.3 cm/d) during the measurement period. Benthic exchange due to tides (average = 3.7 cm/d) and current-bedform interaction (average = 6.5 × 10⁻² cm/d) was much lower. Wave-induced exchange calculated from pressure measurements and ADCP-measured wave parameters matched well, but wind-based rates underestimated wave energy and exchange. Groundwater models showed that residence time and depth increased...

  15. Time series modeling in traffic safety research.

    Science.gov (United States)

    Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue

    2018-08-01

    The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Characteristic time series and operation region of the system of two tank reactors (CSTR) with variable division of recirculation stream

    International Nuclear Information System (INIS)

    Merta, Henryk

    2006-01-01

    The paper deals with a system of a cascade of two tank reactors, characterized by a variable stream of recirculating fluid at each stage. The assumed mathematical model enables one to determine the system's dynamics for the case when there is no time delay and for the opposite case. The time series of the conversion degree and of the dimensionless fluid temperature characteristic of the system considered, as well as the operation regions, the latter based on Feigenbaum diagrams with respect to the division ratio of the recirculating stream, are presented.

  17. Application of several variable-valve-timing concepts to an LHR engine

    Science.gov (United States)

    Morel, T.; Keribar, R.; Sawlivala, M.; Hakim, N.

    1987-01-01

    The paper discusses advantages provided by electronically controlled hydraulically activated valves (ECVs) when applied to low heat rejection (LHR) engines. The ECV concept provides additional engine control flexibility by allowing for a variable valve timing as a function of speed and load, or for a given transient condition. The results of a study carried out to assess the benefits that this flexibility can offer to an LHR engine indicated that, when judged on the benefits to BSFC, volumetric efficiency, and peak firing pressure, ECVs would provide only modest benefits in comparison to conventional valve profiles. It is noted, however, that once installed on the engine, the ECVs would permit a whole range of certain more sophisticated variable valve timing strategies not otherwise possible, such as high compression cranking, engine braking, cylinder cutouts, and volumetric efficiency timing with engine speed.

  18. Synthesis of Biochemical Applications on Digital Microfluidic Biochips with Operation Execution Time Variability

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2015-01-01

that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets, resulting in unexploited slack...... in the schedule. In this paper, we first propose an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, thus exploiting the slack to obtain shorter application completion times. We also propose a quasi-static synthesis strategy...... approaches have been proposed for the synthesis of digital microfluidic biochips, which, starting from a biochemical application and a given biochip architecture, determine the allocation, resource binding, scheduling, placement and routing of the operations in the application. Researchers have assumed...

  19. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  20. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

Long-term hydrological data are key to understanding catchment behaviour and to decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information - beyond locally observed discharge - can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships showed to be useful to

  1. Discrete-time rewards model-checked

    NARCIS (Netherlands)

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This makes it possible to formulate complex measures – involving expected as well as accumulated rewards – in a precise and

  2. Modeling vector nonlinear time series using POLYMARS

    NARCIS (Netherlands)

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector

  3. Forecasting with periodic autoregressive time series models

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

This paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption

  4. Time series sightability modeling of animal populations

    Science.gov (United States)

    ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.

    2018-01-01

    Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
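The modified Horvitz-Thompson correction described above can be sketched in a few lines. This is a generic illustration, not the study's model: the logistic coefficients, the single visual-obstruction covariate, and the group data are all hypothetical.

```python
import math

# Hypothetical sightability model: P(detect) from a logistic regression
# on one visual-obstruction covariate (coefficients are illustrative).
BETA0, BETA1 = 2.0, -4.0

def detection_prob(obstruction):
    return 1.0 / (1.0 + math.exp(-(BETA0 + BETA1 * obstruction)))

def mht_abundance(detected_groups):
    """Modified Horvitz-Thompson estimate: each detected group of size n
    with covariate x contributes n / p(x) animals, inflating the raw
    count to account for groups that were likely missed."""
    return sum(n / detection_prob(x) for n, x in detected_groups)

# Detected groups as (group size, visual obstruction) pairs.
detected = [(3, 0.1), (5, 0.5), (2, 0.9)]
estimate = mht_abundance(detected)
```

With only 10 animals actually seen, the estimate is roughly 25, because the two heavily obstructed groups each stand in for several likely-missed groups.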

  5. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    ... their high frequency content while among TEM data sets with low frequency content, the averaging times for the FEM ellipticity were shorter than the TEM quality. Keywords: ellipticity, frequency domain, frequency electromagnetic method, model parameter, orientation error, time domain, transient electromagnetic method

  6. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
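The core idea, transforming the time scale so that the process becomes homogeneous on an operational clock, can be illustrated with an assumed power-law transformation. The paper estimates the transformation jointly with the transition intensities; the functional form and parameter values below are purely illustrative.

```python
import math

# On the operational time scale u = h(t) the process is homogeneous with
# constant rate LAM0. Here h(t) = t**A is an assumed illustrative
# transformation, not the one estimated in the paper.
LAM0, A = 0.5, 1.8

def operational_time(t):
    return t ** A

def cumulative_intensity(t):
    # Cumulative intensity on the original scale = homogeneous rate
    # applied to the transformed time.
    return LAM0 * operational_time(t)

def survival(t):
    """P(no transition by original time t) for the transformed process."""
    return math.exp(-cumulative_intensity(t))
```

With A > 1 the implied original-scale intensity grows with time since the process origin, the kind of nonhomogeneity (e.g. rising delirium incidence) that a time-homogeneous model cannot capture.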

  7. Modelling food-web mediated effects of hydrological variability and environmental flows.

    Science.gov (United States)

    Robson, Barbara J; Lester, Rebecca E; Baldwin, Darren S; Bond, Nicholas R; Drouart, Romain; Rolls, Robert J; Ryder, Darren S; Thompson, Ross M

    2017-11-01

    Environmental flows are designed to enhance aquatic ecosystems through a variety of mechanisms; however, to date most attention has been paid to the effects on habitat quality and life-history triggers, especially for fish and vegetation. The effects of environmental flows on food webs have so far received little attention, despite food-web thinking being fundamental to understanding of river ecosystems. Understanding environmental flows in a food-web context can help scientists and policy-makers better understand and manage outcomes of flow alteration and restoration. In this paper, we consider mechanisms by which flow variability can influence and alter food webs, and place these within a conceptual and numerical modelling framework. We also review the strengths and weaknesses of various approaches to modelling the effects of hydrological management on food webs. Although classic bioenergetic models such as Ecopath with Ecosim capture many of the key features required, other approaches, such as biogeochemical ecosystem modelling, end-to-end modelling, population dynamic models, individual-based models, graph theory models, and stock assessment models are also relevant. In many cases, a combination of approaches will be useful. We identify current challenges and new directions in modelling food-web responses to hydrological variability and environmental flow management. These include better integration of food-web and hydraulic models, taking physiologically-based approaches to food quality effects, and better representation of variations in space and time that may create ecosystem control points. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  8. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or better predictive ability than those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method
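The difference between stopping at the first RMSECV minimum (classical UVE) and searching for the global minimum can be sketched as a generic selection loop. The `rmsecv` and `eliminate` callables stand in for the PLS cross-validation and UVE elimination steps; the toy error values are invented to show the two stopping rules diverging.

```python
# Sketch of the GME-UVE loop (assumed structure, not the authors' code):
# keep eliminating variables until no further reduction is possible,
# record RMSECV at every step, then pick the globally best variable set.
def gme_uve(variables, rmsecv, eliminate):
    """variables: initial variable list; rmsecv(vs) -> error estimate;
    eliminate(vs) -> smaller list with uninformative variables removed."""
    history = [(list(variables), rmsecv(variables))]
    current = list(variables)
    while True:
        reduced = eliminate(current)
        if len(reduced) == len(current):   # no further reduction possible
            break
        current = reduced
        history.append((list(current), rmsecv(current)))
    # Global-minimum step: lowest RMSECV over the whole elimination path.
    return min(history, key=lambda entry: entry[1])

# Toy illustration: RMSECV dips at 4 variables, rises, then dips lower at 2.
errors = {5: 0.9, 4: 0.7, 3: 0.8, 2: 0.6, 1: 1.2}
best_set, best_err = gme_uve(
    [0, 1, 2, 3, 4],
    rmsecv=lambda vs: errors[len(vs)],
    eliminate=lambda vs: vs[:-1] if len(vs) > 1 else vs,
)
```

Classical UVE would stop at the first local minimum (4 variables, RMSECV 0.7); the global search instead returns the 2-variable set with RMSECV 0.6, which is exactly the smaller-yet-better outcome the abstract reports.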

  9. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2017-01-01

A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due date. The objective is to maximize the product of their rational positive cooperative profits. A division of those jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to them. We propose necessary and sufficient conditions for the problems to have a positive integer solution.

  10. Examples of EOS Variables as compared to the UMM-Var Data Model

    Science.gov (United States)

    Cantrell, Simon; Lynnes, Chris

    2016-01-01

In an effort to provide EOSDIS clients a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and the use cases we are handling.

  11. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  12. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
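As a flavour of the digital-waveguide family mentioned above, the classic Karplus-Strong plucked-string algorithm runs a delay line with a two-point averaging (loss) filter in its feedback loop. This is a generic textbook sketch under illustrative parameters, not a model taken from the article.

```python
import random

def pluck(sample_rate=44100, freq=440.0, seconds=0.05, decay=0.996):
    """Karplus-Strong string: a noise burst circulates in a delay line
    whose length sets the pitch; the averaging filter damps high
    frequencies each round trip, mimicking string losses."""
    random.seed(0)
    delay = int(sample_rate / freq)                        # ~pitch period
    line = [random.uniform(-1.0, 1.0) for _ in range(delay)]  # excitation
    out = []
    for _ in range(int(sample_rate * seconds)):
        oldest = line.pop(0)
        # lowpass feedback: damped average of the two oldest samples
        line.append(decay * 0.5 * (oldest + line[0]))
        out.append(oldest)
    return out

samples = pluck()
```

The initial noisy transient decays into a damped quasi-periodic tone, which is why this one-delay-line structure is often the first step toward the full vibrating-string waveguide models the article surveys.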

  13. Stochastic modeling of hourly rainfall times series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. Effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which can allow gaining larger lead-times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of Campania Region civil
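The alternating-renewal structure, storms and dry spells drawn in turn from duration distributions, can be sketched as follows. Exponential durations and a constant intensity per storm are simplifying assumptions for illustration; the DRIP model itself uses more elaborate, calibrated distributions.

```python
import random

random.seed(1)

# Illustrative parameters (hours, mm/h), not calibrated values.
MEAN_STORM_H, MEAN_DRY_H, MEAN_INTENSITY = 4.0, 40.0, 2.5

def simulate(total_hours):
    """Alternate dry spells and storms; each event is a
    (start, duration, intensity) rectangular pulse."""
    series, t, raining = [], 0.0, False
    while t < total_hours:
        if raining:
            dur = random.expovariate(1.0 / MEAN_STORM_H)
            intensity = random.expovariate(1.0 / MEAN_INTENSITY)
        else:
            dur = random.expovariate(1.0 / MEAN_DRY_H)
            intensity = 0.0
        series.append((t, min(dur, total_hours - t), intensity))
        t += dur
        raining = not raining
    return series

events = simulate(24 * 365)   # one synthetic year
wet_fraction = (sum(d for _, d, i in events if i > 0)
                / sum(d for _, d, _ in events))
```

This intermittent on/off structure is exactly what classical AR/ARMA models fail to reproduce for short-interval point rainfall, which motivates the renewal-process approach.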

  14. White dwarf models of supernovae and cataclysmic variables

    International Nuclear Information System (INIS)

    Nomoto, K.; Hashimoto, M.

    1986-01-01

If the accreting white dwarf increases its mass to the Chandrasekhar mass, it will either explode as a Type I supernova or collapse to form a neutron star. In fact, there is a good agreement between the exploding white dwarf model for Type I supernovae and observations. We describe various types of evolution of accreting white dwarfs as a function of binary parameters (i.e., composition, mass, and age of the white dwarf, its companion star, and mass accretion rate), and discuss the conditions for the precursors of exploding or collapsing white dwarfs, and their relevance to cataclysmic variables. Particular attention is given to helium star cataclysmics which might be the precursors of some Type I supernovae or ultrashort period x-ray binaries. Finally we present new evolutionary calculations using the updated nuclear reaction rates for the formation of O+Ne+Mg white dwarfs, and discuss the composition structure and their relevance to the model for neon novae. 61 refs., 14 figs

  15. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]

    2012-05-15

Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.
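The replace-collinear-variables idea can be sketched as a greedy ranking: take descriptors in decreasing order of mutual information with the response, but pass over any candidate that is collinear with one already selected, so the next-best non-collinear descriptor takes its place. This is an assumed reading of the method, with invented MI scores and correlations.

```python
# Greedy MIMRCV-style sketch (assumed logic, not the authors' code).
def select(mi, corr, n_select, collinearity_threshold=0.9):
    """mi: {descriptor: MI with response};
    corr: {(a, b): |correlation|} (either key order);
    returns up to n_select mutually non-collinear descriptors."""
    ranked = sorted(mi, key=mi.get, reverse=True)
    chosen = []
    for var in ranked:
        if len(chosen) == n_select:
            break
        if all(corr.get((var, c), corr.get((c, var), 0.0))
               < collinearity_threshold for c in chosen):
            chosen.append(var)
    return chosen

# Toy values: x2 is informative but nearly collinear with x1,
# so it is replaced by the less collinear x3.
mi = {"x1": 0.80, "x2": 0.78, "x3": 0.40, "x4": 0.35}
corr = {("x1", "x2"): 0.95, ("x1", "x3"): 0.20, ("x2", "x3"): 0.25,
        ("x1", "x4"): 0.10, ("x2", "x4"): 0.10, ("x3", "x4"): 0.15}
picked = select(mi, corr, n_select=2)
```

Dropping x2 despite its high MI is the point of the method: a near-duplicate of x1 adds almost no new information about the response.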

  16. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms

  17. On diffusion processes with variable drift rates as models for decision making during learning

    International Nuclear Information System (INIS)

    Eckhoff, P; Holmes, P; Law, C; Connolly, P M; Gold, J I

    2008-01-01

    We investigate Ornstein-Uhlenbeck and diffusion processes with variable drift rates as models of evidence accumulation in a visual discrimination task. We derive power-law and exponential drift-rate models and characterize how parameters of these models affect the psychometric function describing performance accuracy as a function of stimulus strength and viewing time. We fit the models to psychophysical data from monkeys learning the task to identify parameters that best capture performance as it improves with training. The most informative parameter was the overall drift rate describing the signal-to-noise ratio of the sensory evidence used to form the decision, which increased steadily with training. In contrast, secondary parameters describing the time course of the drift during motion viewing did not exhibit steady trends. The results indicate that relatively simple versions of the diffusion model can fit behavior over the course of training, thereby giving a quantitative account of learning effects on the underlying decision process
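The link between drift rate and the psychometric function can be made concrete for a freely diffusing decision variable read out at viewing time t. The functional form and every parameter value below are illustrative, not fits from the study; gamma = 1 gives a constant drift, other values give the power-law time-varying drift discussed in the abstract.

```python
import math

def accuracy(coherence, t, a0=1.5, gamma=1.0, sigma=1.0):
    """P(correct sign) when the decision variable at viewing time t is
    x(t) ~ Normal(integral of drift, sigma**2 * t), with drift
    mu(s) = a0 * coherence * s**(gamma - 1)."""
    mean = a0 * coherence * t ** gamma / gamma   # integral of the drift
    sd = sigma * math.sqrt(t)
    z = mean / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

Accuracy rises with both stimulus strength and viewing time, and an increase in the overall drift rate a0 with training shifts the whole psychometric function upward, which is the learning effect the fitted models capture.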

  18. Intraindividual variability in reaction time before and after neoadjuvant chemotherapy in women diagnosed with breast cancer.

    Science.gov (United States)

    Yao, Christie; Rich, Jill B; Tirona, Kattleya; Bernstein, Lori J

    2017-12-01

Women treated with chemotherapy for breast cancer experience subtle cognitive deficits. Research has focused on mean performance level, yet recent work suggests that within-person variability in reaction time performance may underlie cognitive symptoms. We examined intraindividual variability (IIV) in women diagnosed with breast cancer and treated with neoadjuvant chemotherapy. Patients (n = 28) were assessed at baseline before chemotherapy (T1), approximately 1 month after chemotherapy but prior to surgery (T2), and after surgery about 9 months post chemotherapy (T3). Healthy women of similar age and education (n = 20) were assessed at comparable time intervals. Using a standardized regression-based approach, we examined changes in mean performance level and IIV (e.g., intraindividual standard deviation) on a Stroop task and self-report measures of cognitive function from T1 to T2 and T1 to T3. At T1, women with breast cancer were more variable than controls as task complexity increased. Change scores from T1 to T2 were similar between groups on all Stroop performance measures. From T1 to T3, controls improved more than women with breast cancer. IIV was more sensitive than mean reaction time in capturing group differences. Additional analyses showed increased cognitive symptoms reported by women with breast cancer from T1 to T3. Specifically, change in language symptoms was positively correlated with change in variability. Women with breast cancer declined in attention and inhibitory control relative to pretreatment performance. Future studies should include measures of variability, because they are an important sensitive indicator of change in cognitive function. Copyright © 2016 John Wiley & Sons, Ltd.
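Why IIV can be more sensitive than mean reaction time is easy to demonstrate: two response-time series with identical means can have very different within-person spreads. The numbers below are invented, and this sketch uses the raw intraindividual standard deviation (studies of this kind typically residualize out mean-level and practice effects first).

```python
import statistics

def intraindividual_sd(rts_ms):
    """IIV as the within-person SD of trial-level reaction times (ms)."""
    return statistics.stdev(rts_ms)

stable = [510, 505, 498, 502, 507, 495]     # consistent responder
variable = [430, 610, 455, 580, 470, 472]   # same mean, erratic trials
```

Both series average the same reaction time, so a mean-level analysis sees no difference, while the IIV measure separates them clearly.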

  19. Model atmospheres with periodic shocks. [pulsations and mass loss in variable stars

    Science.gov (United States)

    Bowen, G. H.

    1989-01-01

    The pulsation of a long-period variable star generates shock waves which dramatically affect the structure of the star's atmosphere and produce conditions that lead to rapid mass loss. Numerical modeling of atmospheres with periodic shocks is being pursued to study the processes involved and the evolutionary consequences for the stars. It is characteristic of these complex dynamical systems that most effects result from the interaction of various time-dependent processes.

  20. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
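Step 6 of the outline, calculating time series features as latent variables, might look like the following sliding-window sketch. The window length, step, and feature set (mean, variance, crude slope) are illustrative choices, not those of the study, and the heart-rate values are invented.

```python
def window_features(series, window, step=1):
    """Turn a raw vital-sign series into one feature row per window,
    capturing level (mean), variability (variance) and trend (slope),
    the kind of deterioration signal a snapshot model misses."""
    rows = []
    for start in range(0, len(series) - window + 1, step):
        w = series[start:start + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        slope = (w[-1] - w[0]) / (window - 1)   # crude trend feature
        rows.append({"mean": mean, "var": var, "slope": slope})
    return rows

heart_rate = [120, 122, 125, 131, 140, 152, 167]  # accelerating rise
feats = window_features(heart_rate, window=4)
```

A multivariable snapshot model sees only the latest value, whereas the window features expose the steepening trend (the slope grows from window to window), which is the deterioration pattern that precedes many arrests.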

  1. Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Walko, Robert [Univ. of Miami, Coral Gables, FL (United States)

    2016-11-07

    The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.

  2. An oilspill trajectory analysis model with a variable wind deflection angle

    Science.gov (United States)

    Samuels, W.B.; Huang, N.E.; Amstutz, D.E.

    1982-01-01

The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
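The trajectory algorithm's vector sum is straightforward to write down. In this sketch the deflection angle is a parameter, defaulting to the original model's 20-degree clockwise rotation; a variable-angle scheme would simply pass a different value at each time step.

```python
import math

def drift_velocity(wind_u, wind_v, cur_u, cur_v, deflection_deg=20.0):
    """Spill velocity = surface current + wind drift, where the wind
    component is 3.5% of the wind vector rotated clockwise by the
    deflection angle (Northern Hemisphere)."""
    k = 0.035
    th = math.radians(-deflection_deg)   # negative = clockwise rotation
    du = k * (wind_u * math.cos(th) - wind_v * math.sin(th))
    dv = k * (wind_u * math.sin(th) + wind_v * math.cos(th))
    return cur_u + du, cur_v + dv

# Due-east 10 m/s wind, no current: drift is 0.35 m/s, veered to the right.
u, v = drift_velocity(10.0, 0.0, 0.0, 0.0)
```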

  3. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
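The two-state additive binary Markov chain described above can be sketched as follows. In the paper the memory weights are derived from the empirical autocorrelation function; here they are simply assumed given, and the function name and the concrete additive form are illustrative.

```python
import numpy as np

def simulate_additive_binary_chain(weights, p=0.5, n=10000, seed=0):
    """Sample a binary (high/low wind) chain whose conditional probability
    of the 'high' state is an additive function of past states:
        P(s_t = 1 | history) = p + sum_k weights[k] * (s_{t-k-1} - p)
    weights[k] is the memory function for lag k+1 (assumed given here;
    in practice it is fitted to the empirical autocorrelation function)."""
    rng = np.random.default_rng(seed)
    m = len(weights)
    s = np.empty(n, dtype=int)
    s[:m] = rng.random(m) < p                      # initialize first m states
    for t in range(m, n):
        # reverse the window so weights[0] multiplies the most recent state
        prob = p + np.dot(weights, s[t-m:t][::-1] - p)
        s[t] = rng.random() < np.clip(prob, 0.0, 1.0)
    return s
```

Positive weights produce persistent runs of high- and low-wind periods, which is exactly the long-correlation behavior the method is designed to reproduce.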

  5. Automated optimization and construction of chemometric models based on highly variable raw chromatographic data.

    Science.gov (United States)

    Sinkov, Nikolai A; Johnston, Brandon M; Sandercock, P Mark L; Harynuk, James J

    2011-07-04

Direct chemometric interpretation of raw chromatographic data (as opposed to integrated peak tables) has been shown to be advantageous in many circumstances. However, this approach presents two significant challenges: data alignment and feature selection. In order to interpret the data, the time axes must be precisely aligned so that the signal from each analyte is recorded at the same coordinates in the data matrix for each and every analyzed sample. Several alignment approaches exist in the literature, and they work well when the samples being aligned are reasonably similar. In cases where the background matrix for a series of samples to be modeled is highly variable, the performance of these approaches suffers. As for the challenge of feature selection, when the raw data are used, each signal at each time is viewed as an individual, independent variable; with the data rates of modern chromatographic systems, this generates hundreds of thousands of candidate variables, or tens of millions of candidate variables if multivariate detectors such as mass spectrometers are utilized. Consequently, an automated approach to identify and select appropriate variables for inclusion in a model is desirable. In this research we present an alignment approach that relies on a series of deuterated alkanes, which act as retention anchors for an alignment signal, and couple this with an automated feature selection routine based on our novel cluster resolution metric for the construction of a chemometric model. The model system that we use to demonstrate these approaches is a series of simulated arson debris samples analyzed by passive headspace extraction, GC-MS, and interpreted using partial least squares discriminant analysis (PLS-DA). Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Tracer transport in fractures: analysis of field data based on a variable - aperture channel model

    International Nuclear Information System (INIS)

    Tsang, C.F.; Tsang, Y.W.; Hale, F.V.

    1991-06-01

A variable-aperture channel model is used as the basis to interpret data from a three-year tracer transport experiment in fractured rocks. The data come from the so-called Stripa-3D experiment performed by Neretnieks and coworkers. Within the framework of the variable-aperture channel conceptual model, tracers are envisioned as travelling along a number of variable-aperture flow channels, whose properties are related to the mean b̄ and standard deviation σb of the fracture aperture distribution. Two methods are developed to address the presence of strong time variation of the tracer injection flow rate in this experiment. The first approximates the early part of the injection history by an exponential decay function and is applicable to the early-time tracer breakthrough data. The second is a deconvolution method involving the use of Toeplitz matrices and is applicable over the complete period of variable injection of the tracers. Both methods give consistent results. These results include not only estimates of b̄ and σb, but also ranges of Peclet numbers, dispersivity and an estimate of the number of channels involved in the tracer transport. An interesting and surprising observation is that the data indicate that the Peclet number increases with the mean travel time, i.e., dispersivity decreases with mean travel time. This trend is consistent with calculated results of tracer transport in multiple variable-aperture fractures in series. The meaning of this trend is discussed in terms of the strong heterogeneity of the flow system. (au) (22 refs.)
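The Toeplitz deconvolution idea mentioned above can be sketched with numpy: the breakthrough curve is the discrete convolution of the injection history with an unknown unit-response function, so stacking the injection history into a lower-triangular Toeplitz matrix turns recovery of the response into a linear solve. This is a minimal sketch of the general technique, not the authors' exact procedure; the regularization is an added assumption to stabilize the inversion.

```python
import numpy as np

def deconvolve_response(injection, breakthrough, reg=1e-6):
    """Recover the unit-response function h from
        breakthrough = injection (*) h   (discrete convolution)
    by building the lower-triangular Toeplitz matrix of the injection
    history and solving a Tikhonov-regularized least-squares problem."""
    injection = np.asarray(injection, float)
    n = len(injection)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i+1] = injection[i::-1]   # A[i, j] = injection[i - j], j <= i
    # (A^T A + reg*I) h = A^T c : small reg guards against ill-conditioning
    return np.linalg.solve(A.T @ A + reg * np.eye(n),
                           A.T @ np.asarray(breakthrough, float))
```

With a noise-free synthetic injection/breakthrough pair, the recovered response matches the true one to within the regularization level.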

  7. Convolutions of Heavy Tailed Random Variables and Applications to Portfolio Diversification and MA(1) Time Series

    OpenAIRE

    Geluk, Jaap; Peng, Liang; de Vries, Casper G.

    1999-01-01

Suppose X1, X2 are independent random variables satisfying a second-order regular variation condition on the tail-sum and a balance condition on the tails. In this paper we give a description of the asymptotic behaviour of P(X1 + X2 > t) as t → ∞. The result is applied to the problem of risk diversification in portfolio analysis and to the estimation of the parameter in an MA(1) model.
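The first-order version of this asymptotic, P(X1 + X2 > t) ~ P(X1 > t) + P(X2 > t), can be checked by Monte Carlo with a standard regularly-varying example. The Pareto choice, sample size, and threshold below are assumptions for illustration; the paper's second-order refinement is not reproduced.

```python
import numpy as np

def tail_sum_ratio(alpha=1.5, t=50.0, n=2_000_000, seed=1):
    """Monte Carlo check of the heavy-tail asymptotic
        P(X1 + X2 > t) ~ P(X1 > t) + P(X2 > t)  as t -> infinity
    for i.i.d. Pareto(alpha) variables with P(X > t) = t**(-alpha), t >= 1."""
    rng = np.random.default_rng(seed)
    x1 = rng.pareto(alpha, n) + 1.0     # shift so support starts at 1
    x2 = rng.pareto(alpha, n) + 1.0
    p_sum = np.mean(x1 + x2 > t)
    p_marginal = np.mean(x1 > t) + np.mean(x2 > t)
    return p_sum / p_marginal           # should approach 1 for large t
```

At finite t the ratio sits slightly above 1; the gap is exactly the second-order effect the paper quantifies.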

  8. Countermovement jump height: gender and sport-specific differences in the force-time variables.

    Science.gov (United States)

    Laffaye, Guillaume; Wagner, Phillip P; Tombleson, Tom I L

    2014-04-01

The goal of this study was to assess (a) the eccentric rate of force development, the concentric force, and selected time variables on vertical performance during the countermovement jump, (b) the existence of gender differences in these variables, and (c) sport-specific differences. The sample was composed of 189 males and 84 females, all elite athletes involved in college and professional sports (primarily football, basketball, baseball, and volleyball). The subjects performed a series of 6 countermovement jumps on a force plate (500 Hz). Average eccentric rate of force development (ECC-RFD), total time (TIME), eccentric time (ECC-T), the ratio of eccentric to total time (ECC-T:T) and average concentric force (CON-F) were extracted from the force-time curves, and vertical jumping performance (JH) was measured by the impulse-momentum method. Results show that CON-F (r = 0.57) correlated with jump height, that the force variables differed between the sexes, and that the time variables did not differ, showing a similar temporal structure. The best way to jump high is to increase CON-F and ECC-RFD, thus minimizing ECC-T. Principal component analysis (PCA) accounted for 76.8% of the JH variance and revealed that JH is predicted by a temporal and a force component. Furthermore, the PCA comparison made among athletes revealed sport-specific signatures: volleyball players showed a temporal-prevailing profile, basketball players a weak-force profile with large ECC-T:T, and football and baseball players explosive and powerful profiles.
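The impulse-momentum measurement of jump height mentioned above can be sketched directly from a force-time trace: integrating net force over body mass gives take-off velocity, and the ballistic rise height follows from v²/2g. A minimal sketch; the function name and the synthetic trace are illustrative, not the study's processing pipeline.

```python
import numpy as np

def jump_height_impulse(force, mass, fs=500.0, g=9.81):
    """Jump height from a vertical ground-reaction-force trace (N) sampled
    at fs Hz, from quiet standing to take-off, via impulse-momentum:
    integrate net acceleration to take-off velocity, then h = v^2 / 2g."""
    accel = (np.asarray(force, float) - mass * g) / mass  # net acceleration
    velocity = np.cumsum(accel) / fs                      # rectangle-rule integral
    v_takeoff = velocity[-1]
    return v_takeoff**2 / (2.0 * g)
```

For example, a constant 700 N of net force applied by a 70 kg athlete for 0.25 s produces a 2.5 m/s take-off velocity and about a 0.32 m jump.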

  9. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.
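The code-phase search and the peak-ratio acquisition threshold described above can be sketched with an FFT-based circular correlation. This is a minimal one-dimensional sketch under stated assumptions: the carrier-frequency search is omitted, and the function name and threshold value are illustrative.

```python
import numpy as np

def acquire(signal, code, threshold=2.0):
    """Circular cross-correlation of the received signal with a local code
    replica via FFT; detection uses the ratio of the highest to the
    second-highest correlation peak, as in the acquisition metric above."""
    corr = np.abs(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code))))
    peak = int(np.argmax(corr))                 # candidate code phase
    second = np.max(np.delete(corr, peak))      # strongest competing cell
    ratio = corr[peak] / second
    return (peak, ratio) if ratio > threshold else (None, ratio)
```

A real receiver would exclude the peak's immediate neighbors when taking the second maximum and repeat the search over Doppler bins; resampling the signal before this step is what shrinks the FFT sizes and hence the computation.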

  11. Spatio-temporal Variability of Albedo and its Impact on Glacier Melt Modelling

    Science.gov (United States)

    Kinnard, C.; Mendoza, C.; Abermann, J.; Petlicki, M.; MacDonell, S.; Urrutia, R.

    2017-12-01

Albedo is an important variable for the surface energy balance of glaciers, yet its representation within distributed glacier mass-balance models is often greatly simplified. Here we study the spatio-temporal evolution of albedo on Glacier Universidad, central Chile (34°S, 70°W), using time-lapse terrestrial photography, and investigate its effect on the shortwave radiation balance and modelled melt rates. A 12 megapixel digital single-lens reflex camera was set up overlooking the glacier and programmed to take three daily images of the glacier during a two-year period (2012-2014). One image was chosen for each day with no cloud shading on the glacier. The RAW images were projected onto a 10 m resolution digital elevation model (DEM) using the IMGRAFT software (Messerli and Grinsted, 2015). A six-parameter camera model was calibrated using a single image and a set of 17 ground control points (GCPs); the georeferencing also accounted for possible camera movement over time. The reflectance values from the projected image were corrected for topographic and atmospheric influences using a parametric solar irradiation model, following a modified algorithm based on Corripio (2004), and then converted to albedo using reference albedo measurements from an on-glacier automatic weather station (AWS). The image-based albedo was found to compare well with independent albedo observations from a second AWS in the glacier accumulation area. Analysis of the albedo maps showed that the albedo is more spatially variable than the incoming solar radiation, making albedo the more important factor in the spatial variability of the energy balance. The incorporation of albedo maps within an enhanced temperature-index melt model revealed that the spatio-temporal variability of albedo is an important factor for the calculation of glacier-wide meltwater fluxes.

  12. Sources and Impacts of Modeled and Observed Low-Frequency Climate Variability

    Science.gov (United States)

    Parsons, Luke Alexander

Here we analyze climate variability using instrumental, paleoclimate (proxy), and the latest climate model data to understand more about the sources and impacts of low-frequency climate variability. Understanding the drivers of climate variability at interannual to century timescales is important for studies of climate change, including analyses of detection and attribution of climate change impacts. Additionally, correctly modeling the sources and impacts of variability is key to the simulation of abrupt change (Alley et al., 2003) and extended drought (Seager et al., 2005; Pelletier and Turcotte, 1997; Ault et al., 2014). In Appendix A, we employ an Earth system model (GFDL-ESM2M) simulation to study the impacts of a weakening of the Atlantic meridional overturning circulation (AMOC) on the climate of the American Tropics. The AMOC drives some degree of local and global internal low-frequency climate variability (Manabe and Stouffer, 1995; Thornalley et al., 2009) and helps control the position of the tropical rainfall belt (Zhang and Delworth, 2005). We find that a major weakening of the AMOC can cause large-scale temperature, precipitation, and carbon storage changes in Central and South America. Our results suggest that possible future changes in AMOC strength alone will not be sufficient to drive a large-scale dieback of the Amazonian forest, but this key natural ecosystem is sensitive to dry-season length and timing of rainfall (Parsons et al., 2014). In Appendix B, we compare a paleoclimate record of precipitation variability in the Peruvian Amazon to climate model precipitation variability. The paleoclimate (Lake Limon) record indicates that precipitation variability in western Amazonia is 'red' (i.e., increasing variability with timescale). By contrast, most state-of-the-art climate models indicate precipitation variability in this region is nearly 'white' (i.e., equal variability across timescales). This paleo-model disagreement in the overall

  13. A new variable interval schedule with constant hazard rate and finite time range.

    Science.gov (United States)

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

We propose a new variable interval (VI) schedule that achieves a constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
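The sampling scheme above can be sketched in a few lines. The concrete linear rule used here (reinforce a trial of duration d with probability d/2T) is an assumed instantiation of the paper's "depends linearly on trial duration"; what the sketch preserves is the bounded range (no interval exceeds 2T) and the mean interval of T.

```python
import numpy as np

def simulate_schedule(T=30.0, n_trials=100000, seed=0):
    """Sketch of the proposed VI schedule: each trial duration is drawn
    from Uniform(0, 2T), and the trial ends reinforced with probability
    linear in its duration (here d / 2T, an assumed concrete form).
    Every interval is bounded by 2T and the mean trial duration is T."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.0, 2.0 * T, n_trials)        # bounded trial durations
    reinforced = rng.random(n_trials) < d / (2.0 * T)
    return d, reinforced
```

With this rule about half of all trials end reinforced, and longer trials are proportionally more likely to do so, which is what keeps the reinforcement probability per unit time flat.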

  14. Antipersistent dynamics in short time scale variability of self-potential signals

    OpenAIRE

    Cuomo, V.; Lanfredi, M.; Lapenna, V.; Macchiato, M.; Ragosta, M.; Telesca, L.

    2000-01-01

    Time scale properties of self-potential signals are investigated through the analysis of the second order structure function (variogram), a powerful tool to investigate the spatial and temporal variability of observational data. In this work we analyse two sequences of self-potential values measured by means of a geophysical monitoring array located in a seismically active area of Southern Italy. The range of scales investigated goes from a few minutes to several days. It is shown that signal...

  15. Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis

    Science.gov (United States)

    Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.

    2015-06-01

This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index (PSEi) and its volatility using the finite mixture of ARIMA models with conditional variance equations such as ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find out the reason behind the behavior of PSEi, that is, which of the economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers' Price Index and terms of trade - can be used in projecting future values of PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5) - ARCH(1). Also, Consumer Price Index, crude oil price and foreign exchange rate were found to Granger-cause the Philippine Stock Exchange Composite Index.
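The ARCH(1) conditional-variance component selected above has a simple recursive form that can be simulated directly. A minimal numpy sketch with illustrative parameter values; the full model applies this recursion to ARIMA(1,1,5) residuals rather than raw innovations.

```python
import numpy as np

def simulate_arch1(omega, alpha, n, seed=0):
    """Simulate ARCH(1) innovations:
        e_t = sigma_t * z_t,   sigma_t^2 = omega + alpha * e_{t-1}^2,
    with z_t ~ N(0, 1). The unconditional variance is omega / (1 - alpha)
    for 0 <= alpha < 1."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    sigma2 = np.full(n, omega / (1.0 - alpha))  # start at unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * e[t-1]**2   # variance reacts to last shock
        e[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return e
```

The resulting series shows the volatility clustering that motivates pairing ARIMA mean equations with ARCH-family variance equations.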

  16. Modeling biological pathway dynamics with timed automata.

    Science.gov (United States)

    Schivo, Stefano; Scholma, Jetse; Wanders, Brend; Urquidi Camacho, Ricardo A; van der Vet, Paul E; Karperien, Marcel; Langerak, Rom; van de Pol, Jaco; Post, Janine N

    2014-05-01

    Living cells are constantly subjected to a plethora of environmental stimuli that require integration into an appropriate cellular response. This integration takes place through signal transduction events that form tightly interconnected networks. The understanding of these networks requires capturing their dynamics through computational support and models. ANIMO (analysis of Networks with Interactive Modeling) is a tool that enables the construction and exploration of executable models of biological networks, helping to derive hypotheses and to plan wet-lab experiments. The tool is based on the formalism of Timed Automata, which can be analyzed via the UPPAAL model checker. Thanks to Timed Automata, we can provide a formal semantics for the domain-specific language used to represent signaling networks. This enforces precision and uniformity in the definition of signaling pathways, contributing to the integration of isolated signaling events into complex network models. We propose an approach to discretization of reaction kinetics that allows us to efficiently use UPPAAL as the computational engine to explore the dynamic behavior of the network of interest. A user-friendly interface hides the use of Timed Automata from the user, while keeping the expressive power intact. Abstraction to single-parameter kinetics speeds up construction of models that remain faithful enough to provide meaningful insight. The resulting dynamic behavior of the network components is displayed graphically, allowing for an intuitive and interactive modeling experience.

  17. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.

  18. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression imposes strict assumptions on the model, whereas nonparametric regression does not require an assumed model form. Time series data are observations of a variable indexed by time, so to model a time series with regression we must first determine the response and predictor variables. The response variable is the value at time t (yt), while the predictor variables are the significant lags. One developing approach in nonparametric regression modeling is the Fourier series approach; one of its advantages is the ability to handle data with a trigonometric (periodic) pattern. Modeling with a Fourier series requires choosing the number of basis terms K, which can be done with the Generalized Cross Validation method. In inflation modeling for the transportation, communication and financial services sector, the Fourier series approach yields an optimal K of 120 parameters with an R-square of 99%, whereas multiple linear regression yields an R-square of 90%.
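The Fourier-series estimator and the GCV criterion for choosing K can be sketched with ordinary least squares on a cosine basis. This is one common form of the Fourier regression model, assumed here for illustration (the paper does not spell out its exact basis), and the function names are invented.

```python
import numpy as np

def fit_fourier_regression(x, y, K):
    """Nonparametric regression with a Fourier (cosine) basis:
        y ≈ b0 + b1*x + sum_{k=1..K} a_k * cos(k*x),
    coefficients estimated by least squares."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.cos(k * x) for k in range(1, K + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta, beta

def gcv_score(y, y_hat, n_params):
    """Generalized Cross Validation: mean squared error inflated by the
    effective number of parameters; pick the K minimizing this score."""
    n = len(y)
    return np.mean((y - y_hat)**2) / (1.0 - n_params / n)**2
```

Sweeping K and keeping the value with the smallest `gcv_score` reproduces the selection step described above; a too-small K underfits periodic structure, while GCV penalizes needlessly large K.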

  19. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  20. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  1. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity in the system dynamics, the model parameters are first obtained from experimental data. In the study, the dynamics of indoor temperature and superheat are each assumed to follow a first-order model with time delay. As the compressor frequency and the opening angle of the electronic expansion valve vary, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, a decoupling model is proposed to eliminate such interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available
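The first-order-with-time-delay (FOPDT) structure assumed above for the temperature and superheat dynamics has a closed-form step response, sketched below. Parameter values are illustrative, not the paper's identified values.

```python
import numpy as np

def fopdt_step_response(K, tau, theta, t):
    """Unit-step response of a first-order-plus-dead-time model,
        G(s) = K * exp(-theta*s) / (tau*s + 1):
    nothing happens until the dead time theta elapses, then the output
    rises exponentially with time constant tau toward the gain K."""
    t = np.asarray(t, float)
    y = K * (1.0 - np.exp(-(t - theta) / tau))
    return np.where(t >= theta, y, 0.0)
```

A decoupling design would fit four such models (two inputs, compressor frequency and valve opening, by two outputs, indoor temperature and superheat) and use them to cancel the cross-channel terms.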

  2. Discrete-time bidirectional associative memory neural networks with variable delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde; Ho, Daniel W.C.

    2005-01-01

Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent and some are delay-independent; they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.

  4. VAM2D: Variably saturated analysis model in two dimensions

    International Nuclear Information System (INIS)

    Huyakorn, P.S.; Kool, J.B.; Wu, Y.S.

    1991-10-01

    This report documents a two-dimensional finite element model, VAM2D, developed to simulate water flow and solute transport in variably saturated porous media. Both flow and transport simulation can be handled concurrently or sequentially. The formulation of the governing equations and the numerical procedures used in the code are presented. The flow equation is approximated using the Galerkin finite element method. Nonlinear soil moisture characteristics and atmospheric boundary conditions (e.g., infiltration, evaporation and seepage face), are treated using Picard and Newton-Raphson iterations. Hysteresis effects and anisotropy in the unsaturated hydraulic conductivity can be taken into account if needed. The contaminant transport simulation can account for advection, hydrodynamic dispersion, linear equilibrium sorption, and first-order degradation. Transport of a single component or a multi-component decay chain can be handled. The transport equation is approximated using an upstream weighted residual method. Several test problems are presented to verify the code and demonstrate its utility. These problems range from simple one-dimensional to complex two-dimensional and axisymmetric problems. This document has been produced as a user's manual. It contains detailed information on the code structure along with instructions for input data preparation and sample input and printed output for selected test problems. Also included are instructions for job set up and restarting procedures. 44 refs., 54 figs., 24 tabs

  5. Stochastic transport models for mixing in variable-density turbulence

    Science.gov (United States)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows, material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  6. High-resolution regional climate model evaluation using variable-resolution CESM over California

    Science.gov (United States)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate, allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations focus on relatively high resolutions for climate assessment, namely 28 km and 14 km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model is used for simulations at 27 km and 9 km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the WRF model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine

  7. Variable Width Riparian Model Enhances Landscape and Watershed Condition

    Science.gov (United States)

    Abood, S. A.; Spencer, L.

    2017-12-01

    Riparian areas are ecotones that represent about 1% of the USFS-administered landscape and contribute to numerous valuable ecosystem functions, such as wildlife habitat, stream water quality and flows, bank stability and protection against erosion, and values related to diversity, aesthetics, and recreation. Riparian zones capture the transitional area between terrestrial and aquatic ecosystems, with specific vegetation and soil characteristics that provide critical values and functions and are very responsive to changes in land management activities and uses. Two staff areas at the US Forest Service have coordinated on a two-phase project to support the National Forests in their planning revision efforts and to address rangeland riparian business needs at the Forest Plan and Allotment Management Plan levels. The first phase of the project includes a national fine-scale (USGS 12-digit HUC watersheds) inventory of riparian areas on National Forest Service lands in the western United States with riparian land cover, utilizing GIS capabilities and open-source geospatial data. The second phase includes the application of riparian land cover change and assessment based on selected indicators to assess and monitor riparian areas on an annual/5-year cycle basis. This approach recognizes the dynamic and transitional nature of riparian areas by accounting for hydrologic, geomorphic, and vegetation data as inputs into the delineation process. The results suggest that incorporating functional variable-width riparian mapping within watershed management planning can improve riparian protection and restoration. The application of the Riparian Buffer Delineation Model (RBDM) approach can provide the agency Watershed Condition Framework (WCF) with observed riparian area condition on an annual basis and on multiple scales. The use of this model to map moderate- to low-gradient systems of sufficient width, in conjunction with an understanding of the influence of distinctive landscape

  8. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and, by extension, metadynamics, thus allowing simulations based on these methods to use similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
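    The resonance problem above arises in standard multiple time step integrators of the r-RESPA family, where the cheap fast forces are integrated with a small inner step nested inside each outer step for the slow forces. The sketch below is a generic illustration of that baseline scheme, not the paper's resonance-free algorithm; the force split and parameters are assumptions for the example:

```python
def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
    """One r-RESPA step for a single degree of freedom.

    The slow force is applied as half-kicks at the outer step, while the
    fast force is integrated with velocity Verlet at dt_outer / n_inner.
    """
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m          # half kick with the slow force
    for _ in range(n_inner):                     # inner velocity-Verlet loop, fast force
        v += 0.5 * dt_inner * f_fast(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m          # half kick with the slow force
    return x, v
```

    For a stiff harmonic fast force and a weak slow force, the outer step can be taken well above the fast-force stability limit of a single-step integrator, which is the gain (and, near resonance, the failure mode) that the resonance-free methods address.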

  9. A variable-order time-dependent neutron transport method for nuclear reactor kinetics using analytically-integrated space-time characteristics

    International Nuclear Information System (INIS)

    Hoffman, A. J.; Lee, J. C.

    2013-01-01

    A new time-dependent neutron transport method based on the method of characteristics (MOC) has been developed. Whereas most spatial kinetics methods treat time dependence through temporal discretization, this new method treats time dependence by defining the characteristics to span space and time. In this implementation regions are defined in space-time where the thickness of the region in time fulfills an analogous role to the time step in discretized methods. The time dependence of the local source is approximated using a truncated Taylor series expansion with high order derivatives approximated using backward differences, permitting the solution of the resulting space-time characteristic equation. To avoid a drastic increase in computational expense and memory requirements due to solving many discrete characteristics in the space-time planes, the temporal variation of the boundary source is similarly approximated. This allows the characteristics in the space-time plane to be represented analytically rather than discretely, resulting in an algorithm comparable in implementation and expense to one that arises from conventional time integration techniques. Furthermore, by defining the boundary flux time derivative in terms of the preceding local source time derivative and boundary flux time derivative, the need to store angularly-dependent data is avoided without approximating the angular dependence of the angular flux time derivative. The accuracy of this method is assessed through implementation in the neutron transport code DeCART. The method is employed with variable-order local source representation to model a TWIGL transient. The results demonstrate that this method is accurate and more efficient than the discretized method. (authors)

  10. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. A multi-constraint power capability prediction is then developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
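    The variable-capacitance idea can be illustrated by integrating a piecewise linear C(V) to obtain stored charge and a state of charge. The break point, slope, and base capacitance below are invented placeholders, not the paper's fitted parameters:

```python
def capacitance(v, c0=270.0, k=190.0, v_break=1.0):
    """Hypothetical piecewise linear C(V): constant below v_break, rising above it."""
    return c0 if v < v_break else c0 + k * (v - v_break)

def stored_charge(v, dv=1e-3):
    """Q(V) = integral of C(u) du from 0 to V, by trapezoidal integration."""
    q, u = 0.0, 0.0
    while u < v:
        step = min(dv, v - u)
        q += 0.5 * (capacitance(u) + capacitance(u + step)) * step
        u += step
    return q

def soc(v, v_max=2.7):
    """State of charge as the fraction of charge stored at full rated voltage."""
    return stored_charge(v) / stored_charge(v_max)
```

    With a constant-capacitance model, SOC would be simply v / v_max; the charge integral is what makes the voltage dependence of the capacitance carry through to the SOC estimate.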

  11. Modeling of the time sharing for lecturers

    Directory of Open Access Journals (Sweden)

    E. Yu. Shakhova

    2017-01-01

    Full Text Available In the context of the modernization of the Russian system of higher education, it is necessary to analyze the working time of university lecturers, taking into account both the basic job functions of a university lecturer and others. The mathematical problem of optimal working time planning for university lecturers is presented. A review of the relevant documents and of native and foreign works on the subject is given. Simulation conditions, based on an analysis of the subject area, are defined. Models of optimal working time sharing for university lecturers («the second half of the day») are developed and implemented in the system MathCAD, and optimal solutions have been obtained. Three problems have been solved: (1) to find the optimal time sharing for «the second half of the day» in a given position of the university lecturer; (2) to find the optimal time sharing for «the second half of the day» for all positions of university lecturers in view of the established model of academic load differentiation; (3) to find the volume of the non-standardized part of working time in the department for the academic year, taking into account the established model of academic load differentiation, the distribution of faculty numbers across positions, and the optimal time sharing of «the second half of the day» for the university lecturers of the department. Examples of the analysis results are given. Practical application of the research: the developed models can be used when planning the working time of an individual lecturer in the preparation of the department's work plan for the academic year, as well as to conduct a comprehensive analysis of administrative decisions in the development of local university regulations.
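    For the special case of a linear objective with per-activity hour bounds and a fixed total budget, the optimal allocation can be computed greedily by weight. The sketch below is a minimal stand-in for this class of time-sharing problem, not the paper's MathCAD models; the activity names, weights, and bounds are invented:

```python
def allocate_hours(activities, total):
    """Optimal allocation for a linear objective with box constraints.

    activities: list of (name, weight, lo, hi) tuples; each activity must get
    between lo and hi hours, total hours must sum to `total`. For a linear
    objective, filling activities in descending weight order is optimal.
    """
    alloc = {name: lo for name, _w, lo, _hi in activities}
    remaining = total - sum(alloc.values())
    assert remaining >= 0, "minimum requirements exceed the hour budget"
    for name, _w, lo, hi in sorted(activities, key=lambda a: -a[1]):
        extra = min(hi - lo, remaining)
        alloc[name] += extra
        remaining -= extra
    return alloc
```

    Richer formulations (nonlinear utilities, coupling constraints across positions) would require a general LP/NLP solver rather than this greedy rule.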

  12. Generating temporal model using climate variables for the prediction of dengue cases in Subang Jaya, Malaysia

    Science.gov (United States)

    Dom, Nazri Che; Hassan, A Abu; Latif, Z Abd; Ismail, Rodziah

    2013-01-01

    Objective: To develop a forecasting model for the incidence of dengue cases in Subang Jaya using time series analysis. Methods: The model was built using the Autoregressive Integrated Moving Average (ARIMA) approach based on data collected from 2005 to 2010. The fitted model was then used to predict dengue incidence for the year 2010 by extrapolating dengue patterns over three different horizons (52, 13, and 4 weeks ahead). Finally, the cross-correlation between dengue incidence and climate variables was computed over a range of lags in order to identify significant variables to include as external regressors. Results: The ARIMA (2,0,0)(0,0,1)52 model closely described the trends of dengue incidence in Subang Jaya for 2005 to 2010. The 4-weeks-ahead prediction with ARIMA (2,0,0)(0,0,1)52 was found to fit best and to be consistent with the observed dengue incidence based on the training data from 2005 to 2010 (Root Mean Square Error = 0.61). The predictive power of ARIMA (2,0,0)(0,0,1)52 was enhanced by the inclusion of climate variables as external regressors to forecast the dengue cases for 2010. Conclusions: The ARIMA model with weekly variation is a useful tool for disease control and prevention programs, as it is able to effectively predict the number of dengue cases in Malaysia.
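    The autoregressive core of an ARIMA (2,0,0) model can be illustrated with a least-squares AR(2) fit and iterative forecasting. This sketch omits the seasonal MA term and the climate regressors of the paper's model, uses synthetic data, and the function names are our own:

```python
import numpy as np

def fit_ar2(y):
    """Least-squares fit of y_t = a1*y_{t-1} + a2*y_{t-2} + c,
    the AR part of an ARIMA(2,0,0) model."""
    X = np.column_stack([y[1:-1], y[:-2], np.ones(len(y) - 2)])
    coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return coef  # (a1, a2, c)

def forecast(y, coef, steps):
    """Iterative multi-step forecast: feed each prediction back as history."""
    hist = list(y)
    for _ in range(steps):
        hist.append(coef[0] * hist[-1] + coef[1] * hist[-2] + coef[2])
    return hist[len(y):]
```

    In practice a seasonal model such as ARIMA (2,0,0)(0,0,1)52 with exogenous regressors would be fitted with a dedicated time-series library rather than plain least squares.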

  13. Intraindividual variability in reaction time predicts cognitive outcomes 5 years later.

    Science.gov (United States)

    Bielak, Allison A M; Hultsch, David F; Strauss, Esther; Macdonald, Stuart W S; Hunter, Michael A

    2010-11-01

    Building on results suggesting that intraindividual variability in reaction time (inconsistency) is highly sensitive to even subtle changes in cognitive ability, this study addressed the capacity of inconsistency to predict change in cognitive status (i.e., cognitive impairment, no dementia [CIND] classification) and attrition 5 years later. Two hundred twelve community-dwelling older adults, initially aged 64-92 years, remained in the study after 5 years. Inconsistency was calculated from baseline reaction time performance. Participants were assigned to groups on the basis of their fluctuations in CIND classification over time. Logistic and Cox regressions were used. Baseline inconsistency significantly distinguished among those who remained or transitioned into CIND over the 5 years and those who were consistently intact (e.g., stable intact vs. stable CIND, Wald (1) = 7.91, p < .01, Exp(β) = 1.49). Average level of inconsistency over time was also predictive of study attrition, for example, Wald (1) = 11.31, p < .01, Exp(β) = 1.24. For both outcomes, greater inconsistency was associated with a greater likelihood of being in a maladaptive group 5 years later. Variability based on moderately cognitively challenging tasks appeared to be particularly sensitive to longitudinal changes in cognitive ability. Mean rate of responding was a comparable predictor of change in most instances, but individuals were at greater relative risk of being in a maladaptive outcome group if they were more inconsistent rather than if they were slower in responding. Implications for the potential utility of intraindividual variability in reaction time as an early marker of cognitive decline are discussed. (c) 2010 APA, all rights reserved
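    Inconsistency in this literature is typically quantified as the intraindividual standard deviation of reaction times across trials, often divided by the person's mean RT to control for overall response speed. A minimal sketch of that computation (the trial values in the test are illustrative, not study data):

```python
from statistics import mean, stdev

def rt_inconsistency(trials):
    """Intraindividual variability of reaction times for one participant.

    Returns (ISD, CV): the raw intraindividual standard deviation and the
    coefficient of variation, which normalizes for mean response speed.
    """
    isd = stdev(trials)
    return isd, isd / mean(trials)
```
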

  14. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment, causing collapse, landslides, and debris flows, to suspended material in turbidity currents, where turbulence is a key process. Stratified flows also exhibit variable horizontal density. Depending on the case, density varies according to the volumetric concentration of different components or species, which can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution not only provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work will focus on the use of energy balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References:
    J. Murillo, P. García-Navarro. Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001.
    J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807.
    A. Navas-Montilla, J. Murillo. Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations. J. Comput. Phys. 290 (2015) 188-218.
    A. Navas-Montilla, J. Murillo. Asymptotically and exactly energy balanced augmented flux

  15. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

    2017-11-01

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

  16. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    Full Text Available In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain that models the failure time data as the first time the chain reaches the absorbing state. Assume that a system is described by the method of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
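    The failure-time distribution of such a chain (the first-passage, or phase-type, distribution) can be evaluated without a matrix-exponential library via uniformization. The sketch below is a generic illustration of that evaluation step, not the paper's EM algorithm:

```python
import math

def phase_type_cdf(alpha, T, t, n_terms=200):
    """F(t) = 1 - alpha · exp(T t) · 1: probability the chain has been absorbed
    by time t, where alpha is the initial distribution over the m transient
    states and T is the m×m transient-state subgenerator.

    Computed by uniformization: exp(Tt)·1 = sum_k Pois(k; lam*t) * (P^k · 1)
    with P = I + T/lam for any lam >= max_i(-T[i][i]).
    """
    m = len(T)
    lam = max(-T[i][i] for i in range(m)) * 1.05
    P = [[(1.0 if i == j else 0.0) + T[i][j] / lam for j in range(m)]
         for i in range(m)]
    v = list(alpha)                          # row vector alpha · P^k
    surv = 0.0
    poisson = math.exp(-lam * t)             # Pois(0; lam*t)
    for k in range(n_terms):
        surv += poisson * sum(v)             # survival mass still in transient states
        poisson *= lam * t / (k + 1)
        v = [sum(v[i] * P[i][j] for i in range(m)) for j in range(m)]
    return 1.0 - surv
```

    With m = 1 the chain reduces to a single exponential sojourn, which provides an exact check; bathtub-shaped hazards arise for m >= 2 with suitable rates.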

  17. Estimating net present value variability for deterministic models

    NARCIS (Netherlands)

    van Groenendaal, W.J.H.

    1995-01-01

    For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large,
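    Risk analysis of this kind is usually a Monte Carlo exercise: sample the uncertain inputs from assumed distributions and collect the resulting NPV distribution. The cash-flow structure and distributions in the sketch below are illustrative assumptions, not data from the paper:

```python
import random

def npv(rate, cashflows):
    """Net present value of cashflows[t] discounted at `rate`, t = 0, 1, ..."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def npv_variability(n_sims=10_000, seed=42):
    """Monte Carlo NPV risk analysis for a hypothetical project:
    a fixed outlay, five uncertain yearly inflows, and an uncertain
    discount rate. Returns (mean NPV, 5th percentile, 95th percentile)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_sims):
        outlay = -1000.0
        inflows = [rng.gauss(300.0, 60.0) for _ in range(5)]  # uncertain revenues
        rate = rng.uniform(0.05, 0.10)                        # uncertain discount rate
        samples.append(npv(rate, [outlay] + inflows))
    samples.sort()
    mean = sum(samples) / n_sims
    return mean, samples[int(0.05 * n_sims)], samples[int(0.95 * n_sims)]
```

    The spread between the percentiles is the variability the abstract refers to; the catch it raises is that the input distributions themselves must be known or assumed.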

  18. Global modeling of land water and energy balances. Part III: Interannual variability

    Science.gov (United States)

    Shmakin, A.B.; Milly, P.C.D.; Dunne, K.A.

    2002-01-01

    The Land Dynamics (LaD) model is tested by comparison with observations of interannual variations in discharge from 44 large river basins for which relatively accurate time series of monthly precipitation (a primary model input) have recently been computed. When results are pooled across all basins, the model explains 67% of the interannual variance of annual runoff ratio anomalies (i.e., anomalies of annual discharge volume, normalized by long-term mean precipitation volume). The new estimates of basin precipitation appear to offer an improvement over those from a state-of-the-art analysis of global precipitation (the Climate Prediction Center Merged Analysis of Precipitation, CMAP), judging from comparisons of parallel model runs and of analyses of precipitation-discharge correlations. When the new precipitation estimates are used, the performance of the LaD model is comparable to, but not significantly better than, that of a simple, semiempirical water-balance relation that uses only annual totals of surface net radiation and precipitation. This implies that the LaD simulations of interannual runoff variability do not benefit substantially from information on geographical variability of land parameters or seasonal structure of interannual variability of precipitation. The aforementioned analyses necessitated the development of a method for downscaling of long-term monthly precipitation data to the relatively short timescales necessary for running the model. The method merges the long-term data with a reference dataset of 1-yr duration, having high temporal resolution. The success of the method, for the model and data considered here, was demonstrated in a series of model-model comparisons and in the comparisons of modeled and observed interannual variations of basin discharge.

  19. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
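    The interpolation step can be sketched directly. Here the ceiling and floor temperature distributions are assumed to be available as callables T(x, y), e.g. fitted from the streaming sensor data; that representation is an assumption of this sketch, not the patent's implementation:

```python
def temperature_at(x, y, z, floor_temp, ceiling_temp, ceiling_height):
    """Three-dimensional temperature estimate for a room: linear interpolation
    in height z between the floor (z = 0) and ceiling (z = ceiling_height)
    temperature fields, each given as a callable T(x, y)."""
    w = z / ceiling_height                     # 0 at the floor, 1 at the ceiling
    return (1.0 - w) * floor_temp(x, y) + w * ceiling_temp(x, y)
```
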

  20. Century-scale variability in global annual runoff examined using a water balance model

    Science.gov (United States)

    McCabe, G.J.; Wolock, D.M.

    2011-01-01

    A monthly water balance model (WB model) is used with CRU TS 2.1 monthly temperature and precipitation data to generate time series of monthly runoff for all land areas of the globe for the period 1905 through 2002. Even though annual precipitation accounts for most of the temporal and spatial variability in annual runoff, increases in temperature have had an increasingly negative effect on annual runoff after 1980. Although the effects of increasing temperature on runoff became more apparent after 1980, the relative magnitude of these effects is small compared to the effects of precipitation on global runoff. © 2010 Royal Meteorological Society.